The Mijingo Blog

Latest news, updates, free tutorials, and more from Mijingo.

The Pieces of Git

by Ryan Irelan

As far as the Git workflow is concerned, there are three pieces of Git that we should be aware of before moving forward with some slightly more complex (but totally doable!) explanation.

They are:

  • Repository
  • Index
  • Working Tree

Let’s cover each one in a bit more detail.

Repository

A repository is a collection of commits, and a record of what the project’s working tree looked like at various points in time. You can access the history of commits via the Git log.

There’s always a current starting point in a repository and that’s called the HEAD. A repository also contains tags and branches.

The job of the repository is to be the container that tracks the changes to your project files.

Working Tree

This is a directory on your file system that is associated with a repository.

You can think of this as the filesystem manifestation of the repository. It’s full of the files you edit, where you add new files, and from which you remove unneeded files. Any changes to the Working Tree are noted by the Index (see below), and show up as modified files.

Index

This is a middle area that sits between your Git repository and your Working Tree.

The Index keeps a list of the files Git is tracking and compares them to your Working Tree when you make changes. These changed files show up as modified before you bundle them up into a commit.
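
For example, after editing a file in the Working Tree (a hypothetical index.html here), running git-status reports something like this, with the change noted as modified but not yet staged:

$ git status
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)

    modified:   index.html

no changes added to commit (use "git add" and/or "git commit -a")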

You might have heard this called the staging area where changes go before they are committed to the repository as commit objects.

If you ever use the -a flag when committing, then you are effectively bypassing the Index by turning your changes directly into a commit without staging them first.
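
As a rough sketch (again assuming a hypothetical index.html with unstaged edits), the two routes to a commit look like this:

$ git add index.html                 # stage the change in the Index
$ git commit -m "update title"

$ git commit -a -m "update title"    # stage all tracked, modified files and commit in one step

Note that -a only picks up changes to files Git already tracks; brand new files still need an explicit git add.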

Learning More About Git

To learn more about Git check out our Git courses, lessons and tutorials.

Learn more about Git

What is the Element API in Craft CMS?

by Ryan Irelan

The Element API is a first-party plugin by Pixel & Tonic that allows you to create an API for sharing your data from Craft CMS. The responses are formatted as JSON (what is JSON?).

Using the plugin you can create an API that exposes Craft Elements via JSON-formatted responses. The API is read-only. You cannot write to Craft using this API and there’s no authentication built in. Be careful what you share!

The Element API is made up of the following pieces:

  • The plugin package itself
  • The elementapi.php file you have to create
  • The code in the file that defines the endpoints and the exposed data

To use the Element API plugin you need to meet these requirements:

  • Craft CMS installed and populated with data you want to expose via the API
  • PHP 5.4 or later (different than the current requirement for Craft CMS)

Installing the Element API plugin is like installing any other Craft plugin: drag the plugin file to the plugins directory and then install the plugin via the Craft Control Panel.
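
If you prefer the command line over dragging files around, a minimal sketch of that install might look like this (the download location and the elementapi folder name are assumptions; check the plugin’s own instructions for the exact folder name):

$ cp -R ~/Downloads/elementapi /path/to/site/craft/plugins/

Then finish up in the Craft Control Panel by clicking Install next to the plugin under Settings → Plugins.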

Watch the Free Lesson

How to Create an API with Craft CMS

by Ryan Irelan

In a recent lesson, I walked through how to use the Element API Plugin from Pixel & Tonic to create a JSON API that shares your website’s data (only that data which you choose, of course) for consumption by another website or service.

The free lesson covers a fundamental approach to building a read-only API using the default ElementTypes (entry, category, asset, etc.) in Craft. As long as you follow my steps, you can get an API up and running quickly.

Toward the end of the lesson, we also learn how to expose your Craft Commerce order data via an API so you can have an outside tool (like a Google Spreadsheet or reporting tool) ingest it and use it to create some sort of report.

Watch the Lesson

Learn Craft Commerce

by Ryan Irelan

It used to be that doing e-commerce work was something I would talk a client out of or offload to another company or a hosted platform. I didn’t want to touch it. At all. Too much risk. But this caused me to lose business and it usually meant a less-than-ideal solution for my clients or customers. Thankfully, the e-commerce solutions have gotten a lot better.

One of those solutions is Craft Commerce, a first-party plugin for the Craft CMS.

With tools like Craft Commerce, most web developers can put together a powerful and flexible e-commerce system for any website. And the store is hosted right on the website, not on some third party platform you can’t control.

In my full-length course Fundamentals of Craft Commerce I teach you everything you need to know to get started using Craft Commerce.

 

This new course follows the same proven learning process you expect from my Mijingo courses. By the end of the course you will be ready to implement your first e-commerce website!

Start the Course for Free

Creating and Applying Patch Files in Git

by Ryan Irelan

In a previous article, I talked about how to use git-cherry-pick to pluck a commit out of a repository branch and apply it to another branch.

It’s a very handy tool to grab just what you need without pulling in a bunch of changes you don’t need or, more importantly, don’t want.

This time the situation is the same. We have a commit we want to pull out of a branch and apply to a different branch. But our solution will be different.

Instead of using git-cherry-pick we will create a patch file containing the changes and then import it. Git will replay the commit and add the changes to the repository as a new commit.

What is git-format-patch?

git-format-patch exports the commits as patch files, which can then be applied to another branch or cloned repository. The patch files represent a single commit and Git replays that commit when you import the patch file.

git-format-patch is the first step in a short process to get changes from one copy of a repository to another. The old style process, when Git was used locally only without a remote repository, was to email the patches to each other. This is handy if you only need to get someone a single commit without the need to merge branches and the overhead that goes with that.

The other step you have to take is to import the patch. There are a couple options for that but we’ll use the simplest one available.

Let’s create our patch file.

Using git-format-patch

I am on the repository the-commits, which is the sample repository I used in my Git version control courses. I have the experimental_features branch checked out.

This experimental_features branch has an important change in it that I want to bring to a feature branch I have going. This feature branch is going to be merged into the development branch (and eventually the master branch), so I only want to include non-experimental changes. Because of that, I don’t want to do a merge; I’d rather not pull in the other half-baked features that would mess up my production-path branches.

Here’s the latest when I run git-log:


$ git log
commit 4c7d6765ed243b1dbb11d8ca9a28548561e1e2ef
Author: Ryan Irelan 
Date:   Wed Aug 24 08:08:59 2016 -0500

    another experimental change that I don't want to allow out of this branch

commit 1ecb5853f53ef0a75a633ffef6c67efdea3560c4
Author: Ryan Irelan 
Date:   Mon Aug 22 12:25:10 2016 -0500

    a nice change that i'd like to include on production

commit 4f33fb16f5155165e72b593a937c5482227d1041
Author: Ryan Irelan 
Date:   Mon Aug 22 12:23:54 2016 -0500

    really messed up the content and markup and you really don't want to apply this commit to a production branch

commit e7d90143d157c2d672276a75fd2b87e9172bd135
Author: Ryan Irelan 
Date:   Mon Aug 22 12:21:33 2016 -0500

    rolled out new alpha feature to test how comments work

The commit with the hash 1ecb5853f53ef0a75a633ffef6c67efdea3560c4 is the one I’d like to pull into my feature branch via a patch file.

We do that using the command git-format-patch. Here’s the command:


$ git format-patch a_big_feature_branch -o patches

We pass in the branch we want Git to compare against to create the patch files. Any commits that are in our current branch (experimental_features) but not in a_big_feature_branch will be exported as patch files, one patch file per commit. We use the -o flag to specify the directory where we want those patches saved. If we leave that off, Git will save them to the current working directory.

When we run it we get this:


$ git format-patch a_big_feature_branch -o patches
patches/0001-rolled-out-new-alpha-feature-to-test-how-comments-wo.patch
patches/0002-really-messed-up-the-content-and-markup-and-you-real.patch
patches/0003-a-nice-change-that-i-d-like-to-include-on-production.patch
patches/0004-another-experimental-change-that-I-don-t-want-to-all.patch

Those four patch files (named sequentially and with a hyphenated version of the commit message excerpt) are the commits that are in the current branch but not the a_big_feature_branch.

Let’s look at the guts of one of them.


From 4c7d6765ed243b1dbb11d8ca9a28548561e1e2ef Mon Sep 17 00:00:00 2001
From: Ryan Irelan 
Date: Wed, 24 Aug 2016 08:08:59 -0500
Subject: [PATCH 4/4] another experimental change that I don't want to allow out of this branch

---
 index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/index.html b/index.html
index f92d848..46e4eb2 100644
--- a/index.html
+++ b/index.html
@@ -9,7 +9,7 @@
   <!-- Set the viewport width to device width for mobile -->
   <meta name="viewport" content="width=device-width" />
 
-  <title>Little Git &amp; The Commits</title>
+  <title>Little Git &amp; The Commits FEATURING ELVIS BACK FROM THE DEAD</title>
 
   <!-- Included CSS Files (Uncompressed) -->
-- 
2.7.4 (Apple Git-66)

It looks like an email, doesn’t it? That is because all patch files are formatted to look like the UNIX mailbox format. The body of the email is the diff that shows which files have changed (in our case just index.html) and what those changes are. Using this file, Git will recreate the commit in our other branch.

Specifying a Single Commit

In this situation, I don’t need all of those patch files. All but one are commits I don’t want in my target branch. Let’s improve the git-format-patch command so it only creates a patch for the one commit we do want to apply.

Looking back at the log, I know that the commit I want to apply has the hash of 1ecb5853f53ef0a75a633ffef6c67efdea3560c4. We include that hash as an argument in the command, but precede it with a -1 so Git only formats the commit we specify (instead of the entire history since that commit).


$ git format-patch a_big_feature_branch -1 1ecb5853f53ef0a75a633ffef6c67efdea3560c4 -o patches 
  patches/0001-a-nice-change-that-i-d-like-to-include-on-production.patch

Now we get a single patch file, which is much safer because there’s no chance we’ll accidentally apply patches of changes we don’t want!

We have the patch file; now how do we apply it to our branch? Using git-am.

What is git-am?

git-am is a command that allows you to apply patches to the current branch. The am stands for “apply (from a) mailbox” because it was created to apply emailed patches. The handy thing about git-am is that it applies the patch as a commit so we don’t have to do anything after running the command (no git-add, git-commit etc.).

The name git-am is a little strange in the context of how we’re using it but fear not: the result is exactly what we want.

Let’s apply a patch and see how it works.

Using git-am

The first thing we need to do is switch over to our target branch. For this example we’ll move to the branch we compared against in the git-format-patch command.


$ git checkout a_big_feature_branch

After that we’re ready to apply the patch file with the commit we want to include.

Note: I’m working in the same repository on the same computer. When I switch branches, the patch file comes with me because it is still an untracked file. If I staged and committed the patch file then I’d need to find another way to make it accessible. You could do this by moving the patch file out of your repository to where you can access it when on the destination branch.

Because we refined the git-format-patch command, we only have one patch file in the patches directory:


patches/0001-a-nice-change-that-i-d-like-to-include-on-production.patch

To apply the patch to the current branch, we use git-am and pass in the name of the patch we want to apply.


$ git am patches/0001-a-nice-change-that-i-d-like-to-include-on-production.patch

And then we get confirmation that the patch was successfully applied:


Applying: a nice change that i'd like to include on production

Looking at the log now we see our change is replayed as a commit in the current branch:


$ git log
commit 69bb7eb757b2356e365934fdbea744877c3092bb
Author: Ryan Irelan 
Date:   Mon Aug 22 12:25:10 2016 -0500

    a nice change that i'd like to include on production

And now our change is there!

Note that the new commit has a different hash because it was created on a different branch, with a different parent, than the commit we formatted as a patch.

Learning More About Git

To learn more about Git check out our Git courses, lessons and tutorials.

Learn more about Git

Using git-cherry-pick

by Ryan Irelan

Recently I ran into a problem on a project where I was working on the wrong branch and committed changes. Those commits were supposed to go elsewhere and now I need to get them into the correct branch!

There are a few options to solve this:

  • Merge the incorrect branch into the correct branch. But I don’t want to do that because there are items in the incorrect branch that I don’t want. So, that’s out.

  • Recreate those changes in my working branch and just go on with my day. But that’s a waste of my time and I’m adamantly against redoing work!

  • Create a patch and then apply that patch to the new branch.

All solid options but there’s still something better:


$ git cherry-pick

Let’s review where we’re at and then how to solve the problem using git-cherry-pick.

The State of Things

I created a new commit in my repository, in the branch called some_other_feature. But that’s the wrong branch!


$ git branch
    develop
    master
    my_new_feature
*   some_other_feature
    stage

The new commit should be on the my_new_feature branch. I could merge the branches, but the some_other_feature branch contains commits and changes that I don’t want in the other branch (they are not ready for merging into any upstream branches, like develop or master).

Here’s the commit I need to get into my_new_feature:


commit ec485b624e85b2cad930cf8b7c383a134748b057
Author: Ryan Irelan 
Date:   Fri Aug 19 10:44:47 2016 -0500

    new contact page

Using git-cherry-pick

The syntax of git-cherry-pick is this:


$ git cherry-pick [commit hash]

The first step is to fetch the commit hash for the commit we want to cherry-pick. We can do that using git-log and then copying the hash (the full hash or just the first 7 characters will work) to our clipboard.
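
For a more compact view, git-log with the --oneline flag shows each commit with its abbreviated hash. While still on some_other_feature, the output would look something like this (older commits omitted here):

$ git log --oneline
ec485b6 new contact page

Either the abbreviated ec485b6 or the full ec485b624e85b2cad930cf8b7c383a134748b057 will work with git-cherry-pick.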

Next, we need to be on the branch where we want the changes to be (our destination branch).


$ git checkout my_new_feature

Now we can run the git-cherry-pick command and apply the commit to our destination branch.


$ git cherry-pick ec485b624e85b2cad930cf8b7c383a134748b057

This will return something like this:


[my_new_feature 1bf8955] new contact page
Date: Fri Aug 19 10:44:47 2016 -0500
1 file changed, 1 insertion(+)
create mode 100644 contact.html

If we look at our log for this branch, we now see our commit:


$ git log
commit 1bf8955d5e6ca71633cc57971379e86b9de41916
Author: Ryan Irelan 
Date:   Fri Aug 19 10:44:47 2016 -0500

    new contact page

What’s happening when we run git-cherry-pick?

  • Git is fetching the changes in the specified commit and replaying them in the current branch. The commits are not removed from the source branch; they remain intact.
  • Because this commit is being applied to a new branch, and therefore has a different parent and potentially different contents, it will get a different hash than the source commit.

With the problem solved, we are ready to move on with our development work!

Learning More About Git

To learn more about Git check out our Git courses, lessons and tutorials.

Learn more about Git

How I Create Courses

by Ryan Irelan

Over on my YouTube channel, I started a series on how I create courses. I’ll release one new video every week or so covering topics like my “second brain”, note taking, mind mapping, production, and more.

This YouTube-only series of videos isn’t a technical tutorial by any means. In fact, even if you’re not interested in course creation or online education you’ll probably still get something out of this.

Watch the first video on my “second brain”, and please subscribe to the channel to get the new videos automatically.

New Course: Flexible Twig Templates in Craft

by Ryan Irelan

A new course is now available for the Craft CMS series: Flexible Twig Templates in Craft.

In this course Ryan teaches the Flexible Template Stack, an approach put together by developer Anthony Colangelo. The Flexible Template Stack in Twig and Craft allows you to have reusable templates that can render content from any section of the site. This is a setup you can use over and over again on your Craft-powered projects.

Start the Course for Free

New Lesson: Installing Craft

by Ryan Irelan

I posted a new lesson on installing the Craft CMS. This lesson is an excerpt from my Up and Running with Craft course.

Watch the Lesson

New Course: Up and Running with Craft

by Ryan Irelan

Today I released a brand new course: Up and Running with Craft. This is a complete redo of its predecessor: Learning Craft.

The new course has the same goal: to help you learn the skills you need to start building websites with the Craft CMS. It does this over the course of nearly 4 hours and 21 videos.

Get Started Learning Craft

Craft Plugin Development Workbook

by Ryan Irelan

Ben Croker, the author of Craft Plugin Development, created a follow-up to his video course in the form of a workbook.

The digital workbook builds on the plugin development course by gently guiding you along as you build another plugin on your own. Ben set up this workbook to build on and reinforce what you learned with him in the video course.

Let me take a moment to explain why a follow-up like this is important (for those of you who’d rather skip the explanation, you can go right to the workbook page).

When you learn something new, like plugin development for Craft, there are three stages that help the information you learn stay stored in your long term memory.

The first stage is watching and taking in the new material, like you would with a video course.

The second stage is watching again and engaging with the new material by, for example, coding along with the video and applying what’s being taught.

The final stage, and the one so many people skip, is applying the material you just learned again but on your own, using your own information and research to work around problems or challenges. It’s in this third and final step that real mastery starts and your brain stores the information in long term memory.

It’s the difference between understanding how Craft plugins are developed and knowing how to do it without hesitation.

Ben’s new workbook is the third step. He guides you a bit but the workbook nudges you toward solving problems on your own (with helpful links off to documentation) so you can loosen the training wheels and let them fall to the side.

The workbook is available now. Watch Ben’s introduction video to learn more.

 

Get the Workbook

Craft CMS Training for Teams

by Ryan Irelan

Today I started offering private on-site training for Craft CMS. This training class is designed for small teams (up to 15 people) who need to quickly get up-to-speed on how to build websites with Craft CMS.

You have a team. They need to learn Craft and start building or maintaining your website. Mijingo’s private, in-person Craft training class can help:

  • Get your team up-to-speed quickly and efficiently, while avoiding common mistakes and pitfalls.
  • Learn the skills necessary to manage an existing website running Craft, including how to expand and build on top of it.
  • Explore the power and affordability of using Craft CMS and leave behind other tools that break your spirit (or your budget).
  • Get tried-and-true methods for building websites with Craft through classroom training.

The class is ideal for:

  • college or university internal design and development teams
  • agency teams looking to quickly adopt Craft CMS as a new technology
  • embedded marketing development teams who have been tasked with using Craft and need to quickly roll out new sites with it
  • internal teams who just took on maintenance responsibility for a newly built site running Craft CMS

Of course your situation may be different. Get in touch to share it with me and learn more.

Design for Real Life

by Ryan Irelan

Over on my personal blog I wrote up a short review of the new book Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. Here’s a snippet:

The goal of the book is to “bring edge cases to the center”, as Anil Dash writes in the Introduction. Using case studies of how things can go wrong, and documenting the techniques we can use in our next project to improve your website, web application, or product, Sara and Eric helped me see a bit wider than my own life experience.

Read the entire review of Design for Real Life

Guide to Google AMP

by Ryan Irelan

Google AMP (officially known as the AMP Project) is a new way to deliver fast pages to mobile users through special HTML markup, lean HTML documents, and caching (via the AMP Cache).

But AMP is not without controversy, mostly because it requires a new HTML syntax to work properly and a fair amount of template work.

But the promise is a big one: faster pages and maybe better search rankings.

AMP stands for Accelerated Mobile Pages. It’s an open source framework with the goal of delivering faster pages to mobile users. Google has been pushing web page performance for several years (to learn more about it check out my Web Performance Testing course) but mostly with tools that helped developers measure performance, not create fast pages.

However, Google AMP is an implementation that helps you create faster mobile pages. Instead of providing the tools to measure fast pages, Google is now offering a tool to create them, too.

AMP is a three-part system:

  • AMP HTML, a custom HTML syntax that Google parses. This is added into otherwise normal HTML documents.
  • AMP JS, a JavaScript library that is the engine that supports the HTML implementation
  • AMP Cache, a CDN that caches and serves AMP documents and assets (your pages and assets) via HTTP/2.

I put together a free lesson so we can “plug in to” AMP, learn what it is, how it works, and then step through a simple implementation together.

Watch the Lesson

Start a Course for Free

by Ryan Irelan

I just released a new feature that allows anyone (no account needed) to start a course for absolutely free. Now you can sample a course before committing to it.

How does this work?

Courses that have “free starts” are identified with a blue button on the course page.

Free Start

Click the button and you’re brought to a page that lists all of the parts of the course. If you’ve already learned with Mijingo then this page looks familiar to you.

Available videos are in orange. Click on one to start streaming the course.

Accessing free videos

At any time you can go back to the course page and purchase the entire course and keep on learning. It’s that simple!

This isn’t rolled out for all courses yet but there are several that have free starts, including the most popular courses here at Mijingo, like the Intermediate Git course or the new Web Performance Testing course.

Enjoy the “free starts”!

New Course: Git Under the Hood

by Ryan Irelan

A new course is now available in our Git version control series.

Git: Under the Hood digs in on some of the theory behind Git so we can understand exactly how Git works.

We use the low-level “plumbing” commands in Git to manually create the Git data objects (trees, blobs, commits) that Git normally creates for you during the typical git-add, git-commit workflow.
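
To give a flavor of what that looks like (this is just a generic illustration of plumbing commands, not an excerpt from the course), you can write an object straight into Git’s object database and then inspect it:

$ echo "hello, git" | git hash-object -w --stdin    # writes a blob object and prints its hash
$ git cat-file -t <object-hash>                     # shows the object's type (blob)
$ git cat-file -p <object-hash>                     # prints the object's contents

Commands like these are the building blocks that git-add and git-commit use behind the scenes.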

This course isn’t for everyone. If you use Git and like to tinker and see how things work, then this course is definitely for you.

Learn Git’s Plumbing

Why Should I Do Web Performance Testing?

by Ryan Irelan

You hear a lot about web performance testing but you’re still unsure why you should spend the extra time (and money) doing this type of work. How does it add value to your project? How can you convince your client or boss that it’s a good use of time and money?

There are a lot of reasons why you want to do performance testing, but these three are the most important. They are also broadly applicable to most projects.

The three performance testing reasons are:

  • Search Engine Optimization (SEO), specifically ranking in Google and how it weights page speed in rankings.
  • User experience. We will approach performance as a feature of the website, just like the design of the website’s functional features.
  • Economic use of bandwidth and server resources. This one will be less of a focus for us because we won’t be doing any heavy-duty server load testing; however, it is important to make sure that our sites run as performant and lean as possible. High traffic sites with poor performance can be expensive to run and maintain.

Let’s talk about each of these three reasons for performance testing a little bit more.

SEO and Google

More than five years ago, Google announced that it was going to eventually include page speed as part of its ranking algorithm for organic Google search results.

The reason for this is clear: slow pages tend to be abandoned by visitors rather quickly. If you look at a page that may take, let’s say five seconds to load, most users will abandon that page and not even try to pursue it again.

If that page was listed at the top of the rankings for a certain search term on Google, then that search results page would no longer be as valuable to the user as it would be if all of the results at the top of the list were fast, accurate, and relevant.

Google prides itself on providing accurate, relevant, and good search results. That’s one of the reasons it became the most popular search engine on the web! If Google served results that were accurate and relevant but slow to load, causing users to wait a long time to see the page, that would threaten the very thing that made Google so popular in the first place.

In fact, Google even tested how user behavior changes when search results return more slowly.

Back in 2009 they ran an experiment that served a slower search results page to some visitors. What they found was that the experiment “demonstrated that slowing down the search results page by 100 to 400 ms has a measurable impact on the number of searches per user.”

As Google slowed down the page speed the engagement of their users decreased. When you search Google you are after an answer—the search result or relevant page—and you want to get there as fast as you can. It’s not a surprise that slowing down the results page caused a drop off in the number of searches.

Who wants to sit and wait for a page to load? Do you find that enjoyable? I certainly don’t.

Google reaffirmed their belief that performance matters. And in a big way.

As part of the performance initiative, Google added PageSpeed Insights, a freely available tool for measuring your page speed and getting recommendations for how to improve performance.

In summary:

  • Google measured slow performing search results pages and noticed negative visitor behavior.
  • Google gave us a tool to measure and improve our page speed.
  • Google now cares how fast its pages are, and yours, too.

User Experience & Project Process

Performance is a design feature. -Brad Frost

Just as Google saw a noticeable change in user engagement when it increased the load time of the search results page, having a slow-performing website or web page also degrades the user experience on your site.

If we think of web performance as another part of the project, just like the user experience, information architecture, and design, we are less inclined to neglect performance during the project, pushing it off as something we just handle later on, after the site is complete and implemented.

By making Web performance something that matters as part of the user experience and design, we are then forced to consider it very early on in the project.

In the old days, and by that I mean 5 to 10 years ago, a typical project methodology was the waterfall process. Each discipline would do its work and then pass it down the line to the next discipline. Decisions were not made across disciplines; instead, each discipline made them in its own silo.

Designers were working in a silo without any consideration or understanding of what it would take to develop the features that they designed. Developers were working in their silo without communicating with designers on implementation details that might impact their designs. This includes how a design decision would impact the implementation when done within the bounds of the agreed timeline and budget.

A certain feature, as designed, could require twice as many database queries or some API that will add overhead, which in turn could slow down the performance of the page. It could also mean that the developer would need extra time to implement proper caching and other performance enhancements on the back-end in order to make the feature acceptably performant.

But now, a trend in the work of people to whom I’m connected (and I’d argue an industry-wide trend) is that designers and developers are working closer together, making decisions on the features of the website in unison. Developers can help influence and inform the designer’s decisions for both visual design and user experience design. And designers can work closely with developers on the implementation of their designs.

This new trend is encouraging, not only because we are all more focused on working together as a team, but also because it helps us create better-performing websites from the beginning of the project instead of waiting until the end, when performance becomes a spackled-on, last-minute consideration.

When you wait until the end to consider performance, you have the weight of all of the decisions of the entire project pressing down on your shoulders, preventing you from making the best decisions for performance.

When we wait until the end for performance improvements we usually need to pull in more technology and resources, some of which can be very expensive, instead of designing and implementing a performant website from the beginning.

Performance is both a design feature and a way of working.

Bandwidth and server resources

Just add another server. Add more RAM. -Someone, somewhere

We design, code, and build the new website and start testing it. If it’s not fast we look immediately at more server resources as the solution.

We increase memory, increase bandwidth, we offload assets to a CDN, we move the database to a dedicated server.

All of these solutions are expensive. And maybe not even necessary.

These solutions are especially expensive if you’re working on a site that gets high traffic and requires infrastructure beyond just one virtual server or some cheap shared hosting.

Just because a high traffic website will need some additional services to run properly doesn’t mean we can slack on performance.

In fact, the converse is true. The higher the traffic on the site, the more careful we have to be about performance.

Increasing server resources is, of course, unavoidable if the website traffic requires it. But for a site that serves tens of thousands, hundreds of thousands, or even millions, of visitors in a month, making sound user experience and design decisions with performance in mind can literally save thousands of dollars in costs.

(Webpagetest.org has a small part of its test results that shows how expensive your site is in consuming resources.)

If you’re doing client work, you can literally save your clients a lot of money by keeping web performance in mind throughout the project. And, since client work is always done in the service of the client’s needs and requirements, saving money is definitely a good thing.

Those are the three reasons to do web performance testing:

  • Search Engine Optimization (SEO)
  • User experience.
  • Economic use of bandwidth and server resources.

The three core pieces of launching a successful and sustainable website.

Ready to Start Web Performance Testing Your Current Project?

Learn everything you need to know to get up and running with web performance testing in our 2 hour video course.

Start Testing

New Course: Web Performance Testing

by Ryan Irelan

Web pages are getting bigger. In 2015, according to the HTTP Archive, the average web page increased in size by 16%.

2 MB pages are common now.

But assigning blame isn’t a solution. Hand-wringing and finger-pointing do nothing to improve the situation.

However, understanding how your pages perform while you are developing them does.

If we can become more aware of how our feature, design, and development decisions impact the performance of the website, we can address problems before they impact the site’s user experience.

And that’s what we’re after in Web Performance Testing.

Start Learning

New Lesson: Using Git in Sublime Text

by Ryan Irelan

A new lesson is available now on how to use Git version control right inside of Sublime Text 3. The goal for this lesson is to learn one way to work a bit more efficiently while coding. There are a few packages available for connecting Git and Sublime Text. In the past I’ve used one called Sublime Text Git and it’s a popular choice. Recently I’ve started using Git Savvy and I really like it so I wanted to share it.

This lesson walks you through installing and using Git Savvy in Sublime Text 3.

Watch the Lesson

Installing Sublime Text Package Control

by Ryan Irelan

Installing packages (add-on functionality that isn't core to the Sublime Text application) in Sublime Text is faster using the Package Control tool.

Package Control isn't part of Sublime Text but an independent project that makes adding additional functionality to Sublime Text easier. If you like Package Control and find yourself depending on it for your work, consider saying thanks to the developer with a small donation to help cover server costs.

To install Package Control on Sublime Text 3, you need to input a series of commands into the Sublime Text console. This is actually a Python console, but for our needs we just need to paste in a series of commands and Sublime Text will take care of the rest.

Get the Python Code

From the Package Control installation instructions, we grab the Python code to install Package Control. Get the latest version from the site (I could paste it here but the code changes with each release and I want to make sure you have the latest code).

Run the Code

To kick off the installation, we run the code in the Sublime Text console.

  1. Type control-` (the control key plus the backtick key) to open the console in the bottom portion of the Sublime Text application window.
  2. Paste in the code and press enter.
  3. Once complete and successful, restart Sublime Text.

Type shift-command-p to open the command palette and type:

package control

If you see options for Package Control (like Package Control: Install Package) then Package Control is properly installed.

Sometimes the tool doesn't install properly. If this happens to you, use the manual installation instructions instead.

