Countless reports published over the years have agreed on a central point: users are extremely sensitive to web performance. Even small changes in a website’s page weight or load time alter traffic noticeably. Today, businesses must make sure that their websites work fast on all devices.
These reports (1, 2, 3, 4, 5) describe fraction-of-a-second increases in load time resulting in higher bounce rates and decreased revenue (a 100ms increase in latency meant a 1% reduction in sales, according to Amazon). Google’s experiments in this area showed that user traffic took months to recover after they deliberately slowed page load times for certain users.
In a 2012 analysis, Walmart found that conversion rates decline sharply as load times grow from 1 second to 4 seconds. For every 1 second improvement in page load time, they saw a 2% increase in conversions.
Abandonment rates on phones and tablets are higher than on desktop. With the volume of mobile traffic surpassing desktop traffic, the problem is getting worse for poorly implemented or slow websites. Radware reported the abandonment rate for mobile shopping carts at 97% in 2014, compared with 70-75% for desktop carts.
The impact of poor performance on customer loyalty and returning visitors should also cause concern. A study carried out by Equation Research on behalf of Compuware found that 46% of mobile web users are unlikely to return to a website they had trouble accessing in the past. Another Compuware survey, this time of tablet users, found that 33% are less likely to purchase online from a company whose site performs poorly, and 46% will go to competitor websites. Even more troubling, 35% are less likely to visit the problematic website on any platform. 66% of respondents described ‘poor performance’ as ‘slow load time’.
High-speed mobile networks are rolling out rapidly, but they are starting from a very low base. For most people 3G is the absolute best case, and over 3G the average page now takes 8-40 seconds to load even in perfect laboratory conditions. TCP’s slow-start congestion control strategy means that real-life load times can be significantly slower than simple throughput calculations predict. Note also that 4G availability in a given region doesn’t mean people actually have access to it: some operators charge more for these plans.
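To see why these load times add up, a back-of-the-envelope calculation helps: divide page weight (in bits) by effective bandwidth. The figures below are illustrative, and the result is a best case, since slow-start and per-request round trips only make things worse.

```javascript
// Idealized load time estimate: page weight divided by effective
// bandwidth. Real loads are slower still, because TCP slow-start
// ramps throughput up gradually and each request adds round-trip
// latency on top.
function estimateLoadSeconds(pageWeightBytes, bandwidthMbps) {
  const bits = pageWeightBytes * 8;
  return bits / (bandwidthMbps * 1_000_000);
}

// A 2 MB page over a 1 Mbps 3G link: roughly 16.8 seconds, best case.
console.log(estimateLoadSeconds(2 * 1024 * 1024, 1).toFixed(1)); // 16.8
```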
Google uses page speed as a ranking signal: faster pages rank higher. By doing this, Google rewards better user experience with a higher ranking and demotes poorer user experience. Having a performant site means that you are not unnecessarily dropping in search result position.
Additionally, Google rewards mobile-optimized sites with a mobile-friendly label, increasing the chances that mobile visitors will click on a search result.
Not only do bigger pages take longer to download, but they also cost more to download. Not all data packages are unlimited, and even in advanced markets monthly data limits of just a few hundred megabytes are common. What happens when a user’s data is used up? It either costs them more to browse, or they don’t browse at all. Nobody wins in this scenario!
Out-of-bundle and roaming data rates have a history of being extortionate. Everyone has heard a story, or has experienced first hand, an unexpected data bill running into the hundreds or thousands of dollars. Even a single page view of an unoptimized site can cost tens of dollars. An optimized site will minimize the cost to your visitors.
It is true to say that careful caching of resources linked from pages means that repeat visitors won’t have to fetch everything again but, to paraphrase an old saying, you never get a second chance to make a fast first impression!
In light of this evidence, it’s clear that web performance is absolutely crucial!
Creating performant device-independent sites
Appreciating the impact of performance is the first step. The next step is to address it. And to address it, we must understand what causes poor performance and we must be able to measure it.
In tuning a web page for performance, we’re trying to minimize the time from when the first request for the page is made, to when the page is rendered in the browser. The stack of technologies behind the web is large and complex, and inefficiencies can be introduced at any level; many things are out of your hands. For example, you have no control over the user’s device, or over the quality of connection a device is using.
There are, however, some things that you do have control over. At the top of the list is page weight. Page weight is central to a performant website and has a huge impact on both user experience and market reach. By page weight, we intuitively mean the sum of the byte size of all the assets and resources sent from server to browser in order to render a page.
Measuring page weight can be tricky, however; it’s next to impossible to come up with a single number for all but the simplest pages of the modern web. Different devices may receive different payloads; ads and social widgets might be included in iframes, which are technically separate pages but which contribute to the overall experience; and when exactly can you say a page has completed loading, when modern web apps may continue to asynchronously load data indefinitely? It’s telling that none of the most popular tools for measuring page weight can agree on their measurements. One thing that can be agreed on, though, is that bigger pages take longer to load, and so the goal of a performant site must be to minimize page weight, whatever your precise definition.
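One practical working definition, despite the caveats above, is the sum of the transfer sizes of every resource fetched to render the page. In a browser, the Resource Timing API exposes a `transferSize` per resource via `performance.getEntriesByType('resource')`; the sketch below sums a captured list of that shape (the file names and sizes are hypothetical).

```javascript
// Page weight, pragmatically defined: total bytes transferred for
// all resources fetched to render the page. In a browser these
// entries would come from performance.getEntriesByType('resource');
// here we sum a previously captured list.
function pageWeightBytes(resources) {
  return resources.reduce((total, r) => total + r.transferSize, 0);
}

const resources = [ // hypothetical capture
  { name: '/index.html', transferSize: 14_200 },
  { name: '/app.css',    transferSize: 48_512 },
  { name: '/app.js',     transferSize: 310_774 },
  { name: '/hero.jpg',   transferSize: 892_130 },
];
console.log(pageWeightBytes(resources)); // 1265616
```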
The number of HTTP requests required to render a page also impacts performance. Each separate resource included in a page requires a separate request across the network, and establishing a connection is one of the most time-consuming parts of delivering a web page. So it follows that this number should be minimized too: requests should only be made when necessary, and should be combined where possible as we’ll see later.
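The simplest form of request combination is concatenation: several small stylesheets or scripts served as a single bundle, so N requests become one. A minimal sketch of a build-time bundler, with hypothetical file names:

```javascript
// Concatenate several small assets into one bundle, turning N
// requests into a single request and avoiding repeated connection
// set-up costs. The file names here are hypothetical.
function bundle(files) {
  return files
    .map(({ name, body }) => `/* ${name} */\n${body}`)
    .join('\n');
}

const css = bundle([
  { name: 'reset.css',  body: 'body { margin: 0; }' },
  { name: 'layout.css', body: '.grid { display: flex; }' },
]);
console.log(css);
```

Real build tools do the same thing with minification and source maps layered on top, but the performance win comes from this basic step.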
Another aspect of the performance story is perceived performance: how fast a page feels to load. The idea here is that priority should be given to getting something in front of the user’s eyes as quickly as possible, and that things that won’t be immediately visible, or that don’t impact the initial load, can be de-prioritized.
While this technique won’t improve bottom-line download times or data usage, it will improve the user experience by providing useful content immediately.
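One common way to apply this in practice is lazy-loading images: content needed for the initial view loads normally, while images further down the page are deferred until the user scrolls near them. A minimal sketch, assuming browsers that support the standard `loading` attribute (file names are hypothetical):

```html
<!-- Above-the-fold image: loads immediately. -->
<img src="hero.jpg" alt="Product hero">

<!-- Below-the-fold images: deferred until the user scrolls near them. -->
<img src="gallery-1.jpg" alt="Gallery photo" loading="lazy">
<img src="gallery-2.jpg" alt="Gallery photo" loading="lazy">
```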
So, how can a site be built that performs well across the multitude of devices and screen sizes that are out there? There are a number of approaches to this problem.
1. Adaptive web design
Adaptive web design is a technique that has been in use since the dawn of the mobile web, over 16 years ago. It relies on a device detection library or database installed on the web server (or a remote web service) to detect the device accessing the website and return its capabilities. This set of capabilities allows the web developer to fine-tune the resulting page to match the device’s capabilities with a very high level of control. Due to the device detection involved, this technique of adapting to the device is sometimes called “browser sniffing”. Despite the claims of its detractors, device detection is extremely reliable and accurate, with good solutions typically reporting in excess of 99.5% accuracy in detecting devices in the wild.
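The flow can be sketched as follows: classify the incoming User-Agent header, then choose a payload to match. Note that this toy regex classifier only illustrates the shape of the technique; the production solutions described above use maintained device databases with thousands of tested properties, not hand-written patterns.

```javascript
// Toy server-side device detection: classify the User-Agent header,
// then pick a template to match. Only a sketch of the flow; real
// device-detection solutions rely on maintained device databases.
function deviceClass(userAgent) {
  if (/iPad|Tablet/i.test(userAgent)) return 'tablet';
  if (/Mobi|Android|iPhone/i.test(userAgent)) return 'mobile';
  return 'desktop';
}

console.log(deviceClass('Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)'));
// mobile
```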
The effectiveness and reliability of this technique speaks for itself: it is used by almost every major internet brand that takes its mobile presence seriously, including Google, Facebook, Amazon, Netflix, YouTube, eBay and Yahoo.
2. Responsive web design
If adaptive web design is about content adaptation, then responsive web design is about resolution independence. RWD is based on a set of design principles and techniques that allow a website to be flexible enough to work well at varying screen resolutions, including mobile devices. RWD has, at its core, three main techniques:
- A flexible grid—making sure that the underlying page grid scales nicely with screen resolution rather than using fixed pixel dimensions
- Flexible images—images that work well within a flexible grid
- CSS media queries—using CSS styling tailored to ranges of resolutions or types of device
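The three techniques above can be shown in miniature in a few lines of CSS. A minimal sketch with hypothetical class names:

```css
/* A fluid grid: proportional widths rather than fixed pixels. */
.column {
  float: left;
  width: 33.3%;
}

/* Flexible images: scale down with their containing column. */
.column img {
  max-width: 100%;
  height: auto;
}

/* A media query: stack the columns on narrow screens. */
@media (max-width: 600px) {
  .column {
    float: none;
    width: 100%;
  }
}
```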
By using these techniques it is possible to serve a single HTML document to a wide range of devices and expect a reasonable result: with a bit of hackery to support older browsers, sites built using this technique will typically work well on all desktop browsers and most smartphones. It can be considered more of a one-size-fits-all approach to web design.
RESS (Responsive design with Server Side components) is a technique that addresses some of the shortcomings of the RWD approach by combining responsive techniques with server side adaptation. It’s a best-of-both-worlds approach. The drawback is that it is more difficult to implement.
Generally, the RESS approach works by starting off with a server-adapted page that is optimized for the user’s device. Having delivered this page to the browser, it can then be refined using client-side responsive techniques to further tune the experience.
This approach gets around the problems associated with the one-size-fits-all approach of RWD, while making server-side adapted pages more resilient to configuration differences and customizations on any individual device.
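The server-side half of RESS can be sketched as a simple lookup: given a detected device class, choose an appropriately sized base payload, and leave fine-grained adjustment to client-side media queries like those shown earlier. The template and image names below are hypothetical.

```javascript
// RESS in miniature: the server adapts the initial payload to the
// detected device class; the delivered page then refines itself
// client-side (e.g. with media queries). Names are hypothetical.
const templates = {
  mobile:  { page: 'mobile.html',  heroImage: 'hero-480.jpg' },
  desktop: { page: 'desktop.html', heroImage: 'hero-1920.jpg' },
};

function selectPayload(deviceClass) {
  // Fall back to the desktop payload for unrecognized device classes.
  return templates[deviceClass] ?? templates.desktop;
}

console.log(selectPayload('mobile').heroImage); // hero-480.jpg
```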