Performance tools provide a baseline, but don’t deliver the full story
Anyone following Google knows that the search giant is actively working to increase mobile-site speed across the web. In fact, Google is banging the drum, announcing that the next major algorithm update will, again, focus on site performance.
Users abandon slow sites quickly, which means slow sites are rarely seen, so the push for a faster web benefits everyone, from searchers to retailers to advertisers. And the pressure is on, as greater performance correlates with greater revenue. This reality has led to the emergence of new technologies to power the web. Frameworks like AMP and Progressive Web Apps, along with pre-fetching and edge capabilities, have made sub-second experiences possible, but they also make performance difficult to measure.
Because widely used performance tools don’t capture the whole story, and in some instances ignore speed-optimizing techniques, their reports often don’t align with real-user experiences. This isn’t a dirty secret; rather, it reflects a shift in how performance should be measured.
Let’s put it this way: if Amazon.com – arguably the best shopping experience out there – does not pass the Core Web Vitals assessment, there’s more to the story.
What to look out for when measuring site speed
We all enjoy the simplicity of entering a URL into a lab measurement tool and waiting for the results. Tools like Lighthouse and PageSpeed Insights are easy to use and, with just one click, return an aggregate score. While the results offer a good litmus test, they are limited and often obscure real performance. Let’s break down what to be aware of.
Cold vs Warm Cache
Lab tools base their scores on a cold load, where the cache is empty and the server is called for the first time. Unfortunately, websites that use PWA features and service workers – technologies that promote speed – get no credit for them. A service worker, for example, precaches assets and creates an incredibly fast experience for returning users and for users who enter through organic search. In practice, the vast majority of site visitors benefit from the service worker; lab-benchmarking tools, however, don’t account for it.
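To make the cold-versus-warm distinction concrete, here is a simplified TypeScript sketch of the cache-first strategy a service worker applies. It is a model, not real service-worker code: a Map stands in for the Cache Storage API and the network response is simulated, so the logic can run anywhere.

```typescript
// Simplified model of a service worker's cache-first strategy. A real
// service worker uses the Cache Storage and Fetch APIs; here a Map
// stands in for the cache and the network response is simulated.
type Source = "network" | "cache";

class CacheFirst {
  private cache = new Map<string, string>();

  // Install step: precache known assets, as a service worker would.
  precache(assets: Record<string, string>): void {
    for (const [url, body] of Object.entries(assets)) {
      this.cache.set(url, body);
    }
  }

  // Fetch step: serve from the warm cache when possible; otherwise go
  // to the (simulated) network and cache the response for next time.
  fetch(url: string): { body: string; source: Source } {
    const cached = this.cache.get(url);
    if (cached !== undefined) {
      return { body: cached, source: "cache" };
    }
    const body = `network-response:${url}`; // stand-in for a real request
    this.cache.set(url, body);
    return { body, source: "network" };
  }
}

const sw = new CacheFirst();
sw.precache({ "/app.js": "bundle" });

console.log(sw.fetch("/app.js").source); // "cache": precached, warm from the start
console.log(sw.fetch("/page2").source);  // "network": the cold load lab tools score
console.log(sw.fetch("/page2").source);  // "cache": the warm repeat most visitors get
```

A lab tool effectively measures only the middle case, the cold network hit, while most real visits follow the first and third paths.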
Single Page Applications (SPAs)
It’s well understood that SPAs improve site speed. After an initial load, SPAs enable quick page transitions while using less processing to render. Google’s Chrome User Experience Report (CrUX) and Google Analytics (GA) are examples of reporting tools – powered by real-world user data – that only capture the first page load in a browsing session. So while subsequent page transitions are fast, that speed isn’t accounted for by CrUX or GA.
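The measurement gap can be illustrated with a toy session recorder: one view, mirroring the first-load-only behavior described above, keeps just the initial load, while the other records every SPA transition. The routes and millisecond figures are invented for illustration.

```typescript
// Toy session recorder contrasting first-load-only measurement with
// measuring every SPA transition. Routes and durations are illustrative.
interface PageLoad {
  route: string;
  ms: number;
}

// Mimics tools that only capture the initial page load of a session.
function firstLoadOnly(session: PageLoad[]): PageLoad[] {
  return session.slice(0, 1);
}

function averageMs(loads: PageLoad[]): number {
  return loads.reduce((sum, load) => sum + load.ms, 0) / loads.length;
}

// A typical SPA session: a slower cold start, then fast client-side transitions.
const session: PageLoad[] = [
  { route: "/", ms: 2400 },        // initial full load
  { route: "/category", ms: 180 }, // SPA transition
  { route: "/product", ms: 150 },  // SPA transition
];

console.log(averageMs(firstLoadOnly(session))); // 2400: what first-load-only tools report
console.log(averageMs(session));                // 910: closer to what the user feels
```

The same session produces two very different numbers depending on what the recorder is willing to see.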
Pre-fetching
Total requests have traditionally been treated as a penalty to site performance, and they continue to be factored into performance scores. However, techniques like pre-fetching, which increase requests, also boost performance. Predictive prefetching, coupled with edge computing, lets websites stream dynamic content to the browser before it’s even requested. This approach creates the appearance of instant loading and is a perfect fit for complex, database-driven websites. So, while prefetching delivers sub-second page loads, in a controlled environment like Lighthouse it’s penalized.
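Here is a minimal sketch of the predictive-prefetching idea, with invented names and a Map standing in for the prefetch store. A real implementation would use link prefetch hints or an edge worker, but the trade-off is the same: more requests up front, instant navigation afterward.

```typescript
// Minimal predictive-prefetch sketch. The prediction itself is assumed
// to come from elsewhere (e.g. browsing patterns); a Map models the
// prefetch store and the network is simulated.
class Prefetcher {
  private store = new Map<string, string>();
  networkRequests = 0;

  // Fetch routes predicted as likely next navigations ahead of time.
  prefetch(routes: string[]): void {
    for (const route of routes) {
      if (!this.store.has(route)) {
        this.store.set(route, this.fetchFromNetwork(route));
      }
    }
  }

  // Navigation resolves instantly when the route was prefetched.
  navigate(route: string): { html: string; instant: boolean } {
    const hit = this.store.get(route);
    if (hit !== undefined) {
      return { html: hit, instant: true };
    }
    return { html: this.fetchFromNetwork(route), instant: false };
  }

  private fetchFromNetwork(route: string): string {
    this.networkRequests += 1; // prefetching raises the request count...
    return `<html>${route}</html>`;
  }
}

const p = new Prefetcher();
p.prefetch(["/category", "/product"]); // predicted next steps
console.log(p.navigate("/category").instant); // true: content was already streamed in
console.log(p.navigate("/checkout").instant); // false: not predicted, fetched on demand
console.log(p.networkRequests); // 3: more requests overall, yet faster navigation
```

A request-counting score sees only the 3; the user sees the instant transition.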
AMP
The AMP framework is designed to reduce page-load times. Part of the magic of AMP is that entire pages discovered in SERPs are prefetched, including images above the fold, and pages are prerendered on the mobile device before the user taps the SERP listing. Users get the effect of instant loading, as they can immediately consume content. While this behavior is unique to AMP pages served from Google search, it’s a valuable technique that promotes speed, and it isn’t factored into the various performance-evaluation tools.
In summary, testing tools provide a snapshot, but they rely on the first page load rather than browsing speed and a user’s practical experience.
How to get the full story
Network-based metrics, like download size, total requests, and download time, continue to be weighted as part of overall performance scores. But Google is constantly changing things up: the latest version of Lighthouse includes more perceptual metrics, reflecting what users actually experience, with the addition of Largest Contentful Paint (LCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS). As you perform tests, keep the following guidelines in mind.
Perception is reality
Largest Contentful Paint (LCP), which tracks when the largest visible element renders, is a metric that tells an emotional story. LCP indicates when users perceive that a page has rendered. You can argue it is the single most important metric, as it measures when content above the fold is rendered, letting users cognitively move forward.
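Google publishes field thresholds for LCP as part of Core Web Vitals: 2.5 seconds or less is rated good, and anything over 4 seconds is rated poor. A small helper to bucket a measurement accordingly:

```typescript
// Bucket an LCP measurement using Google's published Core Web Vitals
// thresholds: good at 2.5 s or less, poor above 4 s.
type LcpRating = "good" | "needs-improvement" | "poor";

function rateLcp(lcpMs: number): LcpRating {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs-improvement";
  return "poor";
}

console.log(rateLcp(1800)); // "good"
console.log(rateLcp(3200)); // "needs-improvement"
console.log(rateLcp(5000)); // "poor"
```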
Go beyond first-page load
Most performance tools focus on the first page load with a cold cache. Of course this is important, as Google ranks faster sites higher. But it’s not the full story: subsequent page loads are usually where users are left waiting. Once you’ve captured a user’s attention, you can’t lose them as they drill down from category to product and ultimately to checkout. Browsing transitions, which correlate with better conversions, also need to be measured. To do this, a combination of field analytics and synthetic tests must be used to pinpoint transition speed.
Balance with Real User Measurement (RUM)
Synthetic tests should be counterbalanced with RUM data, which measures the actual experiences of real users. While RUM data takes time to aggregate, it provides the most accurate measurement of a user’s experience.
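Since Core Web Vitals field data is commonly summarized at the 75th percentile, aggregating RUM samples can be as simple as a percentile cut over collected measurements. The sample values below are illustrative.

```typescript
// Summarize RUM samples at a percentile cut. At p75, the returned value
// means 75% of measured page loads were at least that fast.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Illustrative LCP samples (ms) collected from real users in the field.
const lcpSamples = [1200, 1900, 2100, 2600, 3400, 1500, 2300, 4100];
console.log(percentile(lcpSamples, 75)); // 2600
```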
We can help you isolate performance
While testing tools have limitations, there are methods to isolate and assess real-world performance. We use custom events added to Google Analytics, combined with various reporting tools, to track page-transition speeds in Single Page Applications.
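As a sketch of that approach, the helper below packages a route transition’s duration as a custom-event payload. The event and field names are our own illustration, not an official Google Analytics schema; in the browser, the timestamps would come from performance.now() and the payload would be handed to the analytics snippet.

```typescript
// Package an SPA route transition as a custom-event payload. The event
// and field names are illustrative, not an official Google Analytics
// schema; in the browser the timestamps would come from performance.now().
interface TransitionEvent {
  event: "spa_transition";
  fromRoute: string;
  toRoute: string;
  durationMs: number;
}

function buildTransitionEvent(
  fromRoute: string,
  toRoute: string,
  startMs: number,
  endMs: number,
): TransitionEvent {
  return {
    event: "spa_transition",
    fromRoute,
    toRoute,
    durationMs: Math.round(endMs - startMs),
  };
}

// Timestamps captured around a client-side route change.
const evt = buildTransitionEvent("/category", "/product", 1000, 1183);
console.log(evt.durationMs); // 183: the transition speed reported as a custom event
```

Reporting these events alongside standard pageview data fills in the transition speeds that first-load tools miss.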
We help you stay one step ahead of Google and deliver the speed and quality of experience that the search giant factors into ranking websites. To learn more, please email firstname.lastname@example.org.