Core Web Vitals Uses Real-World Traffic for Scoring

Core Web Vitals – Google’s upcoming search ranking signal – focuses on three specific areas of the user experience: loading, interactivity, and visual stability. Each area has a specific metric that can be measured and tracked: Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).


As a Google developer site described last year, “Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and will be surfaced across all Google tools. Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.”

In order to pass the web vitals assessment, a page’s 75th-percentile value for each metric, measured over the previous 28 days, must meet the recommended target – meaning at least three out of four visitors experience the target level of performance or better. A page meeting the recommended targets for all three metrics passes the assessment. For new pages without 28 days of data, Google aggregates similar pages and computes scores based on those groupings.
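The 75th-percentile logic above can be sketched in a few lines of Python. This is a simplified illustration, not Google’s actual implementation: the sample data is hypothetical, and the thresholds are Google’s published “good” targets for each metric (LCP ≤ 2,500 ms, FID ≤ 100 ms, CLS ≤ 0.1).

```python
# Sketch of the 75th-percentile pass/fail check, assuming hypothetical
# field samples. Thresholds are Google's published "good" targets.
from statistics import quantiles

# Recommended "good" thresholds for each Core Web Vital
THRESHOLDS = {"lcp_ms": 2500, "fid_ms": 100, "cls": 0.1}

def p75(samples):
    """75th percentile of a list of field measurements."""
    # quantiles(n=4) returns the three quartiles; index 2 is the 75th percentile
    return quantiles(samples, n=4)[2]

def passes_assessment(field_data):
    """field_data maps each metric to 28 days of real-user samples."""
    return all(p75(field_data[m]) <= limit for m, limit in THRESHOLDS.items())

example = {
    # One visit in four is slow enough to push the LCP p75 past 2.5 s,
    # so this hypothetical page fails the assessment.
    "lcp_ms": [1800, 2100, 2400, 3200],
    "fid_ms": [10, 20, 30, 40],
    "cls": [0.01, 0.02, 0.05, 0.09],
}
print(passes_assessment(example))
```

Because the check is percentile-based, a handful of outlier visits cannot sink a page on their own, but a slow experience for the slowest quarter of visitors will.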

The way Google calculates the web vitals assessment – effectively, looking at a month’s worth of actual site traffic – might suggest that traffic volume is a scoring factor. However, Google’s Senior Webmaster Trends Analyst John Mueller laid those concerns to rest in a statement to Search Engine Journal, explaining, “So, for Core Web Vitals, the traffic to your site is not important as long as you … reach that threshold that we have data for your website.”

He added, “ … kind of the pure number of visitors to your site is not a factor when it comes to Core Web Vitals and generally not a factor for ranking either.”

Real-world UX data

Perhaps the most important aspect of Core Web Vitals to understand is that the data Google uses for scoring is real-world field data. It comes from the Chrome User Experience Report, a Google report based on actual user visits and webpage interactions – not on simulated page loads, and not on page visits by bots rather than people.
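The field data described above is also exposed through the public Chrome UX Report API. The sketch below shows one way to query a page’s 75th-percentile values with only the standard library; the endpoint and metric names follow the CrUX API documentation, while the API key and example URL are placeholders you would supply yourself.

```python
# Sketch of querying a page's field data from the Chrome UX Report API.
# The endpoint and metric names follow the public CrUX API docs;
# the api_key and page URL are placeholders.
import json
from urllib import request

CRUX_ENDPOINT = (
    "https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord"
)

def build_crux_query(page_url):
    """Request body asking for the three Core Web Vitals for one URL."""
    return {
        "url": page_url,
        "metrics": [
            "largest_contentful_paint",
            "first_input_delay",
            "cumulative_layout_shift",
        ],
    }

def fetch_p75(page_url, api_key):
    """POST the query and pull out each metric's 75th-percentile value."""
    body = json.dumps(build_crux_query(page_url)).encode()
    req = request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        record = json.load(resp)["record"]
    return {
        name: metric["percentiles"]["p75"]
        for name, metric in record["metrics"].items()
    }
```

Because the API only returns data for URLs with enough real Chrome traffic, a request for a brand-new or low-traffic page may come back empty – which is exactly the aggregation gap Google fills by grouping similar pages.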

In practice, this means lab simulation tools such as Lighthouse can report different metric values. Lighthouse, while valuable, creates a snapshot of what a hypothetical site visitor’s experience might be and provides developers with suggestions for areas to improve. The Chrome User Experience Report’s numbers differ because they come from field data – from how actual users experience the site.

And even though Core Web Vitals is a Google ranking signal, according to Mueller, the quality of the content is still vital.

“And the other thing is that relevance is still by far much more important,” he told Search Engine Journal. “We still require that relevance is something that should be kind of available on the site. It should make sense for us to show the site in the search results because, as you can imagine, a really fast website might be one that’s completely empty. But that’s not very useful for users.”