Last updated: January 28, 2026
Google’s Core Web Vitals (CWV) are a set of user‑experience metrics designed to quantify how real users experience the web. They are not abstract SEO scores; they are signals derived from how pages load, respond, and remain visually stable for users.
This guide is written as a practical, engineering‑led reference. Rather than focusing only on definitions or scores, it explains what each metric measures, why pages score poorly, and how to diagnose and fix the underlying causes.
Throughout, the emphasis is on diagnosis and action, not just measurement.
Core Web Vitals are a subset of Google’s Page Experience signals. They focus on three aspects of user experience: loading, responsiveness, and visual stability.
Google evaluates these using aggregated real‑user data over time. Individual page loads may vary, but Core Web Vitals reflect the overall experience users have in the real world.
Largest Contentful Paint (LCP)
What it measures: Loading experience
LCP measures how long it takes for the largest visible element (such as a hero image or main heading) to render within the viewport.
A slow LCP usually indicates server response delays, large media assets, or render‑blocking resources.
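You can watch LCP directly in the browser. The snippet below is a minimal sketch using the standard PerformanceObserver API; production tooling (such as Google’s open‑source web-vitals library) adds edge‑case handling for tab visibility and the back/forward cache that is omitted here.

```ts
// Minimal LCP observation sketch using the standard PerformanceObserver API.
const lcpObserver = new PerformanceObserver((entryList) => {
  // The browser may report several candidates as larger elements
  // render; the most recent entry is the current LCP candidate.
  const entries = entryList.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate (ms):', latest.startTime);
});

// `buffered: true` replays entries that fired before this script ran,
// which matters when the observer is registered late.
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```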
Interaction to Next Paint (INP)
What it measures: Responsiveness
INP measures how quickly a page responds to user interactions, such as clicks or taps. It replaced First Input Delay (FID) as Google’s primary responsiveness metric.
Poor INP scores are commonly caused by heavy JavaScript execution or long tasks blocking the main thread.
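Because INP aggregates latencies across all of a page’s interactions, measuring it by hand is fiddly. A minimal sketch using the web-vitals package, which implements the official calculation:

```ts
// Report INP using the `web-vitals` package (npm install web-vitals).
import { onINP } from 'web-vitals';

onINP((metric) => {
  // `value` is the interaction latency in milliseconds;
  // `rating` is 'good' | 'needs-improvement' | 'poor'.
  console.log('INP:', Math.round(metric.value), metric.rating);
  // `metric.entries` holds the underlying event timing entries,
  // useful for pinpointing which interaction was slow.
});
```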
Cumulative Layout Shift (CLS)
What it measures: Visual stability
CLS measures how much the layout shifts unexpectedly during page load or interaction.
High CLS often results from images, fonts, or ads loading without reserved space.
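Layout shifts can be observed with the browser’s Layout Instability API. The sketch below simply sums shift scores; the real CLS metric groups shifts into session windows, so treat this as an approximation (the web-vitals library implements the full definition).

```ts
// `LayoutShift` is not yet in TypeScript's default DOM types,
// so declare the two fields used here.
interface LayoutShift extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let clsTotal = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShift[]) {
    // Shifts shortly after user input are excluded from CLS.
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
    }
  }
  console.log('Layout shift total so far:', clsTotal.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });
```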
Pages that load quickly, respond immediately, and remain visually stable are easier and more pleasant to use. Poor Core Web Vitals often correlate with frustration, misclicks, and abandonment.
Core Web Vitals are part of Google’s Page Experience signals. While they are not the sole ranking factor, consistently poor CWV performance can limit a page’s ability to compete in search results.
Performance issues frequently impact conversion rates, engagement, and retention. Improving Core Web Vitals often delivers benefits beyond SEO alone.
Core Web Vitals scores indicate how a page performed for users. They do not explain why it performed that way.
CWV scores are outcome metrics. They are useful for benchmarking and prioritisation, but meaningful improvements require diagnostic insight into what happens during page load and execution. This is where page speed monitoring and request waterfalls become essential.
Page speed monitoring provides repeatable tests that show how a page loads under controlled conditions. While page speed metrics are not identical to Core Web Vitals, they strongly correlate with them and are one of the most practical ways to identify performance bottlenecks.
For most teams, page speed monitoring is the fastest way to identify bottlenecks, validate fixes, and catch regressions before users feel them.
A page speed waterfall visualises every request made during page load and how long each takes. This turns performance issues into concrete, actionable problems.
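You can get a first, crude look at this data without any tool: every browser records it via the Resource Timing API. A minimal sketch that prints a text waterfall to the console:

```ts
// Print a crude request waterfall from the browser's Resource Timing data.
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

for (const r of [...resources].sort((a, b) => a.startTime - b.startTime)) {
  const start = Math.round(r.startTime);                  // ms after navigation start
  const duration = Math.round(r.responseEnd - r.startTime);
  console.log(`${start} ms  +${duration} ms  ${r.initiatorType}  ${r.name}`);
}
```

Dedicated monitoring tools layer scheduling, history, and visual rendering on top of this same underlying data.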
A waterfall helps identify where load time is spent: slow initial server responses, render‑blocking CSS and JavaScript, oversized assets, and late third‑party requests.
If the largest visual element appears late in the waterfall, it often explains a poor LCP score.
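One way to connect the two programmatically: when the LCP element is an image, the LCP entry carries its URL, which can be looked up in the resource‑timing waterfall. A sketch (note that the `url` field is empty for text‑based LCP elements):

```ts
// `LargestContentfulPaint` is not in TypeScript's default DOM types.
interface LargestContentfulPaint extends PerformanceEntry {
  url: string; // set for image LCP candidates, empty for text
}

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1] as LargestContentfulPaint;
  if (!lcp.url) return; // text-based LCP: nothing to look up

  // Resource timing shows where this asset sat in the waterfall.
  const [resource] = performance.getEntriesByName(
    lcp.url,
    'resource',
  ) as PerformanceResourceTiming[];

  if (resource) {
    console.log('LCP asset requested at:', Math.round(resource.startTime), 'ms');
    console.log('LCP asset finished at:', Math.round(resource.responseEnd), 'ms');
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```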
While INP is influenced by real user interactions, waterfalls often reveal contributing factors such as large JavaScript bundles, long‑running scripts, and heavy third‑party tags.
These patterns frequently correlate with responsiveness problems.
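Long main‑thread tasks (over 50 ms) are among the most common of these patterns, and the browser’s Long Tasks API surfaces them directly. A minimal sketch:

```ts
// Log main-thread tasks longer than 50 ms, a frequent cause of poor INP.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.warn(
      `Long task: ${Math.round(task.duration)} ms, ` +
      `starting ${Math.round(task.startTime)} ms after navigation`,
    );
  }
}).observe({ type: 'longtask', buffered: true });
```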
Waterfalls highlight resources that load late and cause layout shifts, including web fonts that swap in late, ad slots without reserved dimensions, and dynamically injected content.
Once you understand where time is being spent during page load, the next step is deciding what to change. The table below maps each Core Web Vitals metric to common symptoms, likely causes, and the areas teams typically optimise.
| CWV metric | What you’ll see | Common causes | Where to look in the waterfall | Typical fixes |
|---|---|---|---|---|
| LCP | Main content appears late | Slow TTFB, large images, blocking CSS | Long initial request, late-loading hero asset | Image optimisation, caching, CSS prioritisation |
| INP | Page feels sluggish to interact | Heavy JS, long tasks, third-party scripts | Large JS bundles, long execution gaps | Code splitting, deferring scripts, reducing JS |
| CLS | Page jumps during load | Late fonts, ads, injected content | Resources loading after render | Reserve space, fix font loading, stabilise embeds |
This approach focuses on continuous improvement rather than chasing isolated metrics.
Many of these issues are visible immediately in a request waterfall.
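As one concrete example of the “code splitting, deferring scripts” fixes in the table, a heavy module can be loaded on demand rather than shipped in the main bundle. This is a sketch; `./heavy-chart`, `#show-chart`, and `renderChart` are hypothetical names.

```ts
// Load a heavy module only when the user asks for it, so it never
// blocks initial load or early interactions. Bundlers such as
// webpack, Vite, or esbuild split the dynamic import into its own chunk.
const button = document.querySelector<HTMLButtonElement>('#show-chart');

button?.addEventListener('click', async () => {
  const { renderChart } = await import('./heavy-chart'); // hypothetical module
  renderChart(document.querySelector('#chart-root')!);   // hypothetical API
});
```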
Are Core Web Vitals measured for the whole site?
No. Core Web Vitals are measured per URL. Different pages can have very different scores depending on content and complexity.
Do mobile scores matter more than desktop?
Yes. Google primarily evaluates Core Web Vitals using mobile user data, reflecting real‑world usage patterns.
Can scores change over time?
Yes. Scores can change due to deployments, traffic patterns, infrastructure changes, or user behaviour.
Is optimising Core Web Vitals a one‑off task?
No. Performance is an ongoing concern. Regular monitoring helps prevent regressions and maintain a good user experience.
Page speed monitoring tools, such as StatusCake, provide visibility into load behaviour and performance trends. Waterfall views make bottlenecks explicit, helping teams understand where changes will have the greatest impact on user experience.
The value lies not in the score itself, but in the ability to diagnose and improve performance over time.