Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



Google has finally started rolling out its algorithm update, much to the dismay of many website owners. Unfortunately, we don’t have a choice in the matter; we just have to get on board and make sure our websites are in tip-top condition, so that the search engine giant can’t find a reason to penalise us or drop our rankings.
Page speed refers to the average time a page takes to load when a visitor clicks through to your website. You might not think page speed is a big deal, but you’d be surprised at the drop-off rate on slow pages.
If you could get your website to load as fast as Usain Bolt ran the 100m back in 2012, that would be the ultimate dream. Unfortunately, getting page load speed that quick is a difficult task, even for the most experienced developers and website owners.
So how do you manage page speed? There are a few ways to increase it so it meets Google’s now (very) high standards: compress and lazy-load images, minify CSS and JavaScript, enable browser caching, and serve static assets through a CDN.
It’s very important to monitor page speed regularly, especially after any development changes that could affect it. Left unmonitored, slow pages can lead to a drop in traffic and page interaction, as well as lower Google rankings and, ultimately, lost revenue. Ouch.
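As an illustration, here is a minimal sketch of how you might check timed load samples against a performance budget. The function names and the 2.5-second budget are assumptions for the example, not part of any particular monitoring tool:

```python
def average_ms(samples_ms):
    """Mean load time across a set of timed page loads, in milliseconds."""
    return sum(samples_ms) / len(samples_ms)

def within_budget(samples_ms, budget_ms=2500):
    """True if the average load time stays inside the performance budget."""
    return average_ms(samples_ms) <= budget_ms

# Example: five timed loads of the same page
loads = [1800, 2100, 1950, 2600, 2200]
print(average_ms(loads))     # 2130.0
print(within_budget(loads))  # True
```

In practice the samples would come from repeated checks over time, so a regression after a deployment shows up as the average drifting past the budget.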
Conversion rate is a broader metric: it measures the visitors who actually complete whatever action you want from them. If you’re selling a service, for example, you can measure the number of purchases against the number of visitors on the page. The same goes for businesses with a subscription service, which can measure the number of new sign-ups.
It’s important to check this metric whenever a change has been made or a marketing promotion has gone out, to see the impact it has had on conversions. A consistently low conversion rate can also be an indicator of a poorly performing website that needs work.
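The calculation itself is simple. A hedged sketch, using illustrative numbers rather than real analytics data:

```python
def conversion_rate(conversions, visitors):
    """Conversions as a percentage of total visitors."""
    if visitors == 0:
        return 0.0
    return round(100 * conversions / visitors, 2)

# Example: 40 new sign-ups from 2,000 visitors
print(conversion_rate(40, 2000))  # 2.0
```

Comparing this figure before and after a site change or promotion shows whether the change actually moved the needle.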
The infamous bounce rate. We’d all love to say we have a 0% bounce rate, but unfortunately that’s the stuff dreams are made of.
Bounce rate should be monitored alongside other metrics to help identify how customers interact with your website. If the bounce rate is very high (over 70%), there’s a real problem that needs fixing; a bounce rate at that level will inevitably have an impact on almost everything, including your conversion rate.
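To make the 70% threshold concrete, here is a small sketch. Bounce rate is single-page sessions as a share of all sessions; the function names and figures are illustrative assumptions:

```python
def bounce_rate(single_page_sessions, total_sessions):
    """Single-page sessions as a percentage of all sessions."""
    return round(100 * single_page_sessions / total_sessions, 1)

def needs_attention(rate, threshold=70.0):
    """Flag bounce rates above the 70% danger level discussed above."""
    return rate > threshold

rate = bounce_rate(355, 500)  # 71.0
print(needs_attention(rate))  # True
```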
Ways to reduce your bounce rate include speeding up page loads, making sure your content matches what visitors actually searched for, improving readability, and adding clear internal links and calls to action.
Error rate is a little more technical than the previous metrics on this list, but it’s still a very important one to monitor. Any errors on your website will impact how Google ranks you, as on-page errors greatly affect the user experience.
In a nutshell, error rate measures the number of failed requests as a proportion of the total requests your website receives. It’s very difficult to prevent errors entirely, so the best thing to do is use an end-user monitoring tool that can show you what goes wrong for a user, and when. The biggest error you don’t want to see? The infamous 404 page.
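As a rough sketch of the calculation, assuming you can pull a sample of HTTP status codes from your access logs (the sample data here is made up):

```python
from collections import Counter

def error_rate(status_codes):
    """Share of requests that returned a 4xx or 5xx status, as a percentage."""
    errors = sum(1 for code in status_codes if code >= 400)
    return round(100 * errors / len(status_codes), 2)

# Sample of recent responses pulled from access logs
codes = [200, 200, 404, 200, 500, 200, 200, 200, 404, 200]
print(error_rate(codes))    # 30.0
print(Counter(codes)[404])  # 2 requests hit the dreaded 404
```

A real monitoring tool aggregates this continuously and tells you which URLs are failing, but the underlying ratio is the same.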
Average time on page is a great way to measure how customers interact with a specific page. This metric lets your analytics compare the best-performing pages against the worst, so you can see why some pages perform well and why others do not.
Here at StatusCake, we always aim for an average of over 1 minute 30 seconds, because with that amount of time we can safely assume three things: the visitor is actually reading the content, they are engaging with the page rather than bouncing straight off, and they have found something relevant to what they searched for.
How have we come to this conclusion?
Measuring the time a customer spends on your website, and on each page, ultimately shows how engaged they are. Pages with a high average time on page, for example, might mean that visitors find value in the content or like the layout, and those are things you can replicate on poorer-performing pages. The aim is to get visitors to spend as long as possible on the website and, hopefully, buy into the product or service being offered. So what can you do to keep them on a page? Clear structure, genuinely useful content and sensible internal links all help.
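The metric itself can be sketched in a few lines; the visit durations below are illustrative, and the 90-second target reflects the 1 minute 30 seconds figure mentioned above:

```python
def average_time_on_page(durations_s):
    """Mean visit duration for a page, in seconds."""
    return sum(durations_s) / len(durations_s)

def meets_engagement_target(durations_s, target_s=90):
    """True if average time on page clears a 1 min 30 s engagement target."""
    return average_time_on_page(durations_s) >= target_s

visits = [120, 45, 200, 95, 60]
print(average_time_on_page(visits))     # 104.0
print(meets_engagement_target(visits))  # True
```

Running this per page makes it easy to rank your best and worst performers side by side.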
For the ultimate website performance, use StatusCake’s website monitoring. I’m sure Google would be pleased.