Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



In this post, we’ll take you through the available options for managing sub-accounts, from managing permissions in-app to our new functionality that allows you to assign individual API keys to your users. First, let’s take a look at our new API Key Management feature, which supports multiple API keys.
It’s now possible to add multiple API keys to a single account. This means teams no longer need to share the same key; instead, each member of staff or department can be assigned their own access token.
As well as improving security, this lets you monitor the usage of each key in the API Keys section of the user menu. To view usage – and to add new keys for your team – simply click here.
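The steps above can be sketched in code. The snippet below is a minimal illustration of the per-user key pattern, assuming a Bearer-token authorization scheme; the member names and key values are hypothetical placeholders, not real credentials.

```python
# Sketch: give each team member their own API key instead of sharing one.
# Member names and key values below are illustrative placeholders.
API_KEYS = {
    "alice": "sc-key-alice-xxxx",
    "bob": "sc-key-bob-xxxx",
}

def auth_header(member: str) -> dict:
    """Build the Authorization header using the given member's own key,
    so every request is attributable to one person."""
    return {"Authorization": f"Bearer {API_KEYS[member]}"}

# Each user then makes requests with their own header, e.g. (assuming the
# requests library and a v1-style uptime endpoint):
#   requests.get("https://api.statuscake.com/v1/uptime",
#                headers=auth_header("alice"))
```

Because each key is tied to one person, revoking a departing team member’s access means deleting a single key rather than rotating a shared secret for everyone.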
On our Business-level plan, it’s possible to add Sub-Users who have access to the in-app data, letting them view the information in a much more visual fashion than the API provides. Depending on the permissions they’re assigned, Sub-Users can view the full settings and details of the tests on the “Main Account”.
You can assign a few different parameters to your Sub-Users. First, decide whether they should have “view-only” or “full-edit” rights, which determines how they can interact with the test data from the main account.
Another handy option is tag-based access: assign a Sub-User to one or more test tags, and when they log in, only the tests under those tags will be visible and available to interact with.