Gadget fiends got their first real-life sight of Google Glass in January this year when Google founder Sergey Brin was caught riding the New York subway wearing a pair of Google’s new glasses.
Since then the excitement about this cool new gadget has been tempered somewhat by an equally loud backlash from some technology and privacy commentators, who argue that we should be concerned about, not embracing, the new level of engagement and interactivity with our “real world” that Google Glass offers.
Google Glass, a pair of glasses with a miniaturized web-camera and browser, allows you to walk along the street – or indeed wherever you are – and like a fighter pilot, have a head-up display that allows you to surf the web, get email alerts and social media updates; much of which is voice controlled. All very exciting surely?
But what seems to have got everyone concerned is the ability for you to record everything that is going on around you. Of course we can already record the world around us with our smartphones. Most breaking news events already rely on footage shot by members of the public – so-called citizen journalists – whose first reaction to any event tends to be to grab the smartphone first, and help later!
So surely the ability to simply record content can’t be the issue for privacy advocates? Nor does it stack up to say that when someone records on a smartphone it’s more “obvious” – that you can spot who is filming you. I’m not sure that anyone wearing Google Glass is going to blend into the background – in the short term at least.
Perhaps it’s that people are more worried about the sinister way in which Google may act alone in using Google Glass data. After all Google is a data company. It lives and breathes data – and whilst it argues much of the time that the data it collects is simply used to improve its search engine algorithms, it does seem to have form for simply grabbing data – whether it has permission or not – with a view to storing it, even if it hasn’t at that point in time decided what it wants to do with it.
Critics of Google, and Google Glass, point to the way in which Google Books started scanning thousands upon thousands of books, many of which were still in copyright, without the permission of the author. They point to the Google Street View project. After the first Street View project got off the ground, many homeowners “removed” themselves from the project; however Google, having just republished fresh images, appears to have ignored homeowners’ “opt-outs.” That, combined with Google’s interception of passwords and other personal and sensitive data from WIFI networks during Street View, has put Google on a collision course with regulatory authorities in countries such as Germany, and more recently with the UK’s Information Commissioner’s Office.
So the argument seems to be that it’s not being filmed per se that people are concerned about – after all, in the UK we have almost 2 million CCTV cameras (whilst CCTV is less common in the US its use is growing – New York has around 3,000 cameras and Chicago 10,000) – but that CCTV is perceived to be there to protect us from crime. We hope at least that if we’re caught by a CCTV camera going into Tesco, we’re not suddenly going to find that information appearing on our Facebook page, or marketing from Tesco or its competitors coming through our letter box like confetti.
And that is the crux of the issue for privacy campaigners. Will vast streams of information be used to market to us? Will we suddenly find that status updates about us appear on social media, even though we’ve not posted them ourselves?
Beyond the fears of becoming targets for real-time personalized advertising, will Google Glass lead to a change in behavior? As Google Glass becomes more widespread, will people think about their behavior when they’re in public for fear of being filmed? And if so, is that a bad thing? Might we see more people queuing patiently in a store, more people giving up a seat for an elderly person on a train? That’s great – surely that makes Google Glass the nudge to a better society!?
Well, maybe not quite that. But what’s certain is this: despite the headlines there will almost certainly be about individual privacy breaches from Google Glass, despite individuals in Europe claiming Google Glass breaches their right to privacy under Article 8 of the European Convention on Human Rights (otherwise known as pay-day for lawyers!), and despite the ongoing battle between Google and government regulators, don’t think that any of it will spell the end of Google Glass. It’s here to stay – and will be hugely popular as prices come down. And for many individuals, particularly those who are younger, sharing information and using social media is a deeply ingrained part of their lives. They’ll embrace Google Glass, not reject it.
James Barnes, StatusCake.com