Solar Storms in 2015 Could Leave New York Without Power for Months

A study released last week by UK insurance company Lloyd’s of London and US-based researchers Atmospheric and Environmental Research painted an apocalyptic picture, the kind of thing only seen in end-of-the-world disaster movies, as it warned that large solar storms could leave tens of millions of people in North America without electricity for months on end, if not years.

The Lloyd’s report, Solar Storm Risk to the North American Electric Grid, investigated how solar storms give off huge blasts of plasma which, once they enter the Earth’s magnetic field, can interfere with electrical equipment on the ground and even in orbit, such as satellites.

Such massive geomagnetic storms are thankfully relatively rare. Although there have been large solar storms in recent years – a storm in 1989 left six million Canadians in Quebec without power for nine hours – the massive storms that the report warns of occur only around once every 150 years.

When was the last storm like this?

The last massive solar storm to hit the Earth was back in 1859, and we’re now overdue for another. The so-called Carrington Event of 1859 is considered by many to be one of the most severe storms on record, and if a storm of that size hit the US today it would affect some 20-40 million people, with blackouts lasting anywhere from a few weeks up to a couple of years. The report claims that the economic cost of such a storm could run to $2.6 trillion, with the knock-on repercussions for the global economy far higher – leading to prolonged periods of recession and potentially civil unrest.

With solar storms recurring in cycles, researchers have predicted that the next massive storm could hit our planet in 2015. And with only a couple of years until such a possible event, governments and those companies most likely to be affected are looking at ways to manage the risk and reduce disruption should a massive storm strike.

Whilst the doomsday report primarily assesses the risk to electrical and power infrastructure, in a world driven by electronics – all of which could be rendered useless by the storm – the impact could be truly frightening. Global air travel would almost certainly be grounded. Not only would the storm interfere with the control systems of the aeroplanes themselves, but the satellites that we all rely on for GPS and communications would almost certainly be knocked out.

Without electricity and communications, hospitals would struggle to support patients whose lives depended on electrical equipment. Schools and offices would be unable to open.

Those areas most likely to be hit by such a solar storm include the UK, while within the US, New York and Washington DC would almost certainly be affected.

Following the 1989 solar storm, the Canadian government spent £0.77bn on measures to block massive surges in electricity along its power lines – small change compared with the £8.72bn it cost to repair the system that was knocked out in just a couple of minutes. Prevention is clearly far better, and cheaper, than cure.

James Barnes, StatusCake.com
