One week ago, the United States and Israel launched coordinated strikes against Iran. We covered the initial attacks on this blog. Since then, the conflict has only escalated. U.S. Central Command reports striking over 3,000 targets and destroying 43 Iranian warships. At least 1,332 Iranian civilians have been killed. Hezbollah entered the war on March 2, expanding the fight into Lebanon. Six American servicemembers are dead. The UN estimates 330,000 people have been forcibly displaced across the Middle East. There is no ceasefire on the horizon.
There’s plenty of foreign policy commentary out there. What I want to focus on is what happened on March 2, because if you run IT infrastructure, it should have your full attention.
Drones Hit AWS
Iran’s retaliatory strikes didn’t just target military bases and oil ports. They hit data centers.
Three Amazon Web Services facilities were struck by drones in the first week of the war. Two availability zones in the UAE region (ME-CENTRAL-1) were directly hit. A third facility in Bahrain (ME-SOUTH-1) was damaged when a drone struck nearby. The attacks caused structural damage, knocked out power, and triggered fire suppression systems; the resulting water damage compounded the destruction. S3 storage in Dubai went down. AWS reported “high failure rates for data ingest and egress” across the region.
This was the first time a major cloud provider had data centers taken out by a military attack.
AWS told affected customers to “backup data and potentially migrate your workloads to alternate AWS Regions.” The Register reported that recovery took at least a day, requiring facility repairs, cooling system restoration, and coordination with local authorities. Fortune called the strikes a revelation of “the West’s Achilles heel.”
Iran’s IRGC reportedly targeted the Bahrain facility specifically because of Amazon’s contracts with the U.S. military. CNBC confirmed that Iranian state media framed the strike as retaliation against infrastructure supporting U.S. operations in the Gulf.
A thread on Reddit’s r/sysadmin put it bluntly: “This might be the first time a major cloud provider had data centers taken out by actual physical attacks. Made me realize I’ve been putting off testing our cross-region failover for way too long.”
That sysadmin isn’t alone.
Data Centers Are Physical Things
It’s easy to forget that “the cloud” is just someone else’s building full of servers. Mike Chapple, an IT professor at Notre Dame, told Fortune that data centers are “massive facilities that are hard to hide.” Their physical security is designed to stop intruders, not incoming ordnance.
AWS maintains three Middle Eastern regions: UAE, Bahrain, and Israel. Each region contains at least three availability zones within roughly 100 kilometers of each other. That geographic concentration is fine for low-latency performance. It’s a liability during a regional war.
Chapple also noted that losing a single data center is usually manageable because workloads balance across zones. But “the loss of multiple data centers within an availability zone could cause serious issues, as things could reach a point where there simply isn’t enough remaining capacity to handle all the work.” That’s exactly what happened in the UAE when two of three zones went down simultaneously.
I’m Not Expecting a War in Virginia
Let me be clear: I don’t think Iranian drones are coming for US-EAST-1. Honestly, I haven’t spent much time worrying about AWS availability at all. Maybe that’s the point: this is the kind of thing that deserves more space in my thinking than it has gotten.
But I wrote back in January that the cloud goes down sometimes. Software bugs, DNS failures, configuration errors, HVAC breakdowns—the cloud has always been vulnerable to these. What the UAE strikes proved is that the physical layer matters too, and that the failure modes we should be planning for aren’t limited to the ones in AWS’s post-incident reports.
Hurricanes hit the Southeast. Earthquakes hit the West Coast. Ice storms take out power grids. A fire at an OVHcloud data center in Strasbourg in 2021 permanently destroyed customer data. The specific threat doesn’t matter as much as the principle: if all your eggs are in one region, you have a single point of failure.
What to Actually Do About It
If you’re running workloads in a single region—or worse, a single availability zone—the UAE strikes are your wake-up call. Here’s where to start:
Know your tiers. Not every workload needs the same level of resilience. Your public website and your EHR don’t have the same recovery requirements. Classify your systems by how long you can afford to be down (RTO) and how much data you can afford to lose (RPO). Then match the DR strategy to the tier.
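One way to make that classification concrete is to write it down as data instead of leaving it in someone's head. Here's a minimal sketch; the workload names, tier labels, and RTO/RPO thresholds are illustrative assumptions, not any standard:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    rto_hours: float  # max tolerable downtime (Recovery Time Objective)
    rpo_hours: float  # max tolerable data loss (Recovery Point Objective)

def dr_tier(w: Workload) -> str:
    """Bucket a workload into a DR tier by its recovery requirements.
    The cutoffs here are examples; set your own with the business."""
    if w.rto_hours <= 1 and w.rpo_hours <= 0.25:
        return "tier-1: near-zero downtime"
    if w.rto_hours <= 8:
        return "tier-2: same-day recovery"
    return "tier-3: best effort"

inventory = [
    Workload("public-website", rto_hours=4, rpo_hours=24),
    Workload("ehr", rto_hours=0.5, rpo_hours=0.1),
]
for w in inventory:
    print(w.name, "->", dr_tier(w))
```

The point isn't the specific numbers; it's that every system ends up with an explicit tier you can match a DR strategy (and a budget) against.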
Multi-region isn’t optional for critical systems. AWS’s own disaster recovery whitepaper lays out four strategies ranging from cheap-and-slow (backup and restore) to expensive-and-fast (active-active). Most organizations land somewhere in the middle with pilot light or warm standby. The point is to have something in a second region that can take over when the first one can’t.
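The mapping from recovery targets to strategy can be sketched as a simple decision rule. The four strategy names below come from AWS's whitepaper; the numeric cutoffs are my own illustrative assumptions:

```python
def dr_strategy(rto_hours: float, rpo_hours: float) -> str:
    """Map recovery targets to one of the four strategies in AWS's
    DR whitepaper. Cutoffs are illustrative, not AWS guidance."""
    if rto_hours < 0.1:
        return "active-active (multi-site)"  # full capacity in two regions
    if rto_hours < 1:
        return "warm standby"    # scaled-down copy always running
    if rto_hours < 8:
        return "pilot light"     # data replicated, infra mostly off
    return "backup and restore"  # cheapest, slowest
```

Roughly: the tighter the RTO, the more you pay to keep a second region warm.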
Test your failover. A DR plan that’s never been tested is a hope, not a plan. Simulate a regional failure. Find out what breaks. That Reddit sysadmin admitted that multi-region was “the ‘eventually’ item on the backlog.” I suspect most of us could say the same.
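At its core, a failover drill verifies one property: when the primary region is marked unhealthy, traffic actually lands on the standby. Here's a toy version of that check; in production this logic lives in your routing layer (DNS failover, load balancer health checks), and the region names are just examples:

```python
# Priority-ordered regions: primary first, standby second (examples only).
REGIONS = ["me-central-1", "eu-west-1"]

def pick_active(healthy: set[str]) -> str:
    """Return the first healthy region in priority order."""
    for region in REGIONS:
        if region in healthy:
            return region
    raise RuntimeError("no healthy region: the DR plan has failed")

# Normal operation: primary serves.
assert pick_active({"me-central-1", "eu-west-1"}) == "me-central-1"
# Drill: declare the primary down and confirm the standby takes over.
assert pick_active({"eu-west-1"}) == "eu-west-1"
```

The toy version always passes. The real drill is running the equivalent against your actual stack and discovering which dependency didn't fail over with it.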
Consider multi-cloud for your most critical workloads. This is the expensive option and I won’t pretend it’s simple. Running the same workload on AWS and Google Cloud introduces real complexity in networking, identity, and data replication. But if your business truly cannot survive the loss of a single provider, it might be the only honest answer. At minimum, make sure your backups exist outside the cloud provider that hosts your production systems.
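That last point is checkable: every production backup should exist, byte-identical, in a copy held by a different provider. A minimal sketch of that verification, using dicts as stand-ins for buckets (in practice you'd list real object stores):

```python
import hashlib

def manifest(store: dict[str, bytes]) -> dict[str, str]:
    """Checksum every object in a backup store."""
    return {k: hashlib.sha256(v).hexdigest() for k, v in store.items()}

def offsite_copy_complete(primary: dict[str, bytes],
                          offsite: dict[str, bytes]) -> bool:
    """True if every primary object exists, unchanged, offsite."""
    p, o = manifest(primary), manifest(offsite)
    return all(o.get(k) == h for k, h in p.items())

prod = {"db-dump.sql": b"<dump>", "configs.tar": b"<tarball>"}
offsite = dict(prod)  # pretend this was synced to a second provider
assert offsite_copy_complete(prod, offsite)

offsite.pop("configs.tar")  # a drifted or failed sync should be caught
assert not offsite_copy_complete(prod, offsite)
```

Run something like this on a schedule and a silently broken backup sync becomes an alert instead of a discovery you make mid-disaster.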
Don’t forget DNS and identity. In the October 2025 AWS outage, a DNS failure in US-EAST-1 paralyzed over 3,500 companies. Your failover architecture is only as good as its weakest dependency. If your DNS, authentication, or certificate infrastructure is pinned to a single region or provider, that’s where your plan will break.
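Finding those weak links can start as a simple audit: list every supporting service and the regions it runs in, then flag anything pinned to one region. The service names and dependency map below are hypothetical examples:

```python
# Hypothetical inventory: service -> regions it can operate from.
deps = {
    "dns":   {"us-east-1"},               # pinned: single region
    "auth":  {"us-east-1", "eu-west-1"},
    "certs": {"us-east-1"},               # pinned
    "app":   {"us-east-1", "eu-west-1"},
}

def single_region_deps(deps: dict[str, set[str]]) -> list[str]:
    """Dependencies that would take the failover plan down with them."""
    return sorted(name for name, regions in deps.items()
                  if len(regions) < 2)

print(single_region_deps(deps))  # -> ['certs', 'dns']
```

Your app tier can be beautifully multi-region and still be hostage to a single-region certificate authority or identity provider; this kind of audit is how you find out before the outage does.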
The Uncomfortable Truth
The cloud is still worth it. I said that in January and I’ll say it again. The engineering talent, redundancy, and monitoring that AWS, Azure, and Google bring to the table are beyond what any single organization can replicate. When the cloud goes down, it makes the news. When your on-premises server fails, it just makes your Thursday worse.
But “worth it” doesn’t mean “invulnerable.” The UAE strikes proved that data centers are physical infrastructure subject to physical threats. The 2025 outages proved that software failures can cascade across continents. And the lesson from both is the same: architect for failure, test your recovery, and don’t assume that someone else’s resilience is a substitute for your own planning.
Multi-region was always the “eventually” item on the backlog. Eventually just arrived.
