Amazon Cloud Outage in Middle East After Iranian Drone Strikes

I was on a call when the alert lit up: sparks and a fire at an Amazon cloud facility in the United Arab Emirates. Within minutes, the AWS Management Console and CLI began returning errors and whole Availability Zones went dark. By the time the Health Dashboard posted an update, three facilities across two countries had been damaged.

I cover infrastructure failures so you don’t have to learn the hard way. You need clear, fast choices when a provider that holds your data is suddenly unstable, and I’ll walk you through what matters now.

A security alert read: sparks and a fire at an Amazon cloud facility in the UAE.

That alert was the first public sign that something had hit Amazon’s web infrastructure directly. Reports say drone strikes caused structural damage and blackouts at two UAE sites; a Bahrain site suffered water damage after nearby explosions triggered fire-suppression systems. Amazon’s Health Dashboard now lists the incident as Disrupted in the UAE and Impacted in Bahrain, and the company confirmed failures in two Availability Zones that are affecting the AWS Management Console and CLI.

This isn’t a routing blip. It’s a physical strike on the backbone that many applications and services rely on. For anyone running production on AWS Middle East, the risk is immediate: interrupted services, lost writes, and recovery windows that may stretch for hours or longer. The advice on the Health Dashboard is blunt—back up data, migrate workloads, and activate disaster recovery plans if you have them.

Think of a major cloud provider going dark like a blown fuse in a city grid: services that seemed immune become vulnerable in an instant.

Was Amazon hit by Iranian drones?

Short answer: Amazon has confirmed drone strikes caused damage to facilities, but the company declined to publicly name the attacker. Reuters and other outlets pressed for confirmation that Iran was behind the strikes; Amazon did not confirm or deny. Independent signals point to regional military activity and retaliatory strikes surrounding the U.S.-Israeli operations that began on February 28, while inside Iran internet connectivity has been heavily restricted—researcher Doug Madory has documented near-total cutoffs.

Amazon’s Health Dashboard now lists multiple sites as affected; the update expanded the scope of the outage.

The concrete detail from the dashboard is practical and unnerving: two Availability Zones have failed, the Management Console and CLI are disrupted, and the situation in the region is described as “unpredictable.” Amazon is telling customers to move copies of critical data out of the Middle East if they can. If your app depends on single-region redundancy, that message is not optional.

If you can, snapshot and replicate now. Use AWS tools such as S3 Cross-Region Replication, cross-region RDS read replicas, or S3 Multi-Region Access Points. If you run critical infrastructure through third parties (CDNs, SaaS platforms, identity providers), check their status pages right now and assume cascading impacts.
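To make that concrete, here is a minimal boto3 sketch of the kind of replication and snapshot copying described above. The bucket names, region codes, volume ID, and IAM role ARN are placeholders, not values tied to this incident, and it assumes versioning is already enabled on both S3 buckets.

```python
import boto3

# Placeholders: substitute your own buckets, role, and a destination region
# outside the affected area.
SOURCE_BUCKET = "prod-data-me-south"                      # hypothetical source bucket
DEST_BUCKET_ARN = "arn:aws:s3:::prod-data-eu-central-dr"  # hypothetical DR bucket
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"

s3 = boto3.client("s3")

# Enable cross-region replication for new objects; both buckets need versioning on.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "dr-copy-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)

# For EBS-backed instances: snapshot the volume, wait for it to finish,
# then copy the snapshot into another region.
ec2_src = boto3.client("ec2", region_name="me-south-1")   # placeholder source region
ec2_dst = boto3.client("ec2", region_name="eu-central-1")  # placeholder DR region

snap = ec2_src.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Emergency DR snapshot",
)
ec2_src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

ec2_dst.copy_snapshot(
    SourceRegion="me-south-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy outside the affected region",
)
```

Replication only covers objects written after the rule is enabled, so the snapshot-and-copy step matters for data that already exists.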

How should AWS customers respond?

You should act like your recovery plan is already being tested: take immediate backups, route traffic away from affected AZs and regions, and stage failover environments in another AWS region or a different cloud provider. If you maintain offline backups, verify integrity; if you rely on automated snapshots, confirm they completed before the outage. For teams without runbooks, prioritize data export and DNS failover steps that you can execute in the next 30–120 minutes.
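For the DNS failover step, a sketch like the one below repoints a Route 53 record at a standby endpoint in another region. The hosted zone ID, record name, and standby address are hypothetical; in a steady-state setup you would pair records like this with Route 53 health checks rather than flipping them by hand.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values: your hosted zone, record name, and standby endpoint address.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
RECORD_NAME = "app.example.com."
STANDBY_IP = "203.0.113.10"  # endpoint in an unaffected region

# Point the production record at the standby endpoint with a short TTL
# so the change propagates quickly and can be rolled back just as fast.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Emergency failover away from affected Middle East AZs",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": STANDBY_IP}],
                },
            }
        ],
    },
)
```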

On the ground, power and communications are fractured and attacks are mixing kinetic and cyber tactics.

Beyond the physical hits, a separate wave of cyber operations has targeted Iranian websites and apps, with attackers defacing sites and posting calls to action. That parallel activity raises the specter of mixed-mode campaigns: kinetic strikes to take systems offline and digital operations to shape the narrative. Inside Iran, researchers report heavy traffic filtering and whitelisting that lets only regime-approved traffic through.

When kinetic actions shake a data center, the damage is concrete; when data and trust are hit digitally, recovery becomes a credibility problem as much as a technical one. It’s like a fishbowl tipping onto a rack of hard drives—one mess breaks hardware, the other ruins reputations and trust.

Will outages spread beyond the Middle East?

Possibly. AWS regions are logically isolated, but many global services and integrations depend on specific zones and endpoints. CDN edges, authentication services, telemetry systems, payment processors, and partner APIs can all amplify outages. Monitor your service maps and dependency graphs—tools such as Datadog, New Relic, or your internal runbooks will surface failing integrations faster than waiting for vendors to post status updates.
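If you don't have a vendor dashboard in front of you, even a crude poller against your own critical dependencies will show which integrations are failing. The endpoints below are placeholders for whatever your service map actually lists; this is a stopgap sketch, not a substitute for real monitoring.

```python
import urllib.error
import urllib.request

# Placeholder endpoints: substitute the health/status URLs from your own dependency graph.
DEPENDENCIES = {
    "payments": "https://payments.example.com/health",
    "identity": "https://auth.example.com/health",
    "cdn-edge": "https://cdn.example.com/ping",
}

def check(name: str, url: str, timeout: float = 5.0) -> str:
    """Return a coarse status string for a single dependency."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"{name}: HTTP {resp.status}"
    except urllib.error.HTTPError as exc:
        return f"{name}: HTTP {exc.code}"
    except (urllib.error.URLError, TimeoutError) as exc:
        return f"{name}: UNREACHABLE ({exc})"

if __name__ == "__main__":
    for name, url in DEPENDENCIES.items():
        print(check(name, url))
```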

Here’s what to do if your workloads are affected: pause nonessential deployments, validate backups, orchestrate cross-region failover, and communicate clearly with users about expected impact and mitigation steps. If you manage customer data in the affected regions, be ready to answer questions from legal and compliance teams, and document every action for the post-incident review. Sources covering this story include Reuters, CNBC, and posts from internet-reachability researchers like Doug Madory; keep an eye on Amazon’s Health Dashboard and official status pages for live updates.
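On the “validate backups” step, a quick check like the following confirms whether recent EBS snapshots actually reached the completed state before the incident window, instead of assuming the automation ran. The region and lookback window are assumptions; adjust them to your own environment and timeline.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Assumed region and lookback window; tune to the region and incident timeline you care about.
REGION = "me-south-1"
LOOKBACK = timedelta(hours=24)

ec2 = boto3.client("ec2", region_name=REGION)
cutoff = datetime.now(timezone.utc) - LOOKBACK

# Walk snapshots owned by this account and flag any recent ones that did not complete.
paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap["StartTime"] < cutoff:
            continue
        status = "OK" if snap["State"] == "completed" else "NOT COMPLETE"
        print(f'{snap["SnapshotId"]}  {snap["StartTime"].isoformat()}  {status}')
```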

We’re watching a regional confrontation that has spilled into the global internet and cloud supply chain—do you move everything now, or wait and watch the failover play out?