On October 20, 2025, Amazon Web Services experienced a massive outage that took large sections of the internet down for approximately 15 hours, affecting millions of users worldwide and costing companies an estimated $75 million an hour. The outage brought down major platforms including Snapchat, Reddit, Ring, Alexa, Wordle, Fortnite, and a host of other services, illustrating just how reliant today's internet is on a handful of cloud computing giants.
What caused the massive disruption
The disruption began around 3:11 AM ET, when AWS first reported elevated error rates and latencies for certain services in its US-EAST-1 region in Northern Virginia. Amazon identified the cause as a Domain Name System (DNS) resolution issue affecting DynamoDB, a major cloud database service that stores user data and sensitive information for tens of millions of apps.
DNS is the internet's phone book, converting website names into the numerical IP addresses computers use to communicate. When that translation failed, software could not locate the correct server addresses for DynamoDB's API, effectively cutting apps off from their data. Amazon still had the data safely stored, but nobody could retrieve it for hours, leaving huge swaths of the internet without their working memory.
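To make the failure mode concrete, here is a minimal Python sketch (illustrative only, not Amazon's code; the hostname is DynamoDB's public endpoint for us-east-1) of the lookup every client must complete before it can call the API, and what a resolution failure looks like:

```python
import socket

# Public DynamoDB API endpoint for the us-east-1 region.
ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

try:
    # Step 1: DNS turns the hostname into a numerical IP address.
    ip_address = socket.gethostbyname(ENDPOINT)
    print(f"{ENDPOINT} resolved to {ip_address}")
    # Step 2 (not shown): connect to that address and issue the API request.
except socket.gaierror as exc:
    # During the outage, lookups like this failed, so clients never reached
    # step 2: the data was intact but unreachable.
    print(f"DNS resolution failed for {ENDPOINT}: {exc}")
```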
The problem then cascaded through AWS's infrastructure: the initial DNS failure was followed by an impairment of EC2's internal subsystem for launching new instances. Network load balancer health checks subsequently began failing, causing connectivity problems across services including Lambda, DynamoDB, and CloudWatch.
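A rough illustration of that cascade pattern (generic code, not AWS internals) is a health check that reports a node as unhealthy simply because one of its dependencies cannot be resolved or reached:

```python
import socket

def dependency_healthy(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Return True if the dependency can be resolved and connected to."""
    try:
        # A typical probe: resolve the hostname and open a connection.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failures raise socket.gaierror, a subclass of OSError, so an
        # unresolvable dependency makes this node look unhealthy even though
        # its own code is fine, and load balancers then pull it from rotation.
        return False

if not dependency_healthy("dynamodb.us-east-1.amazonaws.com"):
    print("Dependency unreachable; taking this node out of rotation")
```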
The ripple effect across industries
Because AWS accounts for roughly 30 percent of the global cloud computing market, the outage was felt around the world. The finance sector was hit particularly hard: Coinbase, Robinhood, and Venmo suffered interruptions that prevented millions of users from making trades and managing their money. UK banks including Lloyds Bank and Bank of Scotland also reported severe disruptions.
Gaming platforms were hit hard as Fortnite, Roblox, and other cloud-based games went offline, and social media and messaging platforms such as Snapchat, Signal, and Reddit were down. Not even critical services were spared: airline websites were disrupted, schools and colleges could not access the Canvas learning management system, and UK government sites such as Gov.uk crashed.
Amazon's own infrastructure was not spared the chaos. The company's e-commerce site, Prime Video, Alexa smart speakers, Ring doorbells, and Amazon Music all lost connectivity. Downdetector recorded more than 6.5 million outage reports across over 1,000 different companies, with more than one million of those reports coming from the United States alone.
The financial cost
Early estimates put the economic loss at astronomical figures. According to Tenscope, global businesses lost around $75 million of business time per hour during the outage, with Amazon itself suffering the largest loss at around $72 million an hour. Other major players also saw heavy losses: Snapchat at an estimated $611,986 per hour, Zoom at $532,580 per hour, and Roblox at $411,187 per hour.
The overall economic impact is likely to reach hundreds of millions, or even billions, of dollars: at the estimated $75 million per hour, a 15-hour disruption alone implies roughly $1.1 billion in lost business time, before accounting for lost productivity, suspended operations, and recovery costs. Many businesses carry insurance against such losses, but according to experts, most policies only activate if an outage extends beyond eight hours, leaving a significant gap between operational exposure and the point at which coverage kicks in.
Amazon’s response and recovery timeline
AWS acknowledged the problem quickly, with engineers immediately working to identify and correct it. By 5:01 AM ET, Amazon had pinpointed the DNS resolution issue with DynamoDB's API as the primary culprit, and the company pursued multiple parallel paths to accelerate recovery.
Around 6:35 AM ET, AWS said the underlying DNS issue had been fully mitigated and most services were operating normally. Issues persisted through the day, however, as systems worked through backlogs on the way back to full functionality. At 6:01 PM ET, roughly 15 hours after the problem began, AWS reported that all services had returned to normal, though some continued processing message backlogs for several more hours.
The vulnerability of centralized cloud infrastructure
The incident highlighted the biggest vulnerability in modern internet infrastructure: excessive concentration in a small number of cloud providers. Northern Virginia's US-EAST-1 is AWS's largest and most mature data center cluster, and it has become the default choice for many firms worldwide. This clustering creates what experts call a "single point of failure" for tens of millions of vital services.
The US-EAST-1 region comprises 158 facilities with 2,544 megawatts of capacity and hosts infrastructure for more than 90 percent of Fortune 100 companies. When a provider of that scale has problems, many unrelated services fail simultaneously around the world; as one analyst put it, when large cloud providers sneeze, the internet catches a cold.
While centralized cloud infrastructure brings enormous advantages in scalability and cost savings, outages like this expose the trade-offs. Businesses gain the ability to offer global services without having to manage sophisticated infrastructure themselves, but they accept the risk that a problem in a single region can knock out many of them at once.
Experts recommend that companies adopt multi-cloud strategies and multi-region architectures to diversify risk and improve resilience, as sketched below. However, moving digital operations from one cloud provider to another remains a massive, costly, and risky undertaking, and every major cloud provider has suffered large outages of its own. AWS committed to publishing a comprehensive post-event report detailing the root cause and lessons learned.
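For illustration only, the following sketch (assuming the boto3 library is installed and a hypothetical "orders" table is replicated to both regions, for example via DynamoDB global tables) shows one way an application can fall back to a secondary region when the primary cannot be reached:

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical setup: an "orders" table replicated to both regions,
# e.g. via DynamoDB global tables. The region list is an assumption.
REGIONS = ["us-east-1", "us-west-2"]

def get_order(order_id):
    """Read an order, trying the primary region first and then falling back."""
    last_error = None
    for region in REGIONS:
        try:
            table = boto3.resource("dynamodb", region_name=region).Table("orders")
            response = table.get_item(Key={"order_id": order_id})
            return response.get("Item")
        except (BotoCoreError, ClientError) as exc:
            # If this region is unreachable or erroring, try the next one.
            last_error = exc
    raise last_error

# Usage: order = get_order("12345")
```

Failover logic like this only helps if the data is actually replicated and the application's other dependencies are not pinned to the failing region, which is why experts pair it with broader multi-region design rather than treating it as a drop-in fix.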