On July 14, 2025, Cloudflare’s public DNS resolver 1.1.1.1 — used by millions of people worldwide — went dark for 62 minutes. The root cause wasn’t a cyberattack. It was a misconfiguration.
The Chain of Events
Cloudflare’s internal service topology system had accidentally linked the IP prefixes behind 1.1.1.1 to a non-production service configuration. When a test location was added to that configuration, the change triggered a global BGP withdrawal of 1.1.1.0/24 and the other resolver prefixes.
In plain terms: Cloudflare’s routers told the entire internet “we no longer serve 1.1.1.1.”
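Failures of this shape are usually preventable with a pre-flight check that refuses to apply a non-production change touching production prefixes. Below is a minimal sketch in Python; the guard list, tier names, and function are hypothetical illustrations, not Cloudflare's actual tooling:

```python
import ipaddress

# Hypothetical guard list: prefixes that must never be attached to
# non-production service configurations.
PRODUCTION_PREFIXES = [
    ipaddress.ip_network("1.1.1.0/24"),
    ipaddress.ip_network("1.0.0.0/24"),
]

def validate_topology_change(service_tier: str, prefixes: list) -> list:
    """Return a list of violations; an empty list means the change is safe.

    Rejects any change that links a production prefix to a service
    configuration outside the 'production' tier.
    """
    violations = []
    if service_tier == "production":
        return violations
    for p in prefixes:
        net = ipaddress.ip_network(p)
        for prod in PRODUCTION_PREFIXES:
            if net.overlaps(prod):
                violations.append(f"{p} overlaps production prefix {prod}")
    return violations

# A test-location change that accidentally includes resolver prefixes
# should be blocked before it ever reaches the routers.
print(validate_topology_change("test", ["1.1.1.0/24"]))
```

A guard like this shrinks the blast radius of automation mistakes: the bad change fails a lint step instead of triggering a global route withdrawal.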
During the withdrawal window, Tata Communications India (AS4755) was observed advertising 1.1.1.0/24. From the outside this looked like a BGP hijack, but it was a symptom of the outage rather than its cause: with the legitimate route withdrawn, stale or opportunistic announcements filled the vacuum.
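The hijack-vs-outage ambiguity is exactly what route-origin monitoring resolves: compare the origin ASNs currently announcing a prefix against the expected owner. A minimal classification sketch (the function and labels are illustrative; the observed origin set would come from a BGP feed or looking glass, which is out of scope here). Cloudflare's resolver prefixes originate from AS13335:

```python
EXPECTED_ORIGIN = 13335  # Cloudflare's ASN

def classify_prefix(observed_origins: set, expected: int = EXPECTED_ORIGIN) -> str:
    """Classify a prefix's routing state from the set of origin ASNs
    currently seen announcing it."""
    if not observed_origins:
        return "WITHDRAWN"          # nobody announces it: the July 14 failure mode
    if observed_origins - {expected}:
        return "UNEXPECTED_ORIGIN"  # e.g. AS4755 visible during the outage
    return "OK"

# During the incident window an observer would have seen both bad states:
print(classify_prefix(set()))    # no announcements at all
print(classify_prefix({4755}))   # the Tata announcement
print(classify_prefix({13335}))  # normal operation
```

Either non-OK state should page someone; telling them apart is what separates "we withdrew our own route" from "someone is hijacking us."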
Impact
- 62 minutes of global downtime (21:52 to 22:54 UTC)
- Millions of users lost DNS resolution
- Any service using 1.1.1.1 as its resolver was affected
Key Takeaways
- BGP is the fragile layer under DNS anycast — a single misconfiguration can withdraw routes globally.
- Don’t depend on a single public resolver — configure fallback resolvers and consider using a managed DNS service for your authoritative zones.
- Internal automation is a double-edged sword — topology changes need blast-radius controls.
- Monitor your resolver reachability — external monitoring would have detected this within seconds. Services like HostDNS provide built-in DNS health monitoring across multiple locations.
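The last two takeaways can be combined into one small probe: hand-build a DNS query with the standard library, try each resolver in order, and fall back on timeout. A sketch assuming UDP port 53 is reachable; the function names are ours, not from any library:

```python
import socket
import struct

def build_query(domain: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    return header + qname + struct.pack(">HH", 1, 1)

def resolve_with_fallback(domain, resolvers=("1.1.1.1", "8.8.8.8", "9.9.9.9"),
                          timeout=2.0):
    """Return (resolver, raw_response) from the first resolver that answers."""
    query = build_query(domain)
    for server in resolvers:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            try:
                s.sendto(query, (server, 53))
                data, _ = s.recvfrom(512)
            except OSError:  # timeout or network error: try the next resolver
                continue
            if data[:2] == query[:2]:  # transaction ID must match
                return server, data
    return None, None
```

On July 14, a probe like this would have flipped from 1.1.1.1 to the first fallback within one timeout interval, and an alert on that flip is the "detected within seconds" signal.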
Sources: Cloudflare Incident Report, Kentik Analysis