What Happened on 512K Day? Will It Happen Again?
The internet is a fascinating maze of highways, town streets, back roads, and dirt paths that is as delicate as it is complex. Chances are, while you're reading this blog right now, your laptop has sent a data 'packet' halfway across the world via one of these arterial paths that form the internet, and fetched results for you the same way.
Although it is an invisible presence worldwide, the internet is, surprisingly, still highly susceptible to a number of issues. One of the most evident reasons is the internet itself; the sheer size of this entity has grown exponentially, far beyond what the researchers who conceptualized it in the 1980s ever anticipated. As of 2014, there were roughly 2.9 billion internet users, and by 2019 the number had shot up to 4.13 billion.
The other key reason for internet vulnerability is the lack of a centralized governing and monitoring entity. In a world with more than 100 countries, it is impossible for any one of them to be the watchdog and, consequently, responsible for all the issues with the internet. While the US has some of the largest and most powerful hardware supporting the internet, the technology and brilliance to improve and maintain it come from several other countries.
This means that something as essential as the internet can potentially be hacked by a person who knows how to do so, or mismanaged by someone who knows too little about it. Surprisingly, one of the biggest glitches in the history of the internet, which occurred on 12th August 2014, was not caused by a hacker or a cyber-terrorist. It happened simply because a router had a few thousand additional internet routes added to one of its tables. This is comparable to water from five swollen rivers bursting upon a dam that was built to handle the flow of one; the dam shattered in places and the flow of data was interrupted.
The Internet Model
The August 2014 event, also known as 512K Day, occurred when prominent internet service provider (ISP) Verizon leaked roughly 15,000 new routes into the global routing table. Large routers house routing tables, which contain the routes that packets of data must follow to retrieve search results and display them to users. These routes are exchanged between ISPs around the globe using the Border Gateway Protocol (BGP). Sharing routes globally is essential because data requests can originate from many geographic locations but need to traverse a well-defined path so they are handled quickly and correctly. In the pre-512K Day period, many routers could hold roughly 512,000 routes in their fast memory, and the global table was approaching that ceiling.
While some routers' tables had grown close to their upper limit, the ceiling had never been breached until that fateful August day. Once it was, affected routers could not handle the excess routes, so users' requests 'pinged' off servers, bouncing from one to another for a very long time, or could not be completed at all. A large number of people suffered from extremely slow internet, while for some nothing worked at all.
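The overflow mechanism described above can be illustrated with a minimal sketch. This is a hypothetical model, not any vendor's actual firmware: it assumes a router whose fast-memory route table has a hard slot limit of 512 × 1024 entries, with illustrative route counts chosen to mirror the 2014 event.

```python
# Hypothetical sketch of a router whose fast-memory routing table has a
# hard slot limit, as many routers did before 512K Day. Prefixes beyond
# the limit cannot be installed, so traffic for them is mishandled.

TCAM_SLOTS = 512 * 1024  # the 512K ceiling: 524,288 route entries


class RouteTable:
    """A fixed-capacity routing table (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = set()
        self.rejected = 0  # routes that did not fit

    def install(self, prefix):
        if len(self.routes) >= self.capacity:
            self.rejected += 1  # table full: this route is unusable
            return False
        self.routes.add(prefix)
        return True


# The global table was already near the ceiling (illustrative figure):
table = RouteTable(TCAM_SLOTS)
for i in range(510_000):
    table.install(f"existing-prefix-{i}")

# Then roughly 15,000 extra routes are announced at once, as in August 2014:
for i in range(15_000):
    table.install(f"leaked-prefix-{i}")

print(len(table.routes))  # 524288 -- the table is now completely full
print(table.rejected)     # 712  -- routes that overflowed and cannot be used
```

In this toy model the overflow is small, but every rejected route represents a destination the router can no longer reach correctly, which is how a few thousand extra routes can translate into widespread outages.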
It is important to note that this failure cascaded onto the servers of other ISPs too, because the internet consists of mutually dependent players. Much of the hardware and storage infrastructure is shared, so when one provider's routing infrastructure suffers, all the other ISPs relying on it suffer as well.
BGP hiccups have happened several times so far. Other examples include Pakistan Telecom incorrectly updating BGP announcements in its haste to implement a court order blocking YouTube nationwide, and a local Pennsylvania ISP advertising that it could provide the shortest and quickest path to Cloudflare, a host platform for several websites. Experts realized that the then-current capacity of even Tier 1 routers, such as those impacted by the 512K breach in 2014, was insufficient to adequately and safely serve the burgeoning demands on ISPs. All providers scramble to provide the shortest path to IPv4 addresses all over the world, which results in more and more routes being added to their routers. So the capacity was increased from 512,000 to 768,000 routes.
The results so far have been promising, despite several reports of 'impending doom' doing the rounds in the virtual world in 2019. Drilling down into what exactly the limit is, each 'K' in 768K refers to 1,024 routes, so the ceiling translates into 786,432 routes. According to a 2019 article published on the RIPE Network Coordination Centre's website, the maximum number of routes reached was 784,851, including routes from peer ISPs bridging IPv4 and IPv6 address space. Several experts also say that the hype surrounding 768K Day is overblown; even if it were to come true, network operators are now in a much better position to handle such events by:
a. Promptly decommissioning ancient routers with a large number of old routes
b. Creating shorter alternate routes for ISPs
c. Upgrading to newer routers equipped to handle a larger number of routes
d. Artificially increasing the limit on routes temporarily
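The 'K' arithmetic behind these limits is worth spelling out. Since each 'K' denotes 1,024 route entries, the old and new ceilings work out as follows:

```python
# Each 'K' in a route-table limit counts 1,024 entries, so the actual
# ceilings behind '512K' and '768K' are:
old_limit = 512 * 1024  # 524,288 routes (the pre-2014 default)
new_limit = 768 * 1024  # 786,432 routes (the raised ceiling)

print(old_limit, new_limit)  # 524288 786432
```

This is why the 2019 peak of 784,851 routes reported by RIPE was so close for comfort: it sat only a couple of thousand entries below the 786,432-route ceiling.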
Several companies, like Cisco, have already taken preventive measures to avoid, or at least lessen, the impact of a 768K Day if it were to occur, and have spoken about modern network routers that support 'millions of routes', while repeat offenders like Verizon are being asked to upgrade their networks periodically. It is anticipated that small-town ISPs with archaic, IPv4-only routers would be the worst hit in such an event, and that the disruption would not hamper the big players for any long duration.