
Evolving the Internet Through COVID-19 and Beyond

7 July 2020

Co-authored by Jari Arkko, Alissa Cooper, Tommy Pauly, and Colin Perkins

This article was originally published at CircleID

As we approach four months since the WHO declared COVID-19 to be a pandemic, and with lock-downs and other restrictions continuing in much of the world, it is worth reflecting on how the Internet has coped with the changes in its use, and on what lessons we can learn from these for the future of the network.

The people and companies that build and operate the Internet are always planning for more growth in Internet traffic. But the levels of growth seen since the start of the global COVID-19 pandemic were beyond anything in those plans. Many residential and mobile networks and Internet Exchange Points reported traffic growth of 20% or more in a matter of weeks as social distancing moved in-person activities online around the world. While nearly all kinds of traffic, including web browsing, video streaming, and online gaming, have seen significant increases, real-time voice and video have seen the most staggering growth: jumps of more than 200% in traffic and daily conferencing minutes, together with 20-fold increases in users of conferencing platforms.

By and large, the Internet has withstood these traffic surges. While users have experienced brief outages, occasional reductions in video quality, and connectivity problems now and again, on the whole the Internet has delivered as the go-to global communications system, allowing people to stay connected, carry on with their daily lives from their homes, and coordinate responses to the pandemic.

These are impressive results for a technology designed 50 years ago. But the robustness of the Internet in the face of huge traffic surges and shifting usage patterns is no accident. It is the result of continuous improvements in technology, made possible by its underlying flexible design. Some recent proposals have suggested that the Internet's architecture and underlying technologies are not fit for purpose, or will not be able to evolve to accommodate changing usage patterns in coming years. The Internet's resiliency in the face of recent traffic surges is just the latest and most obvious illustration of why such arguments should be viewed skeptically. The Internet has a unique model of evolution that has served the world well, continues to accelerate, and is well-positioned to meet the challenges and opportunities that the future holds.

The Internet is evolving faster today than ever before

As requirements and demands on networks have changed, the Internet's ability to continually evolve and rise to new challenges has proved to be one of its greatest strengths. While many people still equate the Internet with "TCP/IP," the Internet has changed radically over the past several decades. We have seen a huge increase in scale, great improvements in performance and reliability, and major advances in security. Most remarkably, this evolution has been almost entirely seamless: users notice improvements in their online experience without any sense of the hard work that went into it behind the scenes.

Even from its early days, the Internet's fundamental structure has evolved. When static tables mapping host names to addresses became impractical to distribute and use, the Domain Name System (DNS) was created. Then the original rigid class-based IP address space was made more flexible with Classless Inter-Domain Routing (CIDR), which enabled greater scalability.
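Both of these mechanisms remain directly visible to programmers today. As a brief illustrative sketch (the host name and prefix below are placeholders), Python's standard library resolves a name through the DNS and treats addresses as classless prefixes:

```python
import ipaddress
import socket

# DNS: resolve a host name to addresses, replacing static host tables.
addresses = {info[4][0] for info in socket.getaddrinfo("example.com", 443)}
print(addresses)

# CIDR: variable-length prefixes replaced the rigid class A/B/C blocks.
# A /22 spans 1024 addresses, a size no fixed address class could express.
network = ipaddress.ip_network("198.51.100.0/22")
print(network.num_addresses, ipaddress.ip_address("198.51.101.7") in network)
```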

The pace at which major new innovations like these are integrated into the network has accelerated in recent years. For example, five years ago about 70% of web connections were not secured with encryption. They were vulnerable to observation by anyone who intercepted them. But a renewed focus on security and rapid changes that have made it easier to deploy and manage security certificates have accelerated encryption on the web to the point where just 20% or so of web connections are vulnerable today.

Security protocols are also being updated more rapidly to stay ahead of attacks and vulnerabilities. Transport Layer Security (TLS) is one of the foremost protocols used to encrypt application data on the Internet. The latest version, TLS 1.3, can cut connection setup time in half, and expands the amount of information that is protected during setup. The protocol was also carefully designed to ensure that it could be deployed on the Internet in the broadest way possible. After its finalization in 2018, there was more TLS 1.3 use in its first five months than TLS 1.2 saw in its first five years. Roughly one third of all traffic using TLS has upgraded from TLS 1.2 to TLS 1.3 in a period of 18 months.
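The negotiated version is easy to observe in practice. As a minimal sketch using Python's standard ssl module (the host name is a placeholder; TLS 1.3 support requires Python 3.7 or later built against OpenSSL 1.1.1 or later), the following insists on TLS 1.3 and reports what was negotiated:

```python
import socket
import ssl

context = ssl.create_default_context()
# Refuse to negotiate anything older than TLS 1.3.
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # prints "TLSv1.3" when negotiation succeeds
```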

Even the basic transport protocols used on the Internet are evolving. For decades, there has been a desire to add features to transports that the original TCP protocol could not support: multiplexed streams, faster setup time, built-in security, greater data efficiency, and the ability to use multiple paths. QUIC is a new protocol that supports all of those features, and is carefully designed with deployability and protocol evolution in mind. Even prior to finalization, initial versions of QUIC have become predominant in traffic flows from YouTube and mobile Facebook applications. QUIC is the foundation for the newest version of HTTP, HTTP/3. This means that faster and more resilient connections can be provided to existing HTTP applications, while opening up new capabilities for future development.
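QUIC is also already accessible to application developers. The sketch below uses the third-party aioquic library, following its documented client interface; the server name is a placeholder, and it assumes a server speaking QUIC on port 443:

```python
import asyncio

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    # The "h3" ALPN token asks the server for HTTP/3 over QUIC.
    configuration = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
    async with connect("quic.example.net", 443,
                       configuration=configuration) as client:
        await client.ping()  # one round trip over the encrypted connection
        print("QUIC handshake and ping completed")

asyncio.run(main())
```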

These are just a handful of examples. We have seen the Internet evolve through massive technological shifts, from the rise of cellular data networks and mobile broadband to the explosion of voice, video, and gaming online. We have seen the creation of massively distributed content delivery networks and cloud computing, the integration of streaming and conversational multimedia on the web platform, and the connection of billions of constrained "Internet of Things" devices to the network. And although some Internet systems have been deployed for decades, the pace of technological advancement on the network continues to accelerate.

Keys to successful evolution

This evolvability is an inherent feature of the Internet's design, not a by-product. The key to successfully evolving the Internet has been to leverage its foundational design principles while incorporating decades of experience that teach us how to successfully upgrade a network composed of billions of active nodes all while it is fully operational — a process more colloquially known as "changing the engines while in flight."

The Internet was explicitly designed as a general-purpose network. It is not tailored to a particular application or generation of technology. This is the very property that has allowed it to work well as physical networks have evolved from modems to fiber or 5G and adapt to traffic shifts like the ones caused by the current pandemic. Optimizations for particular applications have frequently been contemplated. For example, "next generation networking" efforts throughout the decades have insisted on the need for built-in, fine-grained quality-of-service mechanisms in order to support real-time applications like voice, video, and augmented reality. But in practice, those applications are flourishing like never before by capitalizing on the general-purpose Internet, optimizations in application design, increased bandwidth, and the availability of different tiers of Internet service.

Modularity goes hand-in-hand with general-purpose design. Internet networks and applications are built from modular building blocks that software developers, architects, network operators, and infrastructure providers combine in numerous different ways to suit their own needs while interoperating with the rest of the network. This means that when it comes time to develop a new innovation, there are abundant existing software stacks, libraries, management tools, and engineering experience to leverage directly. Tools like IP, DNS, MPLS, HTTP, RTP, and TLS have been re-used so many times that their common usage patterns, extension models, and software support are widely understood.
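That reuse is visible in even the smallest program. In the illustrative sketch below (the URL is a placeholder), a single standard-library call composes the DNS, TCP, TLS, and HTTP building blocks without the application touching any of them directly:

```python
from urllib.request import urlopen

# One call reuses four modular building blocks: DNS resolves the name,
# TCP carries the bytes, TLS encrypts them, and HTTP frames the request.
with urlopen("https://example.com/") as response:
    print(response.status, response.getheader("Content-Type"))
```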

The Internet was also designed for global reach. Endpoints throughout the Internet are capable of reaching each other using common systems of addresses and names even if their local networks use vastly different underlying technologies. Introducing new or replacement addressing or naming schemes intended for global reach therefore requires either complex systems of gateways to bridge between existing Internet-connected systems and new networks or an incentive structure that would cause the majority of existing nodes to abandon the Internet and join the new network. Neither of these offers an obvious path to seamless global interoperability. And gateways would likely constrain future evolution across all layers of the stack.

We have seen struggles over incentives play out with the decades-long advance of IPv6 deployment, as well as with other protocol upgrade designs like DNSSEC and BGPsec. Experience with the development and deployment of these protocols has shown that baking deployment incentives into the design of a protocol itself is key to widespread deployment. Understanding which actors in the industry will be motivated to invest in upgrades and having the protocol design place the onus on those actors is critical.

The TLS 1.3 and QUIC examples highlighted above took these lessons to heart. Both protocols bind security upgrades together with performance improvements, knowing that Internet businesses will invest to achieve better performance and thereby improve security in the process. QUIC likewise allows application developers to deploy without having to rely on operating system vendors or network operators to apply updates, easing the path to widespread adoption.

Testing network innovations at scale in parallel with designing network protocols is also crucial. In the last five years, every major new Internet protocol design effort has been accompanied by the parallel development of multiple (sometimes a dozen or more) independent implementations. This creates extremely valuable feedback loops between the people designing the protocols and the people writing the code, so that bugs or issues found in implementations can lead to quick changes to the design, and design changes can be quickly reflected in implementations. Tests of early implementations at scale help to motivate involvement in the design process from a broader range of application developers, network operators, equipment vendors, and users.

Finally, the Internet uses a collaborative model of development: designs are the product of a community working together. This ensures that protocols serve the multitude of Internet-connected entities, rather than a limited set of interests. This model also helps to validate that updates to the network can and will find their way into production-quality systems. Many academic research efforts focused on future Internet designs have missed this component, causing them to falter despite brilliant ideas.

Challenges and opportunities

The Internet faces many technical challenges today and new challenges will continue to arise in the future. At the same time, technological advancements and societal changes create opportunities for the Internet to continue to evolve and meet new needs as they arise.

Security is a multifaceted challenge that has been and continues to be a major area of evolution. Encryption of many kinds of Internet traffic is at an all-time high, yet work remains to mitigate unintentional data leaks and operationalize encryption in core infrastructure services, such as DNS. Strong protections and mitigations are needed against threats as diverse as commercial surveillance, denial-of-service attacks, and malware — all in the context of an Internet that is increasingly connecting devices that are constrained by computational power and limited software development budgets. These aspects must be addressed without intensifying industry consolidation. All of these challenges are increasingly the focus of Internet protocol designers.
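Encrypted DNS is one piece of that work that can already be tried directly. As a short sketch, the query below uses Cloudflare's public DNS-over-HTTPS resolver and its JSON API, chosen here purely as an illustration:

```python
import json
from urllib.request import Request, urlopen

# Resolve a name over HTTPS rather than traditional cleartext UDP DNS.
url = "https://cloudflare-dns.com/dns-query?name=example.com&type=AAAA"
request = Request(url, headers={"Accept": "application/dns-json"})
with urlopen(request) as response:
    answer = json.load(response)

for record in answer.get("Answer", []):
    print(record["name"], record["data"])
```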

Performance will also continue to require improvements in order to meet increasing demands. Protocol designers are only just beginning to sort through how they might leverage the performance gains from QUIC and HTTP/3 for numerous future applications. Scaling up deployment of mechanisms such as active queue management (AQM) for reducing latency, increasing throughput, and managing traffic queues will be needed to handle an ever-changing mix of traffic flows. Innovative approaches such as information-centric networking, network coding, moving computation into the network, establishing common architectures between data centers and edge networks, decentralized infrastructure, and the integration of quantum technology are the focus of ongoing exploration to respond to current and future performance requirements.
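To give a flavor of the AQM idea, the sketch below is a deliberately simplified toy version of the head-drop logic popularized by CoDel: packets that have been queued longer than a small target delay, persistently over a longer interval, are dropped rather than delivered late. It illustrates the concept only and is not a faithful implementation of the algorithm:

```python
import time
from collections import deque

TARGET = 0.005    # 5 ms: acceptable standing queue delay (CoDel's default)
INTERVAL = 0.100  # 100 ms: how long excess delay may persist before dropping

class ToyAQM:
    """Toy CoDel-flavored queue: drop from the head once packets have
    been waiting longer than TARGET continuously for INTERVAL."""

    def __init__(self):
        self.queue = deque()      # entries are (enqueue_time, packet)
        self.above_since = None   # when queue delay first exceeded TARGET

    def enqueue(self, packet):
        self.queue.append((time.monotonic(), packet))

    def dequeue(self):
        while self.queue:
            enqueued, packet = self.queue.popleft()
            now = time.monotonic()
            if now - enqueued <= TARGET:
                self.above_since = None   # delay acceptable; deliver
                return packet
            if self.above_since is None:
                self.above_since = now    # start tolerating a short burst
            if now - self.above_since < INTERVAL:
                return packet             # burst tolerated; deliver anyway
            # Excess delay has persisted: drop this packet, try the next.
        return None
```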

The kinds of networks and devices that benefit from global Internet connectivity or local IP networking (or both) will continue to diversify, ranging from industrial to vehicular to agricultural settings and beyond. Technologies such as deterministic networking, which seeks to provide latency, loss, and reliability guarantees, and new protocols explicitly designed to account for intermittent connectivity and high mobility will all be in the mix as information technology and operational technology continue to converge.

Conclusion

The Internet of 2020 is vastly different from the one where TCP/IP originated, even though variants of those original protocols continue to provide global connectivity. The combination of a general-purpose design, modularity, global reach, and a collaborative engineering model, together with lessons learned about incentives, implementation, and testing at scale, has produced the Internet's winning formula for evolution.

The Internet's unique approach to evolution positions it well to meet new challenges and seize new opportunities. Its central role in society, underscored by the COVID-19 crisis, continues to grow. We hope to never again experience a crisis that causes such disruption and suffering throughout the world, but we are optimistic that, crisis or not, the Internet will continue to evolve to better serve the needs of its users.

About the authors

Author affiliations are provided for identification and do not imply organizational endorsement.

Jari Arkko is a member of the Internet Architecture Board and a Senior Expert with Ericsson Research.

Alissa Cooper is the Internet Engineering Task Force Chair and a Fellow at Cisco Systems.

Tommy Pauly is a member of the Internet Architecture Board and an Engineer at Apple.

Colin Perkins is the Internet Research Task Force Chair and an Associate Professor at the University of Glasgow.