Network Architecture in Light of Public/Private/Hybrid Cloud
Tom Daly discusses the future of network architectures in light of public/private/hybrid cloud, edge computing, Web 3.0 and the subversion of old routing rules.
Nov 17, 2021
The migration to cloud networking might seem like the happy ending to a story that took more than 35 years to tell, from the beginnings of the enterprise networking evolution in the 1980s through the first 25 years of the commercial Internet and right up to the cusp of the cloud-native networking era.
But the plot thickens. Several countervailing forces continue to reshape how network architectures work and redefine our understanding of the cloud model.
We’ve come pretty far in dismantling the hierarchical rigidity that once characterized enterprise networking. Enterprises now have many choices about service providers, network technologies and customization that they didn’t have decades ago. The Internet brought more flexibility, changing how, when and where enterprise users could access the applications they needed.
The more recent cloud migration pushed things even further: the ability to move enterprise workloads to public cloud freed enterprises from the expensive, inflexible on-premises hardware and software that had long been required for computing, storage and networking.
Still, the cloud model itself is evolving quickly. Some enterprises use a single public cloud or a mix of public clouds in a multi-cloud scenario. Others rely more on their own private clouds. The most recent shift is the concept of hybrid cloud, with enterprises allocating various workloads across a mix of public and private clouds.
All of these changes, along with the massive progression in how we write software and build systems over 35-plus years, bring us to the doorstep of the cloud-native networking era. But not to the end of our story. More change is coming.
Countervailing forces
Enterprise networking today is far different than it was 35 years ago, with many more variables in play over a broader landscape and much more flexibility in how those variables can be managed.
Yet elements of the old network hierarchy remain. In traditional data centers, we had core, aggregation and distribution layers--a hierarchical approach. As data centers have evolved, we have seen more modern architectures--spine-leaf and super-spine, for example--but these still work in a generally hierarchical way, with traffic aggregated from the server level and carried up to the core level, plus some optimizations for east/west traffic contained within the data center.
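To make the contrast concrete, here’s a minimal Python sketch of a leaf-spine fabric; the switch names and fabric size are invented for illustration. The point is that any two leaves sit exactly two hops apart via any spine, which is what keeps east/west traffic efficient even though the design is still hierarchical.

```python
# A toy leaf-spine fabric: every leaf connects to every spine, so any two
# leaves are two hops apart via any spine (equal-cost multipath).
# Switch names and fabric size are invented for illustration.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

def east_west_paths(src_leaf, dst_leaf):
    """All equal-cost paths between two leaves: one per spine."""
    assert src_leaf in leaves and dst_leaf in leaves
    return [[src_leaf, spine, dst_leaf] for spine in spines]

for path in east_west_paths("leaf1", "leaf3"):
    print(" -> ".join(path))
# leaf1 -> spine1 -> leaf3
# leaf1 -> spine2 -> leaf3
```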
With public cloud, we distributed the data center, but the hierarchical model is still in play, with everything getting aggregated, moving up a layer, getting aggregated again, moving up another layer to the cloud service provider’s core and their fat pipes. A lot of elements still have defined, constrained roles and places in the network.
In the next 10 years, however, a series of new technology movements could help us to blow up the old model for good.
Private/public/hybrid cloud
The adoption of public cloud introduced massive flexibility in computing for enterprises, but with trade-offs in control, architecture and cost management. There’s a movement afoot to reexamine the public cloud value proposition. For some, the cost of public cloud networking may be greater than previously understood, which could lead to repatriation of data from public to private cloud (although the extent to which this will happen remains to be seen).
Even if there isn’t a mass exodus from public clouds, questioning the model should prompt enterprises to rethink their cloud strategies. More thoughtful evaluation of individual workloads will lead to better cloud decisions: some workloads may be better managed on private clouds, while others are more affordable and more efficient in public clouds. All of this leads to the dawning of hybrid cloud networking, the era of flexibility--managing multiple workloads across multiple cloud environments.
That said, there’s a major stumbling block implicit in the cloud model: cloud service providers predominantly interconnect to the Internet at large by way of the core Tier 1 markets globally. This architectural trade-off in favor of flexibility often manifests as latency or reliability challenges for today’s applications, which have increasingly stringent real-time requirements.
Edge computing
Another agent of change is edge computing, the notion of processing, analyzing and acting on data closer to where it’s collected and where action on it is needed.
Numerous industries and applications--automotive, robotics, IoT--are feeding a desire among enterprises to manage these workloads more efficiently. Having pushed workloads in recent years to centralized locations--premises, data center or cloud--we’re now set to move many of them back out to the edge.
Combine edge computing and hybrid cloud networking, and the old models break apart even more, with a single enterprise now doing some of its work in a central public cloud, some in highly optimized and secure private clouds, and allocating specific applications that require very fast processing and absolute minimum latency to the edge.
Web 3.0
All of this is happening on a commercial Internet with 25 years of evolution under its belt. The web touches enterprise networks all over the world, but for all its pervasiveness and immediacy, the Internet still exists as something of a hierarchy, too, centered around a small handful of carriers, cloud providers and content giants who hold most of the cards.
This has helped spur the emerging Web 3.0 movement, which holds that the Internet has become heavily centralized and needs to start decentralizing, with the potential benefits of improving transparency, enabling greater user privacy protection and reducing reliance on the centralized corporations that control so much of the web’s commerce.
Edge computing and hybrid cloud networking both align well with the Web 3.0 movement, as they are doing their own part to reverse the traditional flows that have fed other centralized hierarchies. However, for decentralization to succeed we need more than transparency. We also need new approaches to how traffic gets routed through networks.
Rerouting networking
Networking has long been guided by protocols that help information get from one point to another across networks. These sets of rules, which include TCP/IP, the Border Gateway Protocol (BGP) and the Domain Name System (DNS), among others, were built for the edge-to-core hierarchical age of networking.
BGP, for example, works by aggregating route announcements and forwarding them to routers that do more aggregation and forwarding to more routers, repeating the process all the way up to a Tier 1 operator that can reach anywhere on the Internet. Sound familiar? It’s another hierarchy.
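For a concrete feel for that aggregation step, here’s a minimal Python sketch using only the standard library’s ipaddress module. The prefixes are documentation example ranges, and real BGP speakers exchange far richer UPDATE messages than this captures; the point is just that more-specific routes collapse into summaries as they move up the hierarchy.

```python
# A minimal sketch of route aggregation using only the standard library.
# Real BGP speakers exchange UPDATE messages with paths and attributes;
# this only shows how more-specific prefixes collapse into a summary
# before being re-advertised upstream. Prefixes are documentation ranges.
import ipaddress

# Routes learned from downstream customers and peers.
learned = [
    ipaddress.ip_network("203.0.113.0/25"),
    ipaddress.ip_network("203.0.113.128/25"),
    ipaddress.ip_network("198.51.100.0/24"),
]

# Aggregate contiguous prefixes into the smallest covering set.
for net in ipaddress.collapse_addresses(learned):
    print(net)
# 198.51.100.0/24
# 203.0.113.0/24
```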
That means a transmission originating from one connection on one network and traveling to another connection on a different network may travel a very long, very convoluted and inefficient northward then southward route even if the two points are physically very close to one another. The transmission goes from the edge to the core of one network and from the core to the edge of the other. The protocols still being used don’t have visibility into the best possible route.
The DNS has a similar topology, from the top-level root (also known as ".") through the top-level domains (e.g. ".com", ".net", ".ai") to domains ("lightyear.ai") down to hosts ("www.lightyear.ai"). Except for some advanced DNS solutions, such as those used by content delivery networks (CDNs), there’s very little insight into how different layers of the system can coordinate and/or optimize with each other.
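To make the tree concrete, here’s a rough Python sketch of the order in which a resolver walks that hierarchy for "www.lightyear.ai". The delegation table is invented for illustration; a real resolver discovers each layer by following live NS referrals.

```python
# A sketch of walking the DNS hierarchy one layer at a time.
# The delegation table below is invented for illustration only.
delegations = {
    ".": "the root servers",
    "ai.": "the .ai TLD servers",
    "lightyear.ai.": "lightyear.ai's authoritative servers",
}

def resolution_path(hostname):
    """Zones visited from the root down: "." -> "ai." -> "lightyear.ai."."""
    labels = hostname.rstrip(".").split(".")
    zones = ["."] + [".".join(labels[i:]) + "." for i in range(len(labels) - 1, 0, -1)]
    return [(zone, delegations.get(zone, "?")) for zone in zones]

for zone, servers in resolution_path("www.lightyear.ai"):
    print(f"ask {servers} about zone {zone}")
```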
New technologies like SD-WAN are increasingly capable of subverting and bypassing these old rules, enabling visibility across networks and creating overlays of tunnels that can help enterprises get information from Point A to Point B in a much faster and more direct way. Ultimately, just as network architecture needs to be rethought in light of edge computing, hybrid cloud networking and the Web 3.0 movement, how we route traffic along the global Internet architecture also needs to be updated.
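As a toy illustration of that shortcut, the sketch below compares a hierarchical underlay path against a directly measured overlay tunnel between two nearby sites. The site names, hops and latencies are all invented assumptions, not measurements from any real SD-WAN product.

```python
# A toy comparison: the underlay's edge-to-core-to-edge path versus a
# direct SD-WAN overlay tunnel. All names and latencies are invented.
underlay_hops = [
    ("site-a", "metro-aggregation-a", 8.0),   # latency in ms
    ("metro-aggregation-a", "tier1-core", 12.0),
    ("tier1-core", "metro-aggregation-b", 12.0),
    ("metro-aggregation-b", "site-b", 8.0),
]
overlay_tunnel_ms = 6.0  # a direct site-to-site tunnel the overlay measured

underlay_ms = sum(latency for _, _, latency in underlay_hops)
choice = "overlay tunnel" if overlay_tunnel_ms < underlay_ms else "underlay"
print(f"underlay: {underlay_ms} ms, overlay: {overlay_tunnel_ms} ms -> use {choice}")
```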
So, what’s next?
With all those hierarchies in various states of flux, it’s fair to ask: what’s it all leading to? Decentralization of vertical network architecture hierarchies translates into a flattening of networks, something that resembles more of a matrix. A matrix-like network architecture grants visibility into and awareness of what’s on the network, no matter how dispersed, and the best, most direct and efficient transmission path between any two points. Nothing is at the top getting special treatment, which means nothing is at the bottom either.
As an industry, we did this in the data center as we adopted technologies like spine-leaf and VXLAN. In cloud, we see it through innovations like Network as a Service enabled via BGP EVPN. And on the web, we see developments in protocols like QUIC, HTTP/3 and DNS over HTTPS (DoH) leading to similar decentralization. The elements at the endpoints of the Internet--the servers in the cloud, the technology on mobile and desktop clients, and some optimizations in between (like SD-WAN)--are driving evolution, but the underlying global network architecture needs to evolve as well.
As with anything, the big challenge with migrating to more matrix-like network architectures is change itself: human reasoning is naturally more attuned to hierarchical thinking, to organizing things into groups, classes and tiers. Looking at one giant picture and trying to see the alignment of every small pixel hurts our eyes and our brains. This will heighten the importance of automation, artificial intelligence and machine learning as the network hierarchies continue to break down and the matrix stretches out before us. Change, by necessity, will be incremental, but forces like edge computing, the cloud evolution, Web 3.0 and the subversion of old routing rules will move things along. The story continues to unfold.
About the Author
Tom Daly is an experienced technologist with a passion for Internet infrastructure. Tom is currently serving as a Board Member and Advisor to Big Network, a startup focused on fixing the Internet architecture problems discussed in this post. Most recently, Tom was the SVP of Infrastructure at Fastly, responsible for global data center deployment, interconnect strategy and capacity planning. Prior to Fastly, Tom co-founded and served as the Chief Technology Officer of Dyn, responsible for the architecture and deployment of the company’s enterprise DNS platform. Tom also serves as an advisor to Lightyear and other fast-growing startups.