Since the 1960s we’ve seen two paradigmatic shifts in computing architecture. The first, in the early 80s, was from large, centralized mainframe systems to client/server architecture; the second, more recently, from client/server to the highly distributed, elastic model we know today as cloud. Compute and storage resources are now accessed on demand from billions of endpoints, and that number only promises to grow, perhaps exponentially, over the next decade as the Internet of Everything (IoE) comes online (it is estimated that roughly 10 billion physical objects are connected to the Internet today; by 2020 that number is expected to exceed 50 billion). Yet as the computing stack has become more distributed, elastic and flexible, the network topology connecting those compute and storage resources hasn’t fundamentally changed since the first packet-switched network – the U.S. Department of Defense-funded ARPANET – was created in 1969.
In the early 80s the TCP/IP protocol suite – the networking model and set of communications protocols used for the Internet and similar networks – was formalized to provide end-to-end connectivity, specifying how data should be formatted, addressed, transmitted, routed and received at the destination. This network functionality is organized into four abstraction layers that sort all related protocols according to the scope of networking involved.
Figure 1: Internet Protocol Abstraction Layers
Yet while the underlying gear – and correspondingly performance and capacity of networks – has improved steadily over the years, the actual network topology has remained pretty much identical. Problems have arisen, however, as compute has become more distributed and dynamic. There is a mismatch between compute resources and the underlying network which is only exacerbated as more and more access points come online flooding the network.
The industry answer to this problem has universally been the idea of software-defined networking. The concept is relatively straightforward: extract the control plane (the brain) of each individual network switch/router and put it in the cloud as a controller. The result is a software layer that captures all the intelligence needed and then, in turn, manages commoditized hardware, making networks programmable. The network becomes an elastic resource that, along with compute and storage, is provisioned as needed by IT through a single pane of glass (at least that’s the vision of IT today).
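The separation described above can be sketched as a toy model – hypothetical names, not any real SDN API such as OpenFlow – in which switches only match packets against flow rules, while a centralized software controller computes and pushes those rules:

```python
# A minimal, illustrative sketch of the SDN idea: the control plane lives
# in a central Controller; Switches are "dumb" data-plane devices that
# only consult a flow table populated from above.

class Switch:
    """Commodity data-plane device: matches destinations against rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # dst address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port       # rule pushed down by the controller

    def forward(self, dst):
        return self.flow_table.get(dst)   # None = no rule; ask the controller

class Controller:
    """Control plane extracted into software: computes and pushes rules."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def program_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for name, port in hops:
            self.switches[name].install_rule(dst, port)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.program_path("10.0.0.5", [("s1", 2), ("s2", 1)])
print(s1.forward("10.0.0.5"))  # port chosen by the controller, not the switch
```

The point of the sketch is the division of labor: the forwarding decision no longer lives in the box doing the forwarding, which is what makes the network programmable from a single place.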
Software-defined networking, however, is an incremental innovation that, in effect, puts a software layer on top of existing network topologies. Networks become smarter and more nimble, but I would argue that they are not sufficiently adapted to the computing reality of tomorrow – one characterized by unprecedented scale and distribution. That is why the Bitcoin protocol is so exciting and potentially disruptive.
Bitcoin Protocol – Decentralized Global Ledger
I’m not going to spend a lot of time explaining Bitcoin – the cryptocurrency itself or the protocol – simply because the topic deserves a blog post of its own, and there are resources and folks out there who have done a much better job of explaining it than I ever could (like here, here and here). What I do want to talk about is how disruptive Bitcoin could potentially be to network design more broadly.
At its most basic, a bitcoin is a unit of value on the Bitcoin electronic payment network. The network is based on the blockchain model, an innovative idea of a decentralized public ledger recording all transactions. The blockchain exploits the fact that many copies of the ledger are broadly distributed, and combines this with cryptography to ensure that once a transaction has been recorded in the blockchain, it cannot be changed after the fact. The result is a distributed, trustless P2P network that changes how we think about security, packet flow, and so on. What makes this special is that any sort of asset currently managed by a central authority – not just currencies – could theoretically be implemented in a blockchain model: equities, derivatives, contracts, genomes, patents, identities, etc. The blockchain itself becomes a platform for distributed apps, and this is a radically different approach to designing networks.
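The immutability property described above comes from hash chaining, and a minimal sketch shows why: each block carries the hash of its predecessor, so altering any recorded transaction breaks every hash after it. This is a deliberate simplification – no proof-of-work, mining, or peer replication – just the chaining idea itself:

```python
# Toy hash-chained ledger: tampering with any past transaction is
# immediately detectable, because recomputed hashes no longer match.
import hashlib

def block_hash(prev_hash, transactions):
    payload = prev_hash + "|" + ";".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = []  # list of (prev_hash, transactions, block_hash)

    def append(self, transactions):
        prev = self.chain[-1][2] if self.chain else "genesis"
        self.chain.append((prev, list(transactions),
                           block_hash(prev, transactions)))

    def verify(self):
        prev = "genesis"
        for prev_hash, txs, h in self.chain:
            # Each block must point at its predecessor AND hash correctly.
            if prev_hash != prev or block_hash(prev_hash, txs) != h:
                return False
            prev = h
        return True

ledger = Ledger()
ledger.append(["alice->bob:5"])
ledger.append(["bob->carol:2"])
print(ledger.verify())                    # True: chain is consistent
ledger.chain[0][1][0] = "alice->bob:500"  # tamper with recorded history
print(ledger.verify())                    # False: hashes no longer match
```

In the real network this check is performed independently by every node holding a copy of the ledger, which is what removes the need for a central authority.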
Most see the opportunity in Bitcoin as a fifth protocol for the exchange of value on top of the IP stack. However, combined with the power of software-defined networking and re-emerging ideas about mesh networks (a topology where each node serves as a router/switch, forwarding data to other nodes), we could be entering a time when traditional network topology is wholly transformed. The broader macro theme at play here is the decentralization of everything. We’re seeing it in crowdsourcing, crowdfunding, computing resources, etc. It only makes sense that our network design principles change in accordance.