Containers and the Search for the Killer App

VisiCalc for early PCs. E-mail for the Internet. SMS for mobile. Every major tech platform we’ve seen has had the benefit of a “killer application” that transformed it from “toy” or “cool project” into an indispensable, mainstream product.

Now that we’re in the midst of what looks to be another major platform shift in the datacenter – this time with the layer of abstraction moving from physical infrastructure via the hypervisor to the OS via containerization – talk has centered on Linux containers and whether they represent a paradigm shift in how we build and deploy applications or simply another instrument in the DevOps toolkit.

The relevant analog for mapping out the fate of containerization seems to be virtualization. Charting VMware’s history provides a hint of how container adoption and ecosystem development may unfold, though it’s far from a perfect analogy.

In 1999, VMware released Workstation, which let developers run multiple virtual machines with different operating systems locally. This solved an acute developer pain around building applications that would work across different OSes and environments. A couple of years later the company entered the server market with ESX and vMotion, which enabled live migration, a fancy way of saying you could move running VMs between physical hosts without taking the whole application down. The VMware toolchain quickly spread through dev/test, as developers could now build and test applications for different environments and then deploy them with a few clicks, confident they wouldn’t break production (assuming the proper config files were installed; hence the rise of config management tools like Chef, Puppet, etc.). In addition to this grass-roots, bottom-up adoption, virtualization benefited from CIO-led, top-down initiatives to eliminate IT sprawl, improve server utilization and consolidate datacenters. The result, depending on whom you ask today, is that anywhere from 75-90% of x86 workloads are virtualized.

Hardware virtualization, then, literally digitized the analog task of racking servers. It represented a step-function improvement in how IT could be provisioned and administered and in how applications could be tested and deployed.

Now we’re seeing similar developer-led adoption of containerization, and sure enough there are myriad reasons why adopting Linux containers makes sense: from enabling application portability across compute and cloud infrastructures, to streamlining your deployment pipeline, to liberating your organization from the VMware tax. But as we sit here today, many (myself included) contend that containers don’t represent as radical a step-function improvement over the tools used to solve similar problems as VMs did in the early 2000s. Nor is there a similar top-down, CTO-/CIO-led initiative to catalyze adoption. Consequently, what we’re looking for is the killer application that unlocks the value of containers for the mass market.

What might those killer apps be? Here are three likely candidates:

“Dropbox-ized” dev environments – One of the most nagging engineering pains is provisioning and replicating developer environments across the org and then maintaining parity between those environments and test and production. Containers offer a way to encapsulate code with all of its dependencies, allowing it to run the same way irrespective of the underlying infrastructure. Because containers share the host kernel, they offer a more lightweight alternative to VM-based solutions like Vagrant, letting devs code/build/test every few minutes without the virtualization overhead. Consequently, orgs can create isolated and repeatable dev environments that stay in sync throughout the development lifecycle without resorting to cloud IDEs, which have been the bane of many devs’ existences.
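
For a concrete flavor, here’s a minimal sketch using the Docker SDK for Python; the image tag, source path, and test command are placeholder assumptions, not a prescribed setup. Every developer pulls the same pinned image and runs the code in a disposable container, so environment drift between machines largely disappears.

```python
# Minimal sketch of a reproducible dev environment, assuming the Docker SDK
# for Python (pip install docker) and a local Docker daemon. The image tag,
# host path, and test command below are illustrative placeholders.
import docker

client = docker.from_env()

# Everyone on the team starts from the same pinned base image.
client.images.pull("python", tag="3.9-slim")

# Run the tests in a throwaway container with the local source tree mounted in.
container = client.containers.run(
    "python:3.9-slim",
    command="python -m unittest discover",
    volumes={"/home/dev/myapp": {"bind": "/app", "mode": "rw"}},
    working_dir="/app",
    detach=True,
)
container.wait()                  # block until the test run finishes
print(container.logs().decode())  # same output on every developer's machine
container.remove()
```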

Continuous deployment – As every company becomes a software company at its core, faster release cycles become a source of competitive advantage. This was highlighted in the recent Puppet Labs State of DevOps report, which found that “high-performing IT organizations” deploy code 30x more frequently, have 200x shorter lead times and suffer 60x fewer failures than their “low-performing” peers. It’s no surprise, then, that organizations are embracing continuous delivery practices in earnest. Containers, because of their inherent portability, are an enabler of this software deployment model. Instead of complex scripting to package and deploy application services and infrastructure, deployment shrinks to a couple of lines that push or pull the relevant image to the right endpoint server, and CI/CD becomes radically simpler.
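
As a rough sketch of how short that step can get, here’s what a pull-and-restart deploy on an endpoint server might look like with the Docker SDK for Python; the registry path, image tag, service name and port mapping are all hypothetical.

```python
# Illustrative deploy step, assuming the Docker SDK for Python and a registry
# that is reachable and already authenticated. All names below are made up.
import docker
from docker.errors import NotFound

IMAGE = "registry.example.com/myteam/web:1.4.2"  # hypothetical image built by CI

client = docker.from_env()
client.images.pull(IMAGE)                        # fetch the image CI pushed

try:
    client.containers.get("web").remove(force=True)  # stop the previous version
except NotFound:
    pass

client.containers.run(IMAGE, name="web", ports={"8000/tcp": 80}, detach=True)
```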

Microservices – Microservices architecture refers to a development practice of building an application as a suite of modular, self-contained services, each running in its own process, with a minimal amount of centralized management. Microservices itself is a means, not an end, enabling greater agility (entire applications don’t need to be taken down during change cycles), speed-to-market and code manageability. Containers, offering lightweight isolation, are the key enabling technology for this development paradigm.
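
To make the pairing concrete, here’s a small sketch, again with the Docker SDK for Python, of two services from the same application running as independently deployable containers on a shared network; the image names and network are invented for illustration.

```python
# Two hypothetical services from one application, each isolated in its own
# container and deployable on its own. Image names and network are placeholders.
import docker

client = docker.from_env()
client.networks.create("shop-net", driver="bridge")

client.containers.run("myorg/orders:1.0", name="orders", network="shop-net", detach=True)
client.containers.run("myorg/payments:2.3", name="payments", network="shop-net", detach=True)

# Each service can now be updated, scaled or rolled back without touching the other.
```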

Ultimately, containerization allows companies of all sizes to write better software faster. But as with any platform shift, there is a learning curve, and broad adoption is a function of ecosystem maturity. We’re just now beginning to see the emergence of best practices and standards via organizations like the Open Container Initiative and the Cloud Native Computing Foundation. The next step is for a hardened management toolchain to emerge, one that will allow devs and companies to begin building out powerful use cases. And it’s with those applications that we will start to unlock the power of container technology for the masses.