through SF last week and the feeling is that we are at the apex of the hype
cycle. Fear not, we at Redpoint are here to (attempt to) distill signal from
noise. Here’s a recap of the top story-lines as we see them, along with our take on each.
Down with OCP…?!
What happened: Docker and CoreOS got on stage, kissed and
made up and announced the Open Container Project (‘OCP’). OCP is a non-profit governance
structure, formed under the Linux Foundation, for the purpose of creating open
industry standards around container formats and runtime. You may remember back
in December ’14 CoreOS made headlines by announcing
rkt, an implementation of
appC, the company’s own container image format, runtime and discovery mechanism,
which, in contrast to Docker’s libcontainer, was open, both technologically and in
its development methodology. Then in May at CoreOS Fest, CoreOS’s inaugural conference, support for appC
appeared to be gaining steam and image format fragmentation seemed inevitable.
Instead, a mere seven weeks later, it appears
Docker and CoreOS are willing to put aside differences to work together (and
with the likes of Google, Amazon, Microsoft, Red Hat, and Intel) towards an
open container spec.
Our take: The big winner is the broader container
ecosystem. There are at least a half dozen credible alternatives to Docker’s
libcontainer emerging, and while competition is generally a good thing, the
introduction of multiple different image formats creates ecosystem
fragmentation which constrains customer adoption and broader momentum.
Consolidation around the OCP spec will ensure interoperability while enabling
vendors to continue innovating at runtime. More importantly, by agreeing on
low-level standards, the community can move on to solve higher-order problems
around namespaces, security, syscalls, storage and more. Finally, the loser in
all this appears to be the media, now that there’s, at the very least, a ceasefire
in the Docker-CoreOS war.
Docker Network and more dashed startup dreams
What happened: In early March of this year Docker acquired Socketplane to bolster its networking chops and the
fruits of that acquisition were displayed in a new product release called Docker Network, a native, distributed multi-host
networking solution. Developers will now be able to establish the topology of
the network and connect discrete Dockerized services into a distributed
application. Moreover, Docker has developed a set of commands that enable devs to
inspect, audit and change topology on the fly – pretty slick.
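The workflow described above can be sketched with the `docker network` subcommands. This is an illustrative session, not Docker’s exact demo: the feature shipped as experimental at DockerCon 2015 and the syntax has since evolved (names like `app-net`, `web` and `db` are ours), and a multi-host overlay additionally requires a key-value store or Swarm cluster behind it.

```shell
# Create a multi-host overlay network (experimental at the time;
# on modern Docker this requires swarm mode or a key-value store).
docker network create -d overlay app-net

# Attach two Dockerized services to the same network so they can
# reach each other by name across hosts.
docker run -d --name web --network app-net nginx
docker run -d --name db  --network app-net redis

# Inspect, audit and change the topology on the fly.
docker network inspect app-net        # audit members and config
docker network disconnect app-net db  # detach a service
docker network connect app-net db     # re-attach it
```

The notable design choice is that the network, like the container image, becomes a first-class, portable object managed by the CLI rather than configuration living in the underlying infrastructure.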
Our take: The oft-forgotten element to enabling
application portability is the network – it doesn’t matter if your code can be
executed in any compute substrate if services can’t communicate across
disparate network infrastructures. Docker’s “Overlay Driver” brings a
software-defined network directly onto the application itself and allows
developers to preserve network configurations as containers are ported across and
between datacenters. The broader industry implication here is that Docker is
continuing to build out its platform by filling in gaps in the container stack. The
implication for startups? You will NOT build a large, durable business by
simply wrapping the Docker API and plugging holes.
Plug-ins and the UNIX-ification of Docker
What happened: Docker finally capitulated to industry
demands and announced a swappable plug-in architecture and SDK which will allow
developers to more easily integrate their code and 3rd-party tools
with Docker. The two main extension points featured were network plug-ins (allowing
third-party container networking solutions to connect containers to container
networks) and volume plug-ins (allowing third-party container data management
solutions to provide data volumes for containers which operate on stateful
applications) with several more expected soon.
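As a sketch of how those two extension points surface to users: once a third-party plug-in is installed, its driver name simply becomes a valid `-d`/`--driver` argument. The driver names below (`weave` for networking, `flocker` for volumes) are real third-party plug-ins of that era used here for illustration, and the flag syntax reflects how the plug-in API later stabilized, not necessarily the exact commands shown on stage.

```shell
# Network plug-in: delegate container networking to a third-party
# driver instead of Docker's built-in bridge/overlay drivers.
docker network create -d weave weave-net
docker run -d --name api --network weave-net nginx

# Volume plug-in: let a third-party data-management driver back a
# named volume for a stateful application.
docker volume create -d flocker pgdata
docker run -d --name pg -v pgdata:/var/lib/postgresql/data postgres
```

In both cases the swap is transparent to the rest of the tooling: containers neither know nor care which driver provisioned their network or their volume.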
Our take: For a year now there’s been an uneasy tension between Docker and the developer
community as Docker became less a modular component for others to build on top
of and more a platform for building applications in and of itself. The
prevailing fear was that in Docker’s quest to become a platform, it would cannibalize
much of the ecosystem, create lock-in and stifle innovation. Docker’s party
line has always been that “batteries are included, but swappable,” implying you
can use Docker tooling out of the box or swap in whatever networking overlay,
orchestrator, scheduler, etc. that works best for you. The plug-ins announcement is a step in that
direction as it appears Docker is finally not only talking the UNIX philosophy
talk, but walking the walk.
Management platform mania
What happened: Whether it’s called “containers as a
service,” “container platform,” “microservices platform” or plain old “PaaS”,
it’s clear that this is the noisiest segment of the market. We counted no fewer
than 10 vendors on the conference floor touting their flavor of management platform.
Our take: Everything old is new again. The evolution
of container management is analogous to that of cloud management platforms
(“CMPs”) when virtualization began invading the datacenter. There were dozens
of CMPs founded between 2006 and 2010, the likes of RightScale, Cloud.com,
Makara, Nimbula, etc. Several have since been acquired for good, but far from
great, outcomes, and the sea is still awash in CMP vendors competing feature
for feature. Correspondingly, as the compute abstraction layer moves from the
server (hypervisor) to the OS (container engine), a new breed of management
platform is emerging to provision, orchestrate and scale systems and
applications. Will the exit environment this time around mirror the previous one?
* * * *
Stepping out of the echo-chamber, the big question remains around adoption. There
are some technological gating factors that will inhibit enterprise deployments
in the short-term – namely persistence, security and management – but the overwhelming
constraint holding back containers appears to be a general lack of expertise and
established best practices. The good news is that these are “when” not
“if” issues that pertain to ecosystem maturity, and the steps taken by Docker
last week will only help accelerate that process.
With the groundwork laid,
we see an exciting year ahead for the container community. Container adoption
only feels more inevitable now. There are
many hard problems to solve, but hopefully (fingers crossed) there is now more
alignment within the community.
Startups and enterprises alike can begin, in earnest,
the real work required to drive broad adoption of this technology in datacenters. Hopefully we will look back a year from now
and feel like this was the year that the technology moved beyond the hype phase
to real adoption.