Containers and the Chasm: The State of Container Adoption

Geoffrey Moore’s technology adoption lifecycle is gospel for tech marketers. The model describes the path of diffusion for discontinuous innovations and explains how ecosystems emerge and coalesce around IT winners.

Moore and his work have been top of mind the last couple of years as we’ve observed the rise of, and hype around, Linux containers.

Containers are the next evolutionary leap forward from hypervisor-based hardware virtualization, offering a way to package an application with a complete filesystem that holds everything it needs to run: code, runtime, system tools and libraries. Containers are smaller, faster, more portable and more developer-friendly than their virtual machine predecessors. In that sense, containerization represents a paradigm shift in how systems and applications are built, deployed and managed.
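
To make the packaging idea concrete, here is a minimal sketch (not from the original post; it assumes the docker CLI is installed and the public python base image is reachable, and the app, file and image names are made up) that bundles a one-file app, its runtime and supporting libraries into a single portable image:

```python
import subprocess
from pathlib import Path

# Hypothetical one-file app and a minimal Dockerfile that packages the code,
# the Python runtime, and the system libraries it needs into one image.
Path("app.py").write_text('print("hello from inside the container")\n')
Path("Dockerfile").write_text(
    "FROM python:3-slim\n"        # runtime + system tools/libraries
    "COPY app.py /app/app.py\n"   # application code
    'CMD ["python", "/app/app.py"]\n'
)

# Build the image and run it; the container carries the full filesystem it needs.
subprocess.run(["docker", "build", "-t", "hello-container", "."], check=True)
subprocess.run(["docker", "run", "--rm", "hello-container"], check=True)
```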

In the tech sector, paradigm shift is a euphemism for opportunity, and, appropriately, we’ve seen a flood of companies come to market with their flavors of tools and platforms Continue reading “Containers and the Chasm: The State of Container Adoption”

The ‘Cloud-Native’ Landscape

Over the last several years, we’ve seen the emergence of a new application architecture – dubbed “cloud native” – that is highly distributed, elastic and composable, with the container as the modular compute abstraction. With that, a new breed of tools has emerged to help deploy, manage and scale these applications. Cluster management, service discovery, scheduling, etc. – terms that previously were unknown or, at best, reserved for the realm of high-performance computing – are now becoming part of every IT organization’s lexicon. As the pace of innovation continues at breakneck speed, a taxonomy to help understand the elements of this new stack is helpful.
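
To make one of those terms concrete, here is a toy sketch of what a service-discovery registry does (purely illustrative, not any particular vendor’s API; all names are made up): services register where they can be reached, and clients look them up by name instead of hard-coding addresses.

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry: service name -> {address: expiry}."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.entries = {}

    def register(self, name, address):
        # Each instance re-registers periodically to keep its entry alive.
        self.entries.setdefault(name, {})[address] = time.time() + self.ttl

    def lookup(self, name):
        # Return only addresses whose registrations have not expired.
        now = time.time()
        return [addr for addr, exp in self.entries.get(name, {}).items() if exp > now]

registry = ServiceRegistry()
registry.register("payments", "10.0.0.12:8080")
registry.register("payments", "10.0.0.13:8080")
print(registry.lookup("payments"))  # clients ask by name, not by address
```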

The “Cloud-Native” Ecosystem presentation is the consequence of many conversations with developers, CIOs and founders who are playing a critical role in shaping this new application paradigm. It attempts to define the discrete components of the cloud-native stack and calls out the vendors, products and projects that Continue reading “The ‘Cloud-Native’ Landscape”

Containers and the Search for the Killer App

VisiCalc for early PCs. E-mail for the Internet. SMS for mobile. Every major tech platform we’ve seen has had the benefit of a “killer application” that transformed it from “toy” or “cool project” into an indispensable, mainstream product.

Now that we’re in the midst of what looks to be another major platform shift in the datacenter – this time with the layer of abstraction moving from physical infrastructure via the hypervisor to the OS via containerization – talk has centered around Linux containers and whether they represent a paradigm shift in how we build and deploy applications or whether they are simply another instrument in the DevOps toolkit.

The relevant analog for mapping out the fate of containerization seems to be virtualization. Charting VMware’s history provides a hint of how container adoption and ecosystem development may unfold, but it’s far from a perfect corollary.

In 1999, VMware Continue reading “Containers and the Search for the Killer App”

Market-Makers, Surfers and 10x’ers: A Model for Investing in Enterprise IT

Warren Buffett’s right-hand man and Vice Chairman of Berkshire Hathaway, Charlie Munger, credits much of his and the Oracle of Omaha’s success to an adherence to mental models, particularly in their power to guide investment decisions. Munger, in his 1994 commencement address at USC Marshall School of Business, elaborated:

…the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ‘em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.

You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models…

Mental models help investors make heads or tails of fact patterns to problem-solve quickly; something that’s become increasingly important as the velocity of companies formed and funded has accelerated to breakneck speed.

Most models tend to be deductive, Continue reading “Market-Makers, Surfers and 10x’ers: A Model for Investing in Enterprise IT”

DockerCon 2015: Outside the Echo-chamber

DockerCon tore through SF last week and the feeling is that we are at the apex of the hype cycle. Fear not, we at Redpoint are here to (attempt to) distill signal from noise. Here’s a recap of the top story-lines as we see them, along with some thoughts…


You down with OCP…?!

What happened: Docker and CoreOS got on stage, kissed and made up and announced the Open Container Project (‘OCP’). OCP is a non-profit governance structure, formed under the Linux Foundation, for the purpose of creating open industry standards around container formats and runtime. You may remember back in December ’14 CoreOS made headlines by announcing rkt, an implementation of appc, the company’s own container image format, runtime and discovery mechanism, which, in contrast to Docker’s libcontainer, was open, both technologically and in its development methodology. Then in May at CoreOS Fest, CoreOS’s inaugural conference, Continue reading “DockerCon 2015: Outside the Echo-chamber”

Hello from Redpoint


A pause from our regularly scheduled programming to announce that I’m thrilled to have joined Redpoint Ventures as a Principal in their early stage group. The move is a homecoming for me – I was raised in the South Bay and went to college in Berkeley (Go Bears!) before shipping out east for a little over five years – and I couldn’t be more excited to be back!

When I began the conversation with the team at Redpoint, I was already familiar with the firm’s track record – it has backed multiple category-defining companies from A(rista) to Z(uora), with Juniper Networks, Netflix, Pure Storage, Stripe, Twilio and many more in between. But track record in VC is derivative – it’s a byproduct of the people and culture of the firm – and the more I got to know the team and their inner workings, the more I found a firm Continue reading “Hello from Redpoint”

CoreOS Fest Post-Mortem

Last week I attended the inaugural CoreOS Fest. It was a fantastic event which brought together some of the best minds in distributed systems and celebrated the vibrant open source community CoreOS has fostered. Given this somewhat seminal moment, I thought it’d be a good opportunity to share a few observations from the conference and reflect on the state of the industry, so here goes:

The pace of innovation in enterprise IT has never been faster. It’s been just over two years since the initial release of Docker and it’s amazing how quickly an ecosystem has coalesced around Linux containers and, more broadly, distributed systems infrastructure. By any metric – contributors and contributions to open source repos, companies founded and funded, support and partnerships from incumbent infrastructure vendors, etc. – there is now a fully-formed, distributed computing stack and accompanying value chain. This rapid innovation cycle has compressed release cycles.

Continue reading “CoreOS Fest Post-Mortem”

The Case for Microservices in the Enterprise

Since my last post exploring the platform shift happening in today’s datacenter, the question I’ve been asked most often is, “Sure, microservices, distributed architectures and containerization might make sense for the Googles and Facebooks of the world, but what about everyone else who doesn’t operate at Web scale?”

It’s true that the never-before-seen scale requirements thrust upon this generation’s consumer Internet companies necessitated a redesign of applications and their underlying systems, but scale is only part of the explanation. Rather, the wholesale changes seen at the infrastructure and code levels and across the software development lifecycle were born out of the fundamental desire to win, which ultimately comes down to delivering the best software to millions (or billions) of end-users fastest.

Scalability is top-of-mind, but just as important is creating organizational and software development structures that support innovation, agility and resilience. After all, I would argue that Facebook became


Continue reading “The Case for Microservices in the Enterprise”

Warehouse Computing and the Evolution of the Datacenter: A Layman’s Guide

You may not have noticed, but we’re in the midst of another massive platform shift in enterprise computing. We can debate chicken or egg, but I believe this most recent transformation is being driven primarily by requirements placed on modern applications; requirements that are the result of the on-demand, always-on computing paradigm predicated on cloud and mobile. Simply, applications need to be scalable, available and performant enough to reach millions, if not billions, of connected devices and end-users. Infrastructure must mirror these specifications, in kind.

Historically, systems design has ebbed and flowed between periods of aggregation (centralized) and disaggregation (distributed) of compute resources. The most recent evolution, from client/server to virtualized, cloud infrastructure, was driven largely by a desire to contain costs and consolidate IT around standards (x86 instruction set, Windows and Linux), form factors (first blade servers, then VMs) and physical locations (the emergence of sprawling datacenters and giant cloud vendors). Now we’re seeing the pendulum swing back. Why?

A strong first principle is the notion that infrastructure is beholden to the application. Today, many applications are being built as large-scale distributed systems, composed of dozens (or even thousands) of services running across many physical and virtual machines and often across multiple datacenters. In this paradigm, virtualization – which really dealt with the problem of low physical server utilization – doesn’t make much sense. In a highly distributed, service-oriented world, VMs come with too much overhead (read more on this here). Instead of slicing and dicing compute, network and storage, the better solution becomes to aggregate all machines and present them to the application as a pool of programmable resources, with hardware-agnostic software that manages isolation, resource allocation, scheduling, orchestration, etc. In this world, the datacenter becomes one giant warehouse computer controlled by a software brain.
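
As a toy illustration of that “pool of programmable resources” idea (purely conceptual and not any real scheduler’s algorithm; node names and numbers are made up), the software brain’s core job is to place workloads onto whichever machines in the pool have room, rather than binding an app to a specific box:

```python
# Toy cluster scheduler: place each task on the machine with the most free capacity.
machines = {"node-1": 16.0, "node-2": 8.0, "node-3": 4.0}   # free CPU cores per node
tasks = [("web", 2.0), ("api", 4.0), ("batch-job", 6.0), ("cache", 1.0)]

placements = {}
for name, cpu_needed in tasks:
    # Pick the node with the most spare capacity that can still fit this task.
    candidates = {n: free for n, free in machines.items() if free >= cpu_needed}
    if not candidates:
        placements[name] = None  # pool is full; a real scheduler would queue or scale out
        continue
    node = max(candidates, key=candidates.get)
    machines[node] -= cpu_needed
    placements[name] = node

print(placements)   # e.g. {'web': 'node-1', 'api': 'node-1', ...}
print(machines)     # remaining capacity across the pool
```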


However, the fact of the matter is that building, deploying and maintaining distributed applications is a highly technical feat. It requires a rethinking of the way applications treat and interact with other applications, databases, storage and network. Moreover, it requires a new toolkit that is central to solving the coordination and orchestration challenges of running systems that span across multiple machines, datacenters and time zones. To help understand what’s taking place, let’s deconstruct this new stack and, along the way, define some other key terms. Note that this is in no way a static, absolute taxonomy, but rather a simplified way to understand the layers that make up today’s application stack.

Layer 1: Physical Infrastructure – Actual servers, switches, routers and storage arrays that occupy the datacenter. This area was dominated by legacy OEMs (EMC, Cisco, HP, IBM, Dell) who are now giving way to low-cost ‘whitebox’ ODMs.

Vendors/Products:


Layer 2: Virtualized


Continue reading “Warehouse Computing and the Evolution of the Datacenter: A Layman’s Guide”

The Most Important SaaS Metric Nobody Talks About: Time-to-Value (‘TtV’)

In a world where applications are delivered via cloud and distributed across billions of Internet-connected end-points, we’ve seen barriers to entry, adoption and innovation compress by an order of magnitude or two, if not crushed altogether. Compound this with advances in application and data portability, and the implication for technology vendors competing in this global, all-you-can-eat software buffet is that customers’ switching costs are rapidly approaching zero. In this environment it’s all about the best product, with the fastest time-to-value and near-zero TCO. And it’s this second point – time-to-value (TtV) – that I want to dig in on a bit, because it tends to be the one glossed over most often.
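
One way to make TtV measurable (a simple sketch with made-up field names, not a standard definition) is to track, for each account, the time between sign-up and the first moment of real value – whatever activation event you define – and watch the median:

```python
from datetime import datetime
from statistics import median

# Hypothetical per-account timestamps: when each account signed up vs. when it
# first hit your "value" milestone (first deploy, first report, first API call, etc.).
accounts = [
    {"signed_up": "2015-03-01T10:00", "first_value": "2015-03-01T10:40"},
    {"signed_up": "2015-03-02T09:00", "first_value": "2015-03-05T16:00"},
    {"signed_up": "2015-03-03T14:00", "first_value": None},  # never activated
]

def hours_to_value(acct):
    if acct["first_value"] is None:
        return None
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(acct["first_value"], fmt) - datetime.strptime(acct["signed_up"], fmt)
    return delta.total_seconds() / 3600

times = [t for t in map(hours_to_value, accounts) if t is not None]
print("median hours to value:", round(median(times), 1))
print("activation rate:", f"{len(times)}/{len(accounts)}")
```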

I’ll start with an anecdote…

A portfolio company of ours delivers a SaaS platform that competes with legacy, on-prem offerings from large infrastructure software vendors. In its early days the company had fallen into the enterprise sales trap: spending weeks, if not months, with individual customers doing bespoke integration and support work. About a year in, when we finally decided to open up the beta to everyone, sign-ups shot up, but activity in new accounts was effectively nil. What was going on?

Simply, customers didn’t know what to do with the software once it was in their hands. Spending months with large accounts did inform some fundamental product choices, but at the cost of self-service. Our product was feature-bloated, the on-boarding flow was clunky and the integration API was neglected and poorly documented.

In a move that, I believe, ultimately saved the company, we decided to create a dedicated on-boarding automation team within product. Sure enough, in the months that followed, usage spiked and the company was off to the races.

The takeaway is that the highest priority should be given to building software that just works, and that means focusing relentlessly on reducing, or eliminating altogether, the time investment to fully deploy your solution in production. Ideally, you want customers to derive full value from your offering in mere minutes, if not seconds. To do so, treat on-boarding as a wholesale product within your offering and devote engineering resources to it. Find religion about optimizing TtV!

Below is by no means a complete list, but instead a few lessons that I’ve taken away from my experience with our portfolio that many SaaS companies should internalize in their product and go-to-market strategies to help optimize TtV:

Simplicity wins…be feature-complete, not feature-rich: This is a fairly obvious but subtle point that often evades even the most talented product teams: the defining characteristic of a simple (read: good) product is not the abundance of features but rather the relevance of those features to its users. This stands in stark contrast to the old Continue reading “The Most Important SaaS Metric Nobody Talks About: Time-to-Value (‘TtV’)”

The Container Wars have Started and We Should be Paying Close Attention

On Monday, the guys at CoreOS announced Rocket, a command-line tool for working with App Container, the company’s own container image format, runtime and discovery mechanism. It was the first major competitive blow levied against Docker. The news within the news was that Rocket’s format and runtime promise to be completely open, which is in contrast to the approach Docker has taken, having shown consternation around publishing or agreeing on a spec/standard around its container technology.

Docker co-founder and CTO, Solomon Hykes, responded with a Tweetstorm, ending with this tweet:

Docker, Inc. finds itself in the always thorny position of a company balancing its responsibilities as the steward of an open source project and as a profit-making entity. Invariably, as Docker guides the community towards its one version of a universal truth and builds out a fully-featured enterprise management stack, some devs will get left behind – that’s simply the cost of doing business for a company behind an open source project, as opposed to one that’s commercializing a foundation-led project, as Cloudera has done with Hadoop or Red Hat with Linux. Clearly, CoreOS saw an opportunity to pounce, and I’m sure we’ll see more vendors come to market with competing container technologies.

If we’re to chart out how this evolves, the closest analog is the virtualization market. VMware was the clear winner as it was first to successfully commercialize its hypervisor technology, taking ESX – which actually had its roots in open source – and building a full management stack around it that became vSphere. VMware came to dominate the market, with Xen (Citrix), KVM and Microsoft picking up relative scraps.

The hypervisor helped commoditize the OS and physical infrastructure, giving way to the cloud-based architectures we have today. Similarly now, we’re in the midst of another platform shift. Docker offers a higher level of abstraction, taking the OS, virtual machine, physical machine and infrastructure provider, and commoditizing it all. The difference this time around is that in concert with a platform shift (VMs to containers), we’re also seeing an architectural shift in the way applications are built (evolving from monoliths to systems that are distributed, highly available and modular). Martin Fowler provides a fantastic overview of distributed systems and micro-services architecture here.


The thing is, building, running and scaling distributed applications is HARD and requires a wholesale re-thinking of the tools and systems we’ve had in place for the last decade, with underlying container technology serving as the modular component for Continue reading “The Container Wars have Started and We Should be Paying Close Attention”

The Developer-Driven Economy: How the Developer has become the Organization’s Most Influential Power Broker

[Last Tuesday RRE made public our investment in Bowery, a company that we believe is doing to developer environments what Dropbox did to file storage and sync. When I joined RRE I promised to blog about some themes, trends and companies I’m most excited about, so I figured given last week’s announcement, now would be as good a time as ever to get the first one out, so here goes…]

In 2003 Nicholas Carr published a piece in the Harvard Business Review which famously promulgated the idea that IT doesn’t matter. Carr predicted that as we leave a world defined by scarcity of IT resources and enter one where “the core functions of IT – data storage, data processing, and data transport – have become available and affordable to all,” technology will cease being a source of competitive advantage for businesses altogether. Many at the time believed Carr was forecasting the death of IT in the enterprise.

Sure enough, it took maybe 5-7 years to get to Carr’s dystopian reality as cloud, open source and mobile crushed barriers to innovation, distribution and adoption of IT. However, a funny thing has happened: instead of IT ceasing to matter it has become table stakes. With ubiquitous access to infrastructure, software and software development tools in particular, we’re seeing software eat the world.

As technology has shifted from enabler of business process, to enabler of product or service, to the very product or service itself, we’ve seen IT transform from a cost center that is adjunct to core business to a profit center that is its lifeblood.

It was during this transformation that the power dynamic within the workplace flipped from the c-suite to the basement; from the employees clad in Armani suits with tie clips to those wearing hoodies with sandals. Consequently, we’ve seen the developer become the most prized resource in the modern organization, and it is this trend that lends itself to an enormous investment opportunity.

Devs as the Go-to-Market

Developers are your innovators and early adopters; they are the gatekeepers for new technologies and often decide which new tech succeeds and which fails either directly or indirectly (look no further than iOS and Windows – win the developer, win the platform war). Appropriately they have become the most powerful distribution channel for IT in the enterprise.

Devs often become catalysts for social communities which help spread new products and technologies organically, and their API-driven tools create the potential for powerful two-sided platforms. There are network effects to be taken advantage of here.

Moreover, the trend towards DevOps (and now BizDevOps) – characterized by the convergence of software development cycles and IT operations (and all business activities) – has effectively blurred Continue reading “The Developer-Driven Economy: How the Developer has become the Organization’s Most Influential Power Broker”

Back To Where It Started…RRE

This post is slightly overdue, and I can promise my return to blogging as a VC will involve much more interesting subjects than myself, but nonetheless I can’t be more excited to announce that I’ve rejoined RRE as a Senior Associate.

I originally joined RRE as an Analyst back in the Spring of 2010 and have been fortunate enough to work alongside and learn from some of the smartest, most thoughtful and down-to-earth folks in the VC business. RRE is a unique place; a firm that has no ego, truly puts the interest of the entrepreneur first and places an inordinate amount of trust in its junior people to develop their own perspective and learn by doing. As an Analyst, I was incredibly lucky to have had the chance to source and help lead investments in companies such as WhipTail Technologies and Datadog and be afforded the type of responsibility that far outstripped my experience, let alone title. So when Jim and Stu extended the offer to come back to the firm – after a two-year stint getting my MBA – the decision was a no-brainer.

I’ll be spending my time developing and shaping RRE’s thinking around enterprise technologies. I’ve said this before, but it has never been a more exciting yet daunting time to be an early stage enterprise investor. The tech stack is being wholly re-written and infrastructure re-designed to support the massive scale of what soon will be hundreds of billions of connected devices. Mobile and cloud have fundamentally changed the game, crushing barriers to entry, innovation and adoption. The rate of technological change has never been faster – today’s winners can become tomorrow’s losers in the blink of an eye – making venture investing incredibly fun but also incredibly challenging. Personally, I couldn’t be more excited to take on this challenge.

With that said, some themes I’ll be digging in on in the next 12 months (and I’ll follow up with a blog post on each in the coming weeks) include:

  • Intelligent SaaS – As predictive analytics, machine learning and AI become more pervasive, software will move from enabling smarter decisioning by humans to making decisions on its own
  • DevOps 2.0 – If DevOps 1.0 was characterized by convergence of software development and IT operations, DevOps 2.0 will be about making ops virtually invisible thanks to higher levels of automation and abstraction
  • The Post-Hypervisor Datacenter – Software-defined everything + containers + cluster/resource managers (Mesos, CoreOS, etc.)
  • Security – This one is self-explanatory, but in particular looking for deterministic approaches to malware detection, cryptography and mobile and API security solutions

So if you’re a passionate entrepreneur trying to solve a deeply technical Continue reading “Back To Where It Started…RRE”

The Swinging IT Pendulum: Thoughts on Where We Are and Where We’re Going

The technology stack is a dynamic, highly interconnected organism where changes in one part reverberate throughout the entire technology value chain. Historically, IT has ebbed and flowed between centralized vs. distributed design and proprietary, verticalized vs. open, heterogeneous architectures. In the last several years, we’ve seen a rebellion against the tech old guard – those companies which won in the client/server era with proprietary infrastructure sold through rigid perpetual licensing models (see: Microsoft, Oracle, etc.). Cloud, mobile and open source have fundamentally changed the game, disrupting the way technology is developed, adopted and deployed, thereby depressing barriers to entry, innovation and adoption. Consequently, proprietary, verticalized tech stacks have, in recent years, given way to open, flexible, heterogeneous architectures – there is no disputing this phenomenon. However, during the Cloud Services break-out session of the #digHBS Summit, Steven Martin, General Manager of Microsoft Azure, confidently advocated the view that the pendulum is steadily swinging back towards verticalized solutions. I wasn’t surprised by the perspective, given that Microsoft’s past and future depend on a one-throat-to-choke IT model, but it did get me thinking about what the next evolution of IT beyond cloud will be.

The tech sector has always been characterized by discontinuous innovations – that is, innovations which are not built on top of existing standards or infrastructure – that give rise to entirely new markets each supported by unique value chains that standardize and then coalesce around one dominant player. Semiconductors, PCs, relational databases, local area networks (LANs) are all examples of such innovations that spawned the modern day tech giant – in the case of the innovations cited above those corresponding giants would be Intel, Microsoft, Oracle and Cisco, respectively. According to Geoffrey Moore, many of these innovations followed a predictable adoption cycle that informed product, sales and marketing and financing decisions.

Figure 1: Technology Adoption Lifecycle

Winners in this model were afforded tremendous competitive advantages as they were able to erect seemingly insurmountable barriers to entry and enforce punitively high switching costs on their customers and the entire value chain. However, around 2006/2007, the model started to change dramatically due to three prevailing forces: SaaS, mobile and the proliferation of open source software.

These three forces – in conjunction with customers’ growing wariness of closed, proprietary architectures and increased sensitivity to vendor lock-in – have fundamentally reshaped the technology stack. There are now seemingly infinite permutations of applications that can be built from choices in underlying infrastructure, operating systems, databases, application servers, programming languages/frameworks and developer tools, all bound together with middleware and management tools from your vendor du jour (or open source variant). What that has created is an increasingly open, highly heterogeneous and highly complex stack Continue reading “The Swinging IT Pendulum: Thoughts on Where We Are and Where We’re Going”

Evolution of Network Design: Blockchains and the Decentralization of Everything

Since the 1960s we’ve seen two paradigmatic shifts in computing architecture. The first came in the early 80s, from large, centralized mainframe systems to client/server architecture, and the most recent from client/server architecture to the highly distributed, elastic model we know today as cloud. Compute and storage resources are now accessed on demand from billions of end points, and this only promises to increase, perhaps exponentially, within the next decade as the Internet of Everything (IoE) comes online (it is estimated that there are 10 billion physical objects connected to the Internet today; by 2020 that number is expected to exceed 50 billion). However, as the computing stack has become more distributed, elastic and flexible, the network topology those compute and storage resources are connected with hasn’t fundamentally changed since the first packet-switched network – the U.S. Department of Defense-funded ARPANET – was created in 1969.

In the early 80s the TCP/IP protocol suite – the networking model and set of communications protocols used for the Internet and similar networks – was formalized to provide end-to-end connectivity, specifying how data should be formatted, addressed, transmitted, routed and received at the destination. This network functionality was organized into four abstraction layers, which are used to sort all related protocols according to the scope of networking involved.

Figure 1: Internet Protocol Abstraction Layers (the TCP/IP stack)
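
As a small illustration of that layering (not from the original post; standard library only, uses an arbitrary public host and needs outbound network access), an application programs only against the top of the stack via a socket, while the OS and network handle the transport, internet and link layers underneath:

```python
import socket

# The application layer (an HTTP request) rides on the transport layer (TCP),
# which the OS carries over the internet and link layers for us.
with socket.create_connection(("example.com", 80), timeout=10) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = sock.recv(4096)

print(response.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```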

Yet while the underlying gear – and, correspondingly, the performance and capacity of networks – has improved steadily over the years, the actual network topology has remained pretty much identical. Problems have arisen, however, as compute has become more distributed and dynamic. There is a mismatch between compute resources and the underlying network, which is only exacerbated as more and more access points come online, flooding the network.

The industry answer to this problem has universally been the idea of software-defined networking. The concept is relatively straightforward: extract the control plane (the brain) from each individual network switch/router and put it in the cloud as a controller. The result is a software layer that captures all the intelligence needed and then, in turn, manages commoditized hardware, making networks programmable. The network becomes an elastic resource that, along with compute and storage, is provisioned as needed by IT through a single pane of glass (at least that’s the vision of IT today).
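
A deliberately simplified sketch of that idea (purely conceptual; not modeled on OpenFlow or any specific controller API, and all names are made up): a central controller holds the forwarding intelligence and pushes simple rules down to otherwise dumb switches.

```python
# Conceptual SDN sketch: the control plane lives in one place (the controller),
# while switches just apply whatever forwarding rules they are given.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port

    def install_rule(self, destination, out_port):
        self.flow_table[destination] = out_port

    def forward(self, destination):
        return self.flow_table.get(destination, "drop")

class Controller:
    def __init__(self):
        self.switches = {}

    def attach(self, switch):
        self.switches[switch.name] = switch

    def program_path(self, destination, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for switch_name, port in hops:
            self.switches[switch_name].install_rule(destination, port)

controller = Controller()
s1, s2 = Switch("edge-1"), Switch("core-1")
controller.attach(s1)
controller.attach(s2)
controller.program_path("10.0.2.0/24", [("edge-1", "port-3"), ("core-1", "port-7")])
print(s1.forward("10.0.2.0/24"))  # "port-3" – the rule came from the controller, not the switch
```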

Software-defined networking, however, is an incremental innovation that, in effect, puts a software layer on top of existing network topologies. Networks become smarter and more nimble, but I would argue that they are not sufficiently adapted to reflect what will be the computing reality of tomorrow – one characterized by unprecedented scale and distribution. That is why the Bitcoin protocol is so exciting and potentially disruptive.

Bitcoin Protocol – Decentralized Global Ledger
Continue reading “Evolution of Network Design: Blockchains and the Decentralization of Everything”

A Final Word on Facebook / WhatsApp and What it Really Says about Mobile-Social

By now we’re all familiar with the numbers – $19B total consideration ($12B stock, $4B cash, $3B RSUs) implying a price of $42 per user or a staggering $594M per WhatsApp engineer. We’ve heard the rationale – Facebook NEEDED to do this to maintain its share of engagement and activity in the mobile ecosystem. What were they to do? Stand by with their flat-lining Facebook Messenger product as private messaging becomes more valuable and core to the mobile computing experience, or go out and buy the leader in the market? WhatsApp, after all, hosts a rich mobile conversation and sees over 500M photos shared daily – more than Snapchat, Instagram and Facebook.

Ultimately, the most interesting implication of this deal is that it marks an important point in time as the shift towards the mobile computing paradigm accelerates. Only now are we starting to wrap our heads around the differences in designing for mobile vs. native Web. There has been a lot of carnage – A LOT – both by entrepreneurs and investors who have failed to appreciate the scope and scale of just how different our interaction with applications on-the-go is than that same interaction behind our desks.

Whereas native Web apps favor rich, feature-loaded products, mobile tends to favor single-purpose, specialized apps. Designing for the Web, a good app is one that maximizes time-on-site, whereas the opposite is true for mobile: I want to create an app that is incredibly efficient at getting the user exactly what he/she needs – whether that’s posting or viewing a photo, sending a message, or reserving a car ride. We see this with the winners: Instagram, WhatsApp, Uber, etc. The experience is incredibly straightforward – only a few clicks get me the action I seek. The losers (or those struggling now) are the companies who have built beautiful, feature-loaded experiences but are still struggling to identify what they really are to their users, and growth is decelerating in kind (see: Path, Foursquare, Color Labs).

This creates an ecosystem that is highly distributed – what Ben Evans of Andreessen Horowitz refers to as a “systemic plurality of options” – and not dominated by a handful of applications or portals as on the Web. The most profound implication, then, is that the winner-takes-all dynamics of social on the desktop do not appear to apply on mobile, and if there are winner-takes-all dynamics for mobile social, it’s not yet clear what they are.

The reason for this, at least in large part, is because the phone, in and of itself, is the unifying social fabric. The smartphone, featuring our address book, our photos and voice capabilities, is the Continue reading “A Final Word on Facebook / WhatsApp and What it Really Says about Mobile-Social”

Investing in World Eaten by Software

In 10 years, there will only be 2 kinds of companies: those in the technology business, and those no longer in business.

– Aaron Levie, CEO of Box (Twitter, May 2, 2013)

Today is both the most lucrative and challenging time to be a technology investor. We are in the midst of a generational technology shift from the client/server architecture of the last 25 years to a new IT paradigm characterized by openness, flexibility and heterogeneity all delivered as an on-demand service which can be accessed from billions of endpoints, most of which fit in our pockets. This shift has fundamentally made us rethink how applications are developed, delivered, consumed and monetized and has functionally democratized IT, equipping end-users and organizations of all sizes with enterprise-grade technologies that are now remaking their lives and businesses, respectively.

Technological gains of the last several years – powerful distributed compute and storage, enterprise-grade SaaS and pervasiveness of Internet-enabled access points – leave businesses with an enormous opportunity to create tools designed to disrupt established markets and advance new ones. Further, as we have raced up the technological learning curve in the last few decades, today’s engineers have eschewed out-of-the-box proprietary solutions with rigid licensing contracts in favor of flexible, open source and largely heterogeneous architectures with consumption-based pay-as-you-go pricing.

What this all implies is that barriers to entry, innovation, adoption and distribution have been virtually (pun intended) shattered. IT – and every industry that it touches (and will soon consume) – has become a true meritocracy driven by the most basic Darwinian principle: the best (e.g. most aligned with customer needs and expectations) product wins. Older tech giants who won their fortunes in the client/server days – Intel, Microsoft, Oracle, HP and even Cisco – are struggling to find their way in a world where underlying infrastructure is rapidly commoditizing (thanks to software-defined anything) and cloud and mobile are the new normal. Meanwhile, the playing field has never been more level for upstarts competing for share of corporate and consumer wallets. Companies are employing user centric, cloud + mobile-first strategies to achieve unprecedented scale leading to burgeoning riches (see: Dropbox, Snapchat, GitHub, WhatsApp, MongoDB, Box, New Relic and the list goes on and on).

This rapid (and accelerating) rate of change we see in IT creates enormous opportunity to achieve out-sized returns for investors but also implies that risk has never been greater. In a world of fickle end-user preferences and requirements, failing to innovate is the quickest way to lose; today’s disruptor is tomorrow’s disruptee.

Because of this phenomenon, I’ve heard multiple people describe this as the single best and single worst time to be an IT investor – particularly Continue reading “Investing in World Eaten by Software”

The Death of the Tech Giant

How the rise of open, flexible, heterogeneous architectures has reshaped the technology stack and distorted winner-take-all dynamics in IT

Traditionally, tech has been characterized by discontinuous innovation – that is, innovation which is not built on top of existing standards or infrastructure – giving rise to entirely new markets each supported by unique value chains that standardize and then coalesce around one dominant player. Semiconductors, PCs, relational databases, local area networks (LANs) are all examples of such innovations that spawned what many refer to as the modern-day tech giant – in the case of the innovations cited above those corresponding giants would be Intel, Microsoft, Oracle and Cisco, respectively. These companies, whose products became the standard around which entire new supply chains were formed, fundamentally built and shaped the technology stack and as a result were able to establish protective moats around their businesses. Accordingly, these companies were afforded tremendous competitive advantages as they were able to erect seemingly insurmountable barriers to entry and enforce punitively high switching costs on the entire technology value chain.

However today, these companies are facing unique sets of challenges which, in turn, are curtailing growth and compressing margins. Indeed, each of the four companies cited above is trading at or near historic lows (on a price/earnings basis). There are myriad secular reasons that help explain why, arguably, the four most dominant tech companies in the last 2-3 decades are struggling, however, I want to put forth a broader, macro-rooted explanation: simply that, as we continue to move closer to an IT model that is characterized by flexible, open, highly heterogeneous architectures, the tech paradigm shifts away from a winner-take-all dynamic to one such that no single player exerts a disproportionate amount of force on a particular market.

Before examining what’s different today, it’s helpful to understand historically how these tech giants came to dominance. Traditionally, a discontinuous innovation would spur a period of hyper-growth that coincided with mass market adoption of the new technology. During this time, several companies would come to market with competing offerings, yet in an effort to scale rapidly, market stakeholders generally would standardize around a product from a single vendor, building compatible systems and getting a whole new set of product and service providers up to speed to build a new value chain. This act of standardization catapulted a single company into a position of overwhelmingly dominant competitive advantage, as seen with Intel’s x86 chip architecture, Microsoft’s Windows operating system, the Oracle Database and Cisco’s TCP/IP network routers.

So, what’s changed recently? I argue that there isn’t one principal catalyst for this shift, but rather many small evolutions in the way technology is developed, procured and deployed that have distorted Continue reading “The Death of the Tech Giant”

What I’m Excited About for 2012 (and Beyond)

In keeping with compulsory end-of-year traditions, I wanted to share what I’m most excited about as an early-stage technology investor heading into 2012 and beyond. If 2011 was all about VCs losing their heads in the consumer Internet craze, I think 2012 will be about the re-emergence of enterprise investing which has been a black sheep for many venture capitalists since around 2008. (Perhaps nothing makes it clearer that enterprise investing is on its way back than this Pulitzer-worthy piece of journalism from TechCrunch: http://tcrn.ch/tx0S6b).

So, without further ado, here are several themes that I’ll be spending my time exploring in the coming weeks, months and years:

Democratization of Big Data
I view the democratization of Big Data as two things: 1) giving any organization – from enterprise to startup – a cost-effective way to harness the power of the data it generates and 2) empowering any user within the organization with tools that make it easier for him or her to glean insights from that data.

The first statement reflects the point that startups today generate petabytes of both structured and unstructured data but do not have the financial and/or technical wherewithal to deploy multi-million dollar Netezza, Vertica, Greenplum, Teradata, etc. appliances and data warehouses. Hadoop offers some benefits (e.g. it runs on commodity hardware) but it’s still costly and complex to deploy and manage. Amazon Elastic MapReduce offers access to Hadoop clusters as-a-Service, but outside of that, smaller companies today don’t have affordable, out-of-the-box or on-demand solutions to process, analyze and then gain insights from the data they generate. In 2012 this will start to change.
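
For readers unfamiliar with what Hadoop-style processing actually does, here is a minimal, in-process sketch of the MapReduce programming model (a toy word count, not Hadoop’s API or anything resembling production scale) – the point being that even this “simple” model takes real engineering effort to run across a cluster:

```python
from collections import defaultdict

documents = ["the cloud ate the datacenter", "the data ate the warehouse"]

# Map: emit (key, value) pairs from each input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group all values by key (Hadoop does this across machines).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine the values for each key into a final result.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)  # {'the': 4, 'cloud': 1, 'ate': 2, ...}
```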

The second point reflects that for an organization to make sense of their data they often are forced to make at least two or three specialized, expensive hires: a Java Engineer to set up the Hadoop infrastructure, a Data Engineer to write the data processing algorithms and then a Data Scientist to generate insights from the processed data. This is too much overhead. That’s why I’m excited about products like Platfora that unlock Hadoop and enable Business Analysts with little or no technical expertise to derive insight from massive sets of data.

Cloud (Security of)
“Cloud” has been a buzzword for more than five years now, but really only in the last two have organizations started to realize the immense potential from this paradigmatic shift in how compute resources are structured and utilized. What’s slowed adoption, however, has been concern over security, compliance and governance of data in the cloud. Incumbent network security and security software vendors have not innovated adequately to meet the demands of this new architecture, but in the last year or two there seems Continue reading “What I’m Excited About for 2012 (and Beyond)”
