This post is by Howard Yu from HBR.org
Last week, machine learning took a big leap forward when Google's AlphaGo, a machine algorithm, beat the world champion, Lee Sedol, at the game of Go. An ancient Chinese board game that dates back nearly 3,000 years, Go is played on a 19-by-19 grid, with each player trying to capture the opponent's territory. Unlike Western chess, which typically runs around 40 turns per game, Go can go up to 200. The number of possible outcomes quickly compounds to a bewildering 10^761, more than the total number of atoms in the entire observable universe. Many experts thought it would take at least another 10 years before a machine could beat a human at Go.
What's most remarkable is that AlphaGo turns out to be a machine that can improve its performance every day, without the direct supervision of a human programmer. That's like an aircraft that can fly faster and faster without the help of an engineer. How can that be possible?
The past, present, and future of machine learning
Structured data. When machine learning first took off, it was used to predict how we click, buy, lie, or die. Machines improved the way companies email, call, offer a discount, recommend a product, show an ad, inspect flaws, and approve loans. Under the hood of machine learning is statistical data mining that uncovers previously unknown patterns and recommends real-time actions. The downside to this approach is that it is context dependent. This is why most algorithms were built for a single purpose, like Deep Blue, which beat former chess grandmaster Garry Kasparov, but wasn’t useful for anything else. For these first-generation machines, learning was made possible only through constant monitoring by computer scientists and statisticians. Data had to be labeled, and goals set. The same program design couldn’t be used for different problems, and the algorithm couldn’t understand unstructured data expressed in natural human language.
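The single-purpose, supervised pattern described above can be illustrated with a toy sketch: hand-labeled, structured records and a classifier that does nothing but find the most similar past example. The data and feature meanings below are invented for illustration; real first-generation systems were far larger but followed the same shape: label the data, set the goal, predict.

```python
# A minimal sketch of first-generation, single-purpose machine learning:
# a 1-nearest-neighbor classifier over hand-labeled, structured records.
# (The features and labels here are hypothetical, purely for illustration.)

def nearest_neighbor(labeled, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda pair: sq_dist(pair[0], query))[1]

# Labeled examples: (features, label), e.g. (past purchases, site visits) -> will buy?
training = [
    ((1.0, 2.0), "no"),
    ((8.0, 9.0), "yes"),
    ((7.5, 8.0), "yes"),
    ((0.5, 1.0), "no"),
]

print(nearest_neighbor(training, (8.2, 8.8)))  # a heavy user -> "yes"
```

Note how context-dependent this is: the model is useless for any task other than the one its labels encode, which is exactly the limitation the paragraph above describes.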
Natural languages. When IBM Watson beat former champions Ken Jennings and Brad Rutter in the game show Jeopardy! in February 2011, it became clear that machine learning could go beyond this single-minded focus and handle unstructured, ambiguous data. In addition to factual knowledge on a wide variety of topics, competing on Jeopardy! requires the ability to understand subtle meaning, irony, riddles, slang, metaphors, jokes, puns, and other language complexities. Meaning is dependent on what has been said before, the topic itself, and how it is being discussed. At the end of the two-day Jeopardy! tournament, Watson amassed $77,147 in prize money, more than three times that of its human opponents. Jennings, who came in second, later said, “Just as factory jobs were eliminated in the 20th century by new assembly line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines.”
Currently, rather than replacing experts, Watson enhances their work. For example, the algorithm provides research and clinical recommendations to oncologists, who only need to describe a patient’s symptoms to Watson in plainspoken English, via an iPad application. Even though it doesn’t rely on encoded rules, IBM Watson requires close monitoring by domain experts to provide data and evaluate its performance. Before Watson went live for oncologists, it had been manually fed 25,000 test-case scenarios; 1,500 real-life cases; 605,000 pieces of medical evidence; and 2 million pages of text. Nurses had spent more than 14,700 hours meticulously training the algorithm. All this took time, money, and dedication.
Deep learning. Before AlphaGo took on a human Go player, researchers at Google's DeepMind had developed similar learning algorithms to play video games: Space Invaders, Breakout, Pong, and others. Like them, AlphaGo was programmed to seek positive rewards in the form of scores and to continually improve its system, playing millions of games against tweaked versions of itself. The video-game algorithm mastered each game by trial and error, pressing different buttons randomly at first, then figuring out an appropriate strategy and applying it without making mistakes. These systems achieve this because they are based on deep neural networks, networks of hardware and software that mimic the web of neurons in the human brain. The idea is not new; it has been discussed among computer scientists for more than 20 years. But advances in computing power have made deep learning practical, and AlphaGo was the first to achieve such a stunning mimicry of intuitive thinking.
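The reward-seeking trial-and-error loop described above can be sketched in miniature. The real systems use deep neural networks over game pixels; the toy below is a tabular Q-learning sketch on a made-up five-state environment (all states, actions, and parameter values are invented for illustration). The agent starts by acting randomly, then gradually exploits what the reward signal has taught it, with no human labeling any move as good or bad.

```python
import random

# A toy sketch of reward-driven trial and error, in the spirit of deep
# reinforcement learning (the real systems replace this lookup table
# with a deep neural network; only the learning loop is illustrated).
# Hypothetical environment: states 0..4 on a line, actions -1/+1,
# reward 1.0 for reaching state 4.

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Press "buttons" randomly at first (exploration),
            # exploit learned values more and more as they firm up.
            if rng.random() < epsilon:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(4, max(0, s + a))          # walls at both ends
            r = 1.0 if s2 == 4 else 0.0         # score only at the goal
            # Update the value estimate toward reward plus best future value.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in (-1, 1)) - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy moves right from every state.
policy = {s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(4)}
print(policy)  # -> {0: 1, 1: 1, 2: 1, 3: 1}
```

No programmer ever tells the agent which direction is correct; the strategy emerges from scores alone, which is the property that lets such systems improve "without the direct supervision of a human programmer."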
The impact on big companies
AlphaGo proves that the rise of machines capable of learning with minimal supervision from human experts and programmers is inevitable. And as IBM Watson has shown, machines will also absorb large amounts of information and data in any format, structured or not, across a wide array of sources. The cost of implementation will continue to fall. The coordination of business transactions within and outside an organization will speed up, and in the process, eliminate organizational friction and facilitate market collaboration.
For these reasons, big companies with the conventional advantage of being vertically integrated will be the first to go. Traditional propositions like “one-stop shop” or “supply chain optimization” will become commonplace, easily achievable by small players or new entrants in a number of industries.
Consider Electronic Data Interchange (EDI) and other large-scale inter-organizational systems that enable real-time communication with suppliers, customers, and logistics specialists. Systems made by SAP and Oracle are bulky, expensive, difficult to use, and hard to integrate. Historically, only big firms such as Walmart or Best Buy have had enough sway and bargaining power to force their suppliers to adopt them. Once put in place, these systems still require an army of specialists to constantly monitor, tweak, and recommend managerial actions, as well as cascade them throughout an organization.
In contrast, the team that developed AlphaGo had fewer than 50 people. The program itself is relatively lightweight, requires little human intervention once set up, and is deployable across different problem domains.
It is easy to imagine a world where self-taught algorithms will play a much bigger role in coordinating economic transactions; AlphaGo simply shows us what is possible in the near future. With instantaneous adjustment, automatic optimization, and continuous improvement all quietly managed by unsupervised algorithms, the redundancy of production facilities and wastage in the supply chain should become headaches of the past. Freed from the pressure to vertically integrate and with far fewer resources needed for organizational coordination, smaller players will be able to specialize in best-in-class services and deliver extremely customized solutions in real time when specific demands arise.
The questions looming for large, nontech companies are:

1. What are the core competencies of my organization when size no longer matters?
2. How much of my organization's managerial expertise is entirely dedicated to market coordination?
3. If new competitors replicate these capabilities by replacing human experts with machine algorithms, what is my cost structure compared to theirs?
4. Going forward, what new offering can I extend if product distribution is democratized?
5. Can I partner with new players to recombine my current capabilities to enter new markets?
Most exciting to me is the even larger prospect of embedding AlphaGo-like machine learning into the backbone of the global economy. When algorithms that never stop learning link loosely related companies, NGOs, and government agencies, they may enable the emergence of new ecosystems that tackle the most difficult societal problems — those currently fumbled by the complexity and fragmentation of players in the field — like energy, transportation, health care, and education.