Thinking About AI: Part 2 – Structural Risks
This post is from Continuations by Albert Wenger.
Yesterday I wrote a post on where we are with artificial intelligence by providing some history and foundational ideas around neural network size. Today I want to start in on risks from artificial intelligence. These fall broadly into two categories: existential and structural. Existential risk is about AI wiping out most or all of humanity. Structural risk is about AI aggravating existing problems, such as wealth and power inequality in the world. Today’s post is about structural risks.
Structural risks of AI have been with us for quite some time. A great example of these is the YouTube recommendation algorithm. The algorithm, as far as we know, optimizes for engagement because YouTube's primary monetization is ads. This means the algorithm is more likely to surface videos that have an emotional hook than ones that require the viewer to think. It will also pick content that reinforces a viewer's existing point of view, instead of surfacing opposing views. And finally it will tend to recommend videos that have already demonstrated engagement over those that have not, giving rise to a “rich get richer” effect in influence.
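The “rich get richer” dynamic can be sketched as a toy simulation. This is purely illustrative and not YouTube's actual system: the ranker here scores videos only by demonstrated engagement (view counts), so an early winner keeps getting recommended regardless of whether it is the best video.

```python
import random

random.seed(42)

# Each video has a fixed "appeal" (how likely a viewer is to engage with it).
# All videos start with one view.
videos = [{"id": i, "appeal": random.random(), "views": 1} for i in range(10)]

def recommend(catalog):
    # Rank purely by past engagement: the most-viewed video is recommended.
    return max(catalog, key=lambda v: v["views"])

for _ in range(1000):
    pick = recommend(videos)
    if random.random() < pick["appeal"]:
        # The viewer engages with the recommended video.
        pick["views"] += 1
    else:
        # Otherwise the viewer finds some other video on their own.
        random.choice(videos)["views"] += 1

counts = sorted(v["views"] for v in videos)
# The single most-recommended video ends up with more views than all
# the others combined, even though other videos may have higher appeal.
```

Because the ranker feeds views back into its own scoring, whichever video happens to lead early compounds its advantage; that feedback loop, not intrinsic quality, is what concentrates influence.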
Given the current pace of progress it may look at first like these structural risks will simply explode: start using models everywhere and you wind up with bias risk, “rich get richer” risk, wrong objective function risk, and so on everywhere. This is a completely legitimate concern and I don’t want to dismiss it.
On the other hand there are also new opportunities that come from potentially giving broad access to (Read more...)