Uncertainty Wednesday: Fat Tails

So last Uncertainty Wednesday we encountered a random variable that does not have an expected value. If you read that post, you might ask: was this just an artificially constructed example, or do random variables like that actually occur? Well, the example I gave was an extreme form of a power law, a type of distribution increasingly found in the economy as we transition to a digital world. Due to network effects, the winning company in a space is many times the size of the runner-up, with a long tail of smaller competitors. The distribution of views on a site such as Youtube similarly follows a power law. So, increasingly, does the wealth distribution.

Here is another example of a distribution that at first glance looks like it ought to have an expected value:

[Image: plot of the distribution]

Just eyeballing this, it would seem that the expected value is 0. But

[Image]

well, wrong. In fact, this distribution, known as the Cauchy Distribution, does not have an expected value (it does not have a variance either!).
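To get a feel for what "no expected value" means in practice, here is a small Python sketch (my own illustration, not from the original post) that draws standard Cauchy samples via inverse-CDF sampling: if U is uniform on (0, 1), then tan(π(U − 1/2)) follows a standard Cauchy distribution.

```python
import math
import random

def cauchy_sample(rng):
    """Draw one standard Cauchy sample via inverse-CDF (quantile) sampling:
    if U ~ Uniform(0,1), then tan(pi * (U - 1/2)) is standard Cauchy."""
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(42)  # fixed seed so the run is reproducible
samples = sorted(cauchy_sample(rng) for _ in range(100_000))

# The sample median is a stable estimator of the center of symmetry (near 0)...
median = samples[len(samples) // 2]

# ...but the sample mean is dominated by a handful of extreme draws
# and does not settle down no matter how many samples we take.
mean = sum(samples) / len(samples)
print(f"median = {median:.4f}, mean = {mean:.4f}")
```

Rerun this with different seeds and the median barely moves, while the mean jumps around wildly; that instability is the practical signature of an undefined expected value.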

Now you might have noticed that this looks a lot like the Normal Distribution, which we encountered earlier. That one had a well-defined expected value and variance, so what gives? Well, consider the following graph, which compares the two distributions:

[Image: graph comparing the Normal and Cauchy Distributions]

You can see that the Normal Distribution has more probability concentrated right around 0 and then declines very rapidly. The Cauchy Distribution, by contrast, declines much less rapidly in the tails. It is an example of a so-called fat-tailed distribution.
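The difference in tail behavior is easy to quantify, since both distributions have closed-form tail probabilities. The sketch below (my own illustration, with arbitrarily chosen cutoffs) compares P(X > x) for the standard normal, via the complementary error function, and for the standard Cauchy, whose CDF is 1/2 + arctan(x)/π:

```python
import math

def normal_tail(x):
    # P(X > x) for the standard normal, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    # P(X > x) for the standard Cauchy; its CDF is 1/2 + arctan(x)/pi
    return 0.5 - math.atan(x) / math.pi

for x in (1, 3, 10):
    print(f"x={x:2d}  normal tail={normal_tail(x):.2e}  cauchy tail={cauchy_tail(x):.2e}")
```

At x = 3 the normal tail probability is about 0.0013 while the Cauchy tail is still about 0.10; by x = 10 the Cauchy tail exceeds the normal one by many orders of magnitude. That is what "fat tails" means concretely.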

In the Cauchy Distribution if we tried to form the expected value for outcomes above 0 the infinite sum goes to positive infinity and below 0 it goes to negative infinity. The two do not offset each other, instead the sum is not defined. So this is what both last Uncertainty Wednesday’s example and today’s example have in common: extreme events have sufficiently high probability that the expected value is not defined. Next Wednesday we will see some practical implications of this for observed sample means and what we can learn from them.
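In continuous terms (the post speaks informally of an infinite sum), this can be made concrete with the standard Cauchy density f(x) = 1/(π(1 + x²)). The contribution of the positive outcomes to the expected value is

```latex
\int_0^\infty x \cdot \frac{1}{\pi (1 + x^2)} \, dx
  = \left[ \frac{\ln(1 + x^2)}{2\pi} \right]_0^\infty
  = \infty,
```

and by symmetry the corresponding integral over the negative outcomes is −∞. Since ∞ − ∞ has no meaning, the expected value is undefined.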