Expectation is such a ubiquitous construct in probability that it’s worth looking at the various situations in which it appears. It’s one of those constructs that was cheap to invent but pays off time and again.
Consider a random variable $X$ and ask the following question: what is

$$\mathbb{P}(X \geq a)?$$

We can already write this as an expectation,

$$\mathbb{P}(X \geq a) = \mathbb{E}\left[\mathbf{1}_{X \geq a}\right],$$

where $\mathbf{1}_{X \geq a}$ is the indicator function. If $a > 0$, we can make the following substitution:

$$\mathbb{E}\left[\mathbf{1}_{X \geq a}\right] \leq \mathbb{E}\left[\frac{X}{a}\,\mathbf{1}_{X \geq a}\right].$$

By dividing by $a$, the random variable $X/a$ takes on values of at least $1$ whenever $X \geq a$, and thus we end up with the above inequality. Further, if $X$ is a non-negative random variable we can continue:

$$\mathbb{E}\left[\frac{X}{a}\,\mathbf{1}_{X \geq a}\right] \leq \mathbb{E}\left[\frac{X}{a}\right] = \frac{\mathbb{E}[X]}{a}.$$

This gives us one version of Chebyshev’s inequality:

$$\mathbb{P}(X \geq a) \leq \frac{\mathbb{E}[X]}{a}.$$
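For instance, for a single roll of a fair six-sided die, $\mathbb{P}(X \geq 4) = 1/2$, while the bound gives $\mathbb{E}[X]/4 = 7/8$, so the inequality holds but is far from tight.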
### Example
As an example, I have computed the Chebyshev upper bounds on the probabilities of a die taking on a value greater than or equal to $a$, for two different dice: a fair die and a die weighted towards the low values.
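The session below loads Stats.hs, which isn’t shown in the post. Here is a minimal sketch of what `e` and `cumul` might look like, consistent with the output below (my reconstruction, assuming `e` is the expectation of a list of values under a distribution and `cumul` is the running cumulative sum of the probabilities):

```haskell
module Stats where

-- Expectation of the outcomes `xs` under the distribution `dist`,
-- with the two lists lined up positionally: sum_i dist_i * xs_i.
e :: Num a => [a] -> [a] -> a
e dist xs = sum (zipWith (*) dist xs)

-- Cumulative distribution as running sums of the probabilities,
-- starting from 0; for a die, `cumul dist !! k` is P(X <= k).
cumul :: Num a => [a] -> [a]
cumul = scanl (+) 0
```

With these definitions, `map (1-) (cumul fair_die)` gives the exact tail probabilities $\mathbb{P}(X \geq a)$ for $a = 1, \ldots, 7$, and `map (e fair_die var /) var` gives the corresponding bounds $\mathbb{E}[X]/a$ for $a = 1, \ldots, 6$.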
ghci> :l Stats.hs
ghci> let fair_die = replicate 6 (1/6 :: Rational)
ghci> let die2 = map (/21) [6,5..1 :: Rational]
ghci> let var = [1..6]
ghci> let error dist xs ys = e dist (zipWith (\x y -> (x-y)^2) xs ys)
ghci>
ghci> print $ map (1-) (cumul fair_die)
[1 % 1,5 % 6,2 % 3,1 % 2,1 % 3,1 % 6,0 % 1]
ghci> print $ map (e fair_die var /) var
[7 % 2,7 % 4,7 % 6,7 % 8,7 % 10,7 % 12]
ghci>
ghci> let error_fair = error fair_die (map (1-) (cumul fair_die)) $ map (e fair_die var /) var
ghci> let error_die2 = error die2 (map (1-) (cumul die2)) $ map (e die2 var /) var
ghci> error_fair > error_die2
True
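To see how loose the bound is pointwise, one could also subtract the exact tail probabilities from the bounds for the fair die (an extra step that is not part of the original session):

ghci> zipWith (-) (map (e fair_die var /) var) (map (1-) (cumul fair_die))

Under the Stats.hs sketch above this should come out to `[5 % 2,11 % 12,1 % 2,3 % 8,11 % 30,5 % 12]`, i.e. the gap shrinks from $5/2$ at $a = 1$ to under $1/2$ for $a \geq 4$.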
As you can see, the upper bound is pretty poor, though it gets closer to the true probability as we move towards the tail. I’ll come back to this and a couple of other versions of Chebyshev’s inequality at a later time.