## How are mean and variance related? (2/365)

I’ve recently been working through the problems in the book Probability by A. N. Shiryayev. I’ll probably end up posting some answers to interesting exercises here as I go through them. Regrettably, although I did take many math courses at university, I never took one on probability.

There is a related gripe to be had here, which I may explore in a later post, but the gist of it is that when it comes to machine learning research, even chapter 2 of a book on the foundations of probability seems inapplicable. The major technical aspects of machine learning seem to require only the tools of optimization theory and not much else. Nevertheless, given that probability is our best understanding of uncertainty, it's worth studying in some depth, and working through this book is my attempt.

The most basic way to characterize uncertainty in a set of points $x_1, \dots, x_n \in \mathbb{R}$ observed from some system is with a single point $\mu \in \mathbb{R}$. You then quantify how well $\mu$ summarizes the $x_i$ by stating its variance against them. There is more than one way to define this variance, but a common choice is

$\displaystyle \frac{1}{N}\sum_{i=1}^N (x_i - \mu)^2$
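As a quick numeric sketch (the sample values here are made up for illustration), this is just:

```python
import numpy as np

# Hypothetical sample of observed points.
x = np.array([1.0, 2.0, 4.0, 7.0])

mu = x.mean()                      # candidate summary point
variance = np.mean((x - mu) ** 2)  # (1/N) * sum_i (x_i - mu)^2

print(mu, variance)  # 3.5 5.25
```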

More generally, if $X$ is a set of points and $f : X \rightarrow \mathbb{R}$ is a density function (i.e. weights such that the sum over $X$ is 1), we write the variance as the weighted sum, in integral notation,

$\displaystyle \int_{x \in X} (x - \mu)^2 f(x)$

Why is this a good or interesting way to characterize a summary point $\mu \in \mathbb{R}$? Now that we have a way to score a candidate point, we can ask: what is the best such point? It should be the one that minimizes the variance

$\displaystyle \underset{\mu}{\arg \min} \int_{x \in X} (x - \mu)^2 f(x)$

Now optimize it by taking the derivative with respect to $\mu$ and setting it to zero

\begin{aligned} 0 &= \int_{x \in X} 2(x - \mu) f(x) \\ 0 &= 2\int_{x \in X} xf(x) - 2\mu \int_{x \in X} f(x) \\ \mu &= \int_{x \in X} x f(x) \quad \text{since } \int_{x \in X} f(x) = 1 \end{aligned}
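We can sanity-check this numerically. In the sketch below the points and density weights are invented for illustration: the weighted mean is computed directly, and a brute-force scan over a grid confirms no other point achieves a smaller weighted squared deviation.

```python
import numpy as np

# Hypothetical finite X with density weights f summing to 1.
x = np.array([0.0, 1.0, 2.0, 5.0])
f = np.array([0.1, 0.4, 0.3, 0.2])

def loss(mu):
    """Weighted squared deviation: sum over X of (x - mu)^2 f(x)."""
    return np.sum((x - mu) ** 2 * f)

mu_star = np.sum(x * f)  # the weighted mean

# The loss is never smaller anywhere else on a grid around mu_star
# (the small tolerance absorbs floating-point noise).
grid = np.linspace(mu_star - 3.0, mu_star + 3.0, 1001)
print(mu_star, all(loss(mu_star) <= loss(m) + 1e-12 for m in grid))
```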

We see that the special point $\mu$ has a clean form: it is nothing more than the weighted mean we all know, and that is the relationship between mean and variance. If you try a different definition of variance, say, raising the difference to the power of $4$ instead of $2$, you will not end up with a result as simple as this. You might instead consider taking the absolute value of the difference, but I will come to that in the next post.
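To see that the 4th-power criterion really does pick out a different point, here is a brute-force check on a made-up, deliberately skewed sample; the grid-search minimizer is only an approximation, but it is visibly far from the mean.

```python
import numpy as np

# A deliberately skewed, illustrative sample.
x = np.array([0.0, 0.0, 1.0, 10.0])

# Brute-force the minimizer of (1/N) * sum_i (x_i - m)^4 over a fine grid.
grid = np.linspace(x.min(), x.max(), 100001)
losses = np.mean((x[:, None] - grid[None, :]) ** 4, axis=0)
m4 = grid[losses.argmin()]

print(x.mean(), m4)  # 2.75 vs roughly 4.26 -- the two minimizers differ
```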

So, let me write what we have seen in the notation commonly used in probability. The variance we defined is written $\mathbf{E}_f(x - \mu)^2$ and the mean $\mathbf{E}_f x$, and we did the following optimization

\begin{aligned} &\underset{\mu}{\arg \min}\ \mathbf{E}_f(x-\mu)^2 \\ 0 &= \mathbf{E}_f\, 2(x - \mu) \quad \text{setting the derivative to zero}\\ 0 &= 2\mathbf{E}_f x - 2\mu \quad \text{by linearity of } \mathbf{E}\\ \mu &= \mathbf{E}_f x \end{aligned}
