## Example 8-13

Consider two probability mass functions. The first:

| \(x\) | 3 | 4 | 5 |
|---|---|---|---|
| \(f(x)\) | 0.3 | 0.4 | 0.3 |

And, the second:

| \(y\) | 1 | 2 | 6 | 8 |
|---|---|---|---|---|
| \(f(y)\) | 0.4 | 0.1 | 0.3 | 0.2 |

It is a straightforward calculation to show that the mean of \(X\) and the mean of \(Y\) are the same:

\(\mu_X=E(X) = 3(0.3)+4(0.4)+5(0.3)=4\)

\(\mu_Y=E(Y)=1(0.4)+2(0.1)+6(0.3)+8(0.2)=4\)
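The two expected-value calculations are easy to verify programmatically. Here is a minimal sketch in Python, storing each p.m.f. as a dictionary that maps a support value to its probability (the names `pmf_x`, `pmf_y`, and `mean` are just illustrative choices):

```python
# Each p.m.f. maps a support value x to its probability f(x).
pmf_x = {3: 0.3, 4: 0.4, 5: 0.3}
pmf_y = {1: 0.4, 2: 0.1, 6: 0.3, 8: 0.2}

def mean(pmf):
    """E(X) = sum of x * f(x) over the support S."""
    return sum(x * p for x, p in pmf.items())

print(round(mean(pmf_x), 6))  # → 4.0
print(round(mean(pmf_y), 6))  # → 4.0
```

Both means come out to 4, matching the hand calculations above.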

Let's draw a picture that illustrates the two p.m.f.s and their means.

Again, the pictures illustrate (at least) two things:

- The means of \(X\) and \(Y\) are located at the fulcrums about which their axes don't tilt ("a balanced seesaw").
- The second p.m.f. exhibits greater variability than the first p.m.f.

That second point suggests that the means of \(X\) and \(Y\) are not sufficient for summarizing their probability distributions. Hence, the following definition!

**Definition.** When \(u(X)=(X-\mu)^2\), the expectation of \(u(X)\):

\(E[u(X)]=E[(X-\mu)^2]=\sum\limits_{x\in S} (x-\mu)^2 f(x)\)

is called the **variance of \(X\)**, and is denoted \(\text{Var}(X)\) or \(\sigma^2\) ("sigma-squared"). The variance of \(X\) can also be called the **second moment of \(X\) about the mean \(\mu\)**.

The positive square root of the variance is called the **standard deviation of \(X\)**, and is denoted \(\sigma\) ("sigma"). That is:

\(\sigma=\sqrt{Var(X)}=\sqrt{\sigma^2}\)

Although most students understand that \(\mu=E(X)\) is, in some sense, a measure of the middle of the distribution of \(X\), it is much more difficult to get a feeling for the meaning of the variance and the standard deviation. The next example (hopefully) illustrates how the variance and standard deviation quantify the spread or dispersion of the values in the support \(S\).

## Example 8-14

Let's return to the probability mass functions of the previous example. The first:

| \(x\) | 3 | 4 | 5 |
|---|---|---|---|
| \(f(x)\) | 0.3 | 0.4 | 0.3 |

And, the second:

| \(y\) | 1 | 2 | 6 | 8 |
|---|---|---|---|---|
| \(f(y)\) | 0.4 | 0.1 | 0.3 | 0.2 |

What are the variance and standard deviation of \(X\)? How do they compare to the variance and standard deviation of \(Y\)?

#### Solution

The variance of \(X\) is calculated as:

\(\sigma^2_X=E[(X-\mu)^2]=(3-4)^2(0.3)+(4-4)^2(0.4)+(5-4)^2(0.3)=0.6\)

And, therefore, the standard deviation of \(X\) is:

\(\sigma_X=\sqrt{0.6}=0.77\)

Now, the variance of \(Y\) is calculated as:

\(\sigma_Y^2=E[(Y-\mu)^2]=(1-4)^2(0.4)+(2-4)^2(0.1)+(6-4)^2(0.3)+(8-4)^2(0.2)=8.4\)

And, therefore, the standard deviation of \(Y\) is:

\(\sigma_Y=\sqrt{8.4}=2.9\)

As you can see, the expected variation in the random variable \(Y\), as quantified by its variance and standard deviation, is much larger than the expected variation in the random variable \(X\). Given the p.m.f.s of the two random variables, this result should not be surprising.
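The definitional formula translates directly into code. This sketch computes both variances and standard deviations from \(E[(X-\mu)^2]\) (the dictionary representation and function names are illustrative):

```python
import math

# Each p.m.f. maps a support value to its probability.
pmf_x = {3: 0.3, 4: 0.4, 5: 0.3}
pmf_y = {1: 0.4, 2: 0.1, 6: 0.3, 8: 0.2}

def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def variance(pmf):
    """Var(X) = E[(X - mu)^2], computed directly from the definition."""
    mu = mean(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

print(round(variance(pmf_x), 6))             # → 0.6
print(round(math.sqrt(variance(pmf_x)), 2))  # → 0.77
print(round(variance(pmf_y), 6))             # → 8.4
print(round(math.sqrt(variance(pmf_y)), 2))  # → 2.9
```

The output reproduces the hand calculations: \(Y\) has a far larger variance than \(X\), as its more spread-out p.m.f. suggests.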

As you might have noticed, the formula for the variance of a discrete random variable can be quite cumbersome to use. Fortunately, there is a slightly easier-to-work-with alternative formula.

An easier way to calculate the **variance** of a random variable \(X\) is:

\(\sigma^2=Var(X)=E(X^2)-\mu^2\)

##### Proof

Expanding the square inside the expectation and using the linearity of expectation:

\(\begin{array}{lcl} \sigma^2=\text{Var}(X) &=& E[(X-\mu)^2]\\ &=& E[X^2-2\mu X+\mu^2]\\ &=& E(X^2)-2\mu E(X)+\mu^2\\ &=& E(X^2)-2\mu^2+\mu^2\\ &=& E(X^2)-\mu^2 \end{array}\)

The next-to-last step uses the fact that \(E(X)=\mu\).

## Example 8-15

Use the alternative formula to verify that the variance of the random variable \(X\) with the following probability mass function:

| \(x\) | 3 | 4 | 5 |
|---|---|---|---|
| \(f(x)\) | 0.3 | 0.4 | 0.3 |

is 0.6, as we calculated earlier.

#### Solution

First, we need to calculate the expected value of \(X^2\):

\(E(X^2)=3^2(0.3)+4^2(0.4)+5^2(0.3)=16.6\)

Earlier, we determined that \(\mu\), the mean of \(X\), is 4. Therefore, using the shortcut formula for the variance, we verify that indeed the variance of \(X\) is 0.6:

\(\sigma^2_X=E(X^2)-\mu^2=16.6-4^2=0.6\)
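The shortcut formula is equally easy to verify in code. This sketch computes \(E(X^2)-\mu^2\) for the same p.m.f. (the function name `variance_shortcut` is an illustrative choice):

```python
def variance_shortcut(pmf):
    """Var(X) = E(X^2) - mu^2, the shortcut formula."""
    mu = sum(x * p for x, p in pmf.items())
    ex2 = sum(x ** 2 * p for x, p in pmf.items())
    return ex2 - mu ** 2

pmf_x = {3: 0.3, 4: 0.4, 5: 0.3}
print(round(variance_shortcut(pmf_x), 6))  # → 0.6
```

As expected, it agrees with the value 0.6 obtained from the definitional formula.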

## Example 8-16

Suppose the random variable \(X\) follows the uniform distribution on the first \(m\) positive integers. That is, suppose the p.m.f. of \(X\) is:

\(f(x)=\dfrac{1}{m}\) for \(x=1, 2, 3, \ldots, m\)

What is the variance of \(X\)?

#### Solution

On the previous page, we determined that the mean of the discrete uniform random variable \(X\) is:

\(\mu=E(X)=\dfrac{m+1}{2}\)

If we can calculate \(E(X^2)\), we can use the shortcut formula to calculate the variance of \(X\). Recalling the formula for the sum of the squares of the first \(m\) positive integers, we have:

\(E(X^2)=\sum\limits_{x=1}^m x^2\cdot \dfrac{1}{m}=\dfrac{1}{m}\cdot \dfrac{m(m+1)(2m+1)}{6}=\dfrac{(m+1)(2m+1)}{6}\)

Therefore, the shortcut formula tells us that:

\(\sigma^2=E(X^2)-\mu^2=\dfrac{(m+1)(2m+1)}{6}-\left(\dfrac{m+1}{2}\right)^2=\dfrac{(m+1)\left[2(2m+1)-3(m+1)\right]}{12}=\dfrac{(m+1)(m-1)}{12}=\dfrac{m^2-1}{12}\)
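The algebra works out to \(\sigma^2=\dfrac{m^2-1}{12}\) for the discrete uniform distribution on \(\{1,\ldots,m\}\). A quick numerical sanity check of that closed form, using the shortcut formula (the function name is an illustrative choice):

```python
# Check that Var(X) = (m^2 - 1) / 12 for the discrete uniform
# distribution on {1, ..., m}, via the shortcut E(X^2) - mu^2.
def uniform_variance(m):
    mu = sum(range(1, m + 1)) / m
    ex2 = sum(x ** 2 for x in range(1, m + 1)) / m
    return ex2 - mu ** 2

for m in (2, 5, 10):
    print(m, round(uniform_variance(m), 6), (m ** 2 - 1) / 12)
```

For every \(m\) tried, the brute-force value matches \((m^2-1)/12\).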

The following theorem can be useful in calculating the mean and variance of a random variable \(Y\) that is a linear function of a random variable \(X\).

If the mean and variance of the random variable \(X\) are:

\(\mu_X\) and \(\sigma^2_X\)

respectively, then the mean, variance, and standard deviation of the random variable \(Y=aX+b\) are:

\(\begin{array}{lcl} \mu_Y &=& a\mu_X+b\\ \sigma^2_Y &=& a^2 \sigma^2_X\\ \sigma_Y &=& |a|\sigma_X \end{array}\)

##### Proof

The mean follows from the linearity of expectation:

\(\mu_Y=E(aX+b)=aE(X)+b=a\mu_X+b\)

For the variance, substitute \(Y=aX+b\) and \(\mu_Y=a\mu_X+b\) into the definition:

\(\sigma^2_Y=E[(Y-\mu_Y)^2]=E[(aX+b-a\mu_X-b)^2]=E[a^2(X-\mu_X)^2]=a^2\sigma^2_X\)

Taking the positive square root gives \(\sigma_Y=\sqrt{a^2\sigma^2_X}=|a|\sigma_X\).

## Example 8-17

The mean temperature in Victoria, B.C. is 50 degrees Fahrenheit with standard deviation 8 degrees Fahrenheit. What is the mean temperature in degrees Celsius? What is the standard deviation in degrees Celsius?

#### Solution

First, recall that the conversion from Fahrenheit (F) to Celsius (C) is:

\(C=\dfrac{5}{9}(F-32)\)

Therefore, the mean temperature in degrees Celsius is calculated as:

\(\mu_C=E(C)=E\left[\dfrac{5}{9}F-\dfrac{160}{9}\right]= \dfrac{5}{9}E(F)-\dfrac{160}{9}=\dfrac{5}{9}(50)-\dfrac{160}{9}=\dfrac{250-160}{9}=\dfrac{90}{9}=10\)

And, the standard deviation in degrees Celsius is calculated as:

\(\sigma_C=\left|\dfrac{5}{9}\right|\sigma_F=\dfrac{5}{9}(8)=\dfrac{40}{9}\approx 4.44\)
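The theorem can be checked numerically by transforming a distribution point by point. This sketch uses a hypothetical two-point Fahrenheit distribution chosen only because it has mean 50 and standard deviation 8 (the support values 42 and 58 are made up for illustration, not taken from Victoria's actual temperatures):

```python
import math

# Hypothetical p.m.f. with mean 50 and standard deviation 8.
pmf_f = {42: 0.5, 58: 0.5}

mu_f = sum(x * p for x, p in pmf_f.items())
var_f = sum((x - mu_f) ** 2 * p for x, p in pmf_f.items())

# Transform each support point: C = (5/9)(F - 32); probabilities are unchanged.
pmf_c = {(5 / 9) * (f - 32): p for f, p in pmf_f.items()}
mu_c = sum(x * p for x, p in pmf_c.items())
sd_c = math.sqrt(sum((x - mu_c) ** 2 * p for x, p in pmf_c.items()))

print(round(mu_c, 2))  # → 10.0
print(round(sd_c, 2))  # → 4.44
```

The transformed distribution has mean 10 and standard deviation \(40/9 \approx 4.44\), matching \(\mu_C=\frac{5}{9}\mu_F-\frac{160}{9}\) and \(\sigma_C=\frac{5}{9}\sigma_F\) from the theorem.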