Lesson 15: Exponential, Gamma, and Chi-Square Distributions
Overview
In this chapter, we investigate the probability distributions of continuous random variables that are so important to the field of statistics that they are given special names. They are:
 the uniform distribution (Lesson 14)
 the exponential distribution
 the gamma distribution
 the chi-square distribution
 the normal distribution
In this lesson, we will investigate the probability distribution of the waiting time, \(X\), until the first event of an approximate Poisson process occurs. We will learn that the probability distribution of \(X\) is the exponential distribution with mean \(\theta=\dfrac{1}{\lambda}\). We will then investigate the waiting time, \(W\), until the \(\alpha^{th}\) (that is, "alpha-th") event occurs. As we'll soon learn, that distribution is known as the gamma distribution. After investigating the gamma distribution, we'll take a look at a special case of it, a distribution known as the chi-square distribution.
Objectives
 To learn a formal definition of the probability density function of a (continuous) exponential random variable.
 To learn key properties of an exponential random variable, such as the mean, variance, and moment generating function.
 To understand the steps involved in each of the proofs in the lesson.
 To be able to apply the methods learned in the lesson to new problems.
 To understand the motivation and derivation of the probability density function of a (continuous) gamma random variable.
 To understand the effect that the parameters \(\alpha\) and \(\theta\) have on the shape of the gamma probability density function.
 To learn a formal definition of the gamma function.
 To learn a formal definition of the probability density function of a gamma random variable.
 To learn key properties of a gamma random variable, such as the mean, variance, and moment generating function.
 To learn a formal definition of the probability density function of a chi-square random variable.
 To understand the relationship between a gamma random variable and a chi-square random variable.
 To learn key properties of a chi-square random variable, such as the mean, variance, and moment generating function.
 To learn how to read a chi-square value or a chi-square probability off of a typical chi-square cumulative probability table.
 To understand the steps involved in each of the proofs in the lesson.
 To be able to apply the methods learned in the lesson to new problems.
15.1  Exponential Distributions
Example 15-1
Suppose \(X\), following an (approximate) Poisson process, equals the number of customers arriving at a bank in an interval of length 1. If \(\lambda\), the mean number of customers arriving in an interval of length 1, is 6, say, then we might observe something like this:
In this particular representation, seven (7) customers arrived in the unit interval. Previously, our focus would have been on the discrete random variable \(X\), the number of customers arriving. As the picture suggests, however, we could alternatively be interested in the continuous random variable \(W\), the waiting time until the first customer arrives. Let's push this a bit further to see if we can find \(F(w)\), the cumulative distribution function of \(W\). Now, \(W>w\) if and only if no events occur in the interval \([0,w]\), and the number of events in \([0,w]\) is Poisson with mean \(\lambda w\). Therefore, for \(w>0\):

\(F(w)=P(W\le w)=1-P(W>w)=1-e^{-\lambda w}\)
Now, to find the probability density function \(f(w)\), all we need to do is differentiate \(F(w)\). Doing so, we get:
\(f(w)=F'(w)=-e^{-\lambda w}(-\lambda)=\lambda e^{-\lambda w}\)
for \(0<w<\infty\). Typically, though, we "reparameterize" before defining the "official" probability density function. If \(\lambda\) (the Greek letter "lambda") equals the mean number of events in an interval, and \(\theta\) (the Greek letter "theta") equals the mean waiting time until the first customer arrives, then:
\(\theta=\dfrac{1}{\lambda}\) and \(\lambda=\dfrac{1}{\theta}\)
For example, suppose the mean number of customers to arrive at a bank in a 1-hour interval is 10. Then, the average (waiting) time until the first customer arrives is \(\frac{1}{10}\) of an hour, or 6 minutes.
Let's now formally define the probability density function we have just derived.
 Exponential Distribution

The continuous random variable \(X\) follows an exponential distribution if its probability density function is:
\(f(x)=\dfrac{1}{\theta} e^{-x/\theta}\)
for \(\theta>0\) and \(x\ge 0\).
Because there are an infinite number of possible constants \(\theta\), there are an infinite number of possible exponential distributions. That's why this page is called Exponential Distributions (with an s!) and not Exponential Distribution (with no s!).
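By the way, nothing beats a quick simulation for convincing yourself of the derivation above. The following sketch is our own (not part of the textbook); the function name `first_event_time` is made up for illustration. It approximates a Poisson process by Bernoulli trials in tiny time steps, as in the Poisson postulates, and checks that the waiting time until the first event averages out to \(\theta=1/\lambda\):

```python
import random

def first_event_time(lam, dt=1e-3, rng=random):
    """Approximate a Poisson process of rate lam by Bernoulli trials
    in tiny steps of length dt; return the time of the first event."""
    t = 0.0
    while True:
        t += dt
        if rng.random() < lam * dt:   # P(event in next dt) ~ lam*dt
            return t

random.seed(1)
lam = 6   # mean number of arrivals per unit interval, as in Example 15-1
samples = [first_event_time(lam) for _ in range(20_000)]
mean_wait = sum(samples) / len(samples)
print(round(mean_wait, 3))  # should be close to theta = 1/lam ~ 0.167
```

With \(\lambda=6\), the simulated mean waiting time lands near \(1/6\approx 0.167\), just as the exponential model predicts.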
15.2  Exponential Properties
Here, we present and prove four key properties of an exponential random variable.
Theorem
The exponential probability density function:
\(f(x)=\dfrac{1}{\theta} e^{-x/\theta}\)
for \(x\ge 0\) and \(\theta>0\) is a valid probability density function.
Proof
Since \(f(x)\ge 0\) for all \(x\ge 0\), we need only show that the density integrates to 1:
\(\int_0^\infty \dfrac{1}{\theta}e^{-x/\theta}dx=\left[-e^{-x/\theta}\right]^{x=\infty}_{x=0}=0-(-1)=1\)
Theorem
The moment generating function of an exponential random variable \(X\) with parameter \(\theta\) is:
\(M(t)=\dfrac{1}{1-\theta t}\)
for \(t<\frac{1}{\theta}\).
Proof
\(M(t)=E(e^{tX})=\int_0^\infty e^{tx} \left(\dfrac{1}{\theta}\right) e^{-x/\theta} dx\)
Simplifying and rewriting the integral as a limit, we have:
\(M(t)=\dfrac{1}{\theta}\lim\limits_{b \to \infty} \int_0^b e^{x(t-1/\theta)} dx\)
Integrating, we have:
\(M(t)=\dfrac{1}{\theta}\lim\limits_{b \to \infty} \left[ \dfrac{1}{t-1/\theta} e^{x(t-1/\theta)} \right]^{x=b}_{x=0}\)
Evaluating at \(x=0\) and \(x=b\), we have:
\(M(t)=\dfrac{1}{\theta}\lim\limits_{b \to \infty} \left[ \dfrac{1}{t-1/\theta} e^{b(t-1/\theta)} - \dfrac{1}{t-1/\theta} \right]=\dfrac{1}{\theta}\left(\lim\limits_{b \to \infty} \left\{ \left(\dfrac{1}{t-1/\theta}\right) e^{b(t-1/\theta)} \right\}-\dfrac{1}{t-1/\theta}\right)\)
Now, the limit approaches 0 provided \(t-\frac{1}{\theta}<0\), that is, provided \(t<\frac{1}{\theta}\), and so we have:
\(M(t)=\dfrac{1}{\theta} \left(0-\dfrac{1}{t-1/\theta}\right)\)
Simplifying more:
\(M(t)=-\dfrac{1}{\theta} \left(\dfrac{1}{\dfrac{\theta t-1}{\theta}}\right)=-\dfrac{1}{\theta}\left(\dfrac{\theta}{\theta t-1}\right)=-\dfrac{1}{\theta t-1}\)
and finally:
\(M(t)=\dfrac{1}{1-\theta t}\)
provided \(t<\frac{1}{\theta}\), as was to be proved.
Theorem
The mean of an exponential random variable \(X\) with parameter \(\theta\) is:
\(\mu=E(X)=\theta\)
Proof
The mean follows from the moment generating function: differentiating \(M(t)=(1-\theta t)^{-1}\) gives \(M'(t)=\theta(1-\theta t)^{-2}\), and so \(\mu=M'(0)=\theta\).
Theorem
The variance of an exponential random variable \(X\) with parameter \(\theta\) is:
\(\sigma^2=Var(X)=\theta^2\)
Proof
Differentiating the moment generating function a second time gives \(M''(t)=2\theta^2(1-\theta t)^{-3}\), so \(E(X^2)=M''(0)=2\theta^2\) and \(\sigma^2=E(X^2)-\mu^2=2\theta^2-\theta^2=\theta^2\).
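If you'd rather not push derivatives of the moment generating function around, a Monte Carlo check (a sketch of our own using Python's standard `random` module, not anything from the text) confirms the last two theorems:

```python
import random

random.seed(2)
theta = 2.0                 # mean of the exponential distribution
n = 200_000
# random.expovariate takes the rate lambda = 1/theta
xs = [random.expovariate(1 / theta) for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 2), round(var, 2))  # should be near theta and theta**2
```

With \(\theta=2\), the sample mean settles near 2 and the sample variance near 4, matching \(\mu=\theta\) and \(\sigma^2=\theta^2\).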
15.3  Exponential Examples
Example 15-2
Students arrive at a local bar and restaurant according to an approximate Poisson process at a mean rate of 30 students per hour. What is the probability that the bouncer has to wait more than 3 minutes to card the next student?
Solution
If we let \(X\) equal the number of students, then the Poisson mean \(\lambda\) is 30 students per 60 minutes, or \(\dfrac{1}{2}\) student per minute! Now, if we let \(W\) denote the (waiting) time between students, we can expect that there would be, on average, \(\theta=\dfrac{1}{\lambda}=2\) minutes between arriving students. Because \(W\) is (assumed to be) exponentially distributed with mean \(\theta=2\), its probability density function is:
\(f(w)=\dfrac{1}{2} e^{-w/2}\)
for \(w\ge 0\). Now, we just need to find the area under the curve to the right of 3 to find the desired probability:

\(P(W>3)=\int_3^\infty \dfrac{1}{2}e^{-w/2}dw=\left[-e^{-w/2}\right]^{w=\infty}_{w=3}=e^{-3/2}\approx 0.2231\)
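As a quick numeric check (our own snippet, not from the text), integrating the density from 3 to infinity has the closed form \(e^{-3/\theta}\):

```python
import math

theta = 2.0   # mean minutes between arriving students
# P(W > 3) = integral from 3 to infinity of (1/2)e^{-w/2} dw = e^{-3/2}
p = math.exp(-3 / theta)
print(round(p, 4))  # 0.2231
```

So the bouncer waits more than 3 minutes between students only about 22% of the time.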
Example 15-3
The number of miles that a particular car can run before its battery wears out is exponentially distributed with an average of 10,000 miles. The owner of the car needs to take a 5000mile trip. What is the probability that he will be able to complete the trip without having to replace the car battery?
Solution
At first glance, it might seem that a vital piece of information is missing. It seems that we should need to know how many miles the battery in question already has on it before we can answer the question! Hmmm.... or do we? Well, let's let \(X\) denote the number of miles that the car can run before its battery wears out. Now, suppose the following is true:
\(P(X>x+y \mid X>x)=P(X>y)\)
If it is true, it would tell us that the probability that the car battery lasts more than \(y=5000\) additional miles doesn't depend on whether the battery has already been running for \(x=0\) miles, \(x=1000\) miles, or \(x=15000\) miles. Now, we are given that \(X\) is exponentially distributed. It turns out that the above statement is true for the exponential distribution (you will be asked to prove it for homework)! It is for this reason that we say that the exponential distribution is "memoryless."
It can also be shown (do you want to show that one too?) that if \(X\) is exponentially distributed with mean \(\theta\), then:
\(P(X>k)=e^{-k/\theta}\)
Therefore, the probability in question is simply:
\(P(X>5000)=e^{-5000/10000}=e^{-1/2}\approx 0.607\)
We'll leave it to the gentleman in question to decide whether that probability is large enough to give him comfort that he won't be stranded somewhere along a remote desert highway!
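The memoryless property is easy to see in a simulation as well. This sketch (ours, standard library only) draws a large sample of exponential battery lifetimes and compares the unconditional probability of lasting 5000 miles with the probability of lasting 5000 more miles given that 15000 miles have already been driven:

```python
import random

random.seed(3)
theta = 10_000              # mean battery life in miles
n = 500_000
lives = [random.expovariate(1 / theta) for _ in range(n)]

# Unconditional probability of lasting more than 5000 miles
p_plain = sum(x > 5_000 for x in lives) / n
# Probability of lasting 5000 MORE miles, given 15000 miles already
survivors = [x for x in lives if x > 15_000]
p_cond = sum(x > 20_000 for x in survivors) / len(survivors)
print(round(p_plain, 3), round(p_cond, 3))  # both near exp(-1/2) ~ 0.607
```

Both proportions land near \(e^{-1/2}\approx 0.607\), which is exactly what memorylessness predicts.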
15.4  Gamma Distributions
The Situation
Earlier in this lesson, we learned that in an approximate Poisson process with mean \(\lambda\), the waiting time \(X\) until the first event occurs follows an exponential distribution with mean \(\theta=\frac{1}{\lambda}\). We now let \(W\) denote the waiting time until the \(\alpha^{th}\) event occurs and find the distribution of \(W\). We could represent the situation as follows:
Derivation of the Probability Density Function
Just as we did in our work with deriving the exponential distribution, our strategy here is going to be to first find the cumulative distribution function \(F(w)\) and then differentiate it to get the probability density function \(f(w)\). Now, for \(w>0\) and \(\lambda>0\), the definition of the cumulative distribution function gives us:
\(F(w)=P(W\le w)\)
The rule of complementary events tells us then that:
\(F(w)=1-P(W> w)\)
Now, the waiting time \(W\) is greater than some value \(w\) only if there are fewer than \(\alpha\) events in the interval \([0,w]\). That is:
\(F(w)=1-P(\text{fewer than }\alpha\text{ events in } [0,w]) \)
A more specific way of writing that is:
\(F(w)=1-P(\text{0 events or 1 event or ... or }(\alpha-1)\text{ events in } [0,w]) \)
Those mutually exclusive "ors" mean that we need to add up the probabilities of having 0 events occurring in the interval \([0,w]\), 1 event occurring in the interval \([0,w]\), ..., up to \((\alpha1)\) events in \([0,w]\). Well, that just involves using the probability mass function of a Poisson random variable with mean \(\lambda w\). That is:
\(F(w)=1-\sum\limits_{k=0}^{\alpha-1} \dfrac{(\lambda w)^k e^{-\lambda w}}{k!}\)
Now, we could leave \(F(w)\) well enough alone and begin the process of differentiating it, but it turns out that the differentiation goes much smoother if we rewrite \(F(w)\) as follows:
\(F(w)=1-e^{-\lambda w}-\sum\limits_{k=1}^{\alpha-1} \dfrac{1}{k!} \left[(\lambda w)^k e^{-\lambda w}\right]\)
As you can see, we merely pulled the \(k=0\) out of the summation and rewrote the probability mass function so that it would be easier to administer the product rule for differentiation.
Now, let's do that differentiation! We need to differentiate \(F(w)\) with respect to \(w\) to get the probability density function \(f(w)\). Using the product rule, and what we know about the derivative of \(e^{-\lambda w}\) and \((\lambda w)^k\), we get:
\(f(w)=F'(w)=\lambda e^{-\lambda w}-\sum\limits_{k=1}^{\alpha-1} \dfrac{1}{k!} \left[(\lambda w)^k \cdot (-\lambda e^{-\lambda w})+ e^{-\lambda w} \cdot k(\lambda w)^{k-1} \cdot \lambda \right]\)
Pulling \(\lambda e^{-\lambda w}\) out of the summation, and dividing \(k\) by \(k!\) (to get \( \frac{1}{(k-1)!}\)) in the second term in the summation, we get that \(f(w)\) equals:
\(=\lambda e^{-\lambda w}+\lambda e^{-\lambda w}\left[\sum\limits_{k=1}^{\alpha-1} \left\{ \dfrac{(\lambda w)^k}{k!}-\dfrac{(\lambda w)^{k-1}}{(k-1)!} \right\}\right]\)
Evaluating the terms in the summation at \(k=1, k=2\), up to \(k=\alpha1\), we get that \(f(w)\) equals:
\(=\lambda e^{-\lambda w}+\lambda e^{-\lambda w}\left[(\lambda w-1)+\left(\dfrac{(\lambda w)^2}{2!}-\lambda w\right)+\cdots+\left(\dfrac{(\lambda w)^{\alpha-1}}{(\alpha-1)!}-\dfrac{(\lambda w)^{\alpha-2}}{(\alpha-2)!}\right)\right]\)
Do some (lots of!) crossing out (\(\lambda w -\lambda w =0\), for example), and a bit more simplifying to get that \(f(w)\) equals:
\(=\lambda e^{-\lambda w}+\lambda e^{-\lambda w}\left[-1+\dfrac{(\lambda w)^{\alpha-1}}{(\alpha-1)!}\right]=\lambda e^{-\lambda w}-\lambda e^{-\lambda w}+\dfrac{\lambda e^{-\lambda w} (\lambda w)^{\alpha-1}}{(\alpha-1)!}\)
And since \(\lambda e^{-\lambda w}-\lambda e^{-\lambda w}=0\), we get that \(f(w)\) equals:
\(=\dfrac{\lambda e^{-\lambda w} (\lambda w)^{\alpha-1}}{(\alpha-1)!}\)
Are we there yet? Almost! We just need to reparameterize (if \(\theta=\frac{1}{\lambda}\), then \(\lambda=\frac{1}{\theta}\)). Doing so, we get that the probability density function of \(W\), the waiting time until the \(\alpha^{th}\) event occurs, is:
\(f(w)=\dfrac{1}{(\alpha-1)! \,\theta^\alpha} e^{-w/\theta} w^{\alpha-1}\)
for \(w>0, \theta>0\), and \(\alpha>0\).
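A simulation reinforces the waiting-time interpretation: the time until the \(\alpha^{th}\) event is just the sum of \(\alpha\) independent exponential inter-event times. This sketch (our own, standard library only) checks that the average agrees with the gamma mean \(\alpha\theta\) that we'll prove shortly:

```python
import random

random.seed(4)
theta, alpha = 2.0, 3      # mean wait per event, number of events awaited
n = 100_000
# Waiting time until the alpha-th event is the sum of alpha independent
# exponential inter-event times, each with mean theta.
waits = [sum(random.expovariate(1 / theta) for _ in range(alpha))
         for _ in range(n)]
mean_wait = sum(waits) / n
print(round(mean_wait, 2))  # should land near alpha * theta = 6
```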
Effect of \(\theta\) and \(\alpha\) on the Distribution
Recall that \(\theta\) is the mean waiting time until the first event, and \(\alpha\) is the number of events for which you are waiting to occur. It makes sense then that for fixed \(\alpha\), as \(\theta\) increases, the probability "moves to the right," as illustrated here with \(\alpha\) fixed at 3, and \(\theta\) increasing from 1 to 2 to 3:
The plots illustrate, for example, that if we are waiting for \(\alpha=3\) events to occur, we have a greater probability of our waiting time \(X\) being large if our mean waiting time until the first event is large (\(\theta=3\), say) than if it is small (\(\theta=1\), say).
It also makes sense that for fixed \(\theta\), as \(\alpha\) increases, the probability "moves to the right," as illustrated here with \(\theta\) fixed at 3, and \(\alpha\) increasing from 1 to 2 to 3:
The plots illustrate, for example, that if the mean waiting time until the first event is \(\theta=3\), then we have a greater probability of our waiting time \(X\) being large if we are waiting for more events to occur (\(\alpha=3\), say) than fewer (\(\alpha=1\), say).
15.5  The Gamma Function
An Aside
 The gamma function, denoted \(\Gamma(t)\), is defined, for \(t>0\), by:
\(\Gamma(t)=\int_0^\infty y^{t-1} e^{-y} dy\)
We'll primarily use the definition in order to help us prove the two theorems that follow.
Theorem
Provided \(t>1\):
\(\Gamma(t)=(t-1) \times \Gamma(t-1) \)
Proof
We'll use integration by parts with:
\(u=y^{t-1}\) and \(dv=e^{-y}dy\)
to get:
\(du=(t-1)y^{t-2}dy\) and \(v=-e^{-y}\)
Then, the integration by parts gives us:
\(\Gamma(t)=\lim\limits_{b \to \infty} \left[-y^{t-1}e^{-y}\right]^{y=b}_{y=0} + (t-1)\int_0^\infty y^{t-2}e^{-y}dy\)
Evaluating at \(y=b\) and \(y=0\) for the first term, and using the definition of the gamma function (provided \(t-1>0\)) for the second term, we have:
\(\Gamma(t)=\lim\limits_{b \to \infty} \left[-\dfrac{b^{t-1}}{e^b}\right]+(t-1)\Gamma(t-1)\)
Now, if we were to be lazy, we would just wave our hands, and say that the first term goes to 0, and therefore:
\(\Gamma(t)=(t-1) \times \Gamma(t-1)\)
provided \(t>1\), as was to be proved.
Let's not be too lazy though! Taking the limit as \(b\) goes to infinity for that first term, we get infinity over infinity. Ugh! Maybe we should have left well enough alone! We can rewrite the numerator as the exponential of its natural log without changing the limit. Doing so, we get:
\(\lim\limits_{b \to \infty} \left[-\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty} \left\{\dfrac{\text{exp}[(t-1) \ln b]}{\text{exp}(b)}\right\}\)
Then, because both the numerator and denominator are exponents, we can write the limit as:
\(\lim\limits_{b \to \infty} \left[-\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty}\{\text{exp}[(t-1) \ln b-b]\}\)
Manipulating the limit a bit more, so that we can easily apply L'Hôpital's Rule, we get:
\(\lim\limits_{b \to \infty} \left[-\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty} \left\{\text{exp}\left[b\left((t-1)\dfrac{ \ln b}{b}-1\right)\right]\right\}\)
Now, let's take the limit as \(b\) goes to infinity. By L'Hôpital's Rule, \(\frac{\ln b}{b}\to 0\), so the exponent \(b\left((t-1)\frac{\ln b}{b}-1\right)\) goes to \(-\infty\), and the first term therefore goes to 0, as claimed.
Okay, our proof is now officially complete! We have shown what we set out to show. Maybe next time, I'll just wave my hands when I need a limit to go to 0.
Theorem
If \(t=n\), a positive integer, then:
\(\Gamma(n)=(n-1)!\)
Proof
Using the previous theorem:
\begin{align} \Gamma(n) &= (n-1)\Gamma(n-1)\\ &= (n-1)(n-2)\Gamma(n-2)\\ &= (n-1)(n-2)(n-3)\cdots (2)(1)\Gamma(1) \end{align}
And, since by the definition of the gamma function:
\(\Gamma(1)=\int_0^\infty y^{1-1}e^{-y} dy=\int_0^\infty e^{-y} dy=1\)
we have:
\(\Gamma(n)=(n-1)!\)
as was to be proved.
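Python's `math.gamma` implements the gamma function, so we can verify \(\Gamma(n)=(n-1)!\) directly (a sanity check of our own, not part of the proof):

```python
import math

# Check Gamma(n) = (n-1)! for the first several positive integers
for n in range(1, 10):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

print(math.gamma(5))  # 24.0, which is 4!
```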
15.6  Gamma Properties
Here, after formally defining the gamma distribution (we haven't done that yet?!), we present and prove (well, sort of!) three key properties of the gamma distribution.
 Gamma Distribution

A continuous random variable \(X\) follows a gamma distribution with parameters \(\theta>0\) and \(\alpha>0\) if its probability density function is:
\(f(x)=\dfrac{1}{\Gamma(\alpha)\theta^\alpha} x^{\alpha-1} e^{-x/\theta}\)
for \(x>0\).
Before we get to the three theorems and proofs, two notes:

The derivation via waiting times until the \(\alpha^{th}\) event motivated \(\alpha\) as a positive integer, but the p.d.f. is actually a valid p.d.f. for any \(\alpha>0\), since \(\Gamma(\alpha)\) is defined for all positive \(\alpha\).

The gamma p.d.f. reaffirms that the exponential distribution is just a special case of the gamma distribution. That is, when you put \(\alpha=1\) into the gamma p.d.f., you get the exponential p.d.f.
Theorem
The moment generating function of a gamma random variable is:
\(M(t)=\dfrac{1}{(1-\theta t)^\alpha}\)
for \(t<\frac{1}{\theta}\).
Proof
By definition, the moment generating function \(M(t)\) of a gamma random variable is:
\(M(t)=E(e^{tX})=\int_0^\infty \dfrac{1}{\Gamma(\alpha)\theta^\alpha}e^{-x/\theta} x^{\alpha-1} e^{tx}dx\)
Collecting like terms, we get:
\(M(t)=E(e^{tX})=\int_0^\infty \dfrac{1}{\Gamma(\alpha)\theta^\alpha}e^{-x\left(\frac{1}{\theta}-t\right)} x^{\alpha-1} dx\)
Now, let's use the change of variable technique with:
\(y=x\left(\dfrac{1}{\theta}-t\right)\)
Rearranging, we get:
\(x=\dfrac{\theta}{1-\theta t}y\) and therefore \(dx=\dfrac{\theta}{1-\theta t}dy\)
Now, making the substitutions for \(x\) and \(dx\) into our integral, we get:

\(M(t)=\int_0^\infty \dfrac{1}{\Gamma(\alpha)\theta^\alpha}e^{-y}\left(\dfrac{\theta}{1-\theta t}\right)^{\alpha-1}y^{\alpha-1}\cdot \dfrac{\theta}{1-\theta t}dy=\dfrac{1}{(1-\theta t)^\alpha}\cdot \dfrac{1}{\Gamma(\alpha)}\int_0^\infty y^{\alpha-1}e^{-y}dy\)

The remaining integral is, by definition, \(\Gamma(\alpha)\), and so:

\(M(t)=\dfrac{1}{(1-\theta t)^\alpha}\)

provided \(t<\frac{1}{\theta}\), as was to be proved.
Theorem
The mean of a gamma random variable is:
\(\mu=E(X)=\alpha \theta\)
Proof
The proof is left for you as an exercise.
Theorem
The variance of a gamma random variable is:
\(\sigma^2=Var(X)=\alpha \theta^2\)
Proof
This proof is also left for you as an exercise.
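Before doing those exercises, you might like numeric evidence that the answers are right. Python's `random.gammavariate(alpha, beta)` uses the same shape/scale parameterization as our \((\alpha,\theta)\), so a Monte Carlo sketch (our own, not part of the text) can check the mean and variance:

```python
import random

random.seed(5)
alpha, theta = 3.0, 2.0
n = 200_000
# random.gammavariate(alpha, beta) matches this lesson's (alpha, theta)
# shape/scale parameterization.
xs = [random.gammavariate(alpha, theta) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 1), round(var, 1))  # near alpha*theta = 6, alpha*theta**2 = 12
```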
15.7  A Gamma Example
Example 15-4
Engineers designing the next generation of space shuttles plan to include two fuel pumps, one active and the other in reserve. If the primary pump malfunctions, the second is automatically brought on line. Suppose a typical mission is expected to require that fuel be pumped for at most 50 hours. According to the manufacturer's specifications, pumps are expected to fail once every 100 hours. What are the chances that such a fuel pump system would not remain functioning for the full 50 hours?
Solution
We are given that \(\lambda\), the average number of failures in a 100-hour interval, is 1. Therefore, \(\theta\), the mean waiting time until the first failure, is \(\dfrac{1}{\lambda}\), or 100 hours. Knowing that, let's now let \(Y\) denote the time elapsed until the \(\alpha=2\)nd pump breaks down. Assuming the failures follow a Poisson process, the probability density function of \(Y\) is:
\(f_Y(y)=\dfrac{1}{100^2 \Gamma(2)}e^{-y/100} y^{2-1}=\dfrac{1}{10000}ye^{-y/100} \)
for \(y>0\). Therefore, the probability that the system fails to last for 50 hours is:
\(P(Y<50)=\int^{50}_0 \dfrac{1}{10000}ye^{-y/100} dy\)
Integrating that baby is going to require integration by parts. Let's let:
\(u=y\) and \(dv=e^{-y/100}dy \)
So that:
\(du=dy\) and \(v=-100e^{-y/100} \)
Now, for the integration:

\(P(Y<50)=\dfrac{1}{10000}\left\{\left[-100ye^{-y/100}\right]^{y=50}_{y=0}+100\int_0^{50} e^{-y/100}dy\right\}=\dfrac{1}{10000}\left\{-5000e^{-1/2}+10000\left(1-e^{-1/2}\right)\right\}\)

which simplifies to:

\(P(Y<50)=1-\dfrac{3}{2}e^{-1/2}\approx 0.0902\)

So there is about a 9% chance that the pump system fails before the 50-hour mission is over.
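A numeric check of the example (our own sketch; `gamma_pdf` is a made-up helper name) integrates the density over \([0,50]\) with a simple midpoint rule:

```python
import math

def gamma_pdf(y, alpha=2, theta=100):
    """Gamma density in the lesson's (alpha, theta) parameterization."""
    return (y ** (alpha - 1) * math.exp(-y / theta)
            / (math.gamma(alpha) * theta ** alpha))

# Midpoint-rule approximation of P(Y < 50)
n = 10_000
h = 50 / n
p = sum(gamma_pdf((i + 0.5) * h) * h for i in range(n))
print(round(p, 4))  # close to 1 - 1.5*exp(-1/2) ~ 0.0902
```

The numeric answer agrees with the integration-by-parts result, \(1-\frac{3}{2}e^{-1/2}\approx 0.09\).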
15.8  Chi-Square Distributions
Chi-square distributions are very important in the field of statistics. As such, if you go on to take the sequel course, Stat 415, you will encounter them quite regularly. In this course, we'll focus just on introducing the basics of the distributions; in Stat 415, you'll see their many applications.
As it turns out, the chi-square distribution is just a special case of the gamma distribution! Let's take a look.
 Chi-square Distribution with \(r\) degrees of freedom

Let \(X\) follow a gamma distribution with \(\theta=2\) and \(\alpha=\frac{r}{2}\), where \(r\) is a positive integer. Then the probability density function of \(X\) is:
\(f(x)=\dfrac{1}{\Gamma (r/2) 2^{r/2}}x^{r/2-1}e^{-x/2}\)
for \(x>0\). We say that \(X\) follows a chi-square distribution with \(r\) degrees of freedom, denoted \(\chi^2(r)\) and read "chi-square-r."
There are, of course, an infinite number of possible values for \(r\), the degrees of freedom. Therefore, there are an infinite number of possible chi-square distributions. That is why (again!) the title of this page is Chi-Square Distributions (with an s!), rather than Chi-Square Distribution (with no s)!
As the following theorems illustrate, the moment generating function, mean, and variance of the chi-square distributions are just straightforward extensions of those for the gamma distributions.
Theorem
Let \(X\) be a chi-square random variable with \(r\) degrees of freedom. Then, the moment generating function of \(X\) is:
\(M(t)=\dfrac{1}{(1-2t)^{r/2}}\)
for \(t<\frac{1}{2}\).
Proof
The moment generating function of a gamma random variable is:
\(M(t)=\dfrac{1}{(1-\theta t)^\alpha}\)
The proof is therefore straightforward by substituting 2 in for \(\theta\) and \(\frac{r}{2}\) in for \(\alpha\).
Theorem
Let \(X\) be a chi-square random variable with \(r\) degrees of freedom. Then, the mean of \(X\) is:
\(\mu=E(X)=r\)
That is, the mean of \(X\) is the number of degrees of freedom.
Proof
The mean of a gamma random variable is:
\(\mu=E(X)=\alpha \theta\)
The proof is again straightforward by substituting 2 in for \(\theta\) and \(\frac{r}{2}\) in for \(\alpha\).
Theorem
Let \(X\) be a chi-square random variable with \(r\) degrees of freedom. Then, the variance of \(X\) is:
\(\sigma^2=Var(X)=2r\)
That is, the variance of \(X\) is twice the number of degrees of freedom.
Proof
The variance of a gamma random variable is:
\(\sigma^2=Var(X)=\alpha \theta^2\)
The proof is again straightforward by substituting 2 in for \(\theta\) and \(\frac{r}{2}\) in for \(\alpha\).
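Since a chi-square(\(r\)) is just a gamma with \(\theta=2\) and \(\alpha=r/2\), we can check all three theorems at once by sampling with `random.gammavariate` (a sketch of our own, standard library only):

```python
import random

random.seed(6)
r = 10                      # degrees of freedom
n = 200_000
# A chi-square(r) random variable is a gamma with alpha = r/2, theta = 2
xs = [random.gammavariate(r / 2, 2) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 1), round(var, 1))  # near r = 10 and 2r = 20
```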
15.9  The Chi-Square Table
One of the primary ways that you will find yourself interacting with the chi-square distribution, primarily later in Stat 415, is by needing to know either a chi-square value or a chi-square probability in order to complete a statistical analysis. For that reason, we'll now explore how to use a typical chi-square table to look up chi-square values and/or chi-square probabilities. Let's start with two definitions.
Definition. Let \(\alpha\) be some probability between 0 and 1 (most often, a small probability less than 0.10). The upper \(100\alpha^{th}\) percentile of a chi-square distribution with \(r\) degrees of freedom is the value \(\chi^2_\alpha (r)\) such that the area under the curve and to the right of \(\chi^2_\alpha (r)\) is \(\alpha\):
The above definition is used, as is the one that follows, in Table IV, the chi-square distribution table in the back of your textbook.
Definition. Let \(\alpha\) be some probability between 0 and 1 (most often, a small probability less than 0.10). The \(100\alpha^{th}\) percentile of a chi-square distribution with \(r\) degrees of freedom is the value \(\chi^2_{1-\alpha} (r)\) such that the area under the curve and to the right of \(\chi^2_{1-\alpha} (r)\) is \(1-\alpha\):
With these definitions behind us, let's now take a look at the chi-square table in the back of your textbook.
In summary, here are the steps you should use in using the chi-square table to find a chi-square value:
 Find the row that corresponds to the relevant degrees of freedom, \(r\) .
 Find the column headed by the probability of interest... whether it's 0.01, 0.025, 0.05, 0.10, 0.90, 0.95, 0.975, or 0.99.
 Determine the chi-square value where the \(r\) row and the probability column intersect.
Now, at least theoretically, you could also use the chi-square table to find the probability associated with a particular chi-square value. But, as you can see, the table is pretty limited in that direction. For example, if you have a chi-square random variable with 5 degrees of freedom, you could only find the probabilities associated with the chi-square values of 0.554, 0.831, 1.145, 1.610, 9.236, 11.07, 12.83, and 15.09:
What would you do if you wanted to find the probability that a chi-square random variable with 5 degrees of freedom was less than 6.2, say? Well, the answer is, of course... statistical software, such as SAS or Minitab! For what we'll be doing in Stat 414 and 415, the chi-square table will (mostly) serve our purpose. Let's get a bit more practice now using the chi-square table.
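If you don't have SAS or Minitab handy, even a short standard-library script can fill the table's gaps. This sketch is our own (`chisq_pdf` and `chisq_cdf` are made-up names): it integrates the chi-square density numerically with a midpoint rule, and we can sanity-check it against table entries:

```python
import math

def chisq_pdf(x, r):
    """Chi-square density with r degrees of freedom."""
    return (x ** (r / 2 - 1) * math.exp(-x / 2)
            / (math.gamma(r / 2) * 2 ** (r / 2)))

def chisq_cdf(x, r, n=20_000):
    """P(X <= x) via midpoint-rule integration of the density."""
    h = x / n
    return sum(chisq_pdf((i + 0.5) * h, r) * h for i in range(n))

# The probability the table can't give us directly:
print(round(chisq_cdf(6.2, 5), 3))
# Sanity check against a value the table CAN give:
# P(X <= 9.236) = 0.90 for r = 5
print(round(chisq_cdf(9.236, 5), 3))  # ~ 0.9
```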
Example 15-5
Let \(X\) be a chi-square random variable with 10 degrees of freedom. What is the upper fifth percentile?
Solution
The upper fifth percentile is the chi-square value \(x\) such that the probability to the right of \(x\) is 0.05, and therefore the probability to the left of \(x\) is 0.95. To find \(x\) using the chi-square table, we:
 Find \(r=10\) in the first column on the left.
 Find the column headed by \(P(X\le x)=0.95\).
Now, all we need to do is read the chi-square value where the \(r=10\) row and the \(P(X\le x)=0.95\) column intersect. What do you get?
The table tells us that the upper fifth percentile of a chi-square random variable with 10 degrees of freedom is 18.31.
What is the tenth percentile?
Solution
The tenth percentile is the chi-square value \(x\) such that the probability to the left of \(x\) is 0.10. To find \(x\) using the chi-square table, we:
 Find \(r=10\) in the first column on the left.
 Find the column headed by \(P(X\le x)=0.10\).
Now, all we need to do is read the chi-square value where the \(r=10\) row and the \(P(X\le x)=0.10\) column intersect. What do you get?
The table tells us that the tenth percentile of a chi-square random variable with 10 degrees of freedom is 4.865.
What is the probability that a chi-square random variable with 10 degrees of freedom is greater than 15.99?
Solution
There I go... just a minute ago, I said that the chi-square table isn't very helpful in finding probabilities, then I turn around and ask you to use the table to find one! Doing it at least once helps us make sure that we fully understand the table. In this case, we are going to need to read the table "backwards." To find the probability, we:
 Find \(r=10\) in the first column on the left.
 Find the value 15.99 in the \(r=10\) row.
 Read the probability headed by the column in which the 15.99 falls.
What do you get?
The table tells us that the probability that a chi-square random variable with 10 degrees of freedom is less than 15.99 is 0.90. Therefore, the probability that a chi-square random variable with 10 degrees of freedom is greater than 15.99 is 1−0.90, or 0.10.
15.10  Trick To Avoid Integration
Sometimes taking the integral is not an easy task. We do have some tools, however, to help avoid some of them. Let's take a look at an example.
Suppose we have a random variable, \(X\), that has a gamma distribution, and we want to find the moment generating function of \(X\), \(M_X(t)\). There is an example of how to compute this in the notes, but let's try it another way.
\(\begin{align*} M_X(t)&=\int_0^\infty \frac{1}{\Gamma(\alpha)\beta^\alpha} x^{\alpha-1}e^{-x/\beta}e^{tx}dx\\ & = \int_0^\infty \frac{1}{\Gamma(\alpha)\beta^\alpha} x^{\alpha-1}e^{-x\left(\frac{1}{\beta}-t\right)}dx \end{align*}\)
Let's rewrite this integral, taking out the constants for now and rewriting the exponent term.
\(\begin{align*} M_X(t)&= \frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty x^{\alpha-1}e^{-x\left(\frac{1}{\beta}-t\right)}dx\\ & =\frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty x^{\alpha-1}e^{-x\left(\frac{1-\beta t}{\beta}\right)}dx\\ & = \frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\\ & = \frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty g(x)dx \end{align*}\)
When we rewrite it this way, the term under the integral, \(g(x)\), looks almost like a gamma density function with parameters \(\alpha\) and \(\beta^*=\dfrac{\beta}{1-\beta t}\). The only issues that we need to take care of are the constants in front of the integral and the constants needed to make \(g(x)\) a gamma density function.
To make \(g(x)\) a gamma density function, we need the constants. Specifically, we need a \(\dfrac{1}{\Gamma(\alpha)}\) term and a \(\dfrac{1}{(\beta^*)^\alpha}\) term. We already have the first term, so let's rewrite the function.
\(M_X(t)=\frac{1}{\beta^\alpha}\int_0^\infty \frac{1}{\Gamma(\alpha)}x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\)
All that is left is the \(\dfrac{1}{(\beta^*)^\alpha}\).
\(\frac{1}{(\beta^*)^\alpha}=\frac{1}{\left(\frac{\beta}{1-\beta t}\right)^\alpha}=\frac{\left(1-\beta t\right)^\alpha}{\beta^\alpha}\)
We have the denominator term already. Let's rewrite it just for clarity.
\(M_X(t)=\int_0^\infty \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\)
Since \(\dfrac{1}{(\beta^*)^\alpha}=\dfrac{\left(1-\beta t\right)^\alpha}{\beta^\alpha}\), we need only include the \((1-\beta t)^\alpha\) term. If we include that term inside the integral, we must also multiply by its reciprocal outside, so that overall we are multiplying by one. Therefore,
\( \begin{align*} M_X(t)&=\left(\frac{(1-\beta t)^\alpha}{(1-\beta t)^\alpha}\right)\int_0^\infty \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\\ & = \left(\frac{1}{(1-\beta t)^\alpha}\right)\int_0^\infty \frac{(1-\beta t)^\alpha}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\\ & = \left(\frac{1}{(1-\beta t)^\alpha}\right)\int_0^\infty \frac{1}{\Gamma(\alpha)\left(\frac{\beta}{1-\beta t}\right)^\alpha}x^{\alpha-1}e^{-x/\left(\frac{\beta}{1-\beta t}\right)}dx\\ & = \left(\frac{1}{(1-\beta t)^\alpha}\right)\int_0^\infty h(x) dx \end{align*}\)
Therefore, \(h(x)\) is now a gamma density function with parameters \(\alpha\) and \(\beta^*=\dfrac{\beta}{1-\beta t}\). And, since \(h(x)\) is a p.d.f. and we are integrating over the whole space, \(x\ge 0\), we have \(\int_0^\infty h(x)dx=1\). With the integral equal to 1 by the definition of a p.d.f., we are left with:
\(M_X(t)=\dfrac{1}{(1-\beta t)^\alpha}\left(1\right)=\dfrac{1}{(1-\beta t)^\alpha}\)
From the notes and the text, you can see that the moment generating function calculated above is exactly what we were supposed to get.
To summarize what we did here: we did not actually calculate the integral. We used algebra to manipulate the integrand into the form of a p.d.f. and then applied the definition of a p.d.f. I find that, after practice, this method is a lot quicker than doing the integrals.
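And, of course, we can let the computer double-check the algebra. This Monte Carlo sketch (our own, standard library only) compares a simulated \(E(e^{tX})\) with the closed form \(\frac{1}{(1-\beta t)^\alpha}\) for one choice of \(\alpha\), \(\beta\), and \(t<\frac{1}{\beta}\):

```python
import random, math

random.seed(7)
alpha, beta, t = 3.0, 2.0, 0.1      # note t < 1/beta is required
n = 200_000
xs = [random.gammavariate(alpha, beta) for _ in range(n)]

mc = sum(math.exp(t * x) for x in xs) / n    # Monte Carlo E[e^{tX}]
closed = 1 / (1 - beta * t) ** alpha         # closed form derived above
print(round(mc, 2), round(closed, 2))        # should agree to ~2 decimals
```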
Additional Practice Problems
These problems are not due for homework.

Find \(E(X)\) using the method above.
\(\begin{align*} E(X)&=\int_0^\infty x\frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}dx\\[5 pt] &=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\int_0^\infty x^{\alpha}e^{-x/\beta}dx\\[5 pt] &=\left(\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\right)\left(\frac{\Gamma(\alpha+1)\beta^{\alpha+1}}{\Gamma(\alpha+1)\beta^{\alpha+1}}\right)\int_0^\infty x^{\alpha}e^{-x/\beta}dx\\[5 pt] &=\left(\frac{\Gamma(\alpha+1)\beta^{\alpha+1}}{\Gamma(\alpha)\beta^{\alpha}}\right)\int_0^\infty \left(\frac{1}{\Gamma(\alpha+1)\beta^{\alpha+1}}\right) x^{\alpha}e^{-x/\beta}dx \\[5 pt] &=\frac{\Gamma(\alpha+1)\beta^{\alpha+1}}{\Gamma(\alpha)\beta^{\alpha}}(1)\\[5 pt] &=\frac{\alpha\Gamma(\alpha)\beta}{\Gamma(\alpha)}\\[5 pt]&=\alpha\beta \end{align*}\)

Find \(E(X^2)\) using the method above.
\(\begin{align*} E(X^2)&=\int_0^\infty x^2\frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}dx\\[5 pt]&=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\int_0^\infty x^{\alpha+1}e^{-x/\beta}dx\\[5 pt]&=\left(\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\right)\left(\frac{\Gamma(\alpha+2)\beta^{\alpha+2}}{\Gamma(\alpha+2)\beta^{\alpha+2}}\right)\int_0^\infty x^{\alpha+1}e^{-x/\beta}dx\\[5 pt]&=\left(\frac{\Gamma(\alpha+2)\beta^{\alpha+2}}{\Gamma(\alpha)\beta^{\alpha}}\right)\int_0^\infty \left(\frac{1}{\Gamma(\alpha+2)\beta^{\alpha+2}}\right) x^{\alpha+1}e^{-x/\beta}dx\\[5 pt]& =\frac{\Gamma(\alpha+2)\beta^{\alpha+2}}{\Gamma(\alpha)\beta^{\alpha}}(1)\\[5 pt]&=\frac{(\alpha+1)\Gamma(\alpha+1)\beta^2}{\Gamma(\alpha)}\\[5 pt]&=\alpha(\alpha+1)\beta^2 \end{align*}\)
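Putting the two practice results together recovers the variance formula quoted earlier in the lesson:

\(\sigma^2=Var(X)=E(X^2)-[E(X)]^2=\alpha(\alpha+1)\beta^2-(\alpha\beta)^2=\alpha\beta^2\)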