17.3 - The Trinomial Distribution

You might recall that the binomial distribution describes the behavior of a discrete random variable \(X\), where \(X\) is the number of successes in \(n\) independent trials when each trial results in one of only two possible outcomes. What happens if there aren't two, but rather three, possible outcomes? That's what we'll explore on this page, ending up not with the binomial distribution, but rather the trinomial distribution. A rather fitting name, I might say!

Example 17-3

Suppose \(n=20\) students are selected at random:

  • Let \(A\) be the event that a randomly selected student went to the football game on Saturday. Also, let \(P(A)=0.20=p_1\), say.
  • Let \(B\) be the event that a randomly selected student watched the football game on TV on Saturday. Let \(P(B)=0.50=p_2\), say.
  • Let \(C\) be the event that a randomly selected student completely ignored the football game on Saturday. Let \(P(C)=0.30=1-p_1-p_2\).

One possible outcome, then, of selecting the 20 students at random is:

BBCABBAACABBBCCBCBCB

That is, the first two students watched the game on TV, the third student ignored the game, the fourth student went to the game, and so on. Now, if we let \(X\) denote the number in the sample who went to the football game on Saturday, let \(Y\) denote the number in the sample who watched the football game on TV on Saturday, and let \(Z\) denote the number in the sample who completely ignored the football game, then in this case:

  • \(X=4\) (because there are 4 As)
  • \(Y=10\) (because there are 10 Bs)
  • \(Z=20-X-Y=6\) (and yes, indeed, there are 6 Cs; see the quick check below)
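
As a quick sanity check on these tallies, here is a minimal Python sketch (our own illustration, not part of the original lesson) that counts the letters in the outcome string:

```python
from collections import Counter

# The sample outcome from the example:
# A = went to the game, B = watched on TV, C = ignored the game
outcome = "BBCABBAACABBBCCBCBCB"

counts = Counter(outcome)
print(counts["A"], counts["B"], counts["C"])  # 4 10 6
assert counts["A"] + counts["B"] + counts["C"] == 20
```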

What is the joint probability mass function of \(X\) and \(Y\)?

Solution

By independence, the probability of observing this particular sequence is \(p^4_1 p^{10}_2 (1-p_1-p_2)^6\). But any rearrangement of the 4 As, 10 Bs, and 6 Cs is just as likely, and there are \(\dfrac{20!}{4!10!6!}\) such arrangements. Therefore:

\(P(X=4, Y=10)=\dfrac{20!}{4!10!6!} (0.20)^4 (0.50)^{10} (0.30)^6\)

This example lends itself to the following formal definition.

Definition. Suppose we repeat an experiment \(n\) independent times, with each experiment ending in one of three mutually exclusive and exhaustive ways (success, first kind of failure, second kind of failure). If we let \(X\) denote the number of times the experiment results in a success, let \(Y\) denote the number of times the experiment results in a failure of the first kind, and let \(Z\) denote the number of times the experiment results in a failure of the second kind, then the joint probability mass function of \(X\) and \(Y\) is:

\(f(x,y)=P(X=x,Y=y)=\dfrac{n!}{x!y!(n-x-y)!} p^x_1 p^y_2 (1-p_1-p_2)^{n-x-y}\)

with:

\(x=0, 1, \ldots, n\)

\(y=0, 1, \ldots, n\)

\(x+y\le n\)
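
To make the definition concrete, here is a minimal Python sketch of the joint p.m.f. The function name trinomial_pmf is our own, and the optional cross-check at the end assumes SciPy is installed (a trinomial is just a multinomial with three categories):

```python
from math import factorial

def trinomial_pmf(x, y, n, p1, p2):
    """Joint p.m.f. f(x, y) of (X, Y) in a trinomial experiment."""
    if x < 0 or y < 0 or x + y > n:
        return 0.0  # outside the (triangular) support
    coef = factorial(n) // (factorial(x) * factorial(y) * factorial(n - x - y))
    return coef * p1**x * p2**y * (1 - p1 - p2)**(n - x - y)

# The football example: n = 20, p1 = P(A) = 0.20, p2 = P(B) = 0.50
print(trinomial_pmf(4, 10, 20, 0.20, 0.50))  # P(X = 4, Y = 10)

# Optional cross-check against SciPy's multinomial distribution
from scipy.stats import multinomial
print(multinomial.pmf([4, 10, 6], n=20, p=[0.20, 0.50, 0.30]))
```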

Example 17-3 continued

What are the marginal probability mass functions of \(X\) and \(Y\)? Are \(X\) and \(Y\) independent or dependent?

Solution

We can simply lump the two kinds of failures back together, thereby getting that \(X\), the number of successes, is a binomial random variable with parameters \(n\) and \(p_1\). That is:

\(f(x)=\dfrac{n!}{x!(n-x)!} p^x_1 (1-p_1)^{n-x}\)

with \(x=0, 1, \ldots, n\). Similarly, we can lump the successes in with the failures of the second kind, thereby getting that \(Y\), the number of failures of the first kind, is a binomial random variable with parameters \(n\) and \(p_2\). That is:

\(f(y)=\dfrac{n!}{y!(n-y)!} p^y_2 (1-p_2)^{n-y}\)

with \(y=0, 1, \ldots, n\). Therefore, \(X\) and \(Y\) must be dependent, because if we multiply the p.m.f.s of \(X\) and \(Y\) together, we don't get the trinomial p.m.f. That is, \(f(x,y)\ne f(x)\times f(y)\):

\(\left[\dfrac{n!}{x!y!(n-x-y)!} p^x_1 p^y_2 (1-p_1-p_2)^{n-x-y}\right] \neq \left[\dfrac{n!}{x!(n-x)!} p^x_1 (1-p_1)^{n-x}\right] \times \left[\dfrac{n!}{y!(n-y)!} p^y_2 (1-p_2)^{n-y}\right]\)
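
The inequality is easy to check numerically. The sketch below (again our own illustration, assuming SciPy is installed) compares the joint p.m.f. with the product of the marginals at \((x, y)=(4, 10)\) for the football example:

```python
from scipy.stats import binom, multinomial

n, p1, p2 = 20, 0.20, 0.50
x, y = 4, 10

joint = multinomial.pmf([x, y, n - x - y], n=n, p=[p1, p2, 1 - p1 - p2])
product = binom.pmf(x, n, p1) * binom.pmf(y, n, p2)

print(joint, product)  # the two values differ, so X and Y are dependent
```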

By the way, there's also another way of arguing that \(X\) and \(Y\) must be dependent... because the joint support of \(X\) and \(Y\) is triangular! Independent random variables must have a product (rectangular) support, but here the constraint \(x+y\le n\) rules out combinations such as \((x, y)=(n, n)\), even though \(P(X=n)>0\) and \(P(Y=n)>0\).
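
That argument, too, is easy to illustrate in a few lines of Python (our own sketch, assuming SciPy):

```python
from scipy.stats import binom

n, p1, p2 = 20, 0.20, 0.50

# Each marginal puts positive probability on its maximum value...
print(binom.pmf(n, n, p1) > 0, binom.pmf(n, n, p2) > 0)  # True True

# ...but (X, Y) = (n, n) is impossible, since x + y <= n must hold.
# Under independence, P(X = n, Y = n) would have to be positive,
# so X and Y cannot be independent.
```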

