Lesson 2: Probability

Overview

For every new lesson, it is important to keep in mind the goal of statistics and where we are in achieving that goal. The goal is to make inferences about the population based on the sample. In Lesson 1, we learned how to obtain a sample by collecting data. We also learned how to describe the data in that sample using descriptive statistics and graphs. This lesson starts us on the path to inference.

In order to learn about inference, we need to learn a few more things first. Inference requires an understanding of probability, random variables, and probability distributions (Lesson 3). This lesson focuses on the first step: probability.

Objectives

Upon successful completion of this lesson, you should be able to:

• Translate an event of interest into common probability notation.
• Translate common probability notation into a phrase or sentence describing the event of interest.
• Compute and interpret set operations numerically and with diagrams.
• Determine if two events are independent, mutually exclusive or neither.
• Compute and interpret conditional events and conditional probabilities.
• Apply Bayes' Theorem.

2.1 - Notation

Probability Notation

Probability is the likelihood of an outcome. Before we can properly define probability, we must first define 'events.' It is helpful and convenient to denote a collection of outcomes by a single letter rather than listing all of the possible outcomes.

Event
a collection of outcomes, typically denoted by capital letters such as A, B, C, etc...

Examples of Events

• Suppose we ask 30 students to record their eye color. We can define an event B to be blue eye color. In other words, let $B=\{\text{blue eyes}\}$.
• A game is played where you roll two fair six-sided dice. A player is allowed to “roll again” if there are doubles (i.e., both dice show the same face). Define the event R as the collection of outcomes that allow a player to roll again. Therefore, R={both 1’s, both 2’s, both 3’s, both 4’s, both 5’s, both 6’s}.

Students in introductory statistics courses often struggle with probability because they get caught up in, or confused by, the notation used to describe events and their associated probabilities. To put it simply, the notation is shorthand that keeps one from continually writing out long phrases to explain what is taking place. For instance, consider the toss of a fair coin: most people know there is a 1/2 chance, or 0.5 probability, of the coin coming up Tails. But how does one write this event?

Converting to Probability Notation

1. Identify the outcome event of interest: {Getting a Tail when we toss a fair coin}.
2. Use a single letter or word to represent this outcome of interest: T={Getting a Tail when we toss a fair coin}, for instance.
3. State your interest in the probability of this outcome: P(T) which is read, "Probability of getting a Tail when we toss a fair coin."

When you read a statistics textbook, a common lettering system uses the beginning of the alphabet. That is, authors use 'A', 'B', etc. to define outcome events of interest.

As you can see, the lettering can become convoluted! Just remember that the key is to identify what your outcome event of interest is.

Try It! Probability Statements

Write out a probability statement for randomly selecting a female employee from a company where 35% of the employees are female.
Let F = {selecting a female employee from the company}. Therefore, P(F) = 0.35.

Example 2-1

Now let's complicate things. Consider again tossing a fair coin. We stated P(T) was the probability we get a tail when we toss the coin. How would one write the probability statement if the outcome was getting two tails when the coin was tossed twice?

Applying the steps we get...

1. Identify the outcome of the event: Getting a tail on both tosses of a fair coin.
2. Use a single letter or word to represent this outcome of interest: We can write this as T,T, where the first T represents the outcome of the first toss and the second T as the outcome of the second toss.
3. State your interest in the probability of this outcome: P(T,T) which is read, "Probability of getting a Tail on the first and second toss."

*Note: Often the comma is eliminated and P(T,T) is written as P(TT).

2.2 - Set Notation and Operations

Outcome Space

When you first start learning about sets and events, it is often helpful to consider the outcome space.

Outcome Space
The outcome space of a scenario is all the possible outcomes that can occur and is often denoted S. The outcome space may also be referred to as the sample space.

Example 2-2

Consider the experiment where two fair six-sided dice are rolled and their face values recorded. Write down the outcome space.

If we write the pair of faces such as (value of first die, value of second die), then...

                 First Die
             1      2      3      4      5      6
Second   1  1, 1   2, 1   3, 1   4, 1   5, 1   6, 1
Die      2  1, 2   2, 2   3, 2   4, 2   5, 2   6, 2
         3  1, 3   2, 3   3, 3   4, 3   5, 3   6, 3
         4  1, 4   2, 4   3, 4   4, 4   5, 4   6, 4
         5  1, 5   2, 5   3, 5   4, 5   5, 5   6, 5
         6  1, 6   2, 6   3, 6   4, 6   5, 6   6, 6

In set notation we can write...

S = {(1,1) (2,1) (3,1) (4,1) (5,1) (6,1) (1,2) (2,2) (3,2) (4,2) (5,2) (6,2) (1,3) (2,3) (3,3) (4,3) (5,3) (6,3) (1,4) (2,4) (3,4) (4,4) (5,4) (6,4) (1,5) (2,5) (3,5) (4,5) (5,5) (6,5) (1,6) (2,6) (3,6) (4,6) (5,6) (6,6)}

There are 36 possible outcomes in the sample space S.
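As an illustrative aside (not part of the lesson), this sample space can be enumerated programmatically:

```python
from itertools import product

# Each outcome is a pair: (value of first die, value of second die).
sample_space = set(product(range(1, 7), repeat=2))

print(len(sample_space))  # 36 equally likely outcomes
```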

Try It! Outcome Spaces

Directions: Write out the event in probability notation and then identify the outcome space.
1. Getting an even number on the face of the second die.

Let A = {an even number on the face of the second die}.

[Grid of the 36 outcomes with the outcomes in A highlighted]

In set notation...

A = {(1, 2), (1, 4), (1, 6), (2,2), (2, 4), (2, 6), (3, 2), (3, 4), (3, 6), (4, 2), (4, 4), (4, 6), (5, 2), (5, 4), (5, 6), (6, 2), (6, 4), (6, 6)}

2. The sum of two faces is greater than or equal to 10.

Let B = {sum of the two faces is greater than or equal to 10}

[Grid of the 36 outcomes with the outcomes in B highlighted]

In set notation...

B = {(6, 4), (5, 5), (6, 5), (4, 6), (5, 6), (6,6)}

Set Operations

Now that we know how to denote events, the next step is to use set notation to represent set operations. Each operation is also presented in a Venn diagram. Set operations are important because they allow us to create a new event by manipulating other events.

Union

Verbally: The union of two events, A and B, contains all of the outcomes that are in A, in B, or in both. In statistics, ‘or’ means at least one of the events occurs and therefore includes the case where both occur.

Symbolically: The union of A and B is denoted $A\cup B$.

Visually: A or B, also written as $A \cup B$ = {outcomes in A or B or both}

Intersection

Verbally: The intersection of two events, A and B, contains all of the outcomes that are in both A and B.

Symbolically: The intersection is denoted by $A \cap B$

Visually: A and B also written as $A \cap B$ = {outcomes in both A and B}

Complement

Verbally: The complement of an event, A, contains all of the outcomes that are not in A.

Symbolically: The complement can be denoted as $A^c$, $\bar{A}$, or $A^\prime$.

Visually: A' also written as  $\bar{A}$ or $A^c$ = {outcomes not in A}

Mutually Exclusive

Verbally: A and B are called mutually exclusive (or disjoint) if the occurrence of outcomes in A excludes the occurrence of outcomes in B.  One example of two mutually exclusive events is A and A'.

Symbolically: There are no elements in $A \cap B$ and thus  $A \cap B=\emptyset$ . The empty set, denoted as $\emptyset$, is an event that contains no outcomes.

Visually: $A \cap B=\emptyset$
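These four operations map directly onto Python's built-in set type; a minimal sketch (illustrative, not part of the lesson) using one roll of a fair die:

```python
S = {1, 2, 3, 4, 5, 6}   # outcome space for one roll of a die
A = {2, 4, 6}            # event: an even number is rolled
B = {4, 5, 6}            # event: a number greater than 3 is rolled

print(A | B)        # union: outcomes in A, B, or both
print(A & B)        # intersection: outcomes in both A and B
print(S - A)        # complement of A (relative to S)
print(A & (S - A))  # A and its complement are mutually exclusive: empty set
```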

Example 2-3

Let's go back to the example where we roll two fair six-sided dice. Given the following events:

• $A=\{(3, 5)\}$
• $B=\{\text {a 4 is rolled on the first die}\}$
• $C=\{\text {a 5 is rolled on the second die}\}$
• $D=\{\text {the sum of the dice is 7}\}$
• $E=\{(7, 4)\}$

Find $B\cap D$ and $B\cup D$

$B\cap D=\{(4,3)\}$

$B\cup D=\{(4,1), (4,2), (4,3), (4,4), (4,5), (4,6), (6, 1), (5, 2), (3, 4), (2, 5), (1, 6)\}$

Visually,

[Grid of the 36 outcomes with the outcomes in B and in D highlighted]

Try It! Outcome of Sets

Directions: Given the events in the example above, find the following outcome sets.
1. $D\cap C$ and $D\cup C$
$D\cap C=\{(2, 5)\}$, $D\cup C=\{(1, 5), (2, 5), (3, 5), (4, 5), (5, 5), (6, 5), (6,1), (5, 2), (4, 3), (3, 4), (1, 6)\}$
2. $A\cap D$ and $A\cup D$
$A\cap D=\emptyset$, $A\cup D=\{(6, 1), (5,2), (4,3), (3, 4), (2, 5), (1, 6), (3, 5)\}$
3. $B\cap C$ and $B\cup C$
$B\cap C=\{(4, 5)\}$, $B\cup C=\{(4, 1), (4, 2), (4,3), (4,4), (4, 5), (4, 6), (1,5), (2, 5), (3, 5), (5, 5), (6,5)\}$

Example 2-4

Suppose events $A$, $B$, and $C$ are events of a particular scenario. Write the following using event notation.

1. At least one event occurs.
Answer: $A\cup B\cup C$  At least one event means A or B or C or any two events or all three events.
2. None of the events occur.
Answer: $A^\prime\cap B^\prime \cap C^\prime$
3. Only A occurs.
Answer: $A\cap (B\cup C)^\prime$

2.3 - Interpretations of Probability

Interpretations of Probability

Before discussing the specific rules of probability, it's important to recognize there are three main interpretations of probability.

Classical Interpretation of Probability

The probability that event E occurs is denoted by P(E). When all outcomes are equally likely, then:

$P(E) = \frac{number\ of\ outcomes\ in\ E}{number\ of\ possible\ outcomes}$
Subjective Probability

Subjective probability reflects personal belief which involves personal judgment, information, intuition, etc.

For example, what is P (you will get an A in a certain course)? Each student may have a different answer to the question.

Relative Frequency Concept of Probability (Empirical Approach)

If an experiment is repeated a large number of times, then the relative frequency with which an outcome occurs will be close to the true probability of that outcome.

For example, if we flip a given coin 10,000 times and observe 4555 heads and 5445 tails, then for that coin, $P(H)\approx 0.4555$.

$P(E) \approx \frac{number\ of\ times\ E\ occurs}{number\ of\ trials}$
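A small simulation illustrates the empirical approach (a Python sketch, not part of the lesson; the seed is fixed only for reproducibility):

```python
import random

random.seed(0)  # reproducible illustration
flips = [random.choice("HT") for _ in range(10_000)]
p_heads = flips.count("H") / len(flips)

# The relative frequency of heads should be close to the true value 0.5.
print(round(p_heads, 3))
```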

Example 2-5

1. Find the probability that exactly one head appears in two flips of a fair coin.

The possible outcomes are listed as: {(H, H), (H, T), (T, H), (T, T)}.

Note that the four outcomes are of equal probability since the coin is fair.

$P(getting\ exactly\ one\ H\ in\ two\ flips\ of\ a\ fair\ coin)=P\{(H,T),(T,H)\}=\frac{2}{4}=\frac{1}{2}$

2. Find the probability that two heads appear in two flips of a fair coin.

$P(getting\ two\ H\ in\ two\ flips\ of\ a\ fair\ coin)=P\{(H,H)\}=\frac{1}{4}$

3. Find the probability that the sum of two faces is greater than or equal to 10 when one rolls a pair of fair dice.

The possible outcomes of the experiment are:

{ (1,1) (2,1) (3,1) (4,1) (5,1) (6,1) (1,2) (2,2) (3,2) (4,2) (5,2) (6,2) (1,3) (2,3) (3,3) (4,3) (5,3) (6,3) (1,4) (2,4) (3,4) (4,4) (5,4) (6,4) (1,5) (2,5) (3,5) (4,5) (5,5) (6,5) (1,6) (2,6) (3,6) (4,6) (5,6) (6,6) }

If S denotes the sum of the points in the two faces:

\begin {align} P(S\ greater\ than\ or\ equal\ to\ 10)& =P(S=10)+P(S=11)+P(S=12)\\ & = P\{(4,6),(5,5),(6,4)\}+P\{(5,6),(6,5)\}+P\{(6,6)\} \\ & = \frac{3}{36}+\frac{2}{36}+\frac{1}{36} \\ & =\frac{1}{6} \\ \end {align}
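The same count can be confirmed programmatically (an illustrative Python sketch, not part of the lesson):

```python
from itertools import product

# All 36 equally likely rolls of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [pair for pair in outcomes if sum(pair) >= 10]

prob = len(favorable) / len(outcomes)
print(len(favorable), prob)  # 6 favorable outcomes, probability 1/6
```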

Try It! Rolling Dice

Directions: Try each of these problems and click the buttons to reveal the correct answers.

Again using the example where we roll two fair six-sided dice and the events below, find the following:

• $A=\{(3, 5)\}$
• $B=\{\text {a 4 is rolled on the first die}\}$
• $C=\{\text {a 5 is rolled on the second die}\}$
• $D=\{\text {the sum of the dice is 7}\}$
• $E=\{(7, 4)\}$
1. $P(B\cap D)$ and $P(B\cup D)$.
$P(B\cap D)=\frac{1}{36}$ and $P(B\cup D)=\frac{11}{36}$.
2. $P(D\cap C)$ and $P(D\cup C)$.
$P(D\cap C)=\frac{1}{36}$ and $P(D\cup C)=\frac{11}{36}$
3. $P(A\cap D)$ and $P(A\cup D)$
$P(A\cap D)=0$ and $P(A\cup D)=\frac{7}{36}$
4. $P(B\cap C)$ and $P(B\cup C)$
$P(B\cap C)=\frac{1}{36}$ and $P(B\cup C)=\frac{11}{36}$

2.4 - Probability Properties

This section provides the basic terms and properties associated with classical probability. We generally focus on classical probability, but these properties apply to both classical and subjective probabilities.

Probability of an event

Probabilities will always be between (and including) 0 and 1. A probability of 0 means that the event is impossible. A probability of 1 means an event is guaranteed to happen. A probability close to 0 means the event is "not likely" and a probability close to 1 means the event is "highly likely" to occur. We denote the probability of event A as P(A).

$0 \leq P(A) \leq 1$
Range of probability for any event A.
Probability of a complement

If A is an event, then the probability of A is equal to 1 minus the probability of the complement of A, $A^\prime$.

$P(A)=1-P(A^\prime)$
We can see from the formula that $1=P(A)+P(A^\prime)$.
Probability of the empty set

If A and B are mutually exclusive, then $A\cap B=\emptyset$. Therefore, $P(A\cap B)=0$. This is important when we consider mutually exclusive (or disjoint) events.

$P(A\cap B)=0$
Probability of the union of two events

$P(A\cup B)=P(A)+P(B)-P(A\cap B)$

If A and B are mutually exclusive, then $P(A\cup B)=P(A)+P(B)$.
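As a sketch (not part of the original lesson), the union rule can be verified numerically with the dice events B and D from the earlier example, using Python's exact fractions:

```python
from itertools import product
from fractions import Fraction

S = set(product(range(1, 7), repeat=2))          # 36 equally likely rolls
B = {(a, b) for (a, b) in S if a == 4}           # a 4 on the first die
D = {(a, b) for (a, b) in S if a + b == 7}       # the sum of the dice is 7

def P(E):
    """Classical probability: favorable outcomes over total outcomes."""
    return Fraction(len(E), len(S))

lhs = P(B | D)                  # P(B u D) computed directly
rhs = P(B) + P(D) - P(B & D)    # the inclusion-exclusion formula
print(lhs, rhs)  # 11/36 11/36
```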

Try It! Probability Properties

Directions: Use the information given above to work out your answer to the questions below.

Given $P(A) = 0.6$, $P(B) = 0.5$, and $P(A\cap B)=0.2$.

1. Find $P(A^\prime)$.
$P(A^\prime)=1-P(A)=0.4$
2. Find $P(A \cap B^\prime)$.

$P(A \cap B^\prime)=P(A)-P(A\cap B)=0.6-0.2=0.4$

3. Find $P(B \cap A^\prime)$.

$P(B \cap A^\prime)=P(B)-P(A\cap B)=0.5-0.2=0.3$

4. Find $P(A \cup B)$
$P(A \cup B)=P(A)+P(B)-P(A \cap B)=0.6+0.5-0.2=0.9$

2.5 - Conditional Probability

Conditional Probability and Independence

The concepts of conditional events and independent events determine whether or not one event has an effect on the probability of another. You can probably imagine several daily conversations that invoke these concepts. For instance, say you are discussing driving directions with a friend on the quickest way to get to some destination. The friend asks if leaving during rush hour would change these plans. If you respond that it does, this reflects the concept of "conditional": the quickest way is conditioned, or affected, by the time of day your friend leaves. If you respond that it does not, this reflects the concept of "independence": the quickest way is the same regardless of when your friend leaves. The time of day does not affect the quickest way.

Another example is practically any sporting event. For instance, a team might have a probability of 0.6 of winning the Super Bowl or a country a probability of 0.3 of winning the World Cup.

• Conditional Scenario: What if it rains? The team's chances may change (for the better or possibly for the worse). The probability of winning is affected by the weather - conditional.
• Independent Scenario: What if the game is being played in an enclosed stadium? In such a case the weather may have no effect on the team's chances - independent.
Conditional Probability

The probability of one event occurring given that it is known that a second event has occurred. This is communicated using the symbol $\mid$ which is read as "given."

For example, $P(A\mid B)$ is read as "Probability of A given B."

Computing Conditional Probability

The Probability of A given B:

$P(A\mid B)=\dfrac{P(A \: \cap\: B)}{P(B)}$

The Probability of B given A:

$P(B\mid A)=\dfrac{P(B \: \cap\: A)}{P(A)}$

The Probability of the Intersection of Dependent Events

The probability of dependent events A and B derived from the formulas for conditional probability:

$P(A \cap B)=P(B) P(A|B)$

$P(B \cap A)=P(A) P(B|A)$

Note! Usually, $P(A|B) \neq P(B|A)$.
Marginal Probability
The probability of an event without reference to any other event or events occurring.

Application

Enrollment Figures

The two-way table below displays the World Campus enrollment from Fall 2015 in terms of level (undergraduate and graduate) and biological sex.

Female Male Total
Undergraduate 3814 3428 7242
Graduate 2213 2787 5000
Total 6027 6215 12242

Choose one student from the sample, what is the probability that the student is a female?

$P(F)=\dfrac{6027}{12242}$

If it is known that the student is a graduate student, what is the probability that the student is a female?

We now are interested in only graduate students who are female (2213). The total is just the number of graduate students (5000).

$P(F|G)=\dfrac{2213}{5000}$

Note that this is not the same as the probability that a selected student is a female graduate student (i.e. $P(F \cap G)$).

In this scenario, the marginal probability is not the same as the conditional probability. In other words, knowing that the student is a graduate student changes the likelihood that the student is a female. This reflects events that are not independent, i.e. they are considered dependent events. Independent events will be discussed in more detail later in the lesson.

Example 2-6

Consider rolling two fair six-sided dice and the events C and D.

$C=\{\text{a 5 is rolled on the second die}\}$

$D=\{\text{the sum of the dice is 7}\}$,

find $P(C|D)$.

Let's approach this example in two ways: (1) using the sample space and (2) using the formulas above.

Approach 1:  The sample space

When we know that $D$ occurs, we only consider the pair of rolls in $D$ and forget all the other possibilities. Therefore, the conditional space becomes:

[Grid of the 36 outcomes with the six outcomes in D highlighted]

In set notation...

S={(1,6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}

Out of the six pairs that have a sum of 7, we can see only one pair, (2,5), has a 5 on the second die. Therefore,

$P(C|D)=\dfrac{1}{6}$

Approach 2:  From the previous example, we know $P(D)=\frac{6}{36}$, $P(D\cap C)=\frac{1}{36}$.  Therefore, using the formula above,

$P(C|D)=\dfrac{P(C\cap D)}{P(D)}=\dfrac{\frac{1}{36}}{\frac{6}{36}}=\dfrac{1}{6}$.
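Both approaches can be mirrored in a short, illustrative Python sketch (not part of the lesson), assuming the same events C and D:

```python
from itertools import product
from fractions import Fraction

S = set(product(range(1, 7), repeat=2))        # 36 equally likely rolls
C = {(a, b) for (a, b) in S if b == 5}         # a 5 on the second die
D = {(a, b) for (a, b) in S if a + b == 7}     # the sum of the dice is 7

def P(E):
    return Fraction(len(E), len(S))

# Approach 1: restrict the sample space to D and count.
p1 = Fraction(len(C & D), len(D))
# Approach 2: the conditional probability formula P(C|D) = P(C n D) / P(D).
p2 = P(C & D) / P(D)

print(p1, p2)  # 1/6 1/6
```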

Important Note Regarding Conditional Probabilities

For any two events, $A$ and $B$,
1. $P(A|B)=1-P(A^\prime|B)$
2. $P(A)=P(A\cap B)+P(A\cap B^\prime)=P(A|B)P(B)+P(A|B^\prime)P(B^\prime)$

A Venn diagram helps visualize why the second property is true: the region for $P(A\cap B)$, when added to the region for $P(A\cap B^\prime)$, makes up all of $P(A)$.

2.6 - Independent Events

To introduce the idea of independence let's look at a table similar to the one on the previous page. The two-way table below displays the World Campus enrollment predictions for a future semester in terms of level (undergraduate and graduate) and biological sex.

Female Male Total
Undergraduate 3850 3150 7000
Graduate 2750 2250 5000
Total 6600 5400 12000

Choose one student from this future semester, what is the probability that the student is a female?

$P(F)=\dfrac{6600}{12000}=.55$

If it is known that the student is a graduate student, what is the probability that the student is a female?

We now are interested in only graduate students who are female (2750). The total is just the number of graduate students (5000).

$P(F|G)=\dfrac{2750}{5000}=.55$

In this example, the marginal probability is the same as the conditional probability. This reflects events that are independent. In other words, knowing that the selected student is a graduate student does not give us any additional information about the gender of the student.

We can see that in some situations the conditional and marginal probabilities differ, while in others they are equal. When they are equal, the events are independent; when they are not equal, the events are dependent.

Unless one is explicitly told that events are independent, one cannot simply assume that they are. However, there are some events for which we naturally assume such independence exists (e.g. a flip of a fair coin or the roll of a fair die). For these examples, one expects that the probability of an outcome of one trial does not affect the probability of subsequent trial outcomes. That is, the probability a tail comes up on the first flip of a coin does not change the probability that a tail comes up on the second, third, fourth, etc. flip of that coin.

We can check for independence of two events by showing that any ONE of the following is true. For any given probabilities for events A and B, the events are independent if:

1. $P(A \cap B)=P(A)\cdot P(B)$
2. $P(A|B)=P(A)$ or,
3. $P(B|A)=P(B)$
Independent Events

Two events, A and B, are considered independent events if the probability of A occurring is not changed based on any knowledge of the outcome of B.

Dependent Events

Two events are not independent, or dependent, if knowledge of the outcome of B changes the probability of A.

Example 2-7

The probability of getting a tail on any one flip is $\frac{1}{2}$. The probability of getting tails on both flips, (T,T), is $\frac{1}{4}$. Let event A be the outcome of getting a tail on flip one and event B getting a tail on flip two. Find the probability of $P(A \cap B)$

This gives us $P(A)=\frac{1}{2}$ and $P(B)=\frac{1}{2}$. The probability of getting a tail on both flips is $P(A \cap B)=\frac{1}{4}$. We can verify that the two events are independent by checking whether $P(A \cap B)=P(A)\cdot P(B)$. This holds, since $P(A)\cdot P(B)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}=P(A\cap B)$. Therefore the two events, getting a tail on the first flip and getting a tail on the second flip, are independent.

Example 2-8

You apply to grad schools at Harvard (H) and PSU (P). The probability of being accepted into Harvard is 0.3, the probability of being accepted into PSU is 0.6, and the probability of being accepted into both is 0.25. Noting the probabilities:

$P(H) = 0.3$,
$P(P) = 0.6$, and
$P(H \cap P)=.25$
Are events getting accepted into Harvard and into Penn State independent events?

Before you jump to an automatic assumption of "of course they are!" be sure to verify that one of the above expressions holds true!
$P(H)\cdot P(P)=0.3(0.6)=0.18$, which does not equal $P(H \cap P)=0.25$. Therefore the events are dependent!

Example 2-9

Given the following table, determine whether employment status and gender are independent.

Employment Status and Gender

Employed Unemployed Total
Male 460 40 500
Female 140 260 400
Total 600 300 900

Let A denote the event of being employed and B denote the event of being a male.

$P(A \cap B) =P(employed\ and\ male)=\frac{460}{900}=0.5111$

$P(A) = P(employed) = \frac{600}{900} = \frac{2}{3}$

$P(B) = P(male) = \frac{500}{900}=\frac{5}{9}$

Since $P(A)\cdot P(B) =\frac{2}{3}\cdot \frac{5}{9} \approx 0.3704$ is not the same as $P(A \cap B)=0.5111$, we have $P(A \cap B)\neq P(A)\cdot P(B)$ and can conclude that employment status and gender are not independent for the data provided.
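A quick, illustrative Python check of the independence condition using the counts in the table (a sketch, not part of the lesson):

```python
from fractions import Fraction

# Counts from the employment-status-by-gender table.
employed_and_male = 460
employed = 600
male = 500
total = 900

p_a_and_b = Fraction(employed_and_male, total)  # P(employed and male)
p_a = Fraction(employed, total)                 # P(employed)
p_b = Fraction(male, total)                     # P(male)

# Independence would require P(A n B) == P(A) * P(B).
print(p_a_and_b == p_a * p_b)  # False, so the events are dependent
```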

Why is this useful?

If the data are not the whole population but represent a random sample from a certain target population, and we want to draw inference about whether employment status and gender are related for the population, then, due to sampling variability, the sample may not follow the exact relationship $P(A \cap B)=P(A)\cdot P(B)$ even if such a relationship holds for the population. When we talk about inference for the independence of a two-way table, we will provide the rationale for the chi-square test for independence, which measures how far off the observations are from being independent. This will be discussed in greater detail in a future lesson.

Independent vs. Mutually Exclusive

Students often confuse independent events with mutually exclusive events. The two terms mean very different things. Recall that if two events are mutually exclusive, they have no elements in common and thus cannot both happen at the same time. In fact, mutually exclusive events (each with nonzero probability) are always dependent. If A and B are mutually exclusive events, then...

$P(A \cap B)=0 \neq P(A)P(B)$

Try It! Independent Events

Exxon will open a new gas station at a busy intersection in State College next year and feels that the probability it will show a profit in its first year is 0.6, taking into consideration that Mobil may or may not open a gas station opposite it. If Mobil also opens a gas station opposite it, the probability of a first-year profit for the Exxon station drops to 0.3. The probability that Mobil will open a station opposite it is 0.4.

1. What is the probability that both Exxon shows a profit in the first year and Mobil opens a gas station opposite to it? Let A denote the event that Exxon shows a profit in the first year and B denote the event that Mobil opens a gas station opposite to it.

Given: $P(A) = 0.6, P(B) = 0.4, P(A\mid B)=0.3$

Using the intersection formula for dependent events: $P(A \cap B)=P(B) P(A|B)=0.4\cdot0.3=0.12$

The probability that Exxon shows a profit in its first year and Mobil opens a gas station opposite it is 0.12.

2. What is the probability that either Exxon shows a profit in the first year or Mobil opens a gas station opposite to it?
For an 'or' use the formula for the Union of two dependent events: $P(A\cup B)=P(A)+P(B)-P(A\cap B)=0.6+0.4-0.12=0.88$

1. From an urn with 6 red balls and 4 blue balls, two balls are picked randomly without replacement. Find the probability that both balls picked are red.

Let R be the event of picking a red ball and B for picking a blue ball.

The $P(R)=\dfrac{6}{10}$

The $P(B)=\dfrac{4}{10}$

We are interested in finding $P(both\ balls\ picked\ are\ red) = P(first\ ball\ red\ \cap\ second\ ball\ red)$ or $P(R\cap R)$

Since we are choosing without replacement these events are dependent.

$P(R \cap R)=P(R) P(R|R)$

$P(R \cap R)=\dfrac{6}{10}\cdot \dfrac{5}{9}=\frac{1}{3}$

2. Let $A$ and $B$ be the following events:

$A$ = get a job

$B$ = buy a new car

It is a given that $P(A) = 0.9$, $P(B) = 0.7$. What is the probability of double happiness: that you get a job and buy a new car? In other words, we want to find $P(A \cap B)$

There is not yet enough information to answer the question.

First, we ask whether A and B are independent. In this case, simply assuming that the two events are independent is not realistic. Thus, we think harder and try to assess either $P(A|B)$ or $P(B|A)$. Thinking about it, it is not hard to assess the probability of buying a new car knowing that he/she gets a job. For example, suppose one thinks that $P(B|A) = 0.75$ (this probability is subjectively chosen and may differ between individuals); that person thinks the chance of buying a new car, knowing that he/she gets a job, is 75%.

$P(A\cap B)=P(A)P(B|A)=0.9\cdot0.75 = 0.675$
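The urn problem above (picking two red balls without replacement) can be checked by enumerating all ordered pairs of draws; a minimal Python sketch (illustrative, not part of the lesson):

```python
from fractions import Fraction
from itertools import permutations

# 6 red (R) and 4 blue (B) balls, drawn without replacement.
balls = ["R"] * 6 + ["B"] * 4
draws = list(permutations(range(10), 2))  # all ordered pairs of distinct balls

both_red = [d for d in draws if balls[d[0]] == "R" and balls[d[1]] == "R"]
prob = Fraction(len(both_red), len(draws))
print(prob)  # 1/3
```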

2.7 - Bayes' Theorem

Example 2-10: Jury Trial

In a jury trial, suppose the probability the defendant is convicted, given guilt, is 0.95, and the probability the defendant is acquitted, given innocence, is 0.95. Suppose that 90% of all defendants truly are guilty. Find the probability the defendant was actually innocent given the defendant is convicted. The video will step you through this example.

Video: Jury Trial Example

Let Guilty = $G$
Innocent = $I$
Acquitted = $A$
Convicted = $C$

$P(G) = 0.9$ so $P(I) = 0.1$
$P(C | G) = 0.95$ so $P(A | G) = 0.05$
$P(A | I) = 0.95$ so $P(C | I) = 0.05$
Need to find: $P(I | C)$

\begin{align} P(I\ and\ C)  &= P(C | I)*P(I)\\ &= 0.05*0.1\\ &= 0.005 \end{align}
\begin{align} P(C) &= P(G\ and\ C) + P(I\ and\ C)\\ &= (0.95)*(0.9) + ( 0.05)*(0.1)\\ &= 0.855 + 0.005\\ &= 0.86\\ \end{align}
\begin{align} P(I | C) &= \dfrac{P(I\ and\ C) }{P(C)} \\&= \dfrac{0.005}{0.86}\\ &= 0.006 \end{align}

The above example illustrates the use of Bayes' theorem to find "reverse" conditional probabilities.

Bayes' Theorem

Suppose we have events $A_1, \dots, A_k$ and event $B$. If $A_1, \dots, A_k$ are $k$ mutually exclusive and exhaustive events (i.e., exactly one of them must occur), then...

$P(A_{i}|B)=\dfrac{P(B | A_{i})P(A_{i})}{\sum_{j} P(B | A_{j})P(A_{j})}=\dfrac{P(B | A_{i})P(A_{i})}{P(B| A_{1})P(A_{1})+P(B |A_{2})P(A_{2})+...+P(B| A_{k})P(A_{k})}$

Applying this to just two events A and B we have...

$P(A|B)=\dfrac{P(B | A)P(A)}{ P(B | A)P(A)+P(B| A')P(A')}$
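As an illustrative sketch (not part of the lesson), the two-event form can be coded directly; plugging in the jury-trial numbers (A = innocent, B = convicted) reproduces the answer above:

```python
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) from the two-event form of Bayes' theorem."""
    p_not_a = 1 - p_a
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * p_not_a)

# Jury trial: P(C|I) = 0.05, P(I) = 0.1, P(C|G) = 0.95.
p_innocent_given_convicted = bayes(0.05, 0.1, 0.95)
print(round(p_innocent_given_convicted, 3))  # 0.006
```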

Example 2-11: Screw Manufacturing

A company creates their product using a specially made screw. For financial purposes, the company gets their screws from three different manufacturers. If a screw is defective, it can cause a lot of damage. Here is the table of the proportion of screws from each manufacturer and the probability of obtaining a defective screw.

Manufacturer Proportion of Company's Screws Probability of a Defective Screw
A 0.40 0.01
B 0.25 0.02
C 0.35 0.015

If a screw is found to be defective, what is the probability that it came from Manufacturer C?

Let D denote a defective screw.  We want $P(C|D)$.  We can use Bayes' Theorem to find this probability.

\begin{align} P(C|D) &=\frac{P(C\cap D)}{P(D)}\\ &=\frac{P(D|C)P(C)}{P(D|C)P(C)+P(D|A)P(A)+P(D|B)P(B)}\\ &=\frac{0.015(0.35)}{0.015(0.35)+0.01(0.40)+0.02(0.25)}\\ &=\frac{0.00525}{0.01425}\\ &=\frac{7}{19}=0.36842 \end{align}
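The same computation for any number of mutually exclusive sources can be sketched in a few lines of Python (illustrative only; the dictionary keys mirror the table above):

```python
# Prior P(manufacturer) and P(defective | manufacturer) from the table.
prior = {"A": 0.40, "B": 0.25, "C": 0.35}
p_def = {"A": 0.01, "B": 0.02, "C": 0.015}

# Total probability of a defective screw (the denominator of Bayes' theorem).
p_d = sum(prior[m] * p_def[m] for m in prior)

# Posterior probability of each manufacturer given a defective screw.
posterior = {m: prior[m] * p_def[m] / p_d for m in prior}

print(round(posterior["C"], 5))  # 0.36842
```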

Practical Application: Bayes' Theorem in Diagnostic Testing

In diagnostic testing (e.g. drug tests), there are five key concepts:

Prevalence

Prevalence is the probability or proportion of occurrence of a disease or behavior in the population at a particular point in time.

• Example: Proportion of bus drivers who use illegal drugs
Sensitivity and Specificity
• Sensitivity is the probability of a positive result given the person is actually positive.
• Example: the probability of a home pregnancy test coming up positive for a woman who is actually pregnant
• Specificity is the probability of a negative result given the person is actually negative.
• Example: the probability of a home pregnancy test coming up negative for a woman who is not pregnant
False Positives and False Negatives
• False Positives are when results come back positive for someone who is actually negative
• Example: a home pregnancy test coming up positive for a woman who is not pregnant
• False Negatives are when results come back negative for someone who is actually positive
• Example: a home pregnancy test coming up negative for a woman who is actually pregnant

Example 2-12: Diabetes Screening

Consider the following data on a diabetes screening test based on a non-fasting blood screen test, which is relatively inexpensive and painless.

Test Results
Diabetes? Positive Negative Total
Yes 350 150 500
No 1900 7600 9500
Total 2250 7750 10,000

With these results, we see the Sensitivity is 350 out of 500 or 70% and the specificity is 7600 out of 9500 or 80%. The overall prevalence of diabetes is 500 out of 10000 or 5%.

From this test, how many were “missed” (i.e. actually had diabetes but tested negative, the false negatives), and how many were incorrectly identified as having the disease (i.e. the false positives)?

The test missed identifying 150 (a false negative rate of 150/500 or 30%) while the false positive rate was 1900/9500 or 20%.
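These quantities can be computed directly from the table's counts; a minimal Python sketch (illustrative, not part of the lesson):

```python
# Counts from the diabetes screening table.
true_pos, false_neg = 350, 150      # people who have diabetes
false_pos, true_neg = 1900, 7600    # people who do not have diabetes

sensitivity = true_pos / (true_pos + false_neg)      # P(positive | diabetes)
specificity = true_neg / (true_neg + false_pos)      # P(negative | no diabetes)
false_neg_rate = false_neg / (true_pos + false_neg)  # 1 - sensitivity
false_pos_rate = false_pos / (true_neg + false_pos)  # 1 - specificity

print(sensitivity, specificity, false_neg_rate, false_pos_rate)
```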

We can see the importance of getting second opinions. What happens with the second opinion (or second test) is that a more expensive and accurate test is used (e.g. a clinical test for pregnancy or a glucose tolerance test for diabetes that requires fasting and a day at a clinic/hospital/doctor’s office). These additional tests are done to verify results before continuing with what can be expensive and uncomfortable treatments.

2.8 - Lesson 2 Summary

Lesson 2 Summary

Probability is important because it tells us the likelihood of events. Understanding conditional probabilities is also important because p-values, which we'll explore in future lessons, are conditional probabilities. Now that we understand the probability of events, we can move on to random variables and probability distributions, and thus more deeply into inference.

Probability of Events

Set Operation Symbol Definition
Intersection $P(A\cap B)$ Probability of A and B
Union $P(A\cup B)$ Probability of A or B (note: this includes the possibility of both A and B)
Complement $P(A')$ Probability of NOT A
Conditional $P(A\mid B)$ Probability of A given B
