11.3 - Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value
In this example, the two columns indicate the actual condition of the subjects, diseased or non-diseased, and the rows indicate the results of the test, positive or negative:

| Test result | Diseased | Non-diseased |
| --- | --- | --- |
| Positive | A (true positives) | B (false positives) |
| Negative | C (false negatives) | D (true negatives) |

Cell A contains the true positives: subjects who have the disease and whose test is positive. Cell D contains the true negatives: subjects who do not have the disease and whose test agrees. A good test has minimal numbers in cells B and C. Cell B holds the false positives, individuals without the disease for whom the test indicates 'disease'. Cell C holds the false negatives, individuals with the disease whom the test misses.
If these results are from a population-based study, prevalence can be calculated as follows:
Prevalence of Disease = \(\dfrac{T_{\text{disease}}}{\text{Total}} \times 100 = \dfrac{A+C}{A+B+C+D} \times 100\)
The population used for the study influences the prevalence calculation.
Sensitivity is the probability that a test will indicate 'disease' among those with the disease:
Sensitivity: A/(A+C) × 100
Specificity is the fraction of those without the disease who will have a negative test result:
Specificity: D/(D+B) × 100
Sensitivity and specificity are characteristics of the test itself; they do not depend on the prevalence of disease in the population being tested.
A clinician and a patient have a different question: what is the chance that a person with a positive test truly has the disease? If the subject is in the first row in the table above, what is the probability of being in cell A as compared to cell B? A clinician calculates across the row as follows:
Positive Predictive Value: A/(A+B) × 100
Negative Predictive Value: D/(D+C) × 100
Positive and negative predictive values are influenced by the prevalence of disease in the population that is being tested. If we test in a high prevalence setting, it is more likely that persons who test positive truly have the disease than if the test is performed in a population with low prevalence.
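All five measures are simple ratios of the cell counts, so they are easy to compute once A, B, C, and D are known. Below is a minimal Python sketch (the function name and dictionary keys are illustrative, not part of the source) that returns each measure as a percentage:

```python
def screening_metrics(a, b, c, d):
    """Compute screening-test measures (as percentages) from a 2x2 table:
    a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    total = a + b + c + d
    return {
        "prevalence":  100 * (a + c) / total,  # diseased among everyone tested
        "sensitivity": 100 * a / (a + c),      # positive tests among the diseased
        "specificity": 100 * d / (d + b),      # negative tests among the non-diseased
        "ppv":         100 * a / (a + b),      # diseased among those testing positive
        "npv":         100 * d / (d + c),      # non-diseased among those testing negative
    }
```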
Let's see how this works out with some numbers...
Hypothetical Example 1 - Screening Test A
100 people are tested for the disease. 15 people have the disease; 85 people are not diseased. So, the prevalence is 15%:
- Prevalence of Disease:
\(\dfrac{T_{\text{disease}}}{\text{Total}} \times 100\),
15/100 × 100 = 15%
The sensitivity is two-thirds: the test detects two-thirds of the people who have the disease and misses the remaining one-third.
- Sensitivity:
A/(A + C) × 100
10/15 × 100 = 67%
The test has 53% specificity. In other words, out of 85 persons without the disease, 45 have true negative results while 40 individuals test positive for a disease that they do not have.
- Specificity:
D/(D + B) × 100
45/85 × 100 = 53%
The sensitivity and specificity are characteristics of this test. For a clinician, however, the important fact is that among the people who test positive, only 20% actually have the disease.
- Positive Predictive Value:
A/(A + B) × 100
10/50 × 100 = 20%
Of those that test negative, 90% do not have the disease.
- Negative Predictive Value:
D/(D + C) × 100
45/50 × 100 = 90%
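As a quick check on Example 1, the cell counts implied above are A = 10, B = 40, C = 5 (the 15 diseased minus the 10 detected), and D = 45. Running them through the `screening_metrics` sketch reproduces the rounded figures quoted here:

```python
# Example 1: 15 diseased and 85 non-diseased subjects tested
print(screening_metrics(10, 40, 5, 45))
# prevalence 15.0, sensitivity ≈66.7, specificity ≈52.9 (53% rounded), ppv 20.0, npv 90.0
```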
Now, let's change the prevalence.
Hypothetical Example 2 - Increased Prevalence, Same Test
This time we use the same test, but in a different population, with a disease prevalence of 30%.
- Prevalence of Disease:
\(\dfrac{T_{\text{disease}}}{\text{Total}} \times 100\)
30/100 × 100 = 30%
We maintain the same sensitivity and specificity because these are characteristics of this test.
- Sensitivity:
A/(A + C) × 100
20/30 × 100 = 67%
- Specificity:
D/(D + B) × 100
37/70 × 100 = 53%
Now let's calculate the predictive values:
- Positive Predictive Value:
A/(A + B) × 100
20/53 × 100 = 38%
- Negative Predictive Value:
D/(D + C) × 100
37/47 × 100 = 79%
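The same check works for Example 2, where the cell counts are A = 20, B = 33, C = 10, and D = 37 (30 diseased and 70 non-diseased subjects):

```python
# Example 2: same test applied where prevalence is 30%
print(screening_metrics(20, 33, 10, 37))
# prevalence 30.0, sensitivity ≈66.7, specificity ≈52.9, ppv ≈37.7, npv ≈78.7
```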
Using the same test in a population with higher prevalence increases the positive predictive value. Conversely, increased prevalence results in decreased negative predictive value. When considering the predictive values of diagnostic or screening tests, recognize the influence of the prevalence of disease. The figure below depicts how predictive value changes with disease prevalence when sensitivity and specificity are held fixed:
Relationship between disease prevalence and predictive value in a test with 95% sensitivity and 85% specificity.
(From Mausner JS, Kramer S: Mausner and Bahn Epidemiology: An Introductory Text. Philadelphia, WB Saunders, 1985, p. 221.)
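The curve in the figure can be reproduced numerically without building a table, by expressing each cell as a fraction of the population tested. The sketch below is not from the source; the 95% sensitivity and 85% specificity values simply mirror those quoted in the caption. It shows PPV rising and NPV falling as prevalence increases:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV) as percentages for a given disease prevalence,
    sensitivity, and specificity, all supplied as proportions (0 to 1)."""
    tp = prevalence * sensitivity              # true positives, per person tested
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    fn = prevalence * (1 - sensitivity)        # false negatives
    tn = (1 - prevalence) * specificity        # true negatives
    return 100 * tp / (tp + fp), 100 * tn / (tn + fn)

for prev in (0.01, 0.05, 0.15, 0.30, 0.50):
    ppv, npv = predictive_values(prev, 0.95, 0.85)
    print(f"prevalence {prev:4.0%}:  PPV {ppv:5.1f}%   NPV {npv:5.1f}%")
```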
Minimizing false positives is most important when the costs or risks of follow-up therapy are high and the disease itself is not life-threatening. Screening for prostate cancer in elderly men is one example. As another, obstetricians must weigh the potential harm of a false positive maternal serum AFP test, which may be followed by amniocentesis, ultrasonography, and increased fetal surveillance, and which produces anxiety for the parents and labeling of the unborn child, against the potential benefit.
We don’t want any false negatives if the disease is often asymptomatic and
- is serious, progresses quickly, and can be treated more effectively at early stages, OR
- spreads easily from one person to another.
So what is a good test for a population? All tests have advantages and disadvantages, and no test is perfect. There is no free lunch in disease screening and early detection.