# 17.4 - Comparing Two Diagnostic Tests


Suppose that we want to compare the sensitivity or specificity of two diagnostic tests. Let $$p_1$$ denote the test characteristic (sensitivity or specificity) for diagnostic test #1 and $$p_2$$ the corresponding characteristic for diagnostic test #2.

The appropriate statistical test depends on the setting. If diagnostic tests were studied on two independent groups of patients, then two-sample tests for binomial proportions are appropriate (chi-square, Fisher's exact test). If both diagnostic tests were performed on each patient, then paired data result and methods that account for the correlated binary outcomes are necessary (McNemar's test).

Suppose two different diagnostic tests are performed in two independent samples of individuals using the same gold standard. The following 2 × 2 tables result:

| Diagnostic Test #1 | Disease | No Disease |
| ------------------ | ------- | ---------- |
| Positive           | 82      | 30         |
| Negative           | 18      | 70         |

| Diagnostic Test #2 | Disease | No Disease |
| ------------------ | ------- | ---------- |
| Positive           | 140     | 10         |
| Negative           | 60      | 90         |

Suppose that sensitivity is the statistic of interest. The estimates of sensitivity are $$p_1 = \dfrac{82}{100} = 0.82$$ and $$p_2 = \dfrac{140}{200} = 0.70$$ for diagnostic test #1 and diagnostic test #2, respectively. The following SAS program will provide confidence intervals for the sensitivity for each test as well as comparison of the tests with regard to sensitivity.
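If you want to verify these estimates and the exact (Clopper-Pearson) confidence intervals outside SAS, here is a short Python sketch; it assumes SciPy (version 1.7 or later) is available and is simply a cross-check, not part of the course's SAS materials:

```python
# Sensitivity estimates and exact (Clopper-Pearson) 95% CIs,
# analogous to SAS's /binomial option. Requires SciPy >= 1.7.
from scipy.stats import binomtest

for label, tp, n in [("Test #1", 82, 100), ("Test #2", 140, 200)]:
    sens = tp / n  # estimated sensitivity = true positives / diseased
    ci = binomtest(tp, n).proportion_ci(confidence_level=0.95,
                                        method="exact")
    print(f"{label}: sensitivity = {sens:.2f}, "
          f"exact 95% CI = ({ci.low:.2f}, {ci.high:.2f})")
    # e.g. Test #1: sensitivity = 0.82, exact 95% CI = (0.73, 0.89)
```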

## SAS® Example

### Using PROC FREQ in SAS for comparing two diagnostic tests based on data from two samples

18.2_comparing_diagnostic.sas

```sas
***********************************************************************
* This is a program that illustrates the use of PROC FREQ in SAS for  *
* comparing two diagnostic tests based on data from two samples.      *
***********************************************************************;

proc format;
  value yesnofmt 1='yes' 2='no';
run;

data sensitivity_diag1;
  input positive count;
  format positive yesnofmt.;
  cards;
1 82
2 18
;
run;

proc freq data=sensitivity_diag1;
  tables positive/binomial alpha=0.05;
  weight count;
  title "Exact and Asymptotic 95% Confidence Intervals for Sensitivity with Diagnostic Test #1";
run;

data sensitivity_diag2;
  input positive count;
  format positive yesnofmt.;
  cards;
1 140
2  60
;
run;

proc freq data=sensitivity_diag2;
  tables positive/binomial alpha=0.05;
  weight count;
  title "Exact and Asymptotic 95% Confidence Intervals for Sensitivity with Diagnostic Test #2";
run;

data comparison;
  input test positive count;
  format positive yesnofmt.;
  cards;
1 1  82
1 2  18
2 1 140
2 2  60
;
run;

proc freq data=comparison;
  tables positive*test/chisq;
  exact chisq;
  weight count;
  title "Exact and Asymptotic Tests for Comparing Sensitivities";
run;
```


Run the program and look at the output. Do you see the exact 95% confidence intervals for the two diagnostic tests as (0.73, 0.89) and (0.63, 0.76), respectively?

The SAS output also reports a p-value of 0.0262 from Fisher's exact test of $$H_0 \colon p_1 = p_2$$.

Thus, diagnostic test #1 has a significantly higher sensitivity than diagnostic test #2.
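As a cross-check outside SAS (again assuming SciPy), Fisher's exact test can be applied directly to the 2 × 2 table of diseased patients classified by test and result:

```python
# Fisher's exact test comparing the two sensitivities:
# rows = diagnostic test, columns = (positive, negative) among diseased.
from scipy.stats import fisher_exact

table = [[82, 18],    # test #1: 82 true positives, 18 false negatives
         [140, 60]]   # test #2: 140 true positives, 60 false negatives
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher exact p-value = {p_value:.4f}")  # ~0.026
```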

## SAS® Example

### Using PROC FREQ in SAS for comparing two diagnostic tests based on data from one sample

Suppose both diagnostic tests (test #1 and test #2) are applied to a given set of individuals, some with the disease (by the gold standard) and some without the disease.

As an example, data can be summarized in a 2 × 2 table for the 100 diseased patients as follows:

| Diagnostic Test #1 | Diagnostic Test #2 Positive | Diagnostic Test #2 Negative |
| ------------------ | --------------------------- | --------------------------- |
| Positive           | 30                          | 35                          |
| Negative           | 23                          | 12                          |

The appropriate test statistic for this situation is McNemar's test. The patients with a (+, +) result and the patients with a (−, −) result do not distinguish between the two diagnostic tests. The only information for comparing the sensitivities of the two diagnostic tests comes from the patients with a (+, −) or (−, +) result, i.e., the discordant pairs.

Testing that the sensitivities are equal, i.e., $$H_0 \colon p_1 = p_2$$, is equivalent to testing

$$H_0 \colon p = \text{(probability of preferring diagnostic test #1 over diagnostic test #2)} = \tfrac{1}{2}$$

In the above example there are $$N = 58$$ discordant pairs, and 35 of the 58 display a (+, −) result, so the estimated binomial proportion is $$35/58 = 0.60$$. The exact p-value from McNemar's test is 0.148 (see SAS Example 18.3_comparing_diagnostic.sas below).
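Because the exact McNemar test is just a two-sided exact binomial test on the discordant pairs, this p-value can also be sketched in Python (assuming SciPy is available) as:

```python
# Exact McNemar test: a two-sided binomial test that the 35 (+, -)
# results out of the 58 discordant pairs are consistent with p = 1/2.
from scipy.stats import binomtest

discordant = 35 + 23          # (+, -) pairs plus (-, +) pairs
result = binomtest(35, discordant, p=0.5, alternative="two-sided")
print(f"estimated proportion  = {35 / discordant:.2f}")  # 0.60
print(f"exact McNemar p-value = {result.pvalue:.3f}")    # ~0.148
```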

```sas
***********************************************************************
* This is a program that illustrates the use of PROC FREQ in SAS for  *
* comparing two diagnostic tests based on data from one sample.       *
***********************************************************************;

proc format;
  value testfmt 1='positive' 2='negative';
run;

data comparison;
  input test1 test2 count;
  format test1 test2 testfmt.;
  cards;
1 1 30
1 2 35
2 1 23
2 2 12
;
run;

proc freq data=comparison;
  tables test1*test2/agree;
  weight count;
  exact mcnem;
  title "McNemar's Test for Comparing Sensitivities";
run;
```


Since the p-value of 0.148 exceeds 0.05, the two diagnostic tests are not significantly different with respect to sensitivity.
