# 6.2.3 - More on Model-fitting

Suppose two models are under consideration, where one model is a special case or "reduced" form of the other obtained by setting $$k$$ of the regression coefficients (parameters) equal to zero. The larger model is considered the "full" model, and the hypotheses would be

$$H_0$$: reduced model versus $$H_A$$: full model

Equivalently, the null hypothesis can be stated as the $$k$$ predictor terms associated with the omitted coefficients have no relationship with the response, given the remaining predictor terms are already in the model. If we fit both models, we can compute the likelihood-ratio test (LRT) statistic:

$$G^2 = −2 (\log L_0 - \log L_1)$$

where $$L_0$$ and $$L_1$$ are the maximized likelihood values for the reduced and full models, respectively. The degrees of freedom equal $$k$$, the number of coefficients in question. The p-value is the area under the $$\chi^2_k$$ curve to the right of $$G^2$$.
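
As a quick check, the statistic and its p-value can be computed directly from reported $$-2\log L$$ values. A minimal Python sketch using only the standard library (for $$k=1$$ the chi-square tail area has a closed form via `erfc`; for general $$k$$ you would use something like `scipy.stats.chi2.sf`):

```python
import math

# -2 log L values as reported in the SAS "Model Fit Statistics" output
neg2logL_reduced = 5176.510   # intercept-only (reduced) model
neg2logL_full    = 5147.390   # intercept and covariates (full) model

G2 = neg2logL_reduced - neg2logL_full   # likelihood-ratio statistic
df = 1                                  # one coefficient set to zero

# For df = 1, the chi-square right-tail area is P(X > x) = erfc(sqrt(x/2))
p_value = math.erfc(math.sqrt(G2 / 2))

print(f"G2 = {G2:.3f}, df = {df}, p-value = {p_value:.2e}")
```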

To perform the test in SAS, we can look at the "Model Fit Statistics" section and examine the value of "−2 Log L" for "Intercept and Covariates." Here, the reduced model is the "intercept-only" model (i.e., no predictors), and "intercept and covariates" corresponds to the full model. For our running example, this is equivalent to testing the "intercept-only" model vs. the full (saturated) model (since we have only one predictor).

**Model Fit Statistics**

| Criterion | Intercept Only | Intercept and Covariates (Log Likelihood) | Intercept and Covariates (Full Log Likelihood) |
|---|---|---|---|
| AIC | 5178.510 | 5151.390 | 19.242 |
| SC | 5185.100 | 5164.569 | 32.421 |
| −2 Log L | 5176.510 | 5147.390 | 15.242 |

Larger differences in the "−2 Log L" values lead to smaller p-values and hence more evidence against the reduced model in favor of the full model. For our example, $$G^2 = 5176.510 − 5147.390 = 29.120$$ with $$2 − 1 = 1$$ degree of freedom. Notice that this matches the deviance we got in the earlier text.

Also, notice that the $$G^2$$ we calculated for this example matches the likelihood-ratio statistic of 29.1207 with 1 df and p-value < .0001 in the "Testing Global Null Hypothesis: BETA=0" section (the next part of the output, see below).

## Testing the Joint Significance of All Predictors

Here we test the null hypothesis that a set of coefficients is simultaneously zero. For example, consider the full model

$$\log\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 x_1+\cdots+\beta_k x_k$$

and the null hypothesis $$H_0\colon \beta_1=\beta_2=\cdots=\beta_k=0$$ versus the alternative that at least one of the coefficients is not zero. This is analogous to the overall F-test in linear regression. In other words, this tests the null hypothesis of the intercept-only model:

$$\log\left(\dfrac{\pi}{1-\pi}\right)=\beta_0$$

versus the alternative that the current (full) model is correct. This corresponds to the test in our example because we have only a single predictor term, and the reduced model that removes the coefficient for that predictor is the intercept-only model.

In the SAS output, three different chi-square statistics for this test are displayed in the section "Testing Global Null Hypothesis: Beta=0," corresponding to the likelihood ratio, score, and Wald tests. Recall our brief encounter with them in our discussion of binomial inference in Lesson 2.

**Testing Global Null Hypothesis: BETA=0**

| Test | Chi-Square | DF | Pr > ChiSq |
|---|---|---|---|
| Likelihood Ratio | 29.1207 | 1 | <.0001 |
| Score | 27.6766 | 1 | <.0001 |
| Wald | 27.3361 | 1 | <.0001 |

Large chi-square statistics lead to small p-values and provide evidence against the intercept-only model in favor of the current model. The Wald test is based on the asymptotic normality of the ML estimates of the $$\beta$$s. If these three tests agree, that is evidence that the large-sample approximations are working well and the results are trustworthy. If they disagree, most statisticians would trust the likelihood-ratio test more than the other two.
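
The Wald chi-square can be reproduced from a single coefficient estimate and its standard error. A sketch using the values from the R coefficient table later in this section; the small discrepancy from the printed 27.3361 comes from rounding in the displayed estimate and standard error:

```python
# Wald chi-square for one coefficient: (estimate / SE)^2
estimate = 0.45918    # parentsmoke1 coefficient from the R output
std_error = 0.08782   # its standard error

z = estimate / std_error
wald_chisq = z ** 2

print(f"z = {z:.3f}, Wald chi-square = {wald_chisq:.3f}")
```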

In our example, the "intercept-only" model, or null model, says that a student's smoking is unrelated to the parents' smoking habits. Thus the test of the global null hypothesis $$\beta_1=0$$ is equivalent to the usual test for independence in the $$2\times2$$ table. We will see that the estimated coefficients and standard errors are as we predicted before, as well as the estimated odds and odds ratios.
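
For a $$2\times2$$ table, the saturated-model estimates have closed forms, so they can be checked by hand. A sketch assuming cell counts of 816 smokers out of 4019 students with at least one smoking parent and 188 out of 1356 otherwise (counts chosen to be consistent with the fitted coefficients shown in the output below; substitute your own table as needed):

```python
import math

# Assumed 2x2 counts, consistent with the fitted coefficients below
y1, n1 = 816, 4019    # at least one parent smokes: (smokers, total)
y2, n2 = 188, 1356    # neither parent smokes: (smokers, total)

# Saturated-model estimates in closed form:
b0 = math.log(y2 / (n2 - y2))                        # intercept: log-odds in reference group
b1 = math.log((y1 / (n1 - y1)) / (y2 / (n2 - y2)))   # slope: log odds ratio

# Standard error of the log odds ratio: sqrt of summed reciprocal cell counts
se_b1 = math.sqrt(1/y1 + 1/(n1 - y1) + 1/y2 + 1/(n2 - y2))

print(f"b0 = {b0:.5f}, b1 = {b1:.5f}, SE(b1) = {se_b1:.5f}")
```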

The residual deviance is the difference between $$-2\log L$$ for the saturated model and $$-2\log L$$ for the currently fit model; a large residual deviance indicates that the current model does not fit. The null deviance is the difference between $$-2\log L$$ for the saturated model and $$-2\log L$$ for the intercept-only model; a large null deviance indicates that the intercept-only model does not fit.

```
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.82661    0.07858 -23.244  < 2e-16 ***
parentsmoke1  0.45918    0.08782   5.228 1.71e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

In our $$2\times2$$ table smoking example, the residual deviance is almost 0 because the model we built is the saturated model, and notice that its degrees of freedom are 0 too. The null deviance is equivalent to the likelihood-ratio statistic in the "Testing Global Null Hypothesis: BETA=0" section of the SAS output.

For our example, Null deviance = 29.1207 with df = 1. Notice that this matches the deviance we got in the earlier text above.
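
The null deviance can be reproduced by comparing the intercept-only fitted counts with the observed ones. A sketch assuming grouped counts of 816 successes in 4019 trials and 188 in 1356 (an assumption consistent with the example's fitted coefficients):

```python
import math

groups = [(816, 4019), (188, 1356)]   # assumed (successes, trials) per group

# Intercept-only fit: one common success probability for all groups
pi_hat = sum(y for y, n in groups) / sum(n for y, n in groups)

# Null deviance: G2 = 2 * sum{ y log(y/mu) + (n-y) log((n-y)/(n-mu)) }
G2 = 0.0
for y, n in groups:
    mu = n * pi_hat
    G2 += 2 * (y * math.log(y / mu) + (n - y) * math.log((n - y) / (n - mu)))

print(f"Null deviance = {G2:.4f} with df = {len(groups) - 1}")
```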

## The Hosmer-Lemeshow Statistic

An alternative statistic for measuring overall goodness-of-fit is the Hosmer-Lemeshow statistic.

NOTE! We use the one-predictor model here, with the predictor indicating whether at least one parent smokes.

This is a Pearson-like chi-square statistic that is computed after the data are grouped into classes with similar predicted probabilities. It is most useful when the model contains more than one predictor and/or continuous predictors. We will see more on this later.

$$H_0$$: the current model fits well
$$H_A$$: the current model does not fit well

To calculate this statistic:

1. Group the observations according to model-predicted probabilities ( $$\hat{\pi}_i$$)
2. The number of groups is typically determined such that there is roughly an equal number of observations per group
3. The Hosmer-Lemeshow (HL) statistic, a Pearson-like chi-square statistic, is computed on the grouped data but does NOT have a limiting chi-square distribution because the observations in groups are not from identical trials. Simulations have shown that this statistic can be approximated by a chi-squared distribution with $$g − 2$$ degrees of freedom, where $$g$$ is the number of groups.
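
The grouping-and-comparison recipe above can be sketched as a small function. This is an illustrative implementation for 0/1 responses, not the exact algorithm any particular package uses (SAS and R implementations differ in how they handle ties and group boundaries); the toy probabilities and outcomes below are made up:

```python
import math

def hosmer_lemeshow(y, p, g=10):
    """Group observations by sorted predicted probability, then compare
    observed vs. expected event counts per group (illustrative sketch;
    assumes no group has average probability exactly 0 or 1)."""
    pairs = sorted(zip(p, y))              # order by predicted probability
    size = math.ceil(len(pairs) / g)       # roughly equal group sizes
    hl = 0.0
    for i in range(0, len(pairs), size):
        chunk = pairs[i:i + size]
        n = len(chunk)
        obs = sum(yi for _, yi in chunk)   # observed events in group
        exp = sum(pi for pi, _ in chunk)   # expected events in group
        pbar = exp / n                     # average predicted probability
        hl += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return hl                              # compare to chi-square, g - 2 df

# Toy illustration with hypothetical fitted probabilities and outcomes
p = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
hl = hosmer_lemeshow(y, p, g=5)
print(f"HL = {hl:.4f}")
```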

Warning about the Hosmer-Lemeshow goodness-of-fit test:

• It is a conservative statistic, i.e., its computed value tends to be smaller than it should be, so the null hypothesis is rejected less often than the nominal level suggests.
• It has low power for detecting certain types of lack of fit, such as nonlinearity in an explanatory variable.
• It is highly dependent on how the observations are grouped.
• If too few groups are used (e.g., five or fewer), it almost always fails to reject the current model fit. This means it is usually not a good measure when only one or two categorical predictor variables are involved, and it is best used with continuous predictors.

In the MODEL statement, the option `lackfit` tells SAS to compute the HL statistic and print the partitioning. For our example, because we have a small number of groups (i.e., 2), this statistic gives a perfect fit (HL = 0, p-value = 1). Instead of deriving the diagnostics, we will look at them from a purely applied viewpoint, recalling the earlier definitions of regression residuals and of Pearson and deviance residuals.

## Residuals

The Pearson residuals are defined as

$$r_i=\dfrac{y_i-\hat{\mu}_i}{\sqrt{\hat{V}(\hat{\mu}_i)}}=\dfrac{y_i-n_i\hat{\pi}_i}{\sqrt{n_i\hat{\pi}_i(1-\hat{\pi}_i)}}$$

The contribution of the $$i$$th row to the Pearson statistic is

$$\dfrac{(y_i-\hat{\mu}_i)^2}{\hat{\mu}_i}+\dfrac{((n_i-y_i)-(n_i-\hat{\mu}_i))^2}{n_i-\hat{\mu}_i}=r^2_i$$

and the Pearson goodness-of fit statistic is

$$X^2=\sum\limits_{i=1}^N r^2_i$$

which we would compare to a $$\chi^2_{N-p}$$ distribution. The deviance test statistic is

$$G^2=2\sum\limits_{i=1}^N \left\{ y_i\log\left(\dfrac{y_i}{\hat{\mu}_i}\right)+(n_i-y_i)\log\left(\dfrac{n_i-y_i}{n_i-\hat{\mu}_i}\right)\right\}$$

which we would again compare to $$\chi^2_{N-p}$$, and the contribution of the $$i$$th row to the deviance is

$$2\left\{ y_i\log\left(\dfrac{y_i}{\hat{\mu}_i}\right)+(n_i-y_i)\log\left(\dfrac{n_i-y_i}{n_i-\hat{\mu}_i}\right)\right\}$$

We will see how these quantities are computed by standard software and how they provide useful information for understanding and interpreting the models.
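
For the intercept-only fit in this section's example, these formulas can be evaluated directly; the Pearson $$X^2$$ computed this way reproduces the score statistic (27.6766) from the SAS output. A sketch assuming counts of 816/4019 and 188/1356, consistent with the fitted coefficients shown earlier:

```python
import math

groups = [(816, 4019), (188, 1356)]   # assumed (y_i, n_i) per row
pi_hat = sum(y for y, n in groups) / sum(n for y, n in groups)  # intercept-only fit

X2 = 0.0   # Pearson statistic: sum of squared Pearson residuals
G2 = 0.0   # deviance statistic: sum of per-row contributions
for y, n in groups:
    mu = n * pi_hat
    r = (y - mu) / math.sqrt(n * pi_hat * (1 - pi_hat))   # Pearson residual
    X2 += r ** 2
    G2 += 2 * (y * math.log(y / mu) + (n - y) * math.log((n - y) / (n - mu)))

print(f"X2 = {X2:.4f}, G2 = {G2:.4f}")
```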
