This model serves as a baseline against which the goodness of fit of the other (simpler) models can be tested. Recall that the saturated model has the maximum number of parameters, and fitting a saturated model is equivalent to estimating the ML parameters of a separate distribution for each cell of the contingency table.

#### Main assumption

The \(N = IJK\) counts in the cells are assumed to be independent observations of a Poisson random variable.

#### Model structure

\(\log(\mu_{ijk})=\lambda+\lambda_i^A+\lambda_j^B+\lambda_k^C+\lambda_{ij}^{AB}+\lambda_{ik}^{AC}+\lambda_{jk}^{BC}+\lambda_{ijk}^{ABC}\)

Parameter constraints can be different, but the (typical) set-to-zero constraints imply that any \(\lambda\) for which a subscript corresponds to the last category (i.e., includes \(I\), \(J\), or \(K\)) is set to 0. For example,

\(\lambda_I^A=\lambda_J^B=\lambda_K^C=\lambda_{Ij}^{AB}=\cdots=\lambda_{ijK}^{ABC}=0\)
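Under the set-to-zero constraints, the number of free (nonzero) parameters can be counted term by term, and it always equals the number of cells — which is why the saturated model has zero residual degrees of freedom:

\begin{align} \underbrace{1}_{\lambda}+\underbrace{(I-1)}_{\lambda_i^A}+\underbrace{(J-1)}_{\lambda_j^B}+\underbrace{(K-1)}_{\lambda_k^C}&+\underbrace{(I-1)(J-1)}_{\lambda_{ij}^{AB}}+\underbrace{(I-1)(K-1)}_{\lambda_{ik}^{AC}}\\ &+\underbrace{(J-1)(K-1)}_{\lambda_{jk}^{BC}}+\underbrace{(I-1)(J-1)(K-1)}_{\lambda_{ijk}^{ABC}}=IJK \end{align}

For example, for a \(6 \times 2 \times 2\) table this gives \(1+5+1+1+5+5+1+5=24=IJK\) free parameters.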

#### Parameter estimation and interpretation

- \(\lambda\) represents an "overall" effect or a grand mean (on the log scale) of the expected counts, and it ensures that \(\sum_i\sum_j\sum_k\mu_{ijk}=n\).
- \(\lambda_i^A\), \(\lambda_j^B\), and \(\lambda_k^C\) represent the "main" effects of variables \(A\), \(B\), and \(C\), or deviations from the grand mean.
- \(\lambda_{ij}^{AB}\), \(\lambda_{ik}^{AC}\), and \(\lambda_{jk}^{BC}\) represent the two-way interactions/associations between pairs of variables while controlling for the third (e.g., conditional odds ratios, tests for partial association) and reflect departures from conditional independence.
- \(\lambda_{ijk}^{ABC}\) represents the three-way interaction/association among all three variables and reflects the departure from homogeneous association (i.e., whether the conditional odds ratios change across levels of the third variable).

If there is a significant interaction term, we typically do not look at the lower-order terms but only interpret the higher-order terms because the values of lower-order terms are coding dependent and can be misleading.

##### Model Fit

The saturated model has a perfect fit with \(G^2=0\) and df = 0 since the number of cells is equal to the number of unique parameters in the model.

##### Model Selection

Model selection is relevant only when comparing the saturated model against simpler models, since the saturated model is the most complex model possible.

#### Fitting in SAS and R

Using PROC GENMOD, let us fit the saturated log-linear model.

```
proc genmod data=berkeley order=data;
  class D S A;
  model count = D S A D*S D*A S*A D*S*A / dist=poisson link=log;
run;
```

When we use the order=data option, GENMOD orders the levels of class variables in the same order in which they appear in the dataset. For each class variable, GENMOD creates a set of dummy variables, using the **last** category as the reference group (recall the CATMOD and GENMOD coding from the previous lesson). Therefore, we can interpret a two-way association term as the log odds ratio for the two variables in question, with the third variable held constant at its last category (i.e., a conditional odds ratio).

Here's a portion of the SAS output that includes ML estimates.

**Analysis Of Maximum Likelihood Parameter Estimates**

Parameter | | | | DF | Estimate | Standard Error |
---|---|---|---|---|---|---|
Intercept | | | | 1 | 3.1781 | 0.2041 |
D | DeptA | | | 1 | 1.3106 | 0.2300 |
D | DeptB | | | 1 | -0.3448 | 0.3170 |
D | DeptC | | | 1 | 2.1302 | 0.2159 |
D | DeptD | | | 1 | 1.6971 | 0.2220 |
D | DeptE | | | 1 | 1.3652 | 0.2287 |
D | DeptF | | | 0 | 0.0000 | 0.0000 |
S | Male | | | 1 | -0.0870 | 0.2952 |
S | Female | | | 0 | 0.0000 | 0.0000 |
A | Reject | | | 1 | 2.5808 | 0.2117 |
A | Accept | | | 0 | 0.0000 | 0.0000 |
D*S | DeptA | Male | | 1 | 1.8367 | 0.3167 |
D*S | DeptA | Female | | 0 | 0.0000 | 0.0000 |
D*S | DeptB | Male | | 1 | 3.1203 | 0.3857 |
D*S | DeptB | Female | | 0 | 0.0000 | 0.0000 |
D*S | DeptC | Male | | 1 | -0.4338 | 0.3169 |
D*S | DeptC | Female | | 0 | 0.0000 | 0.0000 |
D*S | DeptD | Male | | 1 | 0.1391 | 0.3194 |
D*S | DeptD | Female | | 0 | 0.0000 | 0.0000 |
D*S | DeptE | Male | | 1 | -0.4860 | 0.3415 |
D*S | DeptE | Female | | 0 | 0.0000 | 0.0000 |
D*S | DeptF | Male | | 0 | 0.0000 | 0.0000 |
D*S | DeptF | Female | | 0 | 0.0000 | 0.0000 |
D*A | DeptA | Reject | | 1 | -4.1250 | 0.3297 |
D*A | DeptA | Accept | | 0 | 0.0000 | 0.0000 |
D*A | DeptB | Reject | | 1 | -3.3346 | 0.4782 |
D*A | DeptB | Accept | | 0 | 0.0000 | 0.0000 |
D*A | DeptC | Reject | | 1 | -1.9204 | 0.2288 |
D*A | DeptC | Accept | | 0 | 0.0000 | 0.0000 |
D*A | DeptD | Reject | | 1 | -1.9589 | 0.2378 |
D*A | DeptD | Accept | | 0 | 0.0000 | 0.0000 |
D*A | DeptE | Reject | | 1 | -1.4237 | 0.2425 |
D*A | DeptE | Accept | | 0 | 0.0000 | 0.0000 |
D*A | DeptF | Reject | | 0 | 0.0000 | 0.0000 |
D*A | DeptF | Accept | | 0 | 0.0000 | 0.0000 |
S*A | Male | Reject | | 1 | 0.1889 | 0.3052 |
S*A | Male | Accept | | 0 | 0.0000 | 0.0000 |
S*A | Female | Reject | | 0 | 0.0000 | 0.0000 |
S*A | Female | Accept | | 0 | 0.0000 | 0.0000 |
D*S*A | DeptA | Male | Reject | 1 | 0.8632 | 0.4027 |
D*S*A | DeptA | Male | Accept | 0 | 0.0000 | 0.0000 |
D*S*A | DeptA | Female | Reject | 0 | 0.0000 | 0.0000 |
D*S*A | DeptA | Female | Accept | 0 | 0.0000 | 0.0000 |
D*S*A | DeptB | Male | Reject | 1 | 0.0311 | 0.5335 |
... | ... | ... | ... | ... | ... | ... |

The scale parameter was held fixed.

By default, R sets the first category (first alphabetically) as the reference level with the set-to-zero constraint, which we can change manually if we wish. The results are equivalent either way, but the interpretations of the reported estimates differ. Below, we relevel Gender and Dept so the reference categories match those used by SAS above; with this choice of reference, we can interpret a two-way association term as the log odds ratio for the two variables in question, with the third variable held constant at its reference category (i.e., a conditional odds ratio). Also, note that the built-in data set UCBAdmissions uses "Gender" instead of "Sex".

```
UCBAdmissions  # built-in 2x2x6 table of Berkeley admissions counts
berk.data = as.data.frame(UCBAdmissions)
# relevel so the reference groups match the last categories used by SAS
berk.data$Gender = relevel(berk.data$Gender, ref='Female')
berk.data$Dept = relevel(berk.data$Dept, ref='F')
# fit the saturated log-linear model
berk.sat = glm(Freq~Admit*Gender*Dept, family=poisson(), data=berk.data)
```

Here's a portion of the summary output that includes ML estimates.

```
> summary(berk.sat)
Call:
glm(formula = Freq ~ Admit * Gender * Dept, family = poisson(),
data = berk.data)
Deviance Residuals:
[1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.17805 0.20412 15.569 < 2e-16 ***
AdmitRejected 2.58085 0.21171 12.190 < 2e-16 ***
GenderMale -0.08701 0.29516 -0.295 0.7682
DeptA 1.31058 0.23001 5.698 1.21e-08 ***
DeptB -0.34484 0.31700 -1.088 0.2767
DeptC 2.13021 0.21591 9.866 < 2e-16 ***
DeptD 1.69714 0.22204 7.644 2.11e-14 ***
DeptE 1.36524 0.22870 5.969 2.38e-09 ***
AdmitRejected:GenderMale 0.18890 0.30516 0.619 0.5359
AdmitRejected:DeptA -4.12505 0.32968 -12.512 < 2e-16 ***
AdmitRejected:DeptB -3.33462 0.47817 -6.974 3.09e-12 ***
AdmitRejected:DeptC -1.92041 0.22876 -8.395 < 2e-16 ***
AdmitRejected:DeptD -1.95888 0.23781 -8.237 < 2e-16 ***
AdmitRejected:DeptE -1.42370 0.24250 -5.871 4.33e-09 ***
GenderMale:DeptA 1.83670 0.31672 5.799 6.66e-09 ***
GenderMale:DeptB 3.12027 0.38572 8.090 5.99e-16 ***
GenderMale:DeptC -0.43376 0.31687 -1.369 0.1710
GenderMale:DeptD 0.13907 0.31938 0.435 0.6632
GenderMale:DeptE -0.48599 0.34151 -1.423 0.1547
AdmitRejected:GenderMale:DeptA 0.86318 0.40267 2.144 0.0321 *
AdmitRejected:GenderMale:DeptB 0.03113 0.53349 0.058 0.9535
AdmitRejected:GenderMale:DeptC -0.31382 0.33741 -0.930 0.3523
AdmitRejected:GenderMale:DeptD -0.10691 0.34013 -0.314 0.7533
AdmitRejected:GenderMale:DeptE -0.38908 0.36500 -1.066 0.2864
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 2.6501e+03 on 23 degrees of freedom
Residual deviance: 1.4788e-13 on 0 degrees of freedom
AIC: 207.06
```

The intercept is a normalizing constant and should be ignored. Because we have significant two-way and three-way associations, the main effects for D, S, and A are difficult to interpret and not very meaningful on their own; interpretation focuses on the two- and three-way association terms. For example, the estimated coefficient for the SA association is 0.1889.

Exponentiating this coefficient gives

\(\exp(0.1889)=1.208\),

which is the estimated SA odds ratio for Department F since that is the reference department in this analysis. The reference group for S is "female," and the reference group for A is "accept." If we write the \(2 \times 2\) table for \(S \times A\) in Department F, i.e., the "partial" table, with the reference groups in the last row and column, we get

Dept F | Reject | Accept |
---|---|---|
Men | 351 | 22 |
Women | 317 | 24 |

for which the estimated odds ratio is:

\(\dfrac{351 \times 24}{317 \times 22}=1.208\).

The Wald z-statistic for this coefficient is

\(z=0.1889/0.3052=0.62\),

which corresponds to a chi-square statistic of \(0.62^2 = 0.38\) with p-value 0.5359 and indicates that the SA odds ratio for Department F is not significantly different from 1.00, or equivalently that the log odds ratio is not significantly different from 0.

The 95% confidence interval for the parameter estimate, that is, for the log odds ratio, is \((−0.4092, 0.7870)\). Thus the 95% confidence interval for the odds ratio is

\((\exp(-0.4092),\exp(0.7870))=(0.66,2.20)\).
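These computations can be reproduced directly from the Department F partial table and the reported standard error. The following is a quick arithmetic check in Python (a sketch outside the SAS/R examples; the counts and the SE of 0.3052 are taken from the output above):

```python
import math

# Department F partial table (rows: Men, Women; columns: Reject, Accept)
male_reject, male_accept = 351, 22
female_reject, female_accept = 317, 24

# conditional SA odds ratio for Department F
odds_ratio = (male_reject * female_accept) / (female_reject * male_accept)

# Wald z and 95% CI using the standard error reported for the S*A coefficient
log_or, se = math.log(odds_ratio), 0.3052
z = log_or / se
ci_or = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

print(round(odds_ratio, 3))  # 1.208
print(round(z, 2))           # 0.62
print(round(ci_or[0], 2), round(ci_or[1], 2))  # 0.66 2.2
```

The interval comfortably covers 1.00, consistent with the non-significant Wald test.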

To get the SA odds ratio for any other department, we have to combine the SA coefficient with one of the DSA coefficients. For example, the SA odds ratio for Department A is

\(\exp(0.1889+0.8632)=2.864\).

Based on:

\begin{align} \log\theta_{SA(i=\text{A})} &= \log\dfrac{\mu_{ijk} \mu_{ij'k'}}{\mu_{ij'k} \mu_{ijk'}}\\ &= (\lambda_{jk}^{SA}+\lambda_{j'k'}^{SA}-\lambda_{j'k}^{SA}-\lambda_{jk'}^{SA})+(\lambda_{ijk}^{DSA}+\lambda_{ij'k'}^{DSA}-\lambda_{ij'k}^{DSA}-\lambda_{ijk'}^{DSA})\\ &= (0.1889+0-0-0)+(0.8632+0-0-0)\\ &= 1.0521 \end{align}

The Wald z-statistic for the first DSA coefficient,

\(z=0.8632/0.4027=2.14\),

with p-value 0.0321, indicates that the SA odds ratio for Department A is significantly different from the SA odds ratio for Department F. To see whether the SA odds ratio in Department A is significantly different from 1.00, we would have to compute the standard error of the sum of the two coefficients using the estimated covariance matrix, or refit the model with Department A as the reference level.
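As a sanity check (again a Python sketch, using the Department A counts from the built-in UCBAdmissions data), the combined coefficient reproduces the SA odds ratio computed directly from the Department A partial table:

```python
import math

# SA log odds ratio for Dept A = S*A coefficient + D*S*A (Dept A) coefficient
log_or_dept_a = 0.1889 + 0.8632
or_from_model = math.exp(log_or_dept_a)

# Department A partial table from UCBAdmissions (rows: Men, Women)
male_reject, male_accept = 313, 512
female_reject, female_accept = 19, 89
or_from_counts = (male_reject * female_accept) / (female_reject * male_accept)

print(round(or_from_model, 3))   # 2.864
print(round(or_from_counts, 3))  # 2.864
```

Both routes agree, as they must for a saturated model, since it fits the observed counts exactly.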

In many situations, however, we turn to the saturated model only as a last resort. As the number of variables grows, saturated models become more and more difficult to interpret. In the following sections, we look at simpler models that are useful in explaining the associations among the discrete variables of interest.