Statistical inference is the process of using sample data to make meaningful statements about population parameters. Maximum likelihood estimation is a good starting point, but because random samples vary, our estimators are themselves subject to variation that must be accounted for before drawing conclusions. A typical example is forming a confidence interval for a parameter by adding and subtracting a margin of error from the point estimate, where the size of the margin depends on the sample variability and the desired confidence level. Hypothesis testing must likewise account for sample variation before declaring a result significant. This lesson considers both of these approaches for the binomial and multinomial distributions.
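As a concrete sketch of the point-estimate-plus-margin idea, the code below computes a large-sample (Wald) confidence interval for a binomial proportion. The function name and the example counts are illustrative assumptions, not taken from the lesson.

```python
import math

def binomial_confidence_interval(successes, n, z=1.96):
    """Wald interval: point estimate plus/minus a margin of error.

    z = 1.96 gives roughly 95% confidence for large samples.
    """
    p_hat = successes / n  # maximum likelihood estimate of p
    # margin of error grows with variability, shrinks with sample size
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# hypothetical data: 55 successes in 100 trials
lo, hi = binomial_confidence_interval(55, 100)
```

Note that the same point estimate with a larger `n` yields a narrower interval, which is exactly the role the margin of error plays in the discussion above.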
- Objective 2.1: Explain the role of a margin of error when using sample data to make inferences about population parameters.
- Objective 2.2: Understand how a hypothesis or model constraint restricts the likelihood function, and use a likelihood ratio test to evaluate such a constraint.
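To make the likelihood ratio idea concrete, here is a minimal sketch of testing the constraint H0: p = p0 for a binomial sample. The function and the example numbers are hypothetical; the binomial coefficient is omitted because it cancels in the likelihood ratio.

```python
import math

def binomial_lrt(successes, n, p0):
    """Likelihood ratio test of H0: p = p0 for binomial data.

    Returns -2 log(Lambda) and its approximate p-value from the
    chi-square distribution with one degree of freedom (one
    parameter is constrained under H0).
    """
    p_hat = successes / n  # unconstrained maximum likelihood estimate

    def log_lik(p):
        # log-likelihood up to the constant binomial coefficient
        return successes * math.log(p) + (n - successes) * math.log(1 - p)

    stat = -2 * (log_lik(p0) - log_lik(p_hat))
    # chi-square(1) upper-tail probability equals erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# hypothetical data: 60 successes in 100 trials, testing p0 = 0.5
stat, p = binomial_lrt(60, 100, 0.5)
```

The constrained likelihood can never exceed the unconstrained one, so the statistic is nonnegative; large values indicate the constraint fits the data poorly.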
- Objective 2.3: Distinguish between category counts observed directly from sample data and those expected under a given model, and measure their agreement to evaluate the model's fit.
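A sketch of the observed-versus-expected comparison, using Pearson's chi-square statistic for a multinomial sample. The die-roll counts below are made up for illustration.

```python
def chi_square_statistic(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical data: 60 rolls of a die, fair-die model expects 10 per face
observed = [8, 12, 9, 11, 13, 7]
expected = [10] * 6
stat = chi_square_statistic(observed, expected)
```

To evaluate the fit, the statistic is compared against a chi-square distribution with k - 1 degrees of freedom (here 5, for k = 6 categories); small values mean the observed counts agree well with the model's expected counts.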