Lesson 14: Nested and Split Plot Designs


Introduction

Nested and split-plot experiments are multifactor experiments with important industrial applications, although historically they grew out of agricultural work. In the original split-plot setting, a field is divided into whole plots, and each whole plot receives one treatment, for example an irrigation technique, a fertilization strategy, or a method of soil preparation. The whole plot serves as the experimental unit for that treatment. Each whole plot is then divided into subplots, and each subplot is the experimental unit for a second treatment factor.

Whenever we talk about split plot designs we focus on the experimental unit for a particular treatment factor.

Nested and split-plot designs frequently involve one or more random factors, so the methodology of Chapter 13 of our text (expected mean squares, variance components) is important.

There are many variations of these designs – here we will only consider some of the more basic situations.

Objectives

Upon successful completion of this lesson, you should be able to:

  • Understand the concept of nesting one factor inside another factor.
  • Become familiar with two-stage nested designs in which either or both factors may be fixed or random.
  • Become familiar with split-plot designs and their applications where changing the level of some factors is hard relative to other factors.
  • Understand the two main approaches to analyzing split-plot designs and their derivatives, and the basis for each approach.
  • Become familiar with split-split-plot designs as an extension of split-plot designs.
  • Become familiar with strip-plot designs (or split-block designs) and how they differ from split-plot designs.

14.1 - The Two-Stage Nested Design


When factor B is nested within the levels of factor A, the levels of the nested factor do not have the same meaning under each level of the main factor: the levels of B at different levels of A are not identical to each other, even though they may carry the same labels. For example, if A is school and B is teacher, teacher 1 in one school is not the same person as teacher 1 in another. Keep this in mind when deciding whether a design is crossed or nested: for the design to be crossed, the same teachers would need to teach at all of the schools.

As another example, consider a company that purchases material from three suppliers, with the material arriving in batches. We might have 4 batches from each supplier, but the batches do not have the same quality characteristics across suppliers, so the batches are nested within suppliers. When a nested factor appears in the model, its index always carries the index of the factor within which it is nested. The linear statistical model for the two-stage nested design is:

\(y_{ijk}=\mu+\tau_i+\beta_{j(i)}+\varepsilon_{k(ij)}
\left\{\begin{array}{c}
i=1,2,\ldots,a \\
j=1,2,\ldots,b \\
k=1,2,\ldots,n
\end{array}\right. \)

The subscript j(i) indicates that \(j^{th}\) level of factor B is nested under the \(i^{th}\) level of factor A. Furthermore, it is useful to think of replicates as being nested under the treatment combinations; thus, \(k(ij)\) is used for the error term. Because not every level of B appears with every level of A, there is no interaction between A and B. (In most of our designs, the error is nested in the treatments, but we only use this notation for error when there are other nested factors in the design).

When B is a random factor nested in A, we think of it as the replicates for A. So whether factor A is a fixed or random factor the error term for testing the hypothesis about A is based on the mean squares due to B(A) which is read "B nested in A". Table 14.1 displays the expected mean squares in the two-stage nested design for different combinations of factor A and B being fixed or random.

E(MS) A Fixed, B Fixed A Fixed, B Random A Random, B Random
\(E(MS_A)\) \(\sigma^{2}+\dfrac{b n \sum \tau_{i}^{2}}{a-1}\) \(\sigma^{2}+n \sigma_{\beta}^{2}+\dfrac{b n \sum \tau_{i}^{2}}{a-1}\) \(\sigma^{2}+n \sigma_{\beta}^{2}+b n \sigma_{\tau}^{2}\)
\(E(MS_{B(A)})\) \(\sigma^2 + \dfrac{n \sum \sum \beta_{j(i)}^2}{a(b - 1)}\) \(\sigma^{2}+n \sigma_{\beta}^{2}\) \(\sigma^{2}+n \sigma_{\beta}^{2}\)
\(E(MS_E)\) \(\sigma^2\) \(\sigma^2\) \(\sigma^2\)
Table 14.1 (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition)

The analysis of variance table is shown in table 14.2.

Source of Variation Sum of Squares Degrees of Freedom Mean Square
A \(b n \sum_{i}\left(\overline{y}_{i . .}-\overline{y}_{\ldots}\right)^{2}\) a - 1 \(MS_A\)
B within A \(n \sum_{i} \sum_{j}\left(\overline{y}_{i j .}-\overline{y}_{i . .}\right)^{2}\) a(b - 1) \(MS_{B(A)}\)
Error \(\sum_{i} \sum_{j} \sum_{k}\left(y_{i j k}-\overline{y}_{i j .}\right)^{2}\) ab(n - 1) \(MS_E\)
Total \(\sum_{i} \sum_{j} \sum_{k}\left(y_{i j k}-\overline{y}_{\ldots}\right)^{2}\) abn - 1  
Table 14.2 (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition)
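The decomposition in Table 14.2 can be checked numerically. The following sketch (illustrative simulated values, not the textbook's data) generates a two-stage nested design with a = 3 suppliers, b = 4 batches per supplier, and n = 3 determinations per batch, then computes each sum of squares directly:

```python
import numpy as np

rng = np.random.default_rng(1)

a, b, n = 3, 4, 3  # suppliers, batches per supplier, determinations per batch

# Simulate y_ijk = mu + tau_i + beta_j(i) + eps_k(ij)
mu = 50.0
tau = np.array([0.0, 1.0, -1.0])            # fixed supplier effects
beta = rng.normal(0.0, 2.0, size=(a, b))    # random batch-within-supplier effects
y = (mu + tau[:, None, None] + beta[:, :, None]
     + rng.normal(0.0, 1.0, size=(a, b, n)))

grand = y.mean()
ybar_i = y.mean(axis=(1, 2))   # supplier means
ybar_ij = y.mean(axis=2)       # batch means

SS_A = b * n * np.sum((ybar_i - grand) ** 2)
SS_BA = n * np.sum((ybar_ij - ybar_i[:, None]) ** 2)
SS_E = np.sum((y - ybar_ij[:, :, None]) ** 2)
SS_T = np.sum((y - grand) ** 2)

print(SS_A + SS_BA + SS_E, SS_T)  # the decomposition is exact
```

The three components always add up to the total sum of squares, with degrees of freedom a - 1, a(b - 1), and ab(n - 1) partitioning abn - 1.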

Another way to think about this is to note that batch is the experimental unit for the factor 'supplier'. Does it matter how many measurements you make on each batch? (Yes, this will improve your measurement precision on the batch.) However, the variability among the batches from the supplier is the appropriate measure of the variability of factor A, the suppliers.

Essentially the question that we want to answer is, "Is the purity of the material the same across suppliers?"

In this example the model assumes that the batches are random samples from each supplier, i.e. suppliers are fixed, the batches are random, and the observations are random.

Experimental design: Select four batches at random from each of three suppliers. Make three purity determinations from each batch. See the schematic representation of this design in Fig. 14-1.

[Figure: schematic of three suppliers, each with four batches and three observations per batch.] Figure 14.1: A two-stage nested design (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition)

It is the averages of the batches and the variability across the batches that are most important. When analyzing these data, we want to decide which supplier to use; this decision will depend on both the supplier means and the variability among batches.

Here is the design question: how many batches should you take, and how many measurements should you make on each batch? This depends on the cost of performing a measurement versus the cost of obtaining another batch. If measurements are expensive, you could get many batches and take just a few measurements on each; if it is costly to get a new batch, you may instead want to take many measurements per batch.

At a minimum, you need at least two measurements (n = 2) so that you can estimate the variability among your measurements, \(\sigma^2\), and at least two batches per supplier (b = 2) so you can estimate the variability among batches, \(\sigma^{2}_{\beta}\). Some would say that you need at least three in order to be sure!

To repeat the design question: how large should b and n be, or, how many batches versus how many samples per batch? This will be a function of the cost of taking a measurement and the cost of getting another batch. In order to answer these questions, you need to know these cost functions. It will also depend on the variance among batches versus the variance of the measurements within batches.
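As a sketch of this trade-off, note that the variance of a supplier mean in the nested model is \(\sigma^{2}_{\beta}/b + \sigma^{2}/(bn)\). The snippet below does a brute-force search for the (b, n) combination that minimizes this variance under a budget constraint; the variance components, costs, and budget are hypothetical numbers chosen purely for illustration:

```python
# Variance of a supplier mean in the nested model: Var(ybar_i..) = sig2_b/b + sig2/(b*n).
# Hypothetical variance components and costs -- not taken from the text's example.
sig2_b, sig2 = 1.7, 2.6          # batch-to-batch and measurement variances
cost_batch, cost_meas = 20.0, 2.0
budget = 120.0                   # spend per supplier

best = None
for b in range(2, 20):           # need b >= 2 to estimate sig2_b
    for n in range(2, 20):       # need n >= 2 to estimate sig2
        cost = b * cost_batch + b * n * cost_meas
        if cost > budget:
            continue
        var = sig2_b / b + sig2 / (b * n)
        if best is None or var < best[0]:
            best = (var, b, n, cost)

var, b, n, cost = best
print(f"b={b} batches, n={n} measurements/batch, Var={var:.3f}, cost={cost:.0f}")
```

With these particular numbers the search favors spending the budget on more batches rather than more measurements per batch, which is typical when batch-to-batch variation dominates.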

Minitab can provide the estimates of these variance components.

Minitab's General Linear Model (unlike SAS GLM) bases its F tests on the error term that the expected mean squares determine to be appropriate. The program will tell us that when we test the hypothesis of no supplier effect, we should use the variation among batches (since Batch is random) as the error for the test.

Run the example given in Minitab Example14-1.mpx to see the test statistic, which is distributed as an F-distribution with 2 and 9 degrees of freedom.

Example 14.1: Practical Interpretation

There is no significant difference (p-value = 0.416) in purity among suppliers, but there is significant variation (p-value = 0.017) in purity among batches (within suppliers).

What are the practical implications of this conclusion?

Examine the residual plots. The plot of residuals versus supplier is very important (why?)

An assumption in the analysis of variance is that the variances are all equal. The measurement error should not depend on the batch means; i.e., the variation in measurement error is probably the same for a high-quality batch as for a low-quality batch. We also assume the variability among batches, \(\sigma^{2}_{\beta}\), is the same for all suppliers. This is an assumption you will want to check, because one reason a supplier might be better than another is lower variation among its batches. We always need to know what assumptions we are making and whether they hold. Discovering a failed assumption is often the most important thing you learn!

What if we had incorrectly analyzed this experiment as a crossed factorial rather than a nested design? The analysis would be:

The inappropriate Analysis of variance for crossed effects is shown in Table 14.5.

Source of Variation Sum of Squares Degrees of Freedom Mean Square \(F_0\) P-Value
Suppliers (S) 15.06 2 7.53 1.02 0.42
Batches (B) 25.64 3 8.55 3.24 0.04
\(S \times B\) Interaction 44.28 6 7.38 2.80 0.03
Error 63.33 24 2.64    
Total 148.31 35      
Table 14.5 (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition)

This analysis indicates that batches differ significantly and that there is a significant interaction between batch and supplier. However, neither the Batch main effect nor the interaction is meaningful, since the batches are not the same across suppliers. Note that the sum of the Batch and S × B sums of squares and degrees of freedom equals the Batch(Supplier) line in the correct nested table.
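This pooling can be verified numerically. Using the sums of squares and degrees of freedom from Table 14.5, the snippet below (scipy assumed available) rebuilds the nested Batch(Supplier) line and reproduces the F tests and p-values quoted in Example 14.1:

```python
from scipy.stats import f

# Sums of squares and df from the (incorrect) crossed analysis, Table 14.5
SS_S,  df_S  = 15.06, 2
SS_B,  df_B  = 25.64, 3
SS_SB, df_SB = 44.28, 6
SS_E,  df_E  = 63.33, 24

# Pooling Batch and S x B recovers the correct nested Batch(Supplier) line
SS_BA = SS_B + SS_SB          # 69.92
df_BA = df_B + df_SB          # 9

MS_S, MS_BA, MS_E = SS_S / df_S, SS_BA / df_BA, SS_E / df_E

# Suppliers are tested against Batch(Supplier); batches against pure error
F_S = MS_S / MS_BA
F_BA = MS_BA / MS_E
p_S = f.sf(F_S, df_S, df_BA)
p_BA = f.sf(F_BA, df_BA, df_E)
print(f"F_S={F_S:.2f} (p={p_S:.3f}), F_B(A)={F_BA:.2f} (p={p_BA:.3f})")
```

The supplier test recovers the 2 and 9 degrees of freedom and the p-value of about 0.416 mentioned above, and the batch test recovers the p-value of about 0.017.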

For the model with factor A also a random effect, the analysis of variance method can be used to estimate all three components of variance.

\({\hat{\sigma}}^2=MS_E\)

\({\hat{\sigma}}^2_{\beta}=\frac{MS_{B(A)}-MS_E}{n}\)

And

\({\hat{\sigma}}^2_{\tau}=\frac{MS_A-MS_{B(A)}}{bn}\)
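A quick sketch of these plug-in estimates, using mean squares assembled from the tables above (b = 4 batches, n = 3 determinations per batch):

```python
# ANOVA-method variance component estimates for the supplier example.
# MS values derived from the text's tables: MS_E = 2.64, MS_B(A) = 69.92/9, MS_A = 7.53.
b, n = 4, 3
MS_A, MS_BA, MS_E = 7.53, 69.92 / 9, 2.64

sig2_hat = MS_E
sig2_beta_hat = (MS_BA - MS_E) / n
sig2_tau_hat = (MS_A - MS_BA) / (b * n)

# The ANOVA method can produce a negative estimate; the usual fix is to report 0.
sig2_tau_hat = max(sig2_tau_hat, 0.0)
print(sig2_hat, round(sig2_beta_hat, 3), sig2_tau_hat)
```

Here the supplier component comes out negative and is truncated to zero, consistent with the non-significant supplier effect.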


14.2 - The General m-Stage Nested Design


The results from the previous section can easily be generalized to the case of m completely nested factors. The textbook gives an example of a 3-stage nested design in which the effect of two formulations on alloy hardness is of interest. To perform the experiment, three heats of each alloy formulation are prepared, two ingots are selected at random from each heat, and two hardness measurements are made on each ingot. Figure 14.5 shows the situation.

[Figure: schematic of two alloy formulations, three heats per alloy, two ingots per heat, and two hardness measurements per ingot.] Figure 14.5: A three-stage nested design (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition)

The linear statistical model for the 3-stage nested design would be

\(y_{ijkl}=\mu+\tau_i+\beta_{j(i)}+\gamma_{k(ij)}+\varepsilon_{l(ijk)}
\left\{\begin{array}{c} i=1,2,\ldots,a \\ j=1,2,\ldots,b \\ k=1,2,\ldots,c \\ l=1,2,\ldots,n \end{array}\right. \)

Where \(\tau_i\) is the effect of the \(i^{th}\) alloy formulation, \(\beta_{j(i)}\) is the effect of the \(j^{th}\) heat within the \(i^{th}\) alloy, and \(\gamma_{k(ij)}\) is the effect of the \(k^{th}\) ingot within the \(j^{th}\) heat and \(i^{th}\) alloy and \(\epsilon_{l(ijk)}\) is the usual NID error term. The calculation of the sum of squares for the analysis of variance is shown in Table 14.8.

Source of Variation Sum of Squares Degrees of Freedom Mean Square
A \(b c n \sum_{i}\left(\overline{y}_{i \ldots}-\overline{y}_{\ldots .}\right)^{2}\) a - 1 \(MS_A\)
B (within A) \(c n \sum_{i} \sum_{j}\left(\overline{y}_{i j . .}-\overline{y}_{i \ldots}\right)^{2}\) a(b - 1) \(MS_{B(A)}\)
C (within B) \(n \sum_{i} \sum_{j} \sum_{k}\left(\overline{y}_{i j k .}-\overline{y}_{i j . .}\right)^{2}\) ab(c - 1) \(MS_{C(B)}\)
Error \(\sum_{i} \sum_{j} \sum_{k} \sum_{l}\left(y_{i j k l}-\overline{y}_{i j k .}\right)^{2}\) abc(n - 1) \(MS_E\)
Total \(\sum_{i} \sum_{j} \sum_{k} \sum_{l}\left(y_{i j k l}-\overline{y}_{\ldots .}\right)^{2}\) abcn - 1  
Table 14.8 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)
NOTE! The textbook's version of this table contains an error: the sums of squares for B(A) and C(B) subtract the overall mean, when they should subtract the A means and the B means, respectively, as shown above.

To test the hypotheses and to form the test statistics once again we use the expected mean squares. Table 14.9 illustrates the calculated expected mean squares for a three-stage nested design with A and B fixed and C random.

Factor Fa,i Fb,j Rc,k Rn,l Expected Mean Square
\(\tau_i\) 0 b c n \(\sigma^{2}+n \sigma_{\gamma}^{2}+\dfrac{b c n \sum \tau_{i}^{2}}{a-1}\)
\(\beta_{j(i)}\) 1 0 c n \(\sigma^{2}+n \sigma_{\gamma}^{2}+\dfrac{c n \sum \sum \beta_{j(i)}^{2}}{a(b-1)}\)
\(\gamma_{k(ij)}\) 1 1 1 n \(\sigma^{2}+n \sigma_{\gamma}^{2}\)
\(\epsilon_{l(ijk)}\) 1 1 1 1 \(\sigma^2\)
Table 14.9 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)
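Reading down the EMS column tells us which mean square belongs in the denominator of each F ratio: under their null hypotheses, both \(E(MS_A)\) and \(E(MS_{B(A)})\) reduce to \(E(MS_{C(B)})\), so A and B(A) are tested against C(B), while C(B) itself is tested against pure error. A small sketch with hypothetical mean squares:

```python
# Choosing error terms from the EMS column of Table 14.9 (A, B fixed; C random):
# under H0, E(MS_A) and E(MS_B(A)) both reduce to E(MS_C(B)), so C(B) is their
# denominator; C(B) itself is tested against pure error.
def f_ratios(ms):
    """ms: dict with mean squares 'A', 'B(A)', 'C(B)', 'E' (hypothetical values)."""
    return {
        "A": ms["A"] / ms["C(B)"],
        "B(A)": ms["B(A)"] / ms["C(B)"],
        "C(B)": ms["C(B)"] / ms["E"],
    }

# Hypothetical mean squares, for illustration only
ratios = f_ratios({"A": 40.0, "B(A)": 12.0, "C(B)": 4.0, "E": 2.0})
print(ratios)  # {'A': 10.0, 'B(A)': 3.0, 'C(B)': 2.0}
```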

14.3 - The Split-Plot Designs

NOTE! It is worth mentioning that the notation used in this section (especially the use of Greek rather than Latin letters for the random error terms in the linear models) is not our preference. However, we decided to keep the textbook's notation to avoid any possible confusion.

There are some situations in multifactor factorial experiments where the experimenter may not be able to randomize the runs completely. Three good examples of split-plot designs can be found in the article "How to Recognize a Split-Plot Experiment" by Scott M. Kowalski and Kevin J. Potcner, Quality Progress, November 2003.

Another good example of such a case is in the textbook in Section 14.4. The example concerns a paper manufacturer who wants to analyze the effect of three pulp preparation methods and four cooking temperatures on the tensile strength of the paper. The experimenter wants to perform three replicates of this experiment on three different days, each consisting of 12 runs (3 × 4). The important issue here is that making the pulp by any of the methods is cumbersome: pulp preparation is a "hard-to-change" factor. It would be economical to randomly select one of the preparation methods, make the blend, divide it into four samples, and cook each sample at one of the four cooking temperatures. Then the second method is used to prepare the pulp, and so on. As we can see, achieving this economy imposes a restriction on the randomization of the experimental runs.

In this example, each replicate or block is divided into three parts called whole plots (Each preparation method is assigned to a whole plot). Next, each whole plot is divided into four samples which are split-plots and one temperature level is assigned to each of these split-plots. It is important to note that since the whole-plot treatment in the split-plot design is confounded with whole plots and the split-plot treatment is not confounded, if possible, it is better to assign the factor we are most interested in to split plots.

Analysis of Split-Plot designs

In the statistical analysis of split-plot designs, we must take into account the presence of two different sizes of experimental units used to test the effect of whole plot treatment and split-plot treatment. Factor A effects are estimated using the whole plots and factor B and the A*B interaction effects are estimated using the split plots. Since the size of the whole plot and split plots are different, they have different precisions. Generally, there exist two main approaches to analyze the split-plot designs and their derivatives.

  1. The first approach uses the expected mean squares of the terms in the model to build the test statistics, and is the one discussed in the book. Its major disadvantage is that it does not take into account the randomization restrictions that may exist in an experiment.
  2. The second approach, which may be of more interest to statisticians, takes any restriction on the randomization of the runs into account, and is considered the traditional approach to the analysis of split-plot designs.

Both of the approaches will be discussed but there will be more emphasis on the second approach, as it is more widely accepted for analysis of split-plot designs. It should be noted that the results from the two approaches may not be much different.

The linear statistical model given in the text for the split-plot design is:

\(y_{ijk}=\mu+\tau_i+\beta_j+(\tau\beta)_{ij}+\gamma_k+(\tau\gamma)_{ik}+(\beta\gamma)_{jk}+(\tau\beta\gamma)_{ijk}+\varepsilon_{ijk} \left\{\begin{array}{c} i=1,2,\ldots,r \\ j=1,2,\ldots,a \\ k=1,2,\ldots,b \end{array}\right. \)

Where, \(\tau_i\) , \(\beta_j\) and \((\tau \beta)_{ij}\) represent the whole plot and \(\gamma_k\), \((\tau \gamma)_{ik}\), \((\beta \gamma)_{jk}\) and \((\tau \beta \gamma)_{ijk}\) represent the split-plot. Here \(\tau_i\), \(\beta_j\) and \(\gamma_k\) are block effect, factor A effect and factor B effect, respectively. The sums of squares for the factors are computed as in the three-way analysis of variance without replication.

To analyze the treatment effects we first follow the approach discussed in the book. Table 14.17 shows the expected mean squares used to construct test statistics for the case where replicates or blocks are random and whole plot treatments and split-plot treatments are fixed factors.

  Factor Rr,i Fa,j Fb,k R1,h Expected Mean Square
Whole plot \(\tau_i\) 1 a b 1 \(\sigma^2 + ab\sigma_{\tau}^2\)
  \(\beta_j\) r 0 b 1 \(\sigma^2 + b \sigma_{\tau \beta}^2 + \dfrac{rb \sum \beta_{j}^2}{a-1}\)
  \((\tau \beta)_{ij}\) 1 0 b 1 \(\sigma^2 + b \sigma_{\tau \beta}^2\)
Subplot \(\gamma_k\) r a 0 1 \(\sigma^2 + a\sigma_{\tau \gamma}^2 + \dfrac{ra \sum \gamma_{k}^2}{b-1}\)
  \((\tau \gamma)_{ik}\) 1 a 0 1 \(\sigma^2 + a\sigma_{\tau \gamma}^2\)
  \((\beta \gamma)_{jk}\) r 0 0 1 \(\sigma^2 + \sigma_{\tau \beta \gamma}^2 + \dfrac{r \sum \sum (\beta \gamma)_{jk}^2}{(a - 1)(b - 1)}\)
  \((\tau \beta \gamma)_{ijk}\) 1 0 0 1 \(\sigma^2 + \sigma_{\tau \beta \gamma}^2\)
  \(\epsilon_{ijk}\) 1 1 1 1 \(\sigma^2\) (not estimable)
Table 14.17 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)

The analysis of variance for the tensile strength is shown in Table 14.18.

Source of Variation Sum of Squares Degrees of Freedom Mean Square \(F_0\) P-Value
Replicates (or Blocks) 77.55 2 38.78    
Preparation method (A) 128.39 2 64.20 7.08 0.05
\(\text{Whole Plot Error: Replicates (or Blocks)} \times A\) 36.28 4 9.07    
Temperature (B) 434.08 3 144.69 41.94 <0.01
\(\text{Replicates (or Blocks)} \times B\) 20.67 6 3.45    
AB 75.17 6 12.53 2.96 0.05
\(\text{Subplot Error: Replicates (or Blocks)} \times AB\) 50.83 12 4.24    
Total 822.97 35      
Table 14.18 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)

As mentioned earlier, the second approach to analyzing split-plot designs is based mainly on the randomization restrictions. Here, the whole-plot section of the analysis of variance can be viewed as a Randomized Complete Block Design (RCBD) with Method as the single factor (without the blocks it would be a Completely Randomized Design, CRD). Recall from Chapter 4 how we handled these designs: the error term used to construct the test statistic (whose sum of squares is obtained by subtraction) is simply the interaction between the single factor and the blocks. (As we noted there, any interaction between the blocks and the treatment factor is considered part of the experimental error.) Similarly, in the split-plot section of the analysis of variance, all interactions involving the block term are pooled to form the split-plot error. If we ignore Method, we have an RCBD in which the blocks are the individual preparations. However, there is a systematic effect due to Method, which is removed from the block effect. Likewise, the Block × Temperature interaction contains a systematic Method × Temperature effect, so a sum of squares for that effect is removed from Block × Temperature. One way to think of the subplot (SP) error, then, is as Block × Temp + Block × Method × Temp, with 2 × 3 + 2 × 2 × 3 = 18 degrees of freedom.

The mean square error terms derived in this fashion are then used to build the F test statistics for each section of the ANOVA table. Below, we have implemented this second approach for the data. We first produced the ANOVA table using the GLM command in Minitab, assuming a full factorial design, and then pooled the sums of squares and their respective degrees of freedom to create the SP Error term as described.

  DF SS MS F P-value
Blocks 2 77.556 38.78 4.28 0.1014
Method 2 128.389 64.19 7.08 0.0485
WP Error (Blocks*Methods) 4 36.278 9.07    
Temp 3 434.083 144.69 36.43 0.0000
Method*Temp 6 75.167 12.53 3.15 0.0272
SP Error 18 71.5 3.97    
Total 35 822.973      

As you can see, there is a small difference between the output of the analysis of variance performed in this manner and the one using the expected mean squares, because we have pooled Blocks*Temp and Blocks*Method*Temp to form the subplot error.
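The pooled analysis above can be reproduced from the GLM sums of squares alone. The sketch below (scipy assumed available) forms the SP error by pooling Blocks×Temp with Blocks×Method×Temp and rebuilds the F statistics:

```python
from scipy.stats import f

# GLM sums of squares from the full-factorial run (blocks, method, temp)
SS = {"Blocks": 77.556, "Method": 128.389, "WP": 36.278,   # WP = Blocks*Method
      "Temp": 434.083, "MT": 75.167,
      "BT": 20.667, "BMT": 50.833}                          # pooled into SP error
df = {"Blocks": 2, "Method": 2, "WP": 4, "Temp": 3, "MT": 6, "BT": 6, "BMT": 12}

SS_SP = SS["BT"] + SS["BMT"]          # 71.5
df_SP = df["BT"] + df["BMT"]          # 18
MS = {k: SS[k] / df[k] for k in SS}
MS_SP = SS_SP / df_SP

# Whole-plot terms are tested against Blocks*Method; subplot terms against SP error
tests = {
    "Method": (MS["Method"] / MS["WP"], df["Method"], df["WP"]),
    "Temp": (MS["Temp"] / MS_SP, df["Temp"], df_SP),
    "Method*Temp": (MS["MT"] / MS_SP, df["MT"], df_SP),
}
for name, (F, d1, d2) in tests.items():
    print(f"{name}: F = {F:.2f}, p = {f.sf(F, d1, d2):.4f}")
```

Running this reproduces the F values and p-values in the pooled table above (for example, Method: F ≈ 7.08 on 2 and 4 degrees of freedom).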

Advantages and Disadvantages of Split-Plot Experiments

In summary, these designs become important when one of the treatment factors needs more replication or experimental units (material) than another, or when it is hard to change the level of one of the factors. The primary disadvantages of these designs are the loss of precision in the whole-plot treatment comparisons and the added statistical complexity.


14.4 - The Split-Split-Plot Design


The restriction on randomization mentioned in the split-plot designs can be extended to more than one factor. For the case where the restriction is on two factors the resulting design is called a split-split-plot design. These designs usually have three different sizes or types of experimental units.

Example 14.4 of the textbook (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition) discusses an experiment in which a researcher studies the effect of technician, dosage strength, and capsule wall thickness on the absorption time of a particular type of antibiotic. There are three technicians, three dosage strengths, and four capsule wall thicknesses, resulting in 36 observations per replicate, and the experimenter wants to perform four replicates on different days. To do so, technicians are first randomly assigned to units of antibiotics, which are the whole plots. Next, the three dosage strengths are randomly assigned to split-plots. Finally, for each dosage strength, the capsules are created with the different wall thicknesses, the split-split factor, and tested in random order.

First, notice the restrictions on randomization. We cannot simply randomize the 36 runs in a single block (or replicate), because Technician is a hard-to-change factor. Furthermore, even after selecting a level of this factor (say, technician 2), we cannot randomize the 12 runs under this technician, because dosage strength is another hard-to-change factor. Only after selecting a random level of this second factor (say, dosage strength 3) can we randomize the four runs under this combination of the two factors and run the experiments for the different wall thicknesses, our third factor, in random order.
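The nested randomization just described can be sketched in a few lines. This illustrative generator (the factor labels are made up) shuffles technicians first, then dosage strengths within a technician, then wall thicknesses within each technician-dosage combination, so each hard-to-change factor changes as rarely as possible:

```python
import random

random.seed(7)

technicians = ["T1", "T2", "T3"]
dosages = ["D1", "D2", "D3"]
thicknesses = ["W1", "W2", "W3", "W4"]

def one_replicate():
    """Run order for one replicate (block), honoring both restrictions:
    randomize technicians first, then dosages within a technician,
    then wall thicknesses within a technician-dosage combination."""
    runs = []
    techs = random.sample(technicians, len(technicians))       # hard to change
    for t in techs:
        for d in random.sample(dosages, len(dosages)):         # hard to change
            for w in random.sample(thicknesses, len(thicknesses)):
                runs.append((t, d, w))
    return runs

runs = one_replicate()
print(len(runs))  # 36 runs per replicate
```

Each technician's 12 runs stay contiguous, and within them each dosage's 4 runs stay contiguous, which is exactly the restricted randomization of the split-split-plot design.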

The linear statistical model for the split-split-plot design would be:

\(y_{ijkh}=\mu+\tau_i+\beta_j+(\tau\beta)_{ij}+\gamma_k+(\tau\gamma)_{ik}+(\beta\gamma)_{jk}+(\tau\beta\gamma)_{ijk} +\delta_h\)

\(+(\tau\delta)_{ih}+(\beta\delta)_{jh}+(\tau\beta\delta)_{ijh}+(\gamma\delta)_{kh} +(\tau\gamma\delta)_{ikh}+(\beta\gamma\delta)_{jkh}+(\tau\beta\gamma\delta)_{ijkh}+\varepsilon_{ijkh} \left\{\begin{array}{c} i=1,2,\ldots,r \\ j=1,2,\ldots,a \\ k=1,2,\ldots,b \\ h=1,2,\ldots,c \end{array}\right. \)

Using the expected mean square approach mentioned earlier for split-plot designs, we can analyze split-split-plot designs as well. The expected mean squares given in Table 14.25 (assuming the block factor is random and the other factors are fixed) determine the appropriate whole-plot, split-plot, and split-split-plot error terms used to build the test statistics. Minitab's GLM handles this model in exactly this way. (This was Table 14.22 in the 7th edition. The 8th edition gives only the factors and EMS without the list of subscripts.)

  Factor Rr,i Fa,j Fb,k Fc,h R1,l Expected Mean Square
Whole plot \(\tau_i\) 1 a b c 1 \(\sigma^2 + abc \sigma_{\tau}^2\)
  \(\beta_j\) r 0 b c 1 \(\sigma^2 + bc \sigma_{\tau \beta}^2 + \dfrac{rbc \sum \beta_{j}^2}{a - 1}\)
  \((\tau \beta)_{ij}\) 1 0 b c 1 \(\sigma^2 + bc \sigma_{\tau \beta}^2\)
Subplot \(\gamma_k\) r a 0 c 1 \(\sigma^{2}+ac \sigma_{\tau \gamma}^{2}+\dfrac{rac \sum \gamma_{k}^{2}}{b-1}\)
  \((\tau \gamma)_{ik}\) 1 a 0 c 1 \(\sigma^2 + ac\sigma_{\tau \gamma}^2\)
  \((\beta \gamma)_{jk}\) r 0 0 c 1 \(\sigma^2 + c\sigma_{\tau \beta \gamma}^2 + \dfrac{rc \sum \sum(\beta \gamma)_{jk}^2}{(a - 1)(b - 1)}\)
  \((\tau \beta \gamma)_{ijk}\) 1 0 0 c 1 \(\sigma^2 + c\sigma_{\tau \beta \gamma}^2\)
Sub-subplot \(\delta_h\) r a b 0 1 \(\sigma^{2}+ab \sigma_{\tau \delta}^{2}+\dfrac{rab \sum \delta_{h}^{2}}{c-1}\)
  \((\tau \delta)_{ih}\) 1 a b 0 1 \(\sigma^{2}+ab \sigma_{\tau \delta}^{2}\)
  \((\beta \delta)_{jh}\) r 0 b 0 1 \(\sigma^{2}+b \sigma_{\tau \beta \delta}^{2}+\dfrac{rb \sum \sum(\beta \delta)_{jh}^{2}}{(a-1)(c-1)}\)
  \((\tau \beta \delta)_{ijh}\) 1 0 b 0 1 \(\sigma^{2}+b \sigma_{\tau \beta \delta}^{2}\)
  \((\gamma \delta)_{kh}\) r a 0 0 1 \(\sigma^{2}+a \sigma_{\tau \gamma \delta}^{2}+\dfrac{ra \sum \sum(\gamma \delta)_{kh}^{2}}{(b-1)(c-1)}\)
  \((\tau \gamma \delta)_{ikh}\) 1 a 0 0 1 \(\sigma^2 + a\sigma_{\tau \gamma \delta}^2\)
  \((\beta \gamma \delta)_{jkh}\) r 0 0 0 1 \(\sigma^2 + \sigma_{\tau \beta \gamma \delta}^2 + \dfrac{r \sum \sum \sum (\beta \gamma \delta)_{jkh}^2}{(a - 1)(b - 1)(c - 1)}\)
  \((\tau \beta \gamma \delta)_{ijkh}\) 1 0 0 0 1 \(\sigma^2 + \sigma_{\tau \beta \gamma \delta}^2\)
  \(\epsilon_{l(ijkh)}\) 1 1 1 1 1 \(\sigma^2\) (not estimable)
Table 14.25 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)

However, we can use the traditional split-plot approach and extend it to the case of split-split-plot designs as well. Keep in mind, as mentioned earlier, we should pool all the interaction terms with the block factor into the error term used to test for significance of the effects, in each section of the design, separately.


14.5 - The Strip-Plot Designs


These designs are also called split-block designs. In the two-factor case, factor A is applied to whole plots as in the usual split-plot design, but factor B is also applied to strips, which are actually a new set of whole plots orthogonal to the original plots used for factor A. Figure 14.11 from the 7th edition of the text shows an example of a strip-plot design where both factors have three levels.

[Figure: a 3 × 3 layout in which the three levels of factor A are applied to vertical strips (whole plots) and the three levels of factor B to horizontal strips, so each cell receives one A-B combination.] Figure 14.11 (Design and Analysis of Experiments, Douglas C. Montgomery, 7th Edition)

The linear statistical model for this two factor design is:

\(y_{ijk}=\mu+\tau_i+\beta_j+(\tau\beta)_{ij}+\gamma_k+(\tau\gamma)_{ik}+(\beta\gamma)_{jk}+\varepsilon_{ijk}
\left\{\begin{array}{c}
i=1,2,\ldots,r \\
j=1,2,\ldots,a \\
k=1,2,\ldots,b
\end{array}\right. \)

Where, \((\tau \beta)_{ij}\), \((\tau \gamma)_{ik}\) and \(\epsilon_{ijk}\) are the errors used to test Factor A, Factor B and interaction AB, respectively. Furthermore, Table 14.26 shows the analysis of variance assuming A and B to be fixed and blocks or replicates to be random.

Source of Variation Sum of Squares Degrees of Freedom Expected Mean Square
Replicates (or Blocks) \(SS_{\text{Replicates}}\) r - 1 \(\sigma_{\epsilon}^2 + ab\sigma_{\tau}^2\)
A \(SS_A\) a - 1 \(\sigma_{\epsilon}^2 + b \sigma_{\tau \beta}^2 + \dfrac{rb \sum \beta_{j}^2}{a - 1}\)
\(\text{Whole Plot Error}_A\) \(SS_{WP_{A}}\) (r - 1)(a - 1) \(\sigma_{\epsilon}^2 + b \sigma_{\tau \beta}^2\)
B \(SS_B\) b - 1 \(\sigma_{\epsilon}^2 + a \sigma_{\tau \gamma}^2 + \dfrac{ra \sum \gamma_{k}^2}{b - 1}\)
\(\text{Whole Plot Error}_B\) \(SS_{WP_{B}}\) (r - 1)(b - 1) \(\sigma_{\epsilon}^2 + a\sigma_{\tau \gamma}^2\)
AB \(SS_{AB}\) (a - 1)(b - 1) \(\sigma_{\epsilon}^2 + \dfrac{r \sum \sum (\beta \gamma)_{jk}^2}{(a - 1)(b - 1)}\)
Subplot Error \(SS_{SP}\) (r - 1)(a - 1)(b - 1) \(\sigma_{\epsilon}^2\)
Total \(SS_T\) rab - 1  
Table 14.26 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition)

It is important to note that the split-block design has three sizes of experimental units: the experimental unit for the effect of factor A or factor B is the whole plot for that factor, while the experimental unit for the AB interaction is a subplot formed by the intersection of the two whole plots. This results in the three different experimental errors discussed earlier.
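A small bookkeeping sketch confirms that the degrees of freedom in Table 14.26 partition the total:

```python
# Degrees-of-freedom bookkeeping for the strip-plot ANOVA of Table 14.26.
def strip_plot_df(r, a, b):
    df = {
        "Blocks": r - 1,
        "A": a - 1,
        "WP Error A": (r - 1) * (a - 1),
        "B": b - 1,
        "WP Error B": (r - 1) * (b - 1),
        "AB": (a - 1) * (b - 1),
        "SP Error": (r - 1) * (a - 1) * (b - 1),
    }
    assert sum(df.values()) == r * a * b - 1   # partitions the total df
    return df

# e.g. 3 replicates and two three-level factors, as in Figure 14.11
print(strip_plot_df(r=3, a=3, b=3))
```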

