

Contrasts

9 June 2011

Context

  • Recent statistical advice has progressively advocated the use of effect sizes and confidence intervals over the traditional p-values of hypothesis tests.
  • It is argued that the sizes of p-values are regularly, and incorrectly, interpreted as representing the strength of an effect.
  • Proponents argue that effect sizes (the magnitude of the differences between treatment levels or trajectories), together with measures of the confidence or precision of these estimates, provide far more informative summaries.
  • The case for these recommendations strengthens as model complexity increases (particularly with the introduction of nesting or repeated measures), where the appropriate degrees of freedom on which to base p-value calculations become increasingly tenuous.
  • Consequently, numerous papers have criticized the use of p-values and advocated effect sizes in their place.
  • Unfortunately, few offer much guidance on how such effect sizes can be calculated, presented and interpreted.
  • The reader is therefore left questioning how they should perform and present their statistical findings, knowing that if they continue to present results in the traditional framework, they are likely to be challenged during the review process.
  • This paper aims to address these issues by illustrating the following:
    • The use of contrast coefficients to calculate effect sizes, marginal means and associated confidence intervals for a range of common statistical designs.
    • The techniques apply equally to models of any complexity and model fitting procedure (including Bayesian models).
    • Graphical presentation of effect sizes
    • All examples are backed up with full R code (in appendix)

All of the following data sets are intentionally artificial. By creating data sets with specific characteristics (samples drawn from exactly known populations), we can better assess the accuracy of the techniques.

Contrasts in prediction in regression analyses

Imagine a situation in which we had just performed simple linear regression, and now we wished to summarize the trend graphically. For this demonstration, we will return to a dataset we used in a previous Workshop (the fertilizer data set): An agriculturalist was interested in the effects of fertilizer load on the yield of grass. Grass seed was sown uniformly over an area and different quantities of commercial fertilizer were applied to each of ten 1 m2 randomly located plots. Two months later the grass from each plot was harvested, dried and weighed. The data are in the file fertilizer.csv.

Download Fertilizer data set
Format of fertilizer.csv data files
FERTILIZER  YIELD
25          84
50          80
75          90
100         154
125         148
...         ...

FERTILIZER  Mass of fertilizer (g.m-2) - Predictor variable
YIELD       Yield of grass (g.m-2) - Response variable

Open the fertilizer dataset
Show code

> fert <- read.table("fertilizer.csv", header = T, sep = ",", strip.white = T)

> fert

   FERTILIZER YIELD
1          25    84
2          50    80
3          75    90
4         100   154
5         125   148
6         150   169
7         175   206
8         200   244
9         225   212
10        250   248

Linear model parameterization for regression looks like:
yi = β0 + β1 xi + εi
 
εi ∼ Normal(0,σ2)
#Residual 'random' effects
Let's fit the linear model relating grass yield to fertilizer concentration:
Show code

> fert.lm <- lm(YIELD ~ FERTILIZER, data = fert)

> fert.lm

Call:
lm(formula = YIELD ~ FERTILIZER, data = fert)
Coefficients:
(Intercept)   FERTILIZER  
    51.9333       0.8114  

When we fit a linear model, we estimate the values of the parameters (β0 and β1) such that the squared deviations between the observed and predicted values of yi are minimized (ordinary least squares). Once we have the estimates of the intercept and slope parameters, we can solve the equation for new values of x so as to predict new values of y.

For example, if the estimated intercept (β0) and slope (β1) were 2 and 3 respectively, we could predict a new value of y when x was equal to 5.

yi = β0 + β1 xi
yi = 2 + 3× xi
y = 2 + 3× 5
y = 17

In other words, to estimate a new value of y we take one unit of the intercept (2) plus five units of the slope (3). So the contrast coefficients associated with predicting a new value of y from a new x value of 5 are 1 and 5.
yi = [β0  β1] × [c1]
                [c2]

yi = [2  3] × [1]
              [5]

So what we are after is the matrix product of the coefficient vector and the contrast matrix. Matrix multiplication multiplies the first entry of the left matrix by the first value (first row) of the right matrix, the second entries likewise, and so on, and then sums them all up. In R, matrix multiplication is performed by the %*% operator.

# Multiply the coefficients by the contrasts to predict y at x = 5

> coefs <- c(2, 3)

> cmat <- c(1, 5)

> coefs %*% cmat

     [,1]
[1,]   17
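As a quick sanity check (using the values from the example above), the multiply-and-sum behaviour of %*% can be reproduced by hand:

```r
# Coefficients and contrasts from the example above
coefs <- c(2, 3)
cmat <- c(1, 5)

# %*% multiplies element-wise and then sums: 2*1 + 3*5 = 17
manual <- sum(coefs * cmat)
via.matrix <- drop(coefs %*% cmat)

manual       # 17
via.matrix   # 17
```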

In this way, we can predict multiple new y values from multiple new x values. When the second (contrast) matrix contains multiple columns, each column represents a separate contrast, so the multiply-and-sum operation is performed separately for each column.

yi = [2  3] × [1  1    1]
              [5  5.5  8]
# Predict y values for x = 5, 5.5 and 8

> coefs <- c(2, 3)

> cmat <- cbind(c(1, 5), c(1, 5.5), c(1, 8))

> cmat

     [,1] [,2] [,3]
[1,]    1  1.0    1
[2,]    5  5.5    8

> coefs %*% cmat

     [,1] [,2] [,3]
[1,]   17 18.5   26

Use the predict() function to extract the line of best fit and standard errors

For simple examples such as this, there is a function called predict() that can be used to derive estimates (and standard errors) from simple linear models. This function is essentially a clever wrapper for the series of actions that we will perform manually. As it is well tested under simple conditions, it provides a good comparison and reference point for our manual calculations.

# Create a sequence of x-values that span the range of the Fertilizer variable

> newX <- seq(min(fert$FERTILIZER), max(fert$FERTILIZER), l = 100)

> pred <- predict(fert.lm, newdata = data.frame(FERTILIZER = newX),
+     se = T)

Show the list of predicted fits and standard errors

> pred

$fit
        1         2         3         4         5         6         7         8 
 72.21818  74.06226  75.90634  77.75041  79.59449  81.43857  83.28264  85.12672 
        9        10        11        12        13        14        15        16 
 86.97080  88.81488  90.65895  92.50303  94.34711  96.19118  98.03526  99.87934 
       17        18        19        20        21        22        23        24 
101.72342 103.56749 105.41157 107.25565 109.09972 110.94380 112.78788 114.63196 
       25        26        27        28        29        30        31        32 
116.47603 118.32011 120.16419 122.00826 123.85234 125.69642 127.54050 129.38457 
       33        34        35        36        37        38        39        40 
131.22865 133.07273 134.91680 136.76088 138.60496 140.44904 142.29311 144.13719 
       41        42        43        44        45        46        47        48 
145.98127 147.82534 149.66942 151.51350 153.35758 155.20165 157.04573 158.88981 
       49        50        51        52        53        54        55        56 
160.73388 162.57796 164.42204 166.26612 168.11019 169.95427 171.79835 173.64242 
       57        58        59        60        61        62        63        64 
175.48650 177.33058 179.17466 181.01873 182.86281 184.70689 186.55096 188.39504 
       65        66        67        68        69        70        71        72 
190.23912 192.08320 193.92727 195.77135 197.61543 199.45950 201.30358 203.14766 
       73        74        75        76        77        78        79        80 
204.99174 206.83581 208.67989 210.52397 212.36804 214.21212 216.05620 217.90028 
       81        82        83        84        85        86        87        88 
219.74435 221.58843 223.43251 225.27658 227.12066 228.96474 230.80882 232.65289 
       89        90        91        92        93        94        95        96 
234.49697 236.34105 238.18512 240.02920 241.87328 243.71736 245.56143 247.40551 
       97        98        99       100 
249.24959 251.09366 252.93774 254.78182 
$se.fit
        1         2         3         4         5         6         7         8 
11.166947 11.007132 10.848295 10.690481 10.533736 10.378107 10.223647 10.070408 
        9        10        11        12        13        14        15        16 
 9.918448  9.767826  9.618605  9.470850  9.324633  9.180026  9.037107  8.895956 
       17        18        19        20        21        22        23        24 
 8.756660  8.619309  8.483996  8.350821  8.219888  8.091306  7.965188  7.841654 
       25        26        27        28        29        30        31        32 
 7.720827  7.602837  7.487817  7.375907  7.267251  7.161995  7.060293  6.962301 
       33        34        35        36        37        38        39        40 
 6.868176  6.778080  6.692176  6.610628  6.533597  6.461247  6.393735  6.331217 
       41        42        43        44        45        46        47        48 
 6.273842  6.221752  6.175081  6.133952  6.098479  6.068759  6.044878  6.026905 
       49        50        51        52        53        54        55        56 
 6.014893  6.008878  6.008878  6.014893  6.026905  6.044878  6.068759  6.098479 
       57        58        59        60        61        62        63        64 
 6.133952  6.175081  6.221752  6.273842  6.331217  6.393735  6.461247  6.533597 
       65        66        67        68        69        70        71        72 
 6.610628  6.692176  6.778080  6.868176  6.962301  7.060293  7.161995  7.267251 
       73        74        75        76        77        78        79        80 
 7.375907  7.487817  7.602837  7.720827  7.841654  7.965188  8.091306  8.219888 
       81        82        83        84        85        86        87        88 
 8.350821  8.483996  8.619309  8.756660  8.895956  9.037107  9.180026  9.324633 
       89        90        91        92        93        94        95        96 
 9.470850  9.618605  9.767826  9.918448 10.070408 10.223647 10.378107 10.533736 
       97        98        99       100 
10.690481 10.848295 11.007132 11.166947 
$df
[1] 8
$residual.scale
[1] 18.99936

Use these predicted values to draw a line of best fit and standard error (and confidence) bands

> plot(YIELD ~ FERTILIZER, fert, pch = 16)

> points(pred$fit ~ newX, type = "l")

> points(pred$fit - pred$se.fit ~ newX, type = "l", col = "grey")

> points(pred$fit + pred$se.fit ~ newX, type = "l", col = "grey")

#Now for 95% confidence intervals

> plot(YIELD ~ FERTILIZER, fert, pch = 16)

> points(pred$fit ~ newX, type = "l")

> lwrCI <- pred$fit + qnorm(0.025) * pred$se.fit

> uprCI <- pred$fit + qnorm(0.975) * pred$se.fit

> points(lwrCI ~ newX, type = "l", col = "blue")

> points(uprCI ~ newX, type = "l", col = "blue")
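Note that qnorm() uses a normal approximation. With only 8 residual degrees of freedom here, a t-based interval is arguably more defensible; a self-contained sketch (data re-entered from the fertilizer table above):

```r
# Re-enter the fertilizer data and refit the model
fert <- data.frame(FERTILIZER = seq(25, 250, by = 25),
                   YIELD = c(84, 80, 90, 154, 148, 169, 206, 244, 212, 248))
fert.lm <- lm(YIELD ~ FERTILIZER, data = fert)
newX <- seq(min(fert$FERTILIZER), max(fert$FERTILIZER), l = 100)
pred <- predict(fert.lm, newdata = data.frame(FERTILIZER = newX), se = TRUE)

# t-based limits use the residual df returned by predict()
lwrCI <- pred$fit + qt(0.025, df = pred$df) * pred$se.fit
uprCI <- pred$fit + qt(0.975, df = pred$df) * pred$se.fit
```

Here qt(0.975, 8) ≈ 2.31, so the t-based bands come out somewhat wider than the qnorm() bands above.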

Use contrasts and matrix algebra to derive predictions and errors

  1. First retrieve the fitted model parameters (intercept and slope)

    > coefs <- fert.lm$coef

    > coefs

    (Intercept)  FERTILIZER 
     51.9333333   0.8113939 

  2. Generate a model matrix appropriate for the predictor variable range. The more values there are across the range, the smoother (less jagged) the fitted curve will be.

    > cc <- as.matrix(expand.grid(1, newX))

    > head(cc)

         Var1     Var2
    [1,]    1 25.00000
    [2,]    1 27.27273
    [3,]    1 29.54545
    [4,]    1 31.81818
    [5,]    1 34.09091
    [6,]    1 36.36364

    Note that the contrast matrix that we just generated has the individual contrasts in rows whereas (recall) they need to be in columns. We can either transpose this matrix (with the t() function) and store the result, or just use t() inline when calculating the product of the coefficient and contrast matrices.

    Essentially, what we want to do is:

    yi = [51.933  0.8114] × [1.000   1.000   1.000   ...]
                            [25.000  27.273  29.545  ...]
  3. Calculate (derive) the predicted yield values for each of the new Fertilizer concentrations.

    > newY <- coefs %*% t(cc)

    > newY

             [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
    [1,] 72.21818 74.06226 75.90634 77.75041 79.59449 81.43857 83.28264 85.12672
            [,9]    [,10]    [,11]    [,12]    [,13]    [,14]    [,15]    [,16]
    [1,] 86.9708 88.81488 90.65895 92.50303 94.34711 96.19118 98.03526 99.87934
            [,17]    [,18]    [,19]    [,20]    [,21]    [,22]    [,23]   [,24]
    [1,] 101.7234 103.5675 105.4116 107.2556 109.0997 110.9438 112.7879 114.632
           [,25]    [,26]    [,27]    [,28]    [,29]    [,30]    [,31]    [,32]
    [1,] 116.476 118.3201 120.1642 122.0083 123.8523 125.6964 127.5405 129.3846
            [,33]    [,34]    [,35]    [,36]   [,37]   [,38]    [,39]    [,40]
    [1,] 131.2287 133.0727 134.9168 136.7609 138.605 140.449 142.2931 144.1372
            [,41]    [,42]    [,43]    [,44]    [,45]    [,46]    [,47]    [,48]
    [1,] 145.9813 147.8253 149.6694 151.5135 153.3576 155.2017 157.0457 158.8898
            [,49]   [,50]   [,51]    [,52]    [,53]    [,54]    [,55]    [,56]
    [1,] 160.7339 162.578 164.422 166.2661 168.1102 169.9543 171.7983 173.6424
            [,57]    [,58]    [,59]    [,60]    [,61]    [,62]   [,63]   [,64]
    [1,] 175.4865 177.3306 179.1747 181.0187 182.8628 184.7069 186.551 188.395
            [,65]    [,66]    [,67]    [,68]    [,69]    [,70]    [,71]    [,72]
    [1,] 190.2391 192.0832 193.9273 195.7713 197.6154 199.4595 201.3036 203.1477
            [,73]    [,74]    [,75]   [,76]   [,77]    [,78]    [,79]    [,80]
    [1,] 204.9917 206.8358 208.6799 210.524 212.368 214.2121 216.0562 217.9003
            [,81]    [,82]    [,83]    [,84]    [,85]    [,86]    [,87]    [,88]
    [1,] 219.7444 221.5884 223.4325 225.2766 227.1207 228.9647 230.8088 232.6529
           [,89]   [,90]    [,91]    [,92]    [,93]    [,94]    [,95]    [,96]
    [1,] 234.497 236.341 238.1851 240.0292 241.8733 243.7174 245.5614 247.4055
            [,97]    [,98]    [,99]   [,100]
    [1,] 249.2496 251.0937 252.9377 254.7818

  4. Calculate (derive) the standard errors of the predicted yield values for each of the new Fertilizer concentrations.

    > vc <- vcov(fert.lm)

    > se <- sqrt(diag(cc %*% vc %*% t(cc)))

    > head(se)

    [1] 11.16695 11.00713 10.84830 10.69048 10.53374 10.37811

  5. Reconstruct the summary plot with the derived fit, standard errors and confidence intervals

    > plot(YIELD ~ FERTILIZER, fert, pch = 16)

    > points(newY[1, ] ~ newX, type = "l")

    > points(newY[1, ] - se ~ newX, type = "l", col = "grey")

    > points(newY[1, ] + se ~ newX, type = "l", col = "grey")

    > lwrCI <- newY[1, ] + qnorm(0.025) * se

    > uprCI <- newY[1, ] + qnorm(0.975) * se

    > points(lwrCI ~ newX, type = "l", col = "blue")

    > points(uprCI ~ newX, type = "l", col = "blue")
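As a check on steps 1-5, the manually derived fits and standard errors can be compared directly against predict(); they should agree to numerical precision. A self-contained sketch (data re-entered from the fertilizer table above):

```r
# Re-enter the data and refit
fert <- data.frame(FERTILIZER = seq(25, 250, by = 25),
                   YIELD = c(84, 80, 90, 154, 148, 169, 206, 244, 212, 248))
fert.lm <- lm(YIELD ~ FERTILIZER, data = fert)
newX <- seq(min(fert$FERTILIZER), max(fert$FERTILIZER), l = 100)

# Manual derivation via the contrast (model) matrix
cc <- cbind(1, newX)
newY <- drop(coef(fert.lm) %*% t(cc))
se <- sqrt(diag(cc %*% vcov(fert.lm) %*% t(cc)))

# Reference values from predict()
pred <- predict(fert.lm, newdata = data.frame(FERTILIZER = newX), se = TRUE)
all.equal(newY, unname(pred$fit))      # should be TRUE
all.equal(se, unname(pred$se.fit))     # should be TRUE
```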

Contrasts in linear mixed effects models

For the next demonstration, I will use completely fabricated data. Perhaps one day I will seek out a good example with real data, but for now, the artificial data will do.

Create the artificial data

This data set consists of fish counts over time on seven reefs (you can tell it is artificial, because they have increased ;^)

> set.seed(1)

> x <- NULL

> y <- NULL

> for (i in 1:7) {
+     xx <- 2006:2010
+     yy <- round(-8300 + 4.14 * xx + rnorm(5, 0, 1) + rnorm(1, 0, 3), 0) + 1
+     yy[yy < 0] <- 0
+     x <- c(x, xx)
+     y <- c(y, yy)
+ }

> data <- data.frame(x, y, reef = gl(7, 5, 35, LETTERS[1:7]))


#use a scatterplot to explore the trend for each reef

> library(car)

> scatterplot(y ~ x | reef, data)

Let's fit the generalized linear mixed effects model relating fish counts (y) to year (x). To do so, we will employ the glmmPQL() function from the MASS package. glmmPQL() fits generalized linear mixed effects models using penalized quasi-likelihood estimation. We will initially use a Gaussian (normal) error distribution.

> library(MASS)

> data.lme <- glmmPQL(y ~ x, random = ~1 | reef, data, family = "gaussian")

> summary(data.lme)

Linear mixed-effects model fit by maximum likelihood
 Data: data 
  AIC BIC logLik
   NA  NA     NA
Random effects:
 Formula: ~1 | reef
        (Intercept) Residual
StdDev:     1.36174  1.74417
Variance function:
 Structure: fixed weights
 Formula: ~invwt 
Fixed effects: y ~ x 
                Value Std.Error DF   t-value p-value
(Intercept) -7875.171 244.01692 97 -32.27305       0
x               3.929   0.12152 97  32.32805       0
 Correlation: 
  (Intr)
x -1    
Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-1.7776922 -0.5179669  0.1723439  0.6385208  1.8104544 
Number of Observations: 105
Number of Groups: 7 

Use the predict() function to extract the line of best fit and standard errors

# Create a sequence of x-values that span the range of the x variable

> newX <- seq(min(data$x), max(data$x), l = 100)

> pred <- predict(data.lme, newdata = data.frame(x = newX), level = 0,
+     se = T)

  [1]  5.542857  5.701587  5.860317  6.019048  6.177778  6.336508  6.495238
  [8]  6.653968  6.812698  6.971429  7.130159  7.288889  7.447619  7.606349
 [15]  7.765079  7.923810  8.082540  8.241270  8.400000  8.558730  8.717460
 [22]  8.876190  9.034921  9.193651  9.352381  9.511111  9.669841  9.828571
 [29]  9.987302 10.146032 10.304762 10.463492 10.622222 10.780952 10.939683
 [36] 11.098413 11.257143 11.415873 11.574603 11.733333 11.892063 12.050794
 [43] 12.209524 12.368254 12.526984 12.685714 12.844444 13.003175 13.161905
 [50] 13.320635 13.479365 13.638095 13.796825 13.955556 14.114286 14.273016
 [57] 14.431746 14.590476 14.749206 14.907937 15.066667 15.225397 15.384127
 [64] 15.542857 15.701587 15.860317 16.019048 16.177778 16.336508 16.495238
 [71] 16.653968 16.812698 16.971429 17.130159 17.288889 17.447619 17.606349
 [78] 17.765079 17.923810 18.082540 18.241270 18.400000 18.558730 18.717460
 [85] 18.876190 19.034921 19.193651 19.352381 19.511111 19.669841 19.828571
 [92] 19.987302 20.146032 20.304762 20.463492 20.622222 20.780952 20.939683
 [99] 21.098413 21.257143
attr(,"label")
[1] "Predicted values"

You may well have noticed that although we asked for standard errors of the predictions, we did not receive them. The predict() method for mixed effects models does not return standard errors (unlike the methods for simple linear and generalized linear models). This is one of the reasons we are exploring the use of contrasts - so that we can derive these quantities manually.

Use contrasts and matrix algebra to derive predictions and errors

  1. First retrieve the fitted model parameters (intercept and slope) for the fixed effects.

    > coefs <- data.lme$coef$fixed

    > coefs

     (Intercept)            x 
    -7875.171429     3.928571 

  2. Generate a model matrix appropriate for the predictor variable range. The more values there are across the range, the smoother (less jagged) the fitted curve will be.

    > newX <- seq(2006, 2010, l = 100)

    > cc <- as.matrix(expand.grid(1, newX))

    > head(cc)

         Var1     Var2
    [1,]    1 2006.000
    [2,]    1 2006.040
    [3,]    1 2006.081
    [4,]    1 2006.121
    [5,]    1 2006.162
    [6,]    1 2006.202

  3. Calculate (derive) the predicted fish counts over time across all reefs.

    > newY <- coefs %*% t(cc)

    > newY

             [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
    [1,] 5.542857 5.701587 5.860317 6.019048 6.177778 6.336508 6.495238 6.653968
             [,9]    [,10]    [,11]    [,12]    [,13]    [,14]    [,15]   [,16]
    [1,] 6.812698 6.971429 7.130159 7.288889 7.447619 7.606349 7.765079 7.92381
           [,17]   [,18] [,19]   [,20]   [,21]   [,22]    [,23]    [,24]    [,25]
    [1,] 8.08254 8.24127   8.4 8.55873 8.71746 8.87619 9.034921 9.193651 9.352381
            [,26]    [,27]    [,28]    [,29]    [,30]    [,31]    [,32]    [,33]
    [1,] 9.511111 9.669841 9.828571 9.987302 10.14603 10.30476 10.46349 10.62222
            [,34]    [,35]    [,36]    [,37]    [,38]   [,39]    [,40]    [,41]
    [1,] 10.78095 10.93968 11.09841 11.25714 11.41587 11.5746 11.73333 11.89206
            [,42]    [,43]    [,44]    [,45]    [,46]    [,47]    [,48]   [,49]
    [1,] 12.05079 12.20952 12.36825 12.52698 12.68571 12.84444 13.00317 13.1619
            [,50]    [,51]   [,52]    [,53]    [,54]    [,55]    [,56]    [,57]
    [1,] 13.32063 13.47937 13.6381 13.79683 13.95556 14.11429 14.27302 14.43175
            [,58]    [,59]    [,60]    [,61]   [,62]    [,63]    [,64]    [,65]
    [1,] 14.59048 14.74921 14.90794 15.06667 15.2254 15.38413 15.54286 15.70159
            [,66]    [,67]    [,68]    [,69]    [,70]    [,71]   [,72]    [,73]
    [1,] 15.86032 16.01905 16.17778 16.33651 16.49524 16.65397 16.8127 16.97143
            [,74]    [,75]    [,76]    [,77]    [,78]    [,79]    [,80]    [,81]
    [1,] 17.13016 17.28889 17.44762 17.60635 17.76508 17.92381 18.08254 18.24127
         [,82]    [,83]    [,84]    [,85]    [,86]    [,87]    [,88]    [,89]
    [1,]  18.4 18.55873 18.71746 18.87619 19.03492 19.19365 19.35238 19.51111
            [,90]    [,91]   [,92]    [,93]    [,94]    [,95]    [,96]    [,97]
    [1,] 19.66984 19.82857 19.9873 20.14603 20.30476 20.46349 20.62222 20.78095
            [,98]    [,99]   [,100]
    [1,] 20.93968 21.09841 21.25714

  4. Calculate (derive) the standard errors of the predicted fish counts.

    > vc <- vcov(data.lme)

    > se <- sqrt(diag(cc %*% vc %*% t(cc)))

    > head(se)

    [1] 0.5931468 0.5911899 0.5892667 0.5873775 0.5855225 0.5837022

  5. Make a data frame to hold the predicted values, standard errors and confidence intervals

    > mat <- data.frame(x = newX, fit = newY[1, ], se = se,
    +     lwr = newY[1, ] + qnorm(0.025) * se,
    +     upr = newY[1, ] + qnorm(0.975) * se)

    > head(mat)

             x      fit        se      lwr      upr
    1 2006.000 5.542857 0.5931468 4.380311 6.705403
    2 2006.040 5.701587 0.5911899 4.542876 6.860298
    3 2006.081 5.860317 0.5892667 4.705376 7.015259
    4 2006.121 6.019048 0.5873775 4.867809 7.170286
    5 2006.162 6.177778 0.5855225 5.030175 7.325381
    6 2006.202 6.336508 0.5837022 5.192473 7.480543

  6. Finally, let's plot the data with the predicted trend and confidence intervals overlaid

    > plot(y ~ x, data, pch = as.numeric(data$reef), col = "grey")

    > lines(fit ~ x, mat, col = "black")

    > lines(lwr ~ x, mat, col = "grey")

    > lines(upr ~ x, mat, col = "grey")

Contrasts in generalized linear mixed effects models

The next demonstration will use very similar data to the previous example, except that a Poisson error distribution is more appropriate (due to the apparent relationship between the mean and variance).

Create the artificial data

This data set consists of fish counts over time on seven reefs (you can tell it is artificial, because they have increased ;^)

> set.seed(2)

> x <- NULL

> y <- NULL

> for (i in 1:7) {
+     xx <- 2006:2010
+     yy <- round(-8300 + 4.14 * xx + rpois(5, 2) * 2 + rnorm(1, 0, 3), 0) + 1
+     yy[yy < 0] <- 0
+     x <- c(x, xx)
+     y <- c(y, yy)
+ }

> data <- data.frame(x, y, reef = gl(7, 5, 35, LETTERS[1:7]))



#use a scatterplot to explore the trend for each reef

> library(car)

> scatterplot(y ~ x | reef, data)

Let's fit the generalized linear mixed effects model relating fish counts (y) to year (x). To do so, we will again employ the glmmPQL() function from the MASS package. This time we will use a Poisson error distribution.

> library(MASS)

> data.lme <- glmmPQL(y ~ x, random = ~1 | reef, data, family = "poisson")

> summary(data.lme)

Linear mixed-effects model fit by maximum likelihood
 Data: data 
  AIC BIC logLik
   NA  NA     NA
Random effects:
 Formula: ~1 | reef
        (Intercept) Residual
StdDev:  0.02313573  1.14246
Variance function:
 Structure: fixed weights
 Formula: ~invwt 
Fixed effects: y ~ x 
                Value Std.Error DF   t-value p-value
(Intercept) -479.5766  37.74061 97 -12.70718       0
x              0.2403   0.01879 97  12.78711       0
 Correlation: 
  (Intr)
x -1    
Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-1.8833765 -0.7650920  0.1044236  0.5940481  2.0397711 
Number of Observations: 105
Number of Groups: 7 

Use contrasts and matrix algebra to derive predictions and errors

  1. First retrieve the fitted model parameters (intercept and slope) for the fixed effects.

    > coefs <- data.lme$coef$fixed

    > coefs

     (Intercept)            x 
    -479.5766067    0.2402791 

  2. Generate a model matrix appropriate for the predictor variable range. The more values there are across the range, the smoother (less jagged) the fitted curve will be.

    > newX <- seq(2006, 2010, l = 100)

    > cc <- as.matrix(expand.grid(1, newX))

    > head(cc)

         Var1     Var2
    [1,]    1 2006.000
    [2,]    1 2006.040
    [3,]    1 2006.081
    [4,]    1 2006.121
    [5,]    1 2006.162
    [6,]    1 2006.202

  3. Calculate (derive) the predicted fish counts over time across all reefs. As is, these will be on the log scale, since the canonical link function for the Poisson distribution is the log function. Exponentiation puts them back on the scale of the response (fish counts).

    > newY <- exp(coefs %*% t(cc))

    > newY

             [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
    [1,] 11.28225 11.39231 11.50345 11.61567 11.72899 11.84341 11.95895 12.07561
             [,9]    [,10]    [,11]    [,12]    [,13]   [,14]    [,15]    [,16]
    [1,] 12.19342 12.31237 12.43249 12.55377 12.67624 12.7999 12.92477 13.05086
            [,17]    [,18]    [,19]    [,20]    [,21]    [,22]   [,23]    [,24]
    [1,] 13.17818 13.30674 13.43655 13.56763 13.69999 13.83364 13.9686 14.10487
            [,25]    [,26]    [,27]    [,28]    [,29]    [,30]    [,31]  [,32]
    [1,] 14.24247 14.38141 14.52171 14.66338 14.80643 14.95087 15.09672 15.244
            [,33]    [,34]    [,35]    [,36]    [,37]    [,38]    [,39]    [,40]
    [1,] 15.39271 15.54288 15.69451 15.84761 16.00222 16.15833 16.31596 16.47513
            [,41]    [,42]    [,43]    [,44]    [,45]   [,46]    [,47]    [,48]
    [1,] 16.63585 16.79815 16.96202 17.12749 17.29458 17.4633 17.63366 17.80569
            [,49]    [,50]   [,51]    [,52]    [,53]    [,54]    [,55]   [,56]
    [1,] 17.97939 18.15479 18.3319 18.51074 18.69132 18.87366 19.05778 19.2437
            [,57]  [,58]    [,59]    [,60]    [,61]    [,62]    [,63]    [,64]
    [1,] 19.43144 19.621 19.81241 20.00569 20.20086 20.39793 20.59692 20.79786
            [,65]    [,66]    [,67]    [,68]    [,69]   [,70]    [,71]    [,72]
    [1,] 21.00075 21.20562 21.41249 21.62138 21.83231 22.0453 22.26036 22.47752
           [,73]    [,74]   [,75]    [,76]    [,77]    [,78]    [,79]    [,80]
    [1,] 22.6968 22.91822 23.1418 23.36756 23.59553 23.82571 24.05814 24.29284
            [,81]    [,82]    [,83]    [,84]    [,85]    [,86]    [,87]    [,88]
    [1,] 24.52983 24.76913 25.01077 25.25476 25.50114 25.74991 26.00112 26.25477
           [,89]    [,90]    [,91]    [,92]    [,93]    [,94]    [,95]    [,96]
    [1,] 26.5109 26.76953 27.03068 27.29438 27.56065 27.82952 28.10101 28.37515
            [,97]    [,98]    [,99]   [,100]
    [1,] 28.65196 28.93148 29.21372 29.49872

  4. Calculate (derive) the standard errors of the predicted fish counts. Again, these need to be converted if they are to be on the response scale. This is done by multiplying the link-scale standard errors by the exponentiated fitted values (the delta method).

    > vc <- vcov(data.lme)

    > se <- sqrt(diag(cc %*% vc %*% t(cc)))

    > se <- se * newY[1, ]

    > head(se)

    [1] 0.6003354 0.5988101 0.5972302 0.5955962 0.5939090 0.5921693

  5. Make a data frame to hold the predicted values, standard errors and confidence intervals

    > mat <- data.frame(x = newX, fit = newY[1, ], se = se,
    +     lwr = newY[1, ] + qnorm(0.025) * se,
    +     upr = newY[1, ] + qnorm(0.975) * se)

    > head(mat)

             x      fit        se      lwr      upr
    1 2006.000 11.28225 0.6003354 10.10561 12.45888
    2 2006.040 11.39231 0.5988101 10.21866 12.56596
    3 2006.081 11.50345 0.5972302 10.33290 12.67400
    4 2006.121 11.61567 0.5955962 10.44832 12.78302
    5 2006.162 11.72899 0.5939090 10.56495 12.89303
    6 2006.202 11.84341 0.5921693 10.68278 13.00404

  6. Finally, let's plot the data with the predicted trend and confidence intervals overlaid

    > plot(y ~ x, data, pch = as.numeric(data$reef), col = "grey")

    > lines(fit ~ x, mat, col = "black")

    > lines(lwr ~ x, mat, col = "grey")

    > lines(upr ~ x, mat, col = "grey")
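The response-scale correction in step 4 is an application of the delta method: on the response scale, SE(exp(η)) ≈ exp(η) × SE(η). Since glmmPQL objects are awkward to reproduce compactly, the sketch below checks the same calculation on a plain Poisson GLM (a simplified, hypothetical stand-in for the reef model) against predict(..., type = "response", se.fit = TRUE), which applies the same correction internally:

```r
# Simulate a small Poisson data set (hypothetical stand-in for the reef counts)
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- rpois(20, lambda = exp(0.1 + 0.1 * d$x))
fit <- glm(y ~ x, family = poisson, data = d)

# Link-scale fit and standard errors via the contrast matrix
cc <- cbind(1, d$x)
eta <- drop(cc %*% coef(fit))
se.eta <- sqrt(diag(cc %*% vcov(fit) %*% t(cc)))

# Delta-method standard errors on the response scale
se.resp <- exp(eta) * se.eta

# predict() performs the same correction internally
pred <- predict(fit, type = "response", se.fit = TRUE)
all.equal(se.resp, unname(pred$se.fit))   # should be TRUE
```

An alternative that often behaves better is to construct the confidence interval on the link scale and exponentiate its endpoints, which also keeps the limits positive.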

Contrasts in Single Factor ANOVA

Linear model effects parameterization for single factor ANOVA looks like:
yi = μ + βi xi + εi
 
εi ∼ Normal(0,σ2)
#Residual 'random' effects

Motivating example

Let's say we had measured a response (e.g. weight) from four individuals (replicates) treated in one of three ways (three treatment levels).

Show code for generating these data

> set.seed(1)

> baseline <- 40

> all.effects <- c(baseline, 10, 15)

> sigma <- 3

> n <- 3

> reps <- 4

> FactA <- gl(n = n, k = 4, len = n * reps, lab = paste("a", 1:n,
+     sep = ""))

> X <- as.matrix(model.matrix(~FactA))

> eps <- round(rnorm(n * reps, 0, sigma), 2)

> Response <- as.numeric(as.matrix(X) %*% as.matrix(all.effects) +
+     eps)

> data <- data.frame(Response, FactA)

> write.table(data, file = "fish.csv", quote = F, row.names = F,
+     sep = ",")

Download fish data set

Q1-1 Open and summarize the data
Show code

> fish <- read.table("fish.csv", header = T, sep = ",", strip.white = T)

> fish

   Response FactA
1     38.12    a1
2     40.55    a1
3     37.49    a1
4     44.79    a1
5     50.99    a2
6     47.54    a2
7     51.46    a2
8     52.21    a2
9     56.73    a3
10    54.08    a3
11    59.54    a3
12    56.17    a3

Raw data

FactA: a1  38.12, 40.55, 37.49, 44.79
FactA: a2  50.99, 47.54, 51.46, 52.21
FactA: a3  56.73, 54.08, 59.54, 56.17

Treatment means

     FactA: a1  FactA: a2  FactA: a3
Mean     40.24      50.55      56.63

Q1-2 Linear models (ANOVA)
yi = μ + βi xi + εi
 
yi = μ + β1 xi1 + β2 xi2 + β3 xi3 + εi
# over-parameterized model
yi = β1 xi1 + β2 xi2 + β3 xi3 + εi
# means parameterization model
yi = μ + β2 xi2 + β3 xi3 + εi
# effects parameterization model

Two alternative model parameterizations (based on treatment contrasts) would be:

Show code
# Means parameterization

> data.frame(Response, FactA, model.matrix(~-1 + FactA, data = fish))

   Response FactA FactAa1 FactAa2 FactAa3
1     38.12    a1       1       0       0
2     40.55    a1       1       0       0
3     37.49    a1       1       0       0
4     44.79    a1       1       0       0
5     50.99    a2       0       1       0
6     47.54    a2       0       1       0
7     51.46    a2       0       1       0
8     52.21    a2       0       1       0
9     56.73    a3       0       0       1
10    54.08    a3       0       0       1
11    59.54    a3       0       0       1
12    56.17    a3       0       0       1

# Effects parameterization

> data.frame(Response, FactA, model.matrix(~FactA, data = fish))

   Response FactA X.Intercept. FactAa2 FactAa3
1     38.12    a1            1       0       0
2     40.55    a1            1       0       0
3     37.49    a1            1       0       0
4     44.79    a1            1       0       0
5     50.99    a2            1       1       0
6     47.54    a2            1       1       0
7     51.46    a2            1       1       0
8     52.21    a2            1       1       0
9     56.73    a3            1       0       1
10    54.08    a3            1       0       1
11    59.54    a3            1       0       1
12    56.17    a3            1       0       1


Whilst the above two are equivalent for simple designs (such as that illustrated), the means parameterization (that does not include an intercept, or overall mean) is limited to simple single factor designs. Designs that involve additional predictors can only be accommodated via effects parameterization.

Let's now fit the above means- and effects-parameterized models and contrast the estimated parameters:

Means parameterization, then effects parameterization:

> summary(lm(Response ~ -1 + FactA, fish))$coef

        Estimate Std. Error  t value     Pr(>|t|)
FactAa1  40.2375   1.300481 30.94046 1.885834e-10
FactAa2  50.5500   1.300481 38.87022 2.453386e-11
FactAa3  56.6300   1.300481 43.54541 8.871078e-12

> summary(lm(Response ~ FactA, fish))$coef

            Estimate Std. Error   t value     Pr(>|t|)
(Intercept)  40.2375   1.300481 30.940464 1.885834e-10
FactAa2      10.3125   1.839159  5.607184 3.312139e-04
FactAa3      16.3925   1.839159  8.913043 9.243186e-06

The first model omits the intercept and thus estimates the treatment means directly. The parameters estimated by the second model are the mean of the first treatment level and the differences between each of the other treatment means and the first treatment mean. If there were only two treatment levels, or the first treatment was a control (to which you wish to compare the other treatment means), then the effects model is providing exactly what statisticians are advocating - estimates of the effect sizes as well as estimates of precision (standard errors from which confidence intervals can easily be derived).

Derived parameters - simple contrasts

These estimated parameters are the model coefficients - the set of linear constants that can be substituted back into the linear model. We can subsequently multiply these parameters by contrast coefficients to produce alternative linear combinations, and thereby estimate other quantities from the linear model.

yi = μ + βi xi
yi = μ + β2 x2 + β3 x3
yi = 40.2375 × a1 + 10.3125 × a2 + 16.3925 × a3

For example, if we wanted to estimate the effect size for the difference between treatments a2 and a3, we could make a1=0, a2=1 and a3=-1. That is, we could multiply our model parameters by a contrast matrix of 0, 1, -1:

yi = [μ  β2  β3] × [a1]
                   [a2]
                   [a3]

The column vector (a1, a2, a3) is a matrix of contrast coefficients. Each row corresponds to a column in the parameter matrix (vector). The contrast matrix can have multiple columns, each corresponding to a different derived comparison.

yi = 40.2375 × a1 + 10.3125 × a2 + 16.3925 × a3
yi = 40.2375 × 0 + 10.3125 × 1 + 16.3925 × -1
yi = 10.3125 - 16.3925
yi = -6.080
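The same arithmetic can be cross-checked outside R. A minimal Python sketch of multiplying the effects-model parameter estimates (taken from the summary above) by the contrast vector:

```python
# Effects-model parameter estimates from the summary above:
# intercept (mean of a1), a2 - a1, a3 - a1
params = [40.2375, 10.3125, 16.3925]

# Contrast coefficients selecting the a2 vs a3 comparison
contrast = [0, 1, -1]

# Element-wise multiply and sum (the dot product R computes with %*%)
effect = sum(p * c for p, c in zip(params, contrast))
print(round(effect, 2))  # -6.08
```

The result is the a2 vs a3 effect size derived by hand above.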

Planned comparisons

Some researchers might be familiar with the use of contrasts in performing so-called 'planned comparisons' or 'planned contrasts'. This involves redefining the actual parameters used in the linear model to reflect more meaningful comparisons amongst groups. For example, rather than compare each group mean against the mean of one (the first) treatment group, we may wish to include a comparison of the first treatment group (perhaps a control group) against the average of the two other treatment groups.

> contrasts(fish$FactA) <- cbind(`a1 vs (a2 & a3)` = c(1, -0.5, -0.5))

> summary(aov(Response ~ FactA, fish), split = list(FactA = list(1, 2)))

                         Df Sum Sq Mean Sq F value   Pr(>F)    
FactA                     2 549.37  274.69  40.604 3.13e-05 ***
  FactA: a1 vs (a2 & a3)  1 475.44  475.44  70.279 1.52e-05 ***
Residuals                 9  60.89    6.77                     
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

> contrasts(fish$FactA) <- cbind(`a2 vs a3` = c(0, 1, -1))

> summary(aov(Response ~ FactA, fish), split = list(FactA = list(1, 2)))

                  Df Sum Sq Mean Sq F value   Pr(>F)    
FactA              2 549.37 274.685  40.604 3.13e-05 ***
  FactA: a2 vs a3  1  73.93  73.933  10.929 0.009144 ** 
Residuals          9  60.89   6.765                     
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

In so doing, this technique provides a relatively easy way to test other, non-standard hypotheses (linear combinations of effects).

However, a solid understanding of contrasts has even greater benefits when it comes to calculating effect sizes, marginal means and their associated confidence intervals - features that are considered of greater interpretive value than hypothesis tests. Contrasts can also be used to obtain a range of other derived parameters from the fitted model.

Calculating treatment means (and confidence intervals) from the effects model

A simple way to illustrate the use of contrasts to derive new parameters from a linear model is in the estimation of population treatment means from the effects model.
yi = [μ  β2  β3] × [1 1 1]
                   [0 1 0]
                   [0 0 1]

> contrasts(fish$FactA) <- contr.treatment

> fish.lm <- lm(Response ~ FactA, fish)

> cmat <- cbind(c(1, 0, 0), c(1, 1, 0), c(1, 0, 1))

> colnames(cmat) <- c("a1", "a2", "a3")

> fish.mu <- fish.lm$coef %*% cmat

> fish.mu

          a1    a2    a3
[1,] 40.2375 50.55 56.63

> fish.se <- sqrt(diag(t(cmat) %*% vcov(fish.lm) %*% cmat))

> fish.se

      a1       a2       a3 
1.300481 1.300481 1.300481 

> fish.ci <- as.numeric(fish.mu) + outer(fish.se, qt(c(0.025, 0.975), df = fish.lm$df.resid))

> fish.ci

       [,1]     [,2]
a1 37.29561 43.17939
a2 47.60811 53.49189
a3 53.68811 59.57189
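The interval arithmetic here is simply mean ± t(0.975, df) × se. A minimal Python sketch reproducing the bounds above, hard-coding the critical value 2.2621572 (what R's qt(0.975, 9) returns for the 9 residual degrees of freedom):

```python
# Estimated treatment means and their common standard error (from the output above)
means = {"a1": 40.2375, "a2": 50.55, "a3": 56.63}
se = 1.300481

# Two-tailed 95% critical t value for 9 residual df (qt(0.975, 9) in R)
t_crit = 2.2621572

# 95% confidence interval for each mean: mean +/- t * se
ci = {k: (m - t_crit * se, m + t_crit * se) for k, m in means.items()}
for level, (lo, hi) in ci.items():
    print(level, round(lo, 5), round(hi, 5))
```

The printed bounds match the fish.ci matrix above.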

Calculating all the pairwise effects (and confidence intervals) from the effects model

yi = [μ  β2  β3] × [0 0  0]
                   [1 0 -1]
                   [0 1  1]

> contrasts(data$FactA) <- contr.treatment

> data.lm <- lm(Response ~ FactA, data)

> cmat <- cbind(c(0, 1, 0), c(0, 0, 1), c(0, -1, 1))

> colnames(cmat) <- c("a1 vs a2", "a1 vs a3", "a2 vs a3")

> data.mu <- data.lm$coef %*% cmat

> data.mu

     a1 vs a2 a1 vs a3 a2 vs a3
[1,]  10.3125  16.3925     6.08

> data.se <- sqrt(diag(t(cmat) %*% vcov(data.lm) %*% cmat))

> data.se

a1 vs a2 a1 vs a3 a2 vs a3 
1.839159 1.839159 1.839159 

> data.ci <- as.numeric(data.mu) + outer(data.se, qt(c(0.025, 0.975), df = data.lm$df.resid))

> data.ci

              [,1]     [,2]
a1 vs a2  6.152034 14.47297
a1 vs a3 12.232034 20.55297
a2 vs a3  1.919534 10.24047
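Each of these effect sizes is simply a difference between two treatment means (40.2375, 50.55, 56.63). A minimal Python sketch of the same contrast-matrix multiplication, using the effects-model parameters reported earlier:

```python
# Effects-model parameters: intercept (mean of a1), a2 - a1, a3 - a1
params = [40.2375, 10.3125, 16.3925]

# Contrast matrix; columns are a1 vs a2, a1 vs a3, a2 vs a3
cmat = [[0, 0,  0],
        [1, 0, -1],
        [0, 1,  1]]

# params %*% cmat: one derived effect size per column
effects = [round(sum(p * row[j] for p, row in zip(params, cmat)), 4)
           for j in range(3)]
print(effects)  # [10.3125, 16.3925, 6.08]
```

Note that with this contrast matrix the a2 vs a3 column yields mean(a3) - mean(a2) = 56.63 - 50.55 = 6.08, matching data.mu above.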

Contrasts in factorial ANOVA

Motivating example

We now extend this example such that each individual fish was subjected to one of five levels of one treatment and one of three levels of another treatment. There were four replicate fish in each treatment combination.

Note, this is an artificial data set. Most of the data sets used in this Workshop series have been real data sets. Real data sets are useful as they provide 'real world' examples of how to apply statistical principles and tools. However, for understanding some techniques, fabricated data allows us to explore techniques under controlled conditions. Furthermore, it allows us to compare statistical outcomes and patterns derived from statistical techniques to the 'truth'. Recall that the purpose of statistics is to 'estimate' population parameters. Fabricating data allows us to know the true values of the parameters that we are trying to estimate, and thus allows us to evaluate the accuracy of our outcomes and understanding. I apologise if the use of artificial data causes you any pain or personal hardship.

In the creation of these data, I will be defining the 'true' (theoretical) effect sizes and level of 'noise'.

Show code for generating these data

> set.seed(1)

> n.FactA <- 5

> n.FactB <- 3

> nsample <- 12

> n <- n.FactA * nsample

> FactA <- gl(n = n.FactA, k = nsample, length = n, lab = paste("a", 1:n.FactA, sep = ""))

> FactB <- gl(n = n.FactB, k = nsample/n.FactB, length = n, lab = paste("b", 1:n.FactB, sep = ""))

> baseline <- 40

> FactA.effects <- c(-10, -5, 5, 10)

> FactB.effects <- c(5, 10)

> interaction.effects <- c(-2, 3, 0, 4, 4, 0, 3, -2)

> all.effects <- c(baseline, FactA.effects, FactB.effects, interaction.effects)

> X <- as.matrix(model.matrix(~FactA * FactB))

Theoretical means

We will confirm the theoretical means of this design matrix (based on the defined effects).

> RESPONSE1 <- as.numeric(as.matrix(X) %*% as.matrix(all.effects))

> theoryMeans <- tapply(RESPONSE1, list(FactA, FactB), mean)

> theoryMeans

   b1 b2 b3
a1 40 45 50
a2 30 33 44
a3 35 43 45
a4 45 50 58
a5 50 59 58

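These cell means can be reproduced by hand: each cell is baseline + FactA effect + FactB effect + interaction, with the first level of each factor as the reference (effect 0). A minimal Python sketch of that bookkeeping, using the effect sizes defined above:

```python
# Effects as defined when generating the data (reference levels a1 and b1 are 0)
baseline = 40
factA = [0, -10, -5, 5, 10]   # a1..a5
factB = [0, 5, 10]            # b1..b3
# interaction.effects = c(-2, 3, 0, 4, 4, 0, 3, -2): the first four are
# a2:b2..a5:b2, the last four a2:b3..a5:b3; reference row/column are 0
inter = [[0,  0,  0],
         [0, -2,  4],
         [0,  3,  0],
         [0,  0,  3],
         [0,  4, -2]]

# Cell mean = baseline + FactA effect + FactB effect + interaction
means = [[baseline + factA[i] + factB[j] + inter[i][j]
          for j in range(3)] for i in range(5)]
for row in means:
    print(row)
```

The printed rows match the theoryMeans table above.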

And now we can add noise to the response to create data with variability. We will add noise by adding to each observation a value randomly drawn from a normal distribution with a mean of zero and a standard deviation of 3.

> sigma <- 3

> eps <- rnorm(n, 0, sigma)

> X <- as.matrix(model.matrix(~FactA * FactB))

> RESPONSE <- as.numeric(as.matrix(X) %*% as.matrix(all.effects) + eps)

> data <- data.frame(RESPONSE, FactB, FactA)

> write.table(data, file = "fish1.csv", quote = F, row.names = F, sep = ",")

Let's just examine the first six rows of these data to confirm that a little noise has been added:

> head(data)

  RESPONSE FactB FactA
1 38.12064    b1    a1
2 40.55093    b1    a1
3 37.49311    b1    a1
4 44.78584    b1    a1
5 45.98852    b2    a1
6 42.53859    b2    a1

Download fish1 data set

Q2-1 Open and summarize the data
Show code

> fish1 <- read.table("fish1.csv", header = T, sep = ",", strip.white = T)

> head(fish1)

  RESPONSE FactB FactA
1 38.12064    b1    a1
2 40.55093    b1    a1
3 37.49311    b1    a1
4 44.78584    b1    a1
5 45.98852    b2    a1
6 42.53859    b2    a1

Simple sample means

We will now calculate the sample means of each FactA by FactB combination. These are called 'cell means' as they are the means of each cell in a table in which the levels of FactA and FactB are the rows and columns (as was depicted in the table above for the theoretical means). If the samples are collected in an unbiased manner (whereby the collected samples reflect the populations), then the cell means should roughly approximate the true means of the underlying populations from which the observations were drawn.

Raw sample group means:

> groupMeans <- with(fish1, tapply(RESPONSE, list(FactA, FactB), mean))

> groupMeans

         b1       b2       b3
a1 40.23763 45.55109 51.62901
a2 28.68304 34.75708 43.83975
a3 34.20286 43.89676 43.90636
a4 46.05720 50.62681 57.80265
a5 50.41613 60.96889 57.26748

And the differences between the theoretical means (which DID NOT contain any observational variance) and these raw sample group means:

> theory_groupMeans <- theoryMeans - groupMeans

> theory_groupMeans

           b1         b2         b3
a1 -0.2376313 -0.5510949 -1.6290130
a2  1.3169574 -1.7570763  0.1602548
a3  0.7971382 -0.8967625  1.0936407
a4 -1.0571983 -0.6268060  0.1973520
a5 -0.4161308 -1.9688851  0.7325183

Here is a side-by-side comparison of the theoretical means, the raw sample means and the cell differences:

Theoretical cell means:

      b1    b2    b3
a1 40.00 45.00 50.00
a2 30.00 33.00 44.00
a3 35.00 43.00 45.00
a4 45.00 50.00 58.00
a5 50.00 59.00 58.00

Sample cell means:

      b1    b2    b3
a1 40.24 45.55 51.63
a2 28.68 34.76 43.84
a3 34.20 43.90 43.91
a4 46.06 50.63 57.80
a5 50.42 60.97 57.27

Differences:

      b1    b2    b3
a1 -0.24 -0.55 -1.63
a2  1.32 -1.76  0.16
a3  0.80 -0.90  1.09
a4 -1.06 -0.63  0.20
a5 -0.42 -1.97  0.73

The sum of the deviations (differences) of the sample means from the theoretical means is:

> sum(theory_groupMeans)

[1] -4.842737

Modeled population means

Alternatively, we can estimate the cell means based on a full multiplicative linear model.

yij = µ + αi + βj + αβij

Note that this model is over-parameterized - that is, there are more parameters to estimate than there are groups of data from which to independently estimate them.

The effects parameterization of this model via treatment contrasts incorporates the mean of the first cell (µ) followed by a series of differences. In all, there are 15 groups (five levels of FactA multiplied by three levels of FactB) and correspondingly, there are 15 parameters being estimated:

yij = µ + β1 x1 + β2 x2 + β3 x3 + β4 x4 + β5 x5 + β6 x6 + β7 x7 + β8 x8 + β9 x9 + β10 x10 + β11 x11 + β12 x12 + β13 x13 + β14 x14

These parameters correspond to the following:

Model term  Model parameter (name given in R)  Interpretation
µ           (Intercept)                        a1b1
β1          FactAa2                            a2b1 - a1b1
β2          FactAa3                            a3b1 - a1b1
β3          FactAa4                            a4b1 - a1b1
β4          FactAa5                            a5b1 - a1b1
β5          FactBb2                            a1b2 - a1b1
β6          FactBb3                            a1b3 - a1b1
β7          FactAa2:FactBb2                    (a1b1 + a2b2) - (a2b1 + a1b2)
β8          FactAa3:FactBb2                    (a1b1 + a3b2) - (a3b1 + a1b2)
β9          FactAa4:FactBb2                    (a1b1 + a4b2) - (a4b1 + a1b2)
β10         FactAa5:FactBb2                    (a1b1 + a5b2) - (a5b1 + a1b2)
β11         FactAa2:FactBb3                    (a1b1 + a2b3) - (a2b1 + a1b3)
β12         FactAa3:FactBb3                    (a1b1 + a3b3) - (a3b1 + a1b3)
β13         FactAa4:FactBb3                    (a1b1 + a4b3) - (a4b1 + a1b3)
β14         FactAa5:FactBb3                    (a1b1 + a5b3) - (a5b1 + a1b3)

In R, the linear model is fitted via the lm() function.

> fish1.lm <- lm(RESPONSE ~ FactA * FactB, fish1)

Whilst these parameters satisfy the model parameterization, interpreting and utilizing the estimated parameters can take some getting used to. Let's begin by summarizing the estimates of the 15 model parameters.

> fish1.lm$coef

    (Intercept)         FactAa2         FactAa3         FactAa4         FactAa5 
     40.2376313     -11.5545886      -6.0347694       5.8195671      10.1784995 
        FactBb2         FactBb3 FactAa2:FactBb2 FactAa3:FactBb2 FactAa4:FactBb2 
      5.3134636      11.3913817       0.7605701       4.3804371      -0.7438559 
FactAa5:FactBb2 FactAa2:FactBb3 FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3 
      5.2392908       3.7653208      -1.6878842       0.3540679      -4.5400308 
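To make the interpretations in the table above concrete, summing the relevant coefficients reproduces a cell mean. For example, under treatment contrasts the a2b2 cell mean is Intercept + FactAa2 + FactBb2 + FactAa2:FactBb2. A minimal Python check using the estimates just printed (compare with the a2b2 sample cell mean of 34.75708):

```python
# Coefficient estimates copied from the fish1.lm output above
coef = {
    "(Intercept)": 40.2376313,
    "FactAa2": -11.5545886,
    "FactBb2": 5.3134636,
    "FactAa2:FactBb2": 0.7605701,
}

# Cell mean a2b2 = intercept + FactA main effect + FactB main effect + interaction
a2b2 = sum(coef.values())
print(round(a2b2, 5))  # 34.75708
```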

But this is where the good stuff begins. As the parameters are defined to encapsulate all of the patterns amongst the groups, linear combinations of the parameters can now be used to calculate a whole range of related derived estimates (and their standard errors, etc.) including:

  • population cell means
  • marginal means (row and column means)
  • effect sizes (differences between marginal means)
  • effect sizes (deviations from overall mean)

In order to achieve this wizardry, we must indicate which combination of the parameter estimates are relevant for each of the derived estimates required. And this is where contrast coefficients come in. The contrast coefficients are a set of numbers (equal in length to the number of model parameters) that determine the combinations and weights of the model parameters that are appropriate for each of the derived estimates. Hence when the set of model parameters are multiplied by a single set of contrast coefficients and then summed up, the result is a new derived estimate.

By way of example, the following contrast would specify that the derived estimate should include only (and all of) the first model parameter (and thus, according to the table above, estimate the mean of the cell with FactA level a1 and FactB level b1). In R, %*% is a special operator that performs matrix multiplication (each model parameter is multiplied by the corresponding contrast coefficient and the products are summed).

> fish1.lm$coef

    (Intercept)         FactAa2         FactAa3         FactAa4         FactAa5 
     40.2376313     -11.5545886      -6.0347694       5.8195671      10.1784995 
        FactBb2         FactBb3 FactAa2:FactBb2 FactAa3:FactBb2 FactAa4:FactBb2 
      5.3134636      11.3913817       0.7605701       4.3804371      -0.7438559 
FactAa5:FactBb2 FactAa2:FactBb3 FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3 
      5.2392908       3.7653208      -1.6878842       0.3540679      -4.5400308 

> c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)

 [1] 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0

> fish1.lm$coef %*% c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)

         [,1]
[1,] 40.23763

In the above, we indicated that only the first model parameter should be included in deriving the specified estimate, as all the others parameters are multiplied by zeros.

We can calculate multiple derived estimates at once by multiplying the model parameters by multiple sets of contrast coefficients (represented as a matrix) - that is, matrix multiplication. Hence, to also calculate the cell mean of FactA level a2 and FactB level b1 (a2b1), we would set the first (a1b1 cell mean) and second (difference between the a2b1 and a1b1 cell means) contrast coefficients to one (1) and the others to zero.

The cbind() function binds together vectors into the columns of a matrix.

> mat <- cbind(a1b1 = c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), a2b1 = c(1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))

> mat

      a1b1 a2b1
 [1,]    1    1
 [2,]    0    1
 [3,]    0    0
 [4,]    0    0
 [5,]    0    0
 [6,]    0    0
 [7,]    0    0
 [8,]    0    0
 [9,]    0    0
[10,]    0    0
[11,]    0    0
[12,]    0    0
[13,]    0    0
[14,]    0    0
[15,]    0    0

> fish1.lm$coef %*% mat

         a1b1     a2b1
[1,] 40.23763 28.68304

At this point we could go on and attempt to work out which combinations of model parameters would be required to define each of the cell means. Whilst it is reasonably straightforward to do this for cells a1b1, a2b1, a3b1, a4b1, a5b1, a1b2 and a1b3 (those down the left column and along the top row), it requires more mental gymnastics to arrive at the entire set. However, there is a function in R called model.matrix() that helps us achieve this in a relatively pain-free manner. The model.matrix() function returns the design matrix that is used in linear modelling to indicate group membership of categorical variables and, in general, provides the link between the observed data (response) and the linear combination of predictor variables.

As just alluded to, the design matrix returned by model.matrix() has one row per individual observation. However, we are not interested in estimating ('predicting') each observation; we want to predict the means of groups of observations. Hence, we need to reduce the matrix down to the level of the groups.

Show code for entire design matrix

> model.matrix(fish1.lm)

   (Intercept) FactAa2 FactAa3 FactAa4 FactAa5 FactBb2 FactBb3 FactAa2:FactBb2
1            1       0       0       0       0       0       0               0
2            1       0       0       0       0       0       0               0
3            1       0       0       0       0       0       0               0
4            1       0       0       0       0       0       0               0
5            1       0       0       0       0       1       0               0
6            1       0       0       0       0       1       0               0
7            1       0       0       0       0       1       0               0
8            1       0       0       0       0       1       0               0
9            1       0       0       0       0       0       1               0
10           1       0       0       0       0       0       1               0
11           1       0       0       0       0       0       1               0
12           1       0       0       0       0       0       1               0
13           1       1       0       0       0       0       0               0
14           1       1       0       0       0       0       0               0
15           1       1       0       0       0       0       0               0
16           1       1       0       0       0       0       0               0
17           1       1       0       0       0       1       0               1
18           1       1       0       0       0       1       0               1
19           1       1       0       0       0       1       0               1
20           1       1       0       0       0       1       0               1
21           1       1       0       0       0       0       1               0
22           1       1       0       0       0       0       1               0
23           1       1       0       0       0       0       1               0
24           1       1       0       0       0       0       1               0
25           1       0       1       0       0       0       0               0
26           1       0       1       0       0       0       0               0
27           1       0       1       0       0       0       0               0
28           1       0       1       0       0       0       0               0
29           1       0       1       0       0       1       0               0
30           1       0       1       0       0       1       0               0
31           1       0       1       0       0       1       0               0
32           1       0       1       0       0       1       0               0
33           1       0       1       0       0       0       1               0
34           1       0       1       0       0       0       1               0
35           1       0       1       0       0       0       1               0
36           1       0       1       0       0       0       1               0
37           1       0       0       1       0       0       0               0
38           1       0       0       1       0       0       0               0
39           1       0       0       1       0       0       0               0
40           1       0       0       1       0       0       0               0
41           1       0       0       1       0       1       0               0
42           1       0       0       1       0       1       0               0
43           1       0       0       1       0       1       0               0
44           1       0       0       1       0       1       0               0
45           1       0       0       1       0       0       1               0
46           1       0       0       1       0       0       1               0
47           1       0       0       1       0       0       1               0
48           1       0       0       1       0       0       1               0
49           1       0       0       0       1       0       0               0
50           1       0       0       0       1       0       0               0
51           1       0       0       0       1       0       0               0
52           1       0       0       0       1       0       0               0
53           1       0       0       0       1       1       0               0
54           1       0       0       0       1       1       0               0
55           1       0       0       0       1       1       0               0
56           1       0       0       0       1       1       0               0
57           1       0       0       0       1       0       1               0
58           1       0       0       0       1       0       1               0
59           1       0       0       0       1       0       1               0
60           1       0       0       0       1       0       1               0
   FactAa3:FactBb2 FactAa4:FactBb2 FactAa5:FactBb2 FactAa2:FactBb3
1                0               0               0               0
2                0               0               0               0
3                0               0               0               0
4                0               0               0               0
5                0               0               0               0
6                0               0               0               0
7                0               0               0               0
8                0               0               0               0
9                0               0               0               0
10               0               0               0               0
11               0               0               0               0
12               0               0               0               0
13               0               0               0               0
14               0               0               0               0
15               0               0               0               0
16               0               0               0               0
17               0               0               0               0
18               0               0               0               0
19               0               0               0               0
20               0               0               0               0
21               0               0               0               1
22               0               0               0               1
23               0               0               0               1
24               0               0               0               1
25               0               0               0               0
26               0               0               0               0
27               0               0               0               0
28               0               0               0               0
29               1               0               0               0
30               1               0               0               0
31               1               0               0               0
32               1               0               0               0
33               0               0               0               0
34               0               0               0               0
35               0               0               0               0
36               0               0               0               0
37               0               0               0               0
38               0               0               0               0
39               0               0               0               0
40               0               0               0               0
41               0               1               0               0
42               0               1               0               0
43               0               1               0               0
44               0               1               0               0
45               0               0               0               0
46               0               0               0               0
47               0               0               0               0
48               0               0               0               0
49               0               0               0               0
50               0               0               0               0
51               0               0               0               0
52               0               0               0               0
53               0               0               1               0
54               0               0               1               0
55               0               0               1               0
56               0               0               1               0
57               0               0               0               0
58               0               0               0               0
59               0               0               0               0
60               0               0               0               0
   FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3
1                0               0               0
2                0               0               0
3                0               0               0
4                0               0               0
5                0               0               0
6                0               0               0
7                0               0               0
8                0               0               0
9                0               0               0
10               0               0               0
11               0               0               0
12               0               0               0
13               0               0               0
14               0               0               0
15               0               0               0
16               0               0               0
17               0               0               0
18               0               0               0
19               0               0               0
20               0               0               0
21               0               0               0
22               0               0               0
23               0               0               0
24               0               0               0
25               0               0               0
26               0               0               0
27               0               0               0
28               0               0               0
29               0               0               0
30               0               0               0
31               0               0               0
32               0               0               0
33               1               0               0
34               1               0               0
35               1               0               0
36               1               0               0
37               0               0               0
38               0               0               0
39               0               0               0
40               0               0               0
41               0               0               0
42               0               0               0
43               0               0               0
44               0               0               0
45               0               1               0
46               0               1               0
47               0               1               0
48               0               1               0
49               0               0               0
50               0               0               0
51               0               0               0
52               0               0               0
53               0               0               0
54               0               0               0
55               0               0               0
56               0               0               0
57               0               0               1
58               0               0               1
59               0               0               1
60               0               0               1
attr(,"assign")
 [1] 0 1 1 1 1 2 2 3 3 3 3 3 3 3 3
attr(,"contrasts")
attr(,"contrasts")$FactA
[1] "contr.treatment"
attr(,"contrasts")$FactB
[1] "contr.treatment"

There are a couple of ways that we could reduce this matrix down to the level of the groups:
  • As the model contains only categorical predictors, we can just return a unique version (one without duplicates)

    > mat <- unique(model.matrix(fish1.lm))

    > head(mat)

       (Intercept) FactAa2 FactAa3 FactAa4 FactAa5 FactBb2 FactBb3 FactAa2:FactBb2
    1            1       0       0       0       0       0       0               0
    5            1       0       0       0       0       1       0               0
    9            1       0       0       0       0       0       1               0
    13           1       1       0       0       0       0       0               0
    17           1       1       0       0       0       1       0               1
    21           1       1       0       0       0       0       1               0
       FactAa3:FactBb2 FactAa4:FactBb2 FactAa5:FactBb2 FactAa2:FactBb3
    1                0               0               0               0
    5                0               0               0               0
    9                0               0               0               0
    13               0               0               0               0
    17               0               0               0               0
    21               0               0               0               1
       FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3
    1                0               0               0
    5                0               0               0
    9                0               0               0
    13               0               0               0
    17               0               0               0
    21               0               0               0

  • A more universal technique is to aggregate (summarize) the design matrix according to the factor levels. This technique also allows us to generate contrasts for marginal means and handles models that also include continuous covariates.
    # Convert the design matrix into a data frame and
    # split it into subsets according to the levels of FactA and FactB in the fish1 data frame.
    # Apply the mean function to each column of these subsets and
    # combine the means from each subset into a single data frame.
    # Retain all but the first two columns (which correspond to the splitting variables that are
    # appended onto the data frame).
    # It is then vitally important that the resulting data frame be converted into a matrix to allow
    # proper matrix multiplication.

    > library(plyr)

    > mat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactA +
    +     fish1$FactB, function(df) mean(df))[, -1:-2]

    > mat <- as.matrix(mat)

    > head(mat)

         (Intercept) FactAa2 FactAa3 FactAa4 FactAa5 FactBb2 FactBb3
    [1,]           1       0       0       0       0       0       0
    [2,]           1       0       0       0       0       1       0
    [3,]           1       0       0       0       0       0       1
    [4,]           1       1       0       0       0       0       0
    [5,]           1       1       0       0       0       1       0
    [6,]           1       1       0       0       0       0       1
         FactAa2:FactBb2 FactAa3:FactBb2 FactAa4:FactBb2 FactAa5:FactBb2
    [1,]               0               0               0               0
    [2,]               0               0               0               0
    [3,]               0               0               0               0
    [4,]               0               0               0               0
    [5,]               1               0               0               0
    [6,]               0               0               0               0
         FactAa2:FactBb3 FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3
    [1,]               0               0               0               0
    [2,]               0               0               0               0
    [3,]               0               0               0               0
    [4,]               0               0               0               0
    [5,]               0               0               0               0
    [6,]               1               0               0               0

Hence, the entire contrast matrix for deriving cell mean estimates would be as follows (note that it is displayed transposed to make it more readable and to facilitate correct matrix multiplication).

a1b1 a1b2 a1b3 a2b1 a2b2 a2b3 a3b1 a3b2 a3b3 a4b1 a4b2 a4b3 a5b1 a5b2 a5b3
µ 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
β1 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
β2 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
β3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00
β4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00
β5 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00
β6 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00
β7 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
β8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
β9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00
β10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00
β11 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
β12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
β13 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00
β14 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00

The estimated population cell means are then derived by multiplying the model parameters by this contrast matrix (note that the matrix generated above needs to be transposed so that the contrasts for each derived estimate are arranged into columns):

> fish1.lm$coef %*% t(mat)

         [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
[1,] 40.23763 45.55109 51.62901 28.68304 34.75708 43.83975 34.20286 43.89676
         [,9]   [,10]    [,11]    [,12]    [,13]    [,14]    [,15]
[1,] 43.90636 46.0572 50.62681 57.80265 50.41613 60.96889 57.26748

We can make the presentation of these numbers a little more palatable by arranging them in a table (matrix) and labeling the rows and columns.

> modelMeans <- matrix(fish1.lm$coef %*% t(mat), ncol = length(levels(fish1$FactB)),
+     byrow = T)

> dimnames(modelMeans) <- list(levels(fish1$FactA), levels(fish1$FactB))

> modelMeans

         b1       b2       b3
a1 40.23763 45.55109 51.62901
a2 28.68304 34.75708 43.83975
a3 34.20286 43.89676 43.90636
a4 46.05720 50.62681 57.80265
a5 50.41613 60.96889 57.26748

Here is a side-by-side comparison of the theoretical cell means, the raw sample cell means and the model derived cell means:

Theoretical cell means

   b1    b2    b3
a1 40.00 45.00 50.00
a2 30.00 33.00 44.00
a3 35.00 43.00 45.00
a4 45.00 50.00 58.00
a5 50.00 59.00 58.00

Sample cell means

   b1    b2    b3
a1 40.24 45.55 51.63
a2 28.68 34.76 43.84
a3 34.20 43.90 43.91
a4 46.06 50.63 57.80
a5 50.42 60.97 57.27

Model derived cell means

   b1    b2    b3
a1 40.24 45.55 51.63
a2 28.68 34.76 43.84
a3 34.20 43.90 43.91
a4 46.06 50.63 57.80
a5 50.42 60.97 57.27

Although this might seem like a tedious way to calculate cell means, keep in mind that this is an example of a simple system. The same framework can be applied to more complex designs (such as nested, blocking and repeated measures - mixed effects - models) for which simple sample means are unlikely to be as good at estimating true population means as estimates that are informed by all the FactA by FactB trends across nesting levels. Furthermore, the model basis (and contrast sets) allows us to get estimates of standard deviations, standard errors and confidence intervals around the derived estimates.

Conditions

In order to make use of matrix algebra, the following formatting conditions must be adhered to:
  • the contrast matrix MUST be a matrix, not a data frame
  • each column of the matrix must represent a single derived parameter (contrasts are in columns)
  • there must be exactly the same number of rows as there are parameters in the model
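
As a quick sketch of these conditions in code (assuming cmat is whatever contrast matrix is being prepared and fish1.lm is the fitted model):

```r
# coerce to a genuine matrix (ddply and friends return data frames) and
# confirm there is one row per model parameter before any multiplication
cmat <- as.matrix(cmat)
stopifnot(is.matrix(cmat), nrow(cmat) == length(coef(fish1.lm)))
```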

Standard deviations, standard errors and confidence intervals

Multiplying the contrast matrix by the model parameters derives new parameter estimates (as we have just seen). Similarly, pre- and post-multiplying the model variance-covariance matrix by the same contrast matrix derives the variances of those estimates (and thus their standard errors and confidence intervals). The variance-covariance matrix contains the variances of the estimated model parameters along the diagonal and the covariances between each pair of parameters in the off-diagonals.

In a normal distribution, the mean plus or minus one standard deviation encompasses approximately 68% of the values, and the mean plus or minus approximately two standard deviations encompasses 95% of the values. More precisely, 95% of a normal distribution lies within the following numbers of standard deviations of the mean:

> qnorm(c(0.025, 0.975))

[1] -1.959964  1.959964

Therefore, confidence intervals can be calculated from standard errors according to the following equation:

95% CI = μ ± 1.96 × SE
 

Rather than produce the standard errors and confidence intervals in isolation, it is more useful to combine them with the derived estimates (cell means in this case) into a single table.

In the following code snippet, the function outer() multiplies each of the elements of one array by each of the elements of another array; this is called the outer product of the two arrays. Here, outer() is used to multiply each of the standard error estimates by the lower and upper critical values for a 95% confidence interval from a normal distribution.
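
As a minimal standalone illustration of outer() (the first vector here is just an arbitrary pair of standard errors):

```r
# each element of the first vector is multiplied by each element of the
# second, giving one row per standard error and one column per critical value
outer(c(10, 20), qnorm(c(0.025, 0.975)))
# first row: approximately -19.6 and 19.6; second row: -39.2 and 39.2
```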

> modelMeans <- fish1.lm$coef %*% t(mat)

> se <- sqrt(diag(mat %*% vcov(fish1.lm) %*% t(mat)))

> ci <- as.numeric(modelMeans) + outer(se, qnorm(c(0.025, 0.975)))

> colnames(ci) <- c("lwr", "upr")

> mat2 <- cbind(mu = as.numeric(modelMeans), se = se, ci)

> rownames(mat2) <- nms

> mat2

           mu       se      lwr      upr
a1b1 40.23763 1.350002 37.59168 42.88359
a1b2 45.55109 1.350002 42.90514 48.19705
a1b3 51.62901 1.350002 48.98306 54.27497
a2b1 28.68304 1.350002 26.03709 31.32900
a2b2 34.75708 1.350002 32.11112 37.40303
a2b3 43.83975 1.350002 41.19379 46.48570
a3b1 34.20286 1.350002 31.55691 36.84882
a3b2 43.89676 1.350002 41.25081 46.54272
a3b3 43.90636 1.350002 41.26040 46.55231
a4b1 46.05720 1.350002 43.41124 48.70315
a4b2 50.62681 1.350002 47.98085 53.27276
a4b3 57.80265 1.350002 55.15669 60.44860
a5b1 50.41613 1.350002 47.77018 53.06209
a5b2 60.96889 1.350002 58.32293 63.61484
a5b3 57.26748 1.350002 54.62153 59.91344

There are a number of wrappers that can be used to perform the above sequence of functions. These include:

  • predict() - most modelling functions have a predict() method. Unfortunately, standard errors (and therefore confidence intervals) are only supplied for simple linear models.
  • esticon() - in the doBy package. Given the model (lm, glm, lme) and the contrast matrix, this function will calculate the estimates and their standard errors (confidence intervals).
  • popMeans() - also in the doBy package. This is a convenient function for calculating population means (marginal or otherwise) from lm, glm and lme models (only).

As an example...

> library(doBy)

> popMeans(fish1.lm, c("FactB", "FactA"))

   beta0 Estimate Std.Error  t.value DF Pr(>|t|)    Lower    Upper
1      0 40.23763  1.350002 29.80561 45        0 37.51859 42.95667
2      0 45.55109  1.350002 33.74151 45        0 42.83205 48.27014
3      0 51.62901  1.350002 38.24366 45        0 48.90997 54.34806
4      0 28.68304  1.350002 21.24667 45        0 25.96400 31.40209
5      0 34.75708  1.350002 25.74595 45        0 32.03803 37.47612
6      0 43.83975  1.350002 32.47384 45        0 41.12070 46.55879
7      0 34.20286  1.350002 25.33542 45        0 31.48382 36.92191
8      0 43.89676  1.350002 32.51608 45        0 41.17772 46.61581
9      0 43.90636  1.350002 32.52319 45        0 41.18732 46.62540
10     0 46.05720  1.350002 34.11640 45        0 43.33816 48.77624
11     0 50.62681  1.350002 37.50129 45        0 47.90776 53.34585
12     0 57.80265  1.350002 42.81672 45        0 55.08360 60.52169
13     0 50.41613  1.350002 37.34523 45        0 47.69709 53.13517
14     0 60.96889  1.350002 45.16208 45        0 58.24984 63.68793
15     0 57.26748  1.350002 42.42030 45        0 54.54844 59.98652

Note that there is a slight difference between the confidence intervals that we calculated manually and those calculated by the popMeans() function. When we calculated the confidence intervals manually, we defined the 95% confidence intervals from a normal distribution. By contrast, the popMeans() function defines the 95% confidence intervals from a t-distribution. For infinite degrees of freedom, the normal and t-distributions are identical. However, for smaller degrees of freedom (sample sizes), the t-distribution is flatter and wider (and arguably a better reflection of the actual distribution). Consequently, the t-distribution might be considered more appropriate for models with small sample sizes.

Therefore, confidence intervals based on the t-distribution can be calculated from standard errors according to the following equation:

95% CI = μ ± t0.05,df × SE
 
where df is the t-distribution's degrees of freedom.
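
To see the size of the difference, the two critical values can be compared directly (this model has 45 residual degrees of freedom):

```r
# two-tailed 95% critical values: the t value is slightly larger than the
# normal value, so t-based intervals are slightly wider
qnorm(0.975)        # approximately 1.96
qt(0.975, df = 45)  # approximately 2.01
```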

To achieve the same derived confidence intervals manually:

> mat <- unique(model.matrix(fish1.lm))

> modelMeans <- fish1.lm$coef %*% t(mat)

> se <- sqrt(diag(mat %*% vcov(fish1.lm) %*% t(mat)))

> ci <- as.numeric(modelMeans) + outer(se, qt(df = fish1.lm$df,
+     c(0.025, 0.975)))

> colnames(ci) <- c("lwr", "upr")

> mat2 <- cbind(mu = as.numeric(modelMeans), se = se, ci)

> rownames(mat2) <- nms

> mat2

           mu       se      lwr      upr
a1b1 40.23763 1.350002 37.51859 42.95667
a1b2 45.55109 1.350002 42.83205 48.27014
a1b3 51.62901 1.350002 48.90997 54.34806
a2b1 28.68304 1.350002 25.96400 31.40209
a2b2 34.75708 1.350002 32.03803 37.47612
a2b3 43.83975 1.350002 41.12070 46.55879
a3b1 34.20286 1.350002 31.48382 36.92191
a3b2 43.89676 1.350002 41.17772 46.61581
a3b3 43.90636 1.350002 41.18732 46.62540
a4b1 46.05720 1.350002 43.33816 48.77624
a4b2 50.62681 1.350002 47.90776 53.34585
a4b3 57.80265 1.350002 55.08360 60.52169
a5b1 50.41613 1.350002 47.69709 53.13517
a5b2 60.96889 1.350002 58.24984 63.68793
a5b3 57.26748 1.350002 54.54844 59.98652

mu se lwr upr
a1b1 40.24 1.35 37.52 42.96
a1b2 45.55 1.35 42.83 48.27
a1b3 51.63 1.35 48.91 54.35
a2b1 28.68 1.35 25.96 31.40
a2b2 34.76 1.35 32.04 37.48
a2b3 43.84 1.35 41.12 46.56
a3b1 34.20 1.35 31.48 36.92
a3b2 43.90 1.35 41.18 46.62
a3b3 43.91 1.35 41.19 46.63
a4b1 46.06 1.35 43.34 48.78
a4b2 50.63 1.35 47.91 53.35
a4b3 57.80 1.35 55.08 60.52
a5b1 50.42 1.35 47.70 53.14
a5b2 60.97 1.35 58.25 63.69
a5b3 57.27 1.35 54.55 59.99

Interaction plot

If we add columns to this frame that indicate the levels of each of the factors, we can then create an interaction plot that incorporates confidence intervals in addition to the cell mean estimates.

> mat2 <- data.frame(with(fish1, expand.grid(FactB = levels(FactB),
+     FactA = levels(FactA))), mat2)

> mat2

     FactB FactA       mu       se      lwr      upr
a1b1    b1    a1 40.23763 1.350002 37.51859 42.95667
a1b2    b2    a1 45.55109 1.350002 42.83205 48.27014
a1b3    b3    a1 51.62901 1.350002 48.90997 54.34806
a2b1    b1    a2 28.68304 1.350002 25.96400 31.40209
a2b2    b2    a2 34.75708 1.350002 32.03803 37.47612
a2b3    b3    a2 43.83975 1.350002 41.12070 46.55879
a3b1    b1    a3 34.20286 1.350002 31.48382 36.92191
a3b2    b2    a3 43.89676 1.350002 41.17772 46.61581
a3b3    b3    a3 43.90636 1.350002 41.18732 46.62540
a4b1    b1    a4 46.05720 1.350002 43.33816 48.77624
a4b2    b2    a4 50.62681 1.350002 47.90776 53.34585
a4b3    b3    a4 57.80265 1.350002 55.08360 60.52169
a5b1    b1    a5 50.41613 1.350002 47.69709 53.13517
a5b2    b2    a5 60.96889 1.350002 58.24984 63.68793
a5b3    b3    a5 57.26748 1.350002 54.54844 59.98652

> par(mar = c(4, 5, 0, 0))

> lims <- with(mat2, range(c(lwr, upr)))

> plot(0, 0, xlim = c(0.5, 5.5), ylim = lims, type = "n", axes = F,
+     ann = F)

> with(mat2[mat2$FactB == "b1", ], arrows(as.numeric(FactA), lwr,
+     as.numeric(FactA), upr, ang = 90, len = 0.05, code = 3))

> lines(mu ~ FactA, data = mat2[mat2$FactB == "b1", ], lty = 1)

> points(mu ~ FactA, data = mat2[mat2$FactB == "b1", ], pch = 21,
+     bg = "black", col = "black")

> with(mat2[mat2$FactB == "b2", ], arrows(as.numeric(FactA), lwr,
+     as.numeric(FactA), upr, ang = 90, len = 0.05, code = 3))

> lines(mu ~ FactA, data = mat2[mat2$FactB == "b2", ], lty = 2)

> points(mu ~ FactA, data = mat2[mat2$FactB == "b2", ], pch = 21,
+     bg = "grey", col = "black")

> with(mat2[mat2$FactB == "b3", ], arrows(as.numeric(FactA), lwr,
+     as.numeric(FactA), upr, ang = 90, len = 0.05, code = 3))

> lines(mu ~ FactA, data = mat2[mat2$FactB == "b3", ], lty = 3)

> points(mu ~ FactA, data = mat2[mat2$FactB == "b3", ], pch = 21,
+     bg = "white", col = "black")

> axis(1, at = 1:5, lab = levels(mat2$FactA))

> mtext("FactA", 1, cex = 1.25, line = 3)

> axis(2, las = 1)

> mtext("Mean response", 2, cex = 1.25, line = 3)

> legend("bottomright", title = "FactB", leg = c("b1", "b2", "b3"),
+     bty = "n", lty = 1:3, pch = 21, pt.bg = c("black", "grey", "white"))

> box(bty = "l")

Unfortunately, for more complex designs (particularly those that are hierarchical), identifying or even approximating sensible estimates of degrees of freedom (and thus the t-distributions from which to estimate confidence intervals) becomes increasingly problematic (and, some argue, borders on guesswork). As a result, more complex models do not provide any estimates of residual degrees of freedom, and thus it is necessary to either base confidence intervals on the normal distribution or provide your own degrees of freedom (and associated justification). Furthermore, as a result of these complications, the popMeans() function only supports models fitted via lm.

Marginal means

The true flexibility of contrasts lies in the ability to derive other estimates, such as the marginal means for the levels of each of the factors. The contrasts for calculating the marginal means of a factor are the contrast values for each parameter averaged over the levels of the other factor(s).

  • FactA marginal means

    > library(plyr)

    > cmat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactA,
    +     function(df) mean(df))

    > cmat

      fish1$FactA (Intercept) FactAa2 FactAa3 FactAa4 FactAa5   FactBb2   FactBb3
    1          a1           1       0       0       0       0 0.3333333 0.3333333
    2          a2           1       1       0       0       0 0.3333333 0.3333333
    3          a3           1       0       1       0       0 0.3333333 0.3333333
    4          a4           1       0       0       1       0 0.3333333 0.3333333
    5          a5           1       0       0       0       1 0.3333333 0.3333333
      FactAa2:FactBb2 FactAa3:FactBb2 FactAa4:FactBb2 FactAa5:FactBb2
    1       0.0000000       0.0000000       0.0000000       0.0000000
    2       0.3333333       0.0000000       0.0000000       0.0000000
    3       0.0000000       0.3333333       0.0000000       0.0000000
    4       0.0000000       0.0000000       0.3333333       0.0000000
    5       0.0000000       0.0000000       0.0000000       0.3333333
      FactAa2:FactBb3 FactAa3:FactBb3 FactAa4:FactBb3 FactAa5:FactBb3
    1       0.0000000       0.0000000       0.0000000       0.0000000
    2       0.3333333       0.0000000       0.0000000       0.0000000
    3       0.0000000       0.3333333       0.0000000       0.0000000
    4       0.0000000       0.0000000       0.3333333       0.0000000
    5       0.0000000       0.0000000       0.0000000       0.3333333

    > class(cmat)

    [1] "data.frame"


    # We need to convert this into a matrix and remove the first column (which was only used to help aggregate)
    # We also need to transpose the matrix such that the contrasts are in columns rather than rows

    > cmat <- t(as.matrix(cmat[, -1]))


    # Use this contrast matrix to derive new parameters and their standard errors

    > FactAs <- fish1.lm$coef %*% cmat

    > se <- sqrt(diag(t(cmat) %*% vcov(fish1.lm) %*% cmat))

    > ci <- as.numeric(FactAs) + outer(se, qt(df = fish1.lm$df, c(0.025,
    +     0.975)))

    > colnames(ci) <- c("lwr", "upr")

    > mat2 <- cbind(mu = as.numeric(FactAs), se, ci)

    > rownames(mat2) <- levels(fish1$FactA)

    Contrast matrix

    1 2 3 4 5
    (Intercept) 1.00 1.00 1.00 1.00 1.00
    FactAa2 0.00 1.00 0.00 0.00 0.00
    FactAa3 0.00 0.00 1.00 0.00 0.00
    FactAa4 0.00 0.00 0.00 1.00 0.00
    FactAa5 0.00 0.00 0.00 0.00 1.00
    FactBb2 0.33 0.33 0.33 0.33 0.33
    FactBb3 0.33 0.33 0.33 0.33 0.33
    FactAa2:FactBb2 0.00 0.33 0.00 0.00 0.00
    FactAa3:FactBb2 0.00 0.00 0.33 0.00 0.00
    FactAa4:FactBb2 0.00 0.00 0.00 0.33 0.00
    FactAa5:FactBb2 0.00 0.00 0.00 0.00 0.33
    FactAa2:FactBb3 0.00 0.33 0.00 0.00 0.00
    FactAa3:FactBb3 0.00 0.00 0.33 0.00 0.00
    FactAa4:FactBb3 0.00 0.00 0.00 0.33 0.00
    FactAa5:FactBb3 0.00 0.00 0.00 0.00 0.33

    Marginal means

    mu se lwr upr
    a1 45.81 0.78 44.24 47.38
    a2 35.76 0.78 34.19 37.33
    a3 40.67 0.78 39.10 42.24
    a4 51.50 0.78 49.93 53.07
    a5 56.22 0.78 54.65 57.79

  • FactB marginal means

    > cmat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactB,
    +     function(df) mean(df))

    > cmat <- as.matrix(cmat[, -1])

    > FactBs <- fish1.lm$coef %*% t(cmat)

    > se <- sqrt(diag(cmat %*% vcov(fish1.lm) %*% t(cmat)))

    > ci <- as.numeric(FactBs) + outer(se, qt(df = fish1.lm$df, c(0.025,
    +     0.975)))

    > colnames(ci) <- c("lwr", "upr")

    > mat2 <- cbind(mu = as.numeric(FactBs), se, ci)

    > rownames(mat2) <- levels(fish1$FactB)

    Contrast matrix

    1 2 3
    (Intercept) 1.00 1.00 1.00
    FactAa2 0.20 0.20 0.20
    FactAa3 0.20 0.20 0.20
    FactAa4 0.20 0.20 0.20
    FactAa5 0.20 0.20 0.20
    FactBb2 0.00 1.00 0.00
    FactBb3 0.00 0.00 1.00
    FactAa2:FactBb2 0.00 0.20 0.00
    FactAa3:FactBb2 0.00 0.20 0.00
    FactAa4:FactBb2 0.00 0.20 0.00
    FactAa5:FactBb2 0.00 0.20 0.00
    FactAa2:FactBb3 0.00 0.00 0.20
    FactAa3:FactBb3 0.00 0.00 0.20
    FactAa4:FactBb3 0.00 0.00 0.20
    FactAa5:FactBb3 0.00 0.00 0.20

    Marginal means

    mu se lwr upr
    b1 39.92 0.60 38.70 41.14
    b2 47.16 0.60 45.94 48.38
    b3 50.89 0.60 49.67 52.11

  • FactA by FactB interaction marginal means (note: as there are only two factors, this gives the marginal means of the highest order interaction, i.e. the cell means)

    > cmat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactA +
    +     fish1$FactB, function(df) mean(df))

    > cmat <- as.matrix(cmat[, -1:-2])

    > FactAFactBs <- fish1.lm$coef %*% t(cmat)

    > se <- sqrt(diag(cmat %*% vcov(fish1.lm) %*% t(cmat)))

    > ci <- as.numeric(FactAFactBs) + outer(se, qt(df = fish1.lm$df,
    +     c(0.025, 0.975)))

    > colnames(ci) <- c("lwr", "upr")

    > mat2 <- cbind(mu = as.numeric(FactAFactBs), se, ci)

    > rownames(mat2) <- interaction(with(fish1, expand.grid(levels(FactB),
    +     levels(FactA))[, c(2, 1)]), sep = "")

    Contrast matrix

    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    (Intercept) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
    FactAa2 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa3 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00
    FactAa5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00
    FactBb2 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00
    FactBb3 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00
    FactAa2:FactBb2 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa3:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa4:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00
    FactAa5:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00
    FactAa2:FactBb3 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa3:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
    FactAa4:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00
    FactAa5:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00

    Marginal means

    mu se lwr upr
    a1b1 40.24 1.35 37.52 42.96
    a1b2 45.55 1.35 42.83 48.27
    a1b3 51.63 1.35 48.91 54.35
    a2b1 28.68 1.35 25.96 31.40
    a2b2 34.76 1.35 32.04 37.48
    a2b3 43.84 1.35 41.12 46.56
    a3b1 34.20 1.35 31.48 36.92
    a3b2 43.90 1.35 41.18 46.62
    a3b3 43.91 1.35 41.19 46.63
    a4b1 46.06 1.35 43.34 48.78
    a4b2 50.63 1.35 47.91 53.35
    a4b3 57.80 1.35 55.08 60.52
    a5b1 50.42 1.35 47.70 53.14
    a5b2 60.97 1.35 58.25 63.69
    a5b3 57.27 1.35 54.55 59.99

Again, for simple models, it is possible to use the popMeans function to get these marginal means:

> popMeans(fish1.lm, c("FactA"))

  beta0 Estimate Std.Error  t.value DF Pr(>|t|)    Lower    Upper
1     0 45.80591 0.7794239 58.76894 45        0 44.23607 47.37575
2     0 35.75995 0.7794239 45.87998 45        0 34.19011 37.32980
3     0 40.66866 0.7794239 52.17785 45        0 39.09882 42.23850
4     0 51.49555 0.7794239 66.06873 45        0 49.92571 53.06539
5     0 56.21750 0.7794239 72.12699 45        0 54.64766 57.78734

> popMeans(fish1.lm, c("FactB"))

  beta0 Estimate Std.Error  t.value DF Pr(>|t|)    Lower    Upper
1     0 39.91937 0.6037392 66.12023 45        0 38.70338 41.13537
2     0 47.16012 0.6037392 78.11341 45        0 45.94413 48.37612
3     0 50.88905 0.6037392 84.28979 45        0 49.67306 52.10504

Not surprisingly, for simple designs (such as simple linear models), members of the R community have written functions that derive the marginal means as well as the standard errors of differences between marginal means. One such function is model.tables().

> model.tables(aov(fish1.lm), type = "means", se = T)

Tables of means
Grand mean
         
45.98952 
 FactA 
FactA
   a1    a2    a3    a4    a5 
45.81 35.76 40.67 51.50 56.22 
 FactB 
FactB
   b1    b2    b3 
39.92 47.16 50.89 
 FactA:FactB 
     FactB
FactA b1    b2    b3   
   a1 40.24 45.55 51.63
   a2 28.68 34.76 43.84
   a3 34.20 43.90 43.91
   a4 46.06 50.63 57.80
   a5 50.42 60.97 57.27
Standard errors for differences of means
         FactA  FactB FactA:FactB
        1.1023 0.8538      1.9092
replic.     12     20           4

Effect sizes - differences between means

To derive effect sizes, you simply subtract one contrast set from another and use the result as the contrast for the derived parameter.

Effect sizes - for simple main effects

For example, to derive the effect size (difference between two parameters) of a1 vs a2 for the b1 level of Factor B:

> mat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactA +
+     fish1$FactB, function(df) mean(df))

> mat <- as.matrix(mat[, -1:-2])

> mat1 <- mat[1, ] - mat[4, ]

> effectSizes <- fish1.lm$coef %*% mat1

> se <- sqrt(diag(mat1 %*% vcov(fish1.lm) %*% mat1))

> ci <- as.numeric(effectSizes) + outer(se, qt(df = fish1.lm$df,
+     c(0.025, 0.975)))

> colnames(ci) <- c("lwr", "upr")

> mat2 <- cbind(mu = as.numeric(effectSizes), se = se, ci)

> mat2

           mu       se      lwr     upr
[1,] 11.55459 1.909191 7.709281 15.3999

Contrast matrix for cell means (a1b1 & a2b1)

1 2
(Intercept) 1.00 1.00
FactAa2 0.00 1.00
FactAa3 0.00 0.00
FactAa4 0.00 0.00
FactAa5 0.00 0.00
FactBb2 0.00 0.00
FactBb3 0.00 0.00
FactAa2:FactBb2 0.00 0.00
FactAa3:FactBb2 0.00 0.00
FactAa4:FactBb2 0.00 0.00
FactAa5:FactBb2 0.00 0.00
FactAa2:FactBb3 0.00 0.00
FactAa3:FactBb3 0.00 0.00
FactAa4:FactBb3 0.00 0.00
FactAa5:FactBb3 0.00 0.00


Contrast matrix for a1b1 - a2b1

A
(Intercept) 0.00
FactAa2 -1.00
FactAa3 0.00
FactAa4 0.00
FactAa5 0.00
FactBb2 0.00
FactBb3 0.00
FactAa2:FactBb2 0.00
FactAa3:FactBb2 0.00
FactAa4:FactBb2 0.00
FactAa5:FactBb2 0.00
FactAa2:FactBb3 0.00
FactAa3:FactBb3 0.00
FactAa4:FactBb3 0.00
FactAa5:FactBb3 0.00


Effect size for a1b1 - a2b1

mu se lwr upr
1 11.55 1.91 7.71 15.40

Whilst it is possible to go through and calculate each of the remaining effects individually, we can instead loop through the factor levels and create a matrix that contains all the appropriate contrasts.

Effect sizes - for marginal effects (no interactions)

Effect sizes of FactA marginal means

> cmat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactA,
+     function(df) mean(df))

> cmat <- as.matrix(cmat[, -1])

> cm <- NULL

> for (i in 1:(nrow(cmat) - 1)) {
+     for (j in (i + 1):nrow(cmat)) {
+         nm <- paste(levels(fish1$FactA)[j], "-", levels(fish1$FactA)[i])
+         eval(parse(text = paste("cm <- cbind(cm,'", nm, "'=c(cmat[j,]-cmat[i,]))")))
+     }
+ }

> FactAEffects <- fish1.lm$coef %*% cm

> se <- sqrt(diag(t(cm) %*% vcov(fish1.lm) %*% cm))

> ci <- as.numeric(FactAEffects) + outer(se, qnorm(c(0.025, 0.975)))

> colnames(ci) <- c("lwr", "upr")

> mat2 <- cbind(mu = as.numeric(FactAEffects), se, ci)

Contrast matrix

a2 - a1 a3 - a1 a4 - a1 a5 - a1 a3 - a2 a4 - a2 a5 - a2 a4 - a3 a5 - a3 a5 - a4
(Intercept) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa2 1.00 0.00 0.00 0.00 -1.00 -1.00 -1.00 0.00 0.00 0.00
FactAa3 0.00 1.00 0.00 0.00 1.00 0.00 0.00 -1.00 -1.00 0.00
FactAa4 0.00 0.00 1.00 0.00 0.00 1.00 0.00 1.00 0.00 -1.00
FactAa5 0.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 1.00 1.00
FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa2:FactBb2 0.33 0.00 0.00 0.00 -0.33 -0.33 -0.33 0.00 0.00 0.00
FactAa3:FactBb2 0.00 0.33 0.00 0.00 0.33 0.00 0.00 -0.33 -0.33 0.00
FactAa4:FactBb2 0.00 0.00 0.33 0.00 0.00 0.33 0.00 0.33 0.00 -0.33
FactAa5:FactBb2 0.00 0.00 0.00 0.33 0.00 0.00 0.33 0.00 0.33 0.33
FactAa2:FactBb3 0.33 0.00 0.00 0.00 -0.33 -0.33 -0.33 0.00 0.00 0.00
FactAa3:FactBb3 0.00 0.33 0.00 0.00 0.33 0.00 0.00 -0.33 -0.33 0.00
FactAa4:FactBb3 0.00 0.00 0.33 0.00 0.00 0.33 0.00 0.33 0.00 -0.33
FactAa5:FactBb3 0.00 0.00 0.00 0.33 0.00 0.00 0.33 0.00 0.33 0.33

Effect sizes

mu se lwr upr
a2 - a1 -10.05 1.10 -12.21 -7.89
a3 - a1 -5.14 1.10 -7.30 -2.98
a4 - a1 5.69 1.10 3.53 7.85
a5 - a1 10.41 1.10 8.25 12.57
a3 - a2 4.91 1.10 2.75 7.07
a4 - a2 15.74 1.10 13.58 17.90
a5 - a2 20.46 1.10 18.30 22.62
a4 - a3 10.83 1.10 8.67 12.99
a5 - a3 15.55 1.10 13.39 17.71
a5 - a4 4.72 1.10 2.56 6.88

Effect sizes plot

> plot(seq(-22, 22, l = nrow(mat2)), 1:nrow(mat2), type = "n",
+     ann = F, axes = F)

> abline(v = 0, lty = 2)

> segments(mat2[, "lwr"], 1:nrow(mat2), mat2[, "upr"], 1:nrow(mat2))

> points(mat2[, "mu"], 1:nrow(mat2), pch = 16)

> text(mat2[, "lwr"], 1:nrow(mat2), lab = rownames(mat2), pos = 2)

> axis(1)

Effect sizes of FactB marginal means

> cmat <- ddply(as.data.frame(model.matrix(fish1.lm)), ~fish1$FactB,
+     function(df) mean(df))

> cmat <- as.matrix(cmat[, -1])

> cm <- NULL

> for (i in 1:(nrow(cmat) - 1)) {
+     for (j in (i + 1):nrow(cmat)) {
+         nm <- paste(levels(fish1$FactB)[j], "-", levels(fish1$FactB)[i])
+         eval(parse(text = paste("cm <- cbind(cm,'", nm, "'=c(cmat[j,]-cmat[i,]))")))
+     }
+ }

> FactBEffects <- fish1.lm$coef %*% cm

> se <- sqrt(diag(t(cm) %*% vcov(fish1.lm) %*% cm))

> ci <- as.numeric(FactBEffects) + outer(se, qnorm(c(0.025, 0.975)))

> colnames(ci) <- c("lwr", "upr")

> mat2 <- cbind(mu = as.numeric(FactBEffects), se, ci)

Contrast matrix

b2 - b1 b3 - b1 b3 - b2
(Intercept) 0.00 0.00 0.00
FactAa2 0.00 0.00 0.00
FactAa3 0.00 0.00 0.00
FactAa4 0.00 0.00 0.00
FactAa5 0.00 0.00 0.00
FactBb2 1.00 0.00 -1.00
FactBb3 0.00 1.00 1.00
FactAa2:FactBb2 0.20 0.00 -0.20
FactAa3:FactBb2 0.20 0.00 -0.20
FactAa4:FactBb2 0.20 0.00 -0.20
FactAa5:FactBb2 0.20 0.00 -0.20
FactAa2:FactBb3 0.00 0.20 0.20
FactAa3:FactBb3 0.00 0.20 0.20
FactAa4:FactBb3 0.00 0.20 0.20
FactAa5:FactBb3 0.00 0.20 0.20

Effect sizes

mu se lwr upr
b2 - b1 7.24 0.85 5.57 8.91
b3 - b1 10.97 0.85 9.30 12.64
b3 - b2 3.73 0.85 2.06 5.40

Simple main effects - effects associated with one factor at specific levels of another factor

But what do we do in the presence of an interaction, when the effect sizes of marginal means will clearly under-represent the patterns? The presence of an interaction implies that the effect sizes differ according to the combinations of the factor levels.

For example, let's explore the effect of b1 vs b3 at each level of Factor A.

> fits <- NULL
> for (fA in levels(fish1$FactA)) {
+     mat <- model.matrix(~FactA * FactB, fish1[fish1$FactA ==
+         fA & fish1$FactB %in% c("b1", "b3"), ])
+     mat <- unique(mat)
+     dat <- fish1[fish1$FactA == fA & fish1$FactB %in% c("b1",
+         "b3"), ]
+     dat$FactB <- factor(dat$FactB)
+     cm <- NULL
+     for (i in 1:(nrow(mat) - 1)) {
+         for (j in (i + 1):nrow(mat)) {
+             nm <- with(dat, paste(levels(FactB)[j], "-", levels(FactB)[i]))
+             eval(parse(text = paste("cm <- cbind(cm,'", nm, "'=c(mat[j,]-mat[i,]))")))
+         }
+     }
+     es <- fish1.lm$coef %*% cm
+     se <- sqrt(diag(t(cm) %*% vcov(fish1.lm) %*% cm))
+     ci <- as.numeric(es) + outer(se, qnorm(c(0.025, 0.975)))
+     mat2 <- cbind(mu = as.numeric(es), se, ci)
+     fits <- rbind(fits, mat2)
+ }
> colnames(fits) <- c("ES", "se", "lwr", "upr")
> rname <- rownames(fits)
> fits <- data.frame(fits)
> fits$FactA <- levels(fish1$FactA)
> fits$Comp <- factor(rname)
> fits

  ES se lwr upr FactA Comp
1 11.4 1.9 7.6 15 a1 b3 - b1
2 15.2 1.9 11.4 19 a2 b3 - b1
3 9.7 1.9 6.0 13 a3 b3 - b1
4 11.7 1.9 8.0 15 a4 b3 - b1
5 6.9 1.9 3.1 11 a5 b3 - b1
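These simple main effects can likewise be sketched with the emmeans package, conditioning the FactB comparisons on each level of FactA (assuming fish1.lm from above; the b3 - b1 rows correspond to the table just shown):

```r
## Sketch: FactB comparisons within each level of FactA,
## from the fitted model fish1.lm (assumed from earlier in the tutorial)
library(emmeans)
fish1.emm <- emmeans(fish1.lm, ~FactB | FactA)  # FactB means at each FactA level
## all pairwise FactB effects per FactA level; intervals are t-based
confint(pairs(fish1.emm, reverse = TRUE))
```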

Compare each level of FactB at each level of FactA

> oo <- outer(FactAs, FactBs, FUN = "+")/2
> mat <- unique(model.matrix(~FactA * FactB, data = fish1))
> fish1.lm$coef %*% t(mat)

      1  5  9 13 17 21 25 29 33 37 41 45 49 53 57
[1,] 40 46 52 29 35 44 34 44 44 46 51 58 50 61 57

> model.tables(aov(fish1.lm), type = "means", se = T)

Tables of means
Grand mean
         
45.98952 
 FactA 
FactA
   a1    a2    a3    a4    a5 
45.81 35.76 40.67 51.50 56.22 
 FactB 
FactB
   b1    b2    b3 
39.92 47.16 50.89 
 FactA:FactB 
     FactB
FactA b1    b2    b3   
   a1 40.24 45.55 51.63
   a2 28.68 34.76 43.84
   a3 34.20 43.90 43.91
   a4 46.06 50.63 57.80
   a5 50.42 60.97 57.27
Standard errors for differences of means
         FactA  FactB FactA:FactB
        1.1023 0.8538      1.9092
replic.     12     20           4
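The cell means reported by model.tables() can also be obtained, along with confidence intervals, via predict() (a sketch, again assuming the fitted model fish1.lm and data frame fish1 from above):

```r
## Sketch: predicted cell means for every FactA:FactB combination,
## with normal-theory confidence intervals from predict.lm()
newdat <- expand.grid(FactA = levels(fish1$FactA),
                      FactB = levels(fish1$FactB))
cbind(newdat, predict(fish1.lm, newdata = newdat, interval = "confidence"))
```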


Welcome to the end of this tutorial