# Estimating HLM Models Using R: Part 3

### Note: For a fuller treatment, download our series of lectures, *Hierarchical Linear Models*.

#### Random Coefficient Model

Next, R&B present a model in which student-level SES is included instead of average SES, and they treat the slope of student SES as random. One complication is that R&B present results after group-mean centering student SES, meaning that the average SES for each student’s school is subtracted from each student’s individual SES. Unfortunately, the meanses variable is coded -1, 0, 1 and is therefore only a rough indicator of each school’s average. To get a better estimate of the school average, one can take advantage of the aggregate function. The following illustrates one way of group-mean centering.

> grpmeans<-aggregate(hsb$ses,list(hsb$id),mean,na.rm=TRUE)
> names(grpmeans)<-c("id","grpmnses")
> hsb<-merge(hsb,grpmeans,by="id")
> hsb$cent.ses<-hsb$ses-hsb$grpmnses

The aggregate function collapses grouped cases into summary statistics. Its first argument is the variable to be collapsed, in this case the ses variable in the hsb data frame. The second argument lists the grouping variables. In this case, only the groupings defined by the id variable will be used. The next argument specifies the function to apply to each grouping. The final argument is useful in the presence of missing data (which is not the case here). It tells R to drop missing cases before taking the mean.

After estimating the group means and storing the results in an object named grpmeans, the next line supplies names to the grpmeans columns. In order for the merge to take place in the next line, it is necessary that the grouping variable have the same name in both objects that will be combined. The merge function completes the merge, and the final line adds the group-mean centered ses variable to the hsb data frame.
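As a side note, the aggregate/merge/subtract pattern above can be collapsed into one step with base R's ave function, which returns each group's mean repeated for every row. A minimal sketch on a made-up data frame (toy stands in for hsb; the values are invented for illustration):

```r
# Toy data frame standing in for hsb (made-up values)
toy <- data.frame(id  = c(1, 1, 2, 2, 2),
                  ses = c(0.2, 0.6, -0.4, 0.0, 0.4))

# ave() repeats each group's mean for every row of that group,
# so centering becomes a one-liner per column
toy$grpmnses <- ave(toy$ses, toy$id, FUN = function(x) mean(x, na.rm = TRUE))
toy$cent.ses <- toy$ses - toy$grpmnses

# Sanity check: the centered values in each group sum to (nearly) zero
tapply(toy$cent.ses, toy$id, sum)
```

Either route produces the same cent.ses variable; ave simply avoids the intermediate grpmeans object and the merge.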

The level-1 equation is the following:

$$\tag{6} Y_{ij} = \beta_{0j} + \beta_{1j}(SES) + e_{ij}$$

The intercept β0j can be modeled as a grand mean γ00 plus random error u0j. Similarly, the slope β1j can be modeled as a grand mean γ10 plus random error u1j.

$$\tag{7} \beta_{0j} = \gamma_{00} + u_{0j}$$

$$\tag{8} \beta_{1j} = \gamma_{10} + u_{1j}$$

Combining (7) and (8) into (6) produces:

$$\tag{9} Y_{ij} = \gamma_{00} + \gamma_{10}(SES) + u_{0j} + u_{1j}(SES) + e_{ij}$$
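To make the combined model concrete, one can generate data directly from equation (9). The sketch below uses invented values for the fixed effects and variance components (g00, g10, and all of the standard deviations are arbitrary, chosen only to roughly echo the scale of the hsb results):

```r
set.seed(1)
J <- 50; n <- 20                 # 50 groups of 20 students (arbitrary sizes)
g00 <- 12; g10 <- 2              # fixed effects gamma_00 and gamma_10 (illustrative)
u0  <- rnorm(J, 0, 3)            # random intercept deviations u_0j
u1  <- rnorm(J, 0, 0.9)          # random slope deviations u_1j
id  <- rep(1:J, each = n)        # group identifier for each student
ses <- rnorm(J * n)              # a centered student-level predictor

# Equation (9), term by term: grand intercept + grand slope * SES
# + group intercept deviation + group slope deviation * SES + residual
y <- g00 + g10 * ses + u0[id] + u1[id] * ses + rnorm(J * n, 0, 6)
```

Fitting the random coefficient model to data built this way should recover the generating values, up to sampling error.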

The syntax to estimate the model is the following:

> model3<-lme(mathach ~ cent.ses, random = ~ cent.ses | id, data=hsb)

The fixed effects portion is specified in the same manner as for Model 2, but the predictor has changed from mean SES to the group-mean centered version. In addition, the cent.ses variable now follows the tilde in the random effects portion of the model to specify that the slope should be random. Unlike Stata and PASW, R defaults to an unstructured covariance matrix for the random effects. This means that variance components are estimated for both the intercept and the slope as well as their covariance (R reports the correlation). The VarCorr function again recovers the variance components on both the variance and standard deviation scales.
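If the unstructured default is not wanted, nlme's pdDiag class constrains the intercept-slope covariance to zero, so only the two variance components are estimated. A sketch on simulated data (the data frame d, its variable names, and all parameter values are invented for illustration, not taken from hsb):

```r
library(nlme)

# Small simulated two-level data set (all values illustrative)
set.seed(2)
G <- 30; m <- 20
d  <- data.frame(id = rep(1:G, each = m), x = rnorm(G * m))
b0 <- rnorm(G, 0, 2)             # true random intercept deviations
b1 <- rnorm(G, 0, 1.5)           # true random slope deviations
d$y <- 12 + (2 + b1[d$id]) * d$x + b0[d$id] + rnorm(G * m, 0, 3)

# Default: unstructured (general positive-definite) covariance --
# intercept variance, slope variance, and their covariance all estimated
m.un <- lme(y ~ x, random = ~ x | id, data = d)

# Diagonal structure: pdDiag() fixes the intercept-slope covariance at zero
m.dg <- lme(y ~ x, random = list(id = pdDiag(~ x)), data = d)

# The diagonal model estimates one fewer parameter; compare the fits
anova(m.un, m.dg)
```

Because both models are fit by REML with identical fixed effects, the likelihood ratio comparison of the two covariance structures is legitimate.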

The R output is the following:

> summary(model3)
Linear mixed-effects model fit by REML
 Data: hsb
       AIC      BIC    logLik
  46726.23 46767.51 -23357.12

Random effects:
 Formula: ~cent.ses | id
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev   Corr
(Intercept) 2.946348 (Intr)
cent.ses    0.833061 0.019
Residual    6.058069

Fixed effects: mathach ~ cent.ses
                Value Std.Error   DF  t-value p-value
(Intercept) 12.636193 0.2445038 7024 51.68098       0
cent.ses     2.193196 0.1282587 7024 17.09979       0
 Correlation:
         (Intr)
cent.ses 0.009

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-3.09679618 -0.73192910  0.01855311  0.75385602  2.89923918

Number of Observations: 7185
Number of Groups: 160

> VarCorr(model3)
id = pdLogChol(cent.ses)
              Variance StdDev   Corr
(Intercept)  8.6809672 2.946348 (Intr)
cent.ses     0.6939906 0.833061 0.019
Residual    36.7001996 6.058069

These results correspond to Table 4.4 in R&B. See also the variance-covariance components at the bottom of their page 77.

The final model R&B present is an intercept- and slopes-as-outcomes model.