mlmpower: Package to Conduct Multilevel Power Analysis
The first illustration demonstrates a power analysis for a cross-sectional application of the following multilevel model. To provide a substantive context, consider a prototypical education example where students are nested in schools. Intraclass correlations for achievement-related outcomes often range between .10 and .25 (Hedges & Hedberg, 2007; Hedges & Hedberg, 2013; Sellström & Bremberg, 2006; Spybrook et al., 2011; Stockford, 2009). To accommodate uncertainty about this important parameter, the power simulations investigate intraclass correlation values of .10 and .25.
The multilevel model for the illustration is
\[ \begin{split} Y_{ij} &= \left( \beta_{0} + b_{0j} \right) + \left( \beta_{1} + b_{1j} \right)\left( X_{1ij} - {\overline{X}}_{1j} \right) + \beta_{2}\left( X_{2ij} - {\overline{X}}_{2} \right) \\ &\phantom{=}+ \ \beta_{3}\left( Z_{1j} - {\overline{Z}}_{1} \right) + \ \beta_{4}\left( Z_{2j} - {\overline{Z}}_{2} \right) + \beta_{5}\left( X_{1ij} - {\overline{X}}_{1j} \right)\left( Z_{1j} - {\overline{Z}}_{1} \right) + \varepsilon_{ij}\\[1em] \mathbf{b}_{j}\ &\sim\ N\left( 0,\mathbf{\Sigma}_{\mathbf{b}} \right)\ \ \ \ \ \mathbf{\Sigma}_{\mathbf{b}}=\begin{pmatrix} \sigma_{b_{0}}^{2} & \\ \sigma_{b_{1},b_{0}} & \sigma_{b_{1}}^{2} \\ \end{pmatrix}\mathbf{\ \ \ \ \ }\varepsilon_{ij}\ \sim\ N(0,\sigma_{\varepsilon}^{2}) \end{split} \]
where \(Y_{ij}\) is the outcome score for observation i in cluster j, \(X_{1ij}\) is a focal within-cluster predictor, \({\overline{X}}_{1j}\) is the variable’s level-2 cluster mean, \(X_{2ij}\) is a grand mean centered level-1 covariate, \(Z_{1j}\) is a level-2 moderator score for cluster j, and \(Z_{2j}\) is a level-2 covariate. Turning to the residual terms, \(b_{0j}\) and \(b_{1j}\) are between-cluster random effects that capture residual variation in the cluster-specific intercepts and slopes, and \(\varepsilon_{ij}\) is a within-cluster residual. By assumption, the random effects follow a multivariate normal distribution with a between-cluster covariance matrix \(\mathbf{\Sigma}_{\mathbf{b}}\), and within-cluster residuals are normal with constant variance \(\sigma_{\varepsilon}^{2}\).
Importantly, group mean centering \(X_{1}\) yields a pure within-cluster predictor, whereas grand mean centering \(X_{2}\) gives a predictor with two sources of variation. Although it need not be true in practice, the model assumes that the within- and between-cluster parts of any level-1 predictor with a non-zero intraclass correlation share a common slope coefficient. To simplify notation, Equation 1 can be rewritten as
\[Y_{ij} = \left( \beta_{0} + b_{0j} \right) + \left( \beta_{1} + b_{1j} \right)X_{1ij}^{w} + \beta_{2}\left( X_{2ij}^{w} + X_{2j}^{b} \right) + \beta_{3}Z_{1j}^{b} + \beta_{4}Z_{2j}^{b} + \beta_{5}X_{1ij}^{w}Z_{1j}^{b} + \varepsilon_{ij}\]
where the w and b superscripts reference within- and between-cluster deviation scores, respectively.
The key inputs to the model object are the unconditional intraclass correlation and effect size values. This illustration investigates power for intraclass correlation values of .10 and .25. The within- and between-cluster fixed effects are set to \(R_{w}^{2}\) = .065 and \(R_{b}^{2}\) = .065, respectively, the sum of which corresponds to Cohen’s (1988) medium effect size benchmark. Note that \(R_{b}^{2}\) cannot exceed the outcome’s intraclass correlation, as this would imply that between-cluster predictors explain more than 100% of the available variance. Following conventional wisdom that interaction effects tend to produce small effects (Aguinis, Beaty, Boik, & Pierce, 2005; Chaplin, 1991), the cross-level product is assigned \(R_{p}^{2}\) = .01. Finally, the random coefficient effect size is set to \(R_{rc}^{2}\) = .03 based on values from Table 2 in Enders, Keller, and Woller (2023).
By default, mlmpower assigns equal weights to all quantities contributing to a particular source of variance. To assign non-uniform weights, assume \(X_{1}\) and \(Z_{1}\) (the interacting variables) are the focal predictors and \(X_{2}\) and \(Z_{2}\) are covariates. To mimic a situation where the covariates explain a small amount of variation, the fixed effect \(R^{2}\) values are predominantly allocated to the focal predictors using weights of .80 and .20. A small covariate allocation could be appropriate for a set of background or sociodemographic characteristics that weakly predict the outcome. Researchers often need power estimates for individual partial regression slopes. Although the weights do not exactly carve \(R_{w}^{2}\) and \(R_{b}^{2}\) into additive components when predictors are correlated, adopting weak associations among the regressors allows us to infer that each focal predictor accounts for roughly 5% of the outcome variance at its level (i.e., \(.80 \times R_{w}^{2} \approx .05\) and \(.80 \times R_{b}^{2} \approx .05\)).
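The implied allocations are easy to compute directly (illustrative arithmetic, not package output):
# Approximate share of outcome variance explained by each predictor at its level
.80 * .065   # focal predictor (x1 or z1): about .052
.20 * .065   # covariate (x2 or z2): about .013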
The following code block shows the mlmpower model object for the example.
example1 <- (
effect_size(
icc = c(.10, .25),
within = .065,
between = .065,
product = .01,
random_slope = .03
)
+ outcome('y', mean = 50, sd = 10)
+ within_predictor('x1', icc = 0, weight = .80)
+ within_predictor('x2', weight = .20)
+ between_predictor('z1', weight = .80)
+ between_predictor('z2', weight = .20)
+ product('x1','z1', weight = 1)
+ random_slope('x1', weight = 1)
)
The mlmpower package groups \(R^{2}\) values and intraclass correlations into a single object called effect_size(), with the five inputs separated by commas. The within argument corresponds to \(R_{w}^{2}\), the between argument aligns with \(R_{b}^{2}\), the product argument specifies \(R_{p}^{2}\), and the random_slope argument corresponds to \(R_{rc}^{2}\). The icc argument assigns a global intraclass correlation to all level-1 variables, and separate simulations are performed for each requested level.
Variable attributes are referenced by adding the following objects: outcome(), within_predictor(), between_predictor(), product(), and random_slope(). All five objects have an argument for the variable name, weight (weight =), mean (mean =), and standard deviation (sd =). Level-1 variables additionally have an intraclass correlation argument (icc =) that supersedes the global setting in effect_size().
The previous code block assigns explicit weights to all variables contributing to a given effect. The unit weights in the product() and random_slope() objects result from a single variable determining those sources of variation. Next, the within_predictor('x1', icc = 0, weight = .80) object overrides the global intraclass correlation setting, defining \(X_{1}\) as a pure within-cluster deviation variable with no between-cluster variation. Finally, with the exception of the outcome variable, which has a mean and standard deviation of 50 and 10, the script accepts the default settings for the means and variances of the predictors (0 and 1, respectively).
The multilevel model parameters also require three sets of correlations: within-cluster correlations among level-1 predictors, between-cluster correlations among level-2 predictors, and random effect correlations. These correlations are optional, but they can be specified using the correlations() object. The earlier code block omits this object, thereby accepting the default specification. When the correlations() object is omitted, the mlmpower package iteratively samples all correlations from a uniform distribution between .10 and .30, such that the resulting power estimates average over a distribution of possible associations. This default range spans Cohen's (1988) small to medium effect size benchmarks, and it brackets common correlation values from published research (Bosco, Aguinis, Singh, Field, & Pierce, 2015; Funder & Ozer, 2019; Gignac & Szodorai, 2016).
The default specifications could be invoked by explicitly appending the correlations() object to the earlier script, as follows.
example1 <- (
effect_size(
icc = c(.10, .25),
within = .065,
between = .065,
product = .01,
random_slope = .03
)
+ outcome('y', mean = 50, sd = 10)
+ within_predictor('x1', icc = 0, weight = .80)
+ within_predictor('x2', weight = .20)
+ between_predictor('z1', weight = .80)
+ between_predictor('z2', weight = .20)
+ product('x1','z1', weight = 1)
+ random_slope('x1', weight = 1)
+ correlations(
within = random(0.1, 0.3),
between = random(0.1, 0.3),
randeff = random(0.1, 0.3)
)
)
Researchers can modify the upper and lower limits of each correlation range, or they can specify a constant correlation by inputting a scalar value. For example, specifying randeff = 0 would define \(\mathbf{\Sigma}_{\mathbf{b}}\) as a diagonal matrix. The illustrative simulations presented in Enders et al. (2023) suggest that predictor and random effect correlations tend not to matter very much.
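For example, a hypothetical variant of the model object (the name example1_custom and the specific values are illustrative, not package defaults) could narrow the predictor correlation ranges, fix the between-cluster correlations to a constant, and force a diagonal random effect covariance matrix:
# Illustrative sketch: non-default correlation settings appended to the model object
example1_custom <- (
  effect_size(
    icc = c(.10, .25),
    within = .065,
    between = .065,
    product = .01,
    random_slope = .03
  )
  + outcome('y', mean = 50, sd = 10)
  + within_predictor('x1', icc = 0, weight = .80)
  + within_predictor('x2', weight = .20)
  + between_predictor('z1', weight = .80)
  + between_predictor('z2', weight = .20)
  + product('x1','z1', weight = 1)
  + random_slope('x1', weight = 1)
  + correlations(
      within = random(0.05, 0.25),   # custom sampling range (illustrative values)
      between = 0.20,                # constant correlation via a scalar input
      randeff = 0                    # diagonal random effect covariance matrix
    )
)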
It may be instructive to inspect the population parameters prior to running the simulation. Executing summary(example1) returns tabular summaries of the multilevel model parameters.
summary(example1)
#> ! Parameters are solved based on average correlation.
#> $`icc = 0.1`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.065000000
#> Variance Cross-Level Interaction Effects 0.010000000
#> Variance Random Slopes 0.030000000
#> Within-Cluster Error Variance 0.795000000
#> Variance Between-Cluster Fixed Effects 0.071476494
#> Incremental Variance Level-2 Predictors 0.065000000
#> Between Variance Level-1 Covariates 0.006476494
#> Variance Random Intercepts 0.028523506
#>
#> $phi_b
#> x2_b z1 z2
#> x2_b 0.10000000 0.06324555 0.06324555
#> z1 0.06324555 1.00000000 0.20000000
#> z2 0.06324555 0.20000000 1.00000000
#>
#> $phi_p
#> x1_w*z1
#> x1_w*z1 1
#>
#> $phi_w
#> x1_w x2_w
#> x1_w 1.0000000 0.1897367
#> x2_w 0.1897367 0.9000000
#>
#> $mean_Z
#> z1 z2
#> 0 0
#>
#> $mean_X
#> x1 x2
#> 0 0
#>
#> $var_e
#> Value
#> Res. Var. 79.5
#>
#> $tau
#> Icept x1_w
#> Icept 2.8523506 0.5850488
#> x1_w 0.5850488 3.0000000
#>
#> $gammas
#> Value
#> Icept 50.0000000
#> x1_w 2.3646137
#> x2_w 0.6231304
#> x1_w*z1 1.0000000
#> x2_b 0.6231304
#> z1 2.4308622
#> z2 0.6077155
#>
#> $mean_Y
#> y
#> 50
#>
#>
#> $`icc = 0.25`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.065000000
#> Variance Cross-Level Interaction Effects 0.010000000
#> Variance Random Slopes 0.030000000
#> Within-Cluster Error Variance 0.645000000
#> Variance Between-Cluster Fixed Effects 0.074006354
#> Incremental Variance Level-2 Predictors 0.065000000
#> Between Variance Level-1 Covariates 0.009006354
#> Variance Random Intercepts 0.175993646
#>
#> $phi_b
#> x2_b z1 z2
#> x2_b 0.25 0.1 0.1
#> z1 0.10 1.0 0.2
#> z2 0.10 0.2 1.0
#>
#> $phi_p
#> x1_w*z1
#> x1_w*z1 1
#>
#> $phi_w
#> x1_w x2_w
#> x1_w 1.0000000 0.1732051
#> x2_w 0.1732051 0.7500000
#>
#> $mean_Z
#> z1 z2
#> 0 0
#>
#> $mean_X
#> x1 x2
#> 0 0
#>
#> $var_e
#> Value
#> Res. Var. 64.5
#>
#> $tau
#> Icept x1_w
#> Icept 17.599365 1.453246
#> x1_w 1.453246 3.000000
#>
#> $gammas
#> Value
#> Icept 50.0000000
#> x1_w 2.3646137
#> x2_w 0.6826052
#> x1_w*z1 1.0000000
#> x2_b 0.6826052
#> z1 2.4308622
#> z2 0.6077155
#>
#> $mean_Y
#> y
#> 50
Having specified the target model, you next use the power_analysis() function to conduct simulations. The function requires four inputs: the model argument specifies the parameter value object (e.g., example1), the replications input specifies the number of artificial data sets, n_between is a vector of level-2 sample sizes, and n_within is a vector of level-1 sample sizes (i.e., the number of observations per cluster). The code block below pairs four level-2 sample size values (\(J\) = 30, 60, 90, and 120) with three level-1 sample sizes (\(n_{j}\) = 10, 20, or 30 observations per cluster), and it requests 2,000 artificial data sets for each combination.
# Set seed for replicable results
set.seed(2318971)
# Run Power Analysis
powersim1 <-
power_analysis(
model = example1,
replications = 2000,
n_between = c(30, 60, 90, 120),
n_within = c(10, 20, 30)
)
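For scale, the requested design crosses every sample size combination with each intraclass correlation value, so the simulation generates the following number of artificial data sets (simple arithmetic, not package output):
# 4 level-2 sizes x 3 cluster sizes x 2 intraclass correlations x 2,000 replications
4 * 3 * 2 * 2000   # 48,000 artificial data sets in total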
The package uses lme4 (Bates et al., 2021) for model fitting, and it defaults to an alpha level of .05 for all significance tests. Significance tests of random slope variation use a likelihood ratio test with a mixture chi-square reference distribution (i.e., a chi-bar distribution; Snijders & Bosker, 2012, p. 99), as implemented in the varTestnlme package (Baey & Kuhn, 2022). Executing summary(powersim1) returns the tabular summaries of the simulation results shown below.
summary(powersim1)
#>
#> ── Power Analysis Specification ──
#>
#> Replications = 2000
#> N_Within = 10, 20, and 30
#> N_Between = 30, 60, 90, and 120
#>
#> ── lme4 Model Syntax
#>
#> y ~ 1 + cwc(x1) + cgm(x2) + cwc(x1):cgm(z1) + cgm(z1) + cgm(z2) + (1 + cwc(x1)
#> | `_id`)
#>
#> ℹ cwc() = centering within cluster
#> ℹ cgm() = centering with grand mean
#>
#> ── Effect Sizes
#>
#> • WITHIN = 0.065
#> • BETWEEN = 0.065
#> • RANDOM SLOPE = 0.03
#> • PRODUCT = 0.01
#>
#> ── Power Analysis Results ──
#>
#> ── global icc = 0.1, n_within = 10, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 0.94 ± 0.01
#> Fixed: cgm(x2) 0.23 ± 0.02
#> Fixed: cgm(z1) 0.92 ± 0.01
#> Fixed: cgm(z2) 0.15 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.29 ± 0.02
#> Random: Slopes Test 0.17 ± 0.02
#>
#> ── global icc = 0.25, n_within = 10, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 0.95 ± 0.01
#> Fixed: cgm(x2) 0.26 ± 0.02
#> Fixed: cgm(z1) 0.66 ± 0.02
#> Fixed: cgm(z2) 0.11 ± 0.01
#> Fixed: cwc(x1):cgm(z1) 0.36 ± 0.02
#> Random: Slopes Test 0.23 ± 0.02
#>
#> ── global icc = 0.1, n_within = 20, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 0.99 ± 0.00
#> Fixed: cgm(x2) 0.35 ± 0.02
#> Fixed: cgm(z1) 0.98 ± 0.01
#> Fixed: cgm(z2) 0.22 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.48 ± 0.02
#> Random: Slopes Test 0.46 ± 0.02
#>
#> ── global icc = 0.25, n_within = 20, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 0.99 ± 0.00
#> Fixed: cgm(x2) 0.42 ± 0.02
#> Fixed: cgm(z1) 0.75 ± 0.02
#> Fixed: cgm(z2) 0.12 ± 0.01
#> Fixed: cwc(x1):cgm(z1) 0.54 ± 0.02
#> Random: Slopes Test 0.59 ± 0.02
#>
#> ── global icc = 0.1, n_within = 30, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 0.99 ± 0.00
#> Fixed: cgm(x2) 0.53 ± 0.02
#> Fixed: cgm(z1) 0.99 ± 0.00
#> Fixed: cgm(z2) 0.26 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.56 ± 0.02
#> Random: Slopes Test 0.73 ± 0.02
#>
#> ── global icc = 0.25, n_within = 30, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.60 ± 0.02
#> Fixed: cgm(z1) 0.75 ± 0.02
#> Fixed: cgm(z2) 0.12 ± 0.01
#> Fixed: cwc(x1):cgm(z1) 0.59 ± 0.02
#> Random: Slopes Test 0.83 ± 0.02
#>
#> ── global icc = 0.1, n_within = 10, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.40 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.25 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.56 ± 0.02
#> Random: Slopes Test 0.31 ± 0.02
#>
#> ── global icc = 0.25, n_within = 10, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.43 ± 0.02
#> Fixed: cgm(z1) 0.94 ± 0.01
#> Fixed: cgm(z2) 0.15 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.63 ± 0.02
#> Random: Slopes Test 0.46 ± 0.02
#>
#> ── global icc = 0.1, n_within = 20, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.63 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.40 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.78 ± 0.02
#> Random: Slopes Test 0.80 ± 0.02
#>
#> ── global icc = 0.25, n_within = 20, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.70 ± 0.02
#> Fixed: cgm(z1) 0.96 ± 0.01
#> Fixed: cgm(z2) 0.17 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.83 ± 0.02
#> Random: Slopes Test 0.90 ± 0.01
#>
#> ── global icc = 0.1, n_within = 30, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.81 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.45 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.86 ± 0.02
#> Random: Slopes Test 0.95 ± 0.01
#>
#> ── global icc = 0.25, n_within = 30, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.87 ± 0.01
#> Fixed: cgm(z1) 0.97 ± 0.01
#> Fixed: cgm(z2) 0.19 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.90 ± 0.01
#> Random: Slopes Test 0.99 ± 0.00
#>
#> ── global icc = 0.1, n_within = 10, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.54 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.35 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.74 ± 0.02
#> Random: Slopes Test 0.46 ± 0.02
#>
#> ── global icc = 0.25, n_within = 10, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.60 ± 0.02
#> Fixed: cgm(z1) 0.99 ± 0.00
#> Fixed: cgm(z2) 0.21 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.79 ± 0.02
#> Random: Slopes Test 0.64 ± 0.02
#>
#> ── global icc = 0.1, n_within = 20, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.81 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.53 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.94 ± 0.01
#> Random: Slopes Test 0.91 ± 0.01
#>
#> ── global icc = 0.25, n_within = 20, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.86 ± 0.02
#> Fixed: cgm(z1) 0.99 ± 0.00
#> Fixed: cgm(z2) 0.23 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.95 ± 0.01
#> Random: Slopes Test 0.98 ± 0.01
#>
#> ── global icc = 0.1, n_within = 30, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.93 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.65 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.97 ± 0.01
#> Random: Slopes Test 0.99 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.96 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.25 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.98 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.1, n_within = 10, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.67 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.47 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.86 ± 0.02
#> Random: Slopes Test 0.58 ± 0.02
#>
#> ── global icc = 0.25, n_within = 10, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.69 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.26 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.91 ± 0.01
#> Random: Slopes Test 0.74 ± 0.02
#>
#> ── global icc = 0.1, n_within = 20, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.91 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.66 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.97 ± 0.01
#> Random: Slopes Test 0.97 ± 0.01
#>
#> ── global icc = 0.25, n_within = 20, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.94 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.30 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.98 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.1, n_within = 30, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.98 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.77 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.99 ± 0.00
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cwc(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.99 ± 0.00
#> Fixed: cgm(z1) 1.00 ± 0.00
#> Fixed: cgm(z2) 0.31 ± 0.02
#> Fixed: cwc(x1):cgm(z1) 0.99 ± 0.00
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ℹ Margin of error (MOE) computed as ± 1.96 * standard error
The second illustration demonstrates a power analysis for a longitudinal growth curve model with a cross-level interaction involving a binary level-2 moderator. Intraclass correlations for longitudinal and intensive repeated measures data often reach values of .40 or higher (Arend & Schäfer, 2019; Bolger & Laurenceau, 2013; Singer & Willett, 2003). To accommodate uncertainty about this important parameter, the simulation investigates intraclass correlation values of .40 and .60.
The multilevel model for the illustration is
\[ \begin{split} Y_{ij} &= \left( \beta_{0} + b_{0j} \right) + \left( \beta_{1} + b_{1j} \right)X_{1ij}^{w} + \beta_{2}\left( X_{2ij}^{w} + X_{2j}^{b} \right) + \beta_{3}\left( X_{3ij}^{w} + X_{3j}^{b} \right)\\ &\phantom{=}+ \ \beta_{4}Z_{1j}^{b} + \ \beta_{5}Z_{2j}^{b} + \beta_{6}Z_{3j}^{b} + \beta_{7}X_{1ij}^{w}Z_{1j}^{b} + \varepsilon_{ij} \\ \mathbf{b}_{j}\ &\sim\ N\left( 0,\mathbf{\Sigma}_{\mathbf{b}} \right)\ \ \ \ \ \mathbf{\Sigma}_{\mathbf{b}}=\begin{pmatrix} \sigma_{b_{0}}^{2} & \\ \sigma_{b_{1},b_{0}} & \sigma_{b_{1}}^{2} \\ \end{pmatrix}\mathbf{\ \ \ \ \ }\varepsilon_{ij}\ \sim\ N(0,\sigma_{\varepsilon}^{2}) \end{split} \]
Following established notation (see Illustration 1), the w and b superscripts reference within- and between-cluster deviation scores, respectively. The explanatory variables include a time score predictor with a random coefficient, a pair of time-varying covariates, a binary level-2 moderator, a pair of level-2 covariates, and a cross-level (group-by-time) interaction. For the purposes of weighting, we designated \(X_{1}\) and \(Z_{1}\) (the interacting variables) as focal predictors, \(X_{2}\) and \(X_{3}\) as level-1 (time-varying) covariates, and \(Z_{2}\) and \(Z_{3}\) as level-2 covariates. To mimic the scaling of a typical temporal index, we assume the major time increments are coded \(X_{1}^{w}\) = (0, 1, 2, 3, 4).
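Because the time codes are fixed constants, their mean and variance follow directly from the chosen values; the quick check below reproduces the x1 entries that appear later in the mean_X and phi_w sections of the summary(example2) output (illustrative arithmetic only).
# Moments implied by the time codes 0, 1, 2, 3, 4
x1 <- 0:4
mean(x1)                  # 2
mean((x1 - mean(x1))^2)   # population variance = 2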
A brief discussion of \(Z_{1}\) (the binary moderator) is warranted before continuing. First, like other level-2 variables, this variable’s population mean must be 0 (i.e., \(Z_{1}\) is centered in the population) in order to maintain the orthogonality of the cross-level interaction terms. Grand mean centering a level-2 binary predictor creates an ANOVA-like contrast code, such that \(\beta_{0}\) is the grand mean and \(\beta_{4}\) is the predicted group mean difference when \(X_{1}^{w}\) (the time score predictor) equals 0. In this case, a code of 0 corresponds to the baseline assessment. Because \(\beta_{4}\) is a conditional effect that represents the group mean difference at the first occasion, \(Z_{1}\)’s contribution to the between-cluster effect size depends on whether we view this regressor as a naturally occurring classification or an intervention assignment indicator. In the former case, we might expect a group mean difference at baseline, and the presence of such a difference would require a non-zero weight. In contrast, random assignment to conditions would eliminate a group mean difference at baseline, and \(Z_{1}\)’s weight would equal 0. Note that this conclusion changes if the time scores are coded differently. For illustration purposes, we assume \(Z_{1}\) is an intervention assignment indicator.
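To make the contrast coding concrete, centering a 50/50 binary predictor turns the 0/1 dummy codes into -.5 and +.5 with variance p(1 - p) = .25, the value that appears for z1_binary in the summary(example2) output shown later (illustrative arithmetic only).
# Centered contrast codes and variance implied by a 50/50 binary moderator
z1 <- c(0, 1)      # raw dummy codes
z1 - mean(z1)      # -0.5 and 0.5 after centering
.50 * (1 - .50)    # variance = .25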
The key inputs to the model object are the unconditional intraclass correlation and effect size values. This illustration investigates power for intraclass correlation values of .40 and .60. The within- and between-cluster fixed effects are set to \(R_{w}^{2}\) = .13 and \(R_{b}^{2}\) = .065, respectively; the former corresponds to Cohen's (1988) medium effect size benchmark. Note that \(R_{b}^{2}\) cannot exceed the outcome's intraclass correlation, as this would imply that between-cluster predictors explain more than 100% of the available variance. Following conventional wisdom that interaction effects tend to produce small effects (Aguinis et al., 2005; Chaplin, 1991), the cross-level product term is assigned \(R_{p}^{2}\) = .03. Finally, the random coefficient effect size is set to \(R_{rc}^{2}\) = .10 based on values from Table 2 in Enders et al. (2023).
To refresh, we designated \(X_{1}\) and \(Z_{1}\) (the interacting variables) as focal predictors, \(X_{2}\) and \(X_{3}\) as level-1 (time-varying) covariates, and \(Z_{2}\) and \(Z_{3}\) as level-2 covariates. To mimic a situation where linear change dominates the level-1 model, we used weights of .50, .25, and .25 to allocate the within-cluster \(R^{2}\) to \(X_{1}\), \(X_{2}\), and \(X_{3}\). At level-2, we used weights of 0, .50, and .50 to allocate the between-cluster \(R^{2}\) to \(Z_{1}\), \(Z_{2}\), and \(Z_{3}\). As noted previously, \(Z_{1}\)'s slope represents the group mean difference at baseline, and we are assuming that random assignment nullifies this effect. Finally, \(R_{p}^{2}\) and \(R_{rc}^{2}\) do not require weights because a single variable determines each source of variation.
To illustrate how to modify the default correlation settings, we sampled the within-cluster predictor correlations between .20 and .40 under the assumption that the time scores could have stronger associations with other time-varying predictors. We similarly sampled the random effect correlations between .30 and .50 to mimic a scenario where higher baseline scores (i.e., random intercepts) are associated with higher (more positive) growth rates. Finally, we adopted the default correlation range for the between-cluster predictors. The simulations from the previous cross-sectional example suggest that these correlations are somewhat arbitrary and would not have a material impact on the power estimates.
The following code block shows the mlmpower model object for this example.
example2 <- (
effect_size(
icc = c(.40, .60),
within = .13,
between = .065,
product = .03,
random_slope = .10
)
+ outcome('y', mean = 50, sd = 10)
+ within_time_predictor('x1', weight = .50, values = 0:4)
+ within_predictor('x2', weight = .25)
+ within_predictor('x3', weight = .25)
+ between_binary_predictor('z1', proportion = .50, weight = 0)
+ between_predictor('z2', weight = .50)
+ between_predictor('z3', weight = .50)
+ product('x1','z1', weight = 1)
+ random_slope('x1', weight = 1)
+ correlations(
within = random(.20, .40),
between = random(.10, .30),
randeff = random(.30, .50)
)
)
The mlmpower package groups \(R^{2}\) values and intraclass correlations into a single object called effect_size(), with the five inputs separated by commas. The within argument corresponds to \(R_{w}^{2}\), the between argument aligns with \(R_{b}^{2}\), the product argument specifies \(R_{p}^{2}\), and the random_slope argument corresponds to \(R_{rc}^{2}\). The icc argument assigns a global intraclass correlation to all level-1 variables, and separate simulations are performed for each requested level.
Variable attributes are referenced by adding the following base objects: outcome(), within_predictor(), between_predictor(), product(), and random_slope(). All five objects have an argument for the variable name, weight (weight =), mean (mean =), and standard deviation (sd =). Level-1 variables additionally have an intraclass correlation argument (icc =) that supersedes the global setting in effect_size().
This illustration additionally uses the within_time_predictor() object to specify a set of fixed time scores for \(X_{1}\), and it uses the between_binary_predictor() object to define the level-2 moderator \(Z_{1}\) as a binary predictor. In addition to a name and weight, the within_time_predictor() object requires a vector of time scores as an argument (values =). The between_binary_predictor() object requires a name, weight, and the highest category proportion (proportion =).
First, the within_time_predictor() object specifies \(X_{1}\) as a temporal predictor with the fixed set of time scores described earlier. The values = 0:4 argument specifies an integer sequence, but unequally spaced increments can also be specified using a vector as input (e.g., values = c(0,1,3,6)). This object does not require a mean or standard deviation argument, as these quantities are determined from the time scores. Additionally, the variable's intraclass correlation is automatically fixed to 0 because the time scores are constant across level-2 units. Next, the between_binary_predictor('z1', proportion = .50, weight = 0) object specifies \(Z_{1}\) (the moderator variable) as a binary predictor with a 50/50 split. This object does not require a mean or standard deviation argument, as the category proportions determine these quantities. Finally, with the exception of the outcome, the code block uses default values for all means and standard deviations (0 and 1, respectively). The means and standard deviations of the time scores and binary predictor are automatically determined by the user inputs.
The multilevel model parameters also require three sets of correlations: within-cluster correlations among level-1 predictors, between-cluster correlations among level-2 predictors, and random effect correlations. These correlations are optional, but they can be specified using the correlations() object. This example samples within-cluster predictor correlation values between .20 and .40 (e.g., to mimic a situation where the time scores have salient correlations with other repeated measures predictors). Random effect correlation values are similarly sampled between .30 and .50 to mimic a scenario where higher baseline scores (i.e., random intercepts) are associated with higher (more positive) growth rates. Finally, the script specifies the default setting for between-cluster correlations, which is to iteratively sample correlation values between .10 and .30.
It may be instructive to inspect the population parameters prior to running the simulation. Executing summary(example2) returns tabular summaries of the multilevel model parameters.
summary(example2)
#> ! Parameters are solved based on average correlation.
#> $`icc = 0.4`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.13000000
#> Variance Cross-Level Interaction Effects 0.03000000
#> Variance Random Slopes 0.10000000
#> Within-Cluster Error Variance 0.34000000
#> Variance Between-Cluster Fixed Effects 0.12474120
#> Incremental Variance Level-2 Predictors 0.06500000
#> Between Variance Level-1 Covariates 0.05974120
#> Variance Random Intercepts 0.03964212
#>
#> $phi_b
#> x2_b x3_b z1_binary z2 z3
#> x2_b 0.40000000 0.12000000 0.06324555 0.12649111 0.12649111
#> x3_b 0.12000000 0.40000000 0.06324555 0.12649111 0.12649111
#> z1_binary 0.06324555 0.06324555 0.25000000 0.07978846 0.07978846
#> z2 0.12649111 0.12649111 0.07978846 1.00000000 0.20000000
#> z3 0.12649111 0.12649111 0.07978846 0.20000000 1.00000000
#>
#> $phi_p
#> x1_w*z1_binary
#> x1_w*z1_binary 0.5
#>
#> $phi_w
#> x1_w x2_w x3_w
#> x1_w 2.0000000 0.3286335 0.3286335
#> x2_w 0.3286335 0.6000000 0.1800000
#> x3_w 0.3286335 0.1800000 0.6000000
#>
#> $mean_Z
#> z1 z2 z3
#> 0.5 0.0 0.0
#>
#> $mean_X
#> x1 x2 x3
#> 2 0 0
#>
#> $var_e
#> Value
#> Res. Var. 34
#>
#> $tau
#> Icept x1_w
#> Icept 3.964212 1.780834
#> x1_w 1.780834 5.000000
#>
#> $gammas
#> Value
#> Icept 46.600654
#> x1_w 1.699673
#> x2_w 1.551582
#> x3_w 1.551582
#> x1_w*z1_binary 2.449490
#> x2_b 1.551582
#> x3_b 1.551582
#> z2 1.737198
#> z3 1.737198
#>
#> $mean_Y
#> y
#> 50
#>
#>
#> $`icc = 0.6`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.1300000
#> Variance Cross-Level Interaction Effects 0.0300000
#> Variance Random Slopes 0.1000000
#> Within-Cluster Error Variance 0.1400000
#> Variance Between-Cluster Fixed Effects 0.1696753
#> Incremental Variance Level-2 Predictors 0.0650000
#> Between Variance Level-1 Covariates 0.1046753
#> Variance Random Intercepts 0.1589955
#>
#> $phi_b
#> x2_b x3_b z1_binary z2 z3
#> x2_b 0.60000000 0.18000000 0.07745967 0.15491933 0.15491933
#> x3_b 0.18000000 0.60000000 0.07745967 0.15491933 0.15491933
#> z1_binary 0.07745967 0.07745967 0.25000000 0.07978846 0.07978846
#> z2 0.15491933 0.15491933 0.07978846 1.00000000 0.20000000
#> z3 0.15491933 0.15491933 0.07978846 0.20000000 1.00000000
#>
#> $phi_p
#> x1_w*z1_binary
#> x1_w*z1_binary 0.5
#>
#> $phi_w
#> x1_w x2_w x3_w
#> x1_w 2.0000000 0.2683282 0.2683282
#> x2_w 0.2683282 0.4000000 0.1200000
#> x3_w 0.2683282 0.1200000 0.4000000
#>
#> $mean_Z
#> z1 z2 z3
#> 0.5 0.0 0.0
#>
#> $mean_X
#> x1 x2 x3
#> 2 0 0
#>
#> $var_e
#> Value
#> Res. Var. 14
#>
#> $tau
#> Icept x1_w
#> Icept 15.89955 3.56646
#> x1_w 3.56646 5.00000
#>
#> $gammas
#> Value
#> Icept 46.600654
#> x1_w 1.699673
#> x2_w 1.900292
#> x3_w 1.900292
#> x1_w*z1_binary 2.449490
#> x2_b 1.900292
#> x3_b 1.900292
#> z2 1.737198
#> z3 1.737198
#>
#> $mean_Y
#> y
#> 50
Having specified the target model, you next use the power_analysis() function to conduct simulations. The function requires four inputs: the model argument specifies the parameter value object (e.g., example2), the replications input specifies the number of artificial data sets, n_between is a vector of level-2 sample sizes, and n_within is a vector of level-1 sample sizes (i.e., the number of observations per cluster). The code block below specifies six level-2 sample size conditions (\(J =\) 50, 60, 70, 80, 90, and 100), each with five repeated measurements and 2,000 replications.
# Set seed for replicable results
set.seed(12379)
# Run Power Analysis
powersim2 <-
power_analysis(
model = example2,
replications = 2000,
n_between = c(50, 60, 70, 80, 90, 100),
n_within = 5
)
The package uses lme4 (Bates et al., 2021) for model fitting, and it defaults to an alpha level of .05 for all significance tests. Significance tests of random slope variation use a likelihood ratio test with a mixture chi-square reference distribution (i.e., a chi-bar distribution; Snijders & Bosker, 2012, p. 99), as implemented in the varTestnlme package (Baey & Kuhn, 2022). Executing summary(powersim2) returns the tabular summaries of the simulation results shown below.
summary(powersim2)
#>
#> ── Power Analysis Specification ──
#>
#> Replications = 2000
#> N_Within = 5
#> N_Between = 50, 60, 70, 80, 90, and 100
#>
#> ── lme4 Model Syntax
#>
#> y ~ 1 + x1 + cgm(x2) + cgm(x3) + x1:cgm(z1) + cgm(z1) + cgm(z2) + cgm(z3) + (1
#> + x1 | `_id`)
#>
#> ℹ cwc() = centering within cluster
#> ℹ cgm() = centering with grand mean
#>
#> ── Effect Sizes
#>
#> • WITHIN = 0.13
#> • BETWEEN = 0.065
#> • RANDOM SLOPE = 0.1
#> • PRODUCT = 0.03
#>
#> ── Power Analysis Results ──
#>
#> ── global icc = 0.4, n_within = 5, n_between = 50
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 0.97 ± 0.01
#> Fixed: cgm(x2) 0.84 ± 0.02
#> Fixed: cgm(x3) 0.84 ± 0.02
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.69 ± 0.02
#> Fixed: cgm(z3) 0.67 ± 0.02
#> Fixed: x1:cgm(z1) 0.82 ± 0.02
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 50
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 0.99 ± 0.00
#> Fixed: cgm(x2) 0.98 ± 0.01
#> Fixed: cgm(x3) 0.99 ± 0.00
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.61 ± 0.02
#> Fixed: cgm(z3) 0.62 ± 0.02
#> Fixed: x1:cgm(z1) 0.92 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.4, n_within = 5, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 0.98 ± 0.01
#> Fixed: cgm(x2) 0.90 ± 0.01
#> Fixed: cgm(x3) 0.89 ± 0.01
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.75 ± 0.02
#> Fixed: cgm(z3) 0.76 ± 0.02
#> Fixed: x1:cgm(z1) 0.89 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 0.99 ± 0.00
#> Fixed: cgm(x2) 1.00 ± 0.00
#> Fixed: cgm(x3) 1.00 ± 0.00
#> Fixed: cgm(z1) 0.06 ± 0.01
#> Fixed: cgm(z2) 0.72 ± 0.02
#> Fixed: cgm(z3) 0.69 ± 0.02
#> Fixed: x1:cgm(z1) 0.96 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.4, n_within = 5, n_between = 70
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 0.99 ± 0.00
#> Fixed: cgm(x2) 0.94 ± 0.01
#> Fixed: cgm(x3) 0.94 ± 0.01
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.82 ± 0.02
#> Fixed: cgm(z3) 0.81 ± 0.02
#> Fixed: x1:cgm(z1) 0.94 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 70
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 1.00 ± 0.00
#> Fixed: cgm(x3) 1.00 ± 0.00
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.79 ± 0.02
#> Fixed: cgm(z3) 0.79 ± 0.02
#> Fixed: x1:cgm(z1) 0.98 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.4, n_within = 5, n_between = 80
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 0.96 ± 0.01
#> Fixed: cgm(x3) 0.97 ± 0.01
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.88 ± 0.01
#> Fixed: cgm(z3) 0.86 ± 0.02
#> Fixed: x1:cgm(z1) 0.96 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 80
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 1.00 ± 0.00
#> Fixed: cgm(x3) 1.00 ± 0.00
#> Fixed: cgm(z1) 0.04 ± 0.01
#> Fixed: cgm(z2) 0.83 ± 0.02
#> Fixed: cgm(z3) 0.84 ± 0.02
#> Fixed: x1:cgm(z1) 0.99 ± 0.00
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.4, n_within = 5, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 0.98 ± 0.01
#> Fixed: cgm(x3) 0.97 ± 0.01
#> Fixed: cgm(z1) 0.06 ± 0.01
#> Fixed: cgm(z2) 0.90 ± 0.01
#> Fixed: cgm(z3) 0.91 ± 0.01
#> Fixed: x1:cgm(z1) 0.98 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 1.00 ± 0.00
#> Fixed: cgm(x3) 1.00 ± 0.00
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.87 ± 0.01
#> Fixed: cgm(z3) 0.88 ± 0.01
#> Fixed: x1:cgm(z1) 1.00 ± 0.00
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.4, n_within = 5, n_between = 100
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 0.99 ± 0.00
#> Fixed: cgm(x3) 0.99 ± 0.00
#> Fixed: cgm(z1) 0.04 ± 0.01
#> Fixed: cgm(z2) 0.95 ± 0.01
#> Fixed: cgm(z3) 0.93 ± 0.01
#> Fixed: x1:cgm(z1) 0.98 ± 0.01
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ── global icc = 0.6, n_within = 5, n_between = 100
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: x1 1.00 ± 0.00
#> Fixed: cgm(x2) 1.00 ± 0.00
#> Fixed: cgm(x3) 1.00 ± 0.00
#> Fixed: cgm(z1) 0.05 ± 0.01
#> Fixed: cgm(z2) 0.92 ± 0.01
#> Fixed: cgm(z3) 0.90 ± 0.01
#> Fixed: x1:cgm(z1) 1.00 ± 0.00
#> Random: Slopes Test 1.00 ± 0.00
#>
#> ℹ Margin of error (MOE) computed as ± 1.96 * standard error
The third illustration demonstrates a power analysis for a cluster-randomized design (Raudenbush, 1997) where level-2 units are randomly assigned to one of two experimental conditions. To provide a substantive context, consider a prototypical education example where students are nested in schools, and schools are the unit of randomization. Intraclass correlations for achievement-related outcomes often range between .10 and .25 (Hedges & Hedberg, 2007; Hedges & Hedberg, 2013; Sellström & Bremberg, 2006; Spybrook et al., 2011; Stockford, 2009). To accommodate uncertainty about this important parameter, the simulation investigates intraclass correlation values of .10 and .25.
The multilevel model for the illustration is
\[ \begin{split} Y_{ij} &= \beta_{0} + \beta_{1}\left( X_{1ij}^{w} + X_{1j}^{b} \right) + \beta_{2}\left( X_{2ij}^{w} + X_{2j}^{b} \right)\\ &\phantom{=}+ \ \beta_{3}\left( X_{3ij}^{w} + X_{3j}^{b} \right) + \beta_{4}\left( X_{4ij}^{w} + X_{4j}^{b} \right) + \ \beta_{5}Z_{1j}^{b} + b_{0j} + \varepsilon_{ij} \end{split} \]
where \(X_{1}\) to \(X_{4}\) are grand mean centered level-1 covariates, and \(Z_{1}\) is a binary intervention assignment indicator. Following established notation, the w and b superscripts reference within- and between-cluster deviation scores, respectively. The notation conveys that all level-1 predictors contain within- and between-cluster variation. By default, all predictors are centered, including the binary dummy code. Grand mean centering a level-2 binary predictor creates an ANOVA-like contrast code, such that \(\beta_{0}\) is the grand mean and \(\beta_{5}\) is the predicted mean difference, adjusted for the covariates. Turning to the residual terms, \(b_{0j}\) is a between-cluster random effect that captures residual variation in the cluster-specific intercepts, and \(\varepsilon_{ij}\) is a within-cluster residual. By assumption, both residuals are normal with constant variance.
The key inputs to the model object are the unconditional intraclass correlation and effect size values. This illustration investigates power for intraclass correlation values of .10 and .25. To mimic a scenario with a strong covariate set (e.g., one that includes a pretest measure of the outcome), the within-cluster effect size is set to \(R_{w}^{2}\) = .18 (a value roughly midway between Cohen’s small and medium benchmarks). Most of this variation was allocated to \(X_{1}\) (the pretest) by assigning it a weight of .70, and the remaining three predictors had their within-cluster weights set to .10. We previously argued that the allocation of the weights among covariates is arbitrary because these predictors are not the focus. To demonstrate that point, we conducted a second simulation that weighted the four level-1 covariates equally.
Turning to the level-2 predictor, researchers often prefer Cohen's (1988) d effect size when working with binary explanatory variables. To illustrate, consider power for d = .40, the midway point between Cohen's small and medium benchmarks. Substituting this value into the conversion equation below returns \(R_{b}^{2}\) = .038.
\[R^{2} = \frac{d^{2}}{d^{2} + 4}\]
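The conversion is easy to verify (illustrative arithmetic only):
# Convert Cohen's d to the R-squared metric used by effect_size()
d <- .40
d^2 / (d^2 + 4)   # 0.0385, entered as .038 in the code below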
The following code block shows the mlmpower model object for the example.
example3 <- (
effect_size(
icc = c(.10, .25),
within = .18,
between = .038
)
+ outcome('y')
+ within_predictor('x1', weight = .70)
+ within_predictor('x2', weight = .10)
+ within_predictor('x3', weight = .10)
+ within_predictor('x4', weight = .10)
+ between_binary_predictor('z1', proportion = .50, weight = 1)
)
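For comparison, the equal-weight specification mentioned earlier (the second simulation) could be written as follows; the object name example3b is illustrative.
# Sketch of the equal-weight variant: each level-1 covariate receives weight .25
example3b <- (
  effect_size(
    icc = c(.10, .25),
    within = .18,
    between = .038
  )
  + outcome('y')
  + within_predictor('x1', weight = .25)
  + within_predictor('x2', weight = .25)
  + within_predictor('x3', weight = .25)
  + within_predictor('x4', weight = .25)
  + between_binary_predictor('z1', proportion = .50, weight = 1)
)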
The mlmpower package groups \(R^{2}\) values and intraclass correlations into a single object called effect_size(), with three inputs separated by commas. The within argument corresponds to \(R_{w}^{2}\), the between argument aligns with \(R_{b}^{2}\), and the icc argument assigns a global intraclass correlation to all level-1 variables. Separate simulations are performed for each requested level.
Variable attributes for this example require three objects: outcome(), within_predictor(), and between_binary_predictor(). The first two objects have an argument for the variable name, weight (weight =), mean (mean =), standard deviation (sd =), and an intraclass correlation argument (icc =) that supersedes the global setting in effect_size(). The between_binary_predictor() object requires a name, weight, and the highest category proportion (proportion =).
The previous code block assigns explicit weights to all variables contributing to a given effect, as described previously. The unit weight in the between_binary_predictor() object reflects the fact that \(Z_{1}\) alone determines \(R_{b}^{2}\). Finally, the proportion argument assigns a 50/50 split to the level-2 groups.
The multilevel model parameters also require two sets of correlations: within-cluster correlations among level-1 predictors, and between-cluster correlations among level-2 predictors (in this case, the cluster means and the intervention assignment indicator). These correlations are optional, but they can be specified using the correlations() object. The earlier code block omits this object, thereby accepting the default specification. When the correlations() object is omitted, the mlmpower package iteratively samples all correlations from a uniform distribution between .10 and .30, such that the resulting power estimates average over a distribution of possible associations. This default range spans Cohen's (1988) small to medium effect size benchmarks, and it also brackets common correlations from published research (Bosco et al., 2015; Funder & Ozer, 2019; Gignac & Szodorai, 2016). The default specifications could be invoked by explicitly appending the correlations() object to the earlier script, as follows.
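The sketch below mirrors the first illustration's defaults; because this model contains no random slopes, only the within- and between-cluster correlation arguments are included here.
# Sketch: example3 with the default correlation ranges written out explicitly
example3 <- (
  effect_size(
    icc = c(.10, .25),
    within = .18,
    between = .038
  )
  + outcome('y')
  + within_predictor('x1', weight = .70)
  + within_predictor('x2', weight = .10)
  + within_predictor('x3', weight = .10)
  + within_predictor('x4', weight = .10)
  + between_binary_predictor('z1', proportion = .50, weight = 1)
  + correlations(
      within = random(0.1, 0.3),
      between = random(0.1, 0.3)
    )
)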
Researchers can modify the upper and lower limits of each correlation range, or they can specify a constant correlation by inputting a scalar value. The illustrative simulations presented in Enders et al. (2023) suggest that predictor correlations tend not to matter very much.
It may be instructive to inspect the population parameters prior to running the simulation. Executing summary(example3) returns tabular summaries of the multilevel model parameters.
summary(example3)
#> ! Parameters are solved based on average correlation.
#> $`icc = 0.1`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.18000000
#> Variance Cross-Level Interaction Effects 0.00000000
#> Variance Random Slopes 0.00000000
#> Within-Cluster Error Variance 0.72000000
#> Variance Between-Cluster Fixed Effects 0.07703223
#> Incremental Variance Level-2 Predictors 0.03800000
#> Between Variance Level-1 Covariates 0.03903223
#> Variance Random Intercepts 0.02296777
#>
#> $phi_b
#> x1_b x2_b x3_b x4_b z1_binary
#> x1_b 0.10000000 0.02000000 0.02000000 0.02000000 0.03162278
#> x2_b 0.02000000 0.10000000 0.02000000 0.02000000 0.03162278
#> x3_b 0.02000000 0.02000000 0.10000000 0.02000000 0.03162278
#> x4_b 0.02000000 0.02000000 0.02000000 0.10000000 0.03162278
#> z1_binary 0.03162278 0.03162278 0.03162278 0.03162278 0.25000000
#>
#> $phi_p
#> <0 x 0 matrix>
#>
#> $phi_w
#> x1_w x2_w x3_w x4_w
#> x1_w 0.90 0.18 0.18 0.18
#> x2_w 0.18 0.90 0.18 0.18
#> x3_w 0.18 0.18 0.90 0.18
#> x4_w 0.18 0.18 0.18 0.90
#>
#> $mean_Z
#> z1
#> 0.5
#>
#> $mean_X
#> x1 x2 x3 x4
#> 0 0 0 0
#>
#> $var_e
#> Value
#> Res. Var. 18
#>
#> $tau
#> Icept
#> Icept 0.5741943
#>
#> $gammas
#> Value
#> Icept 10.0000000
#> x1_w 1.9943101
#> x2_w 0.2849014
#> x3_w 0.2849014
#> x4_w 0.2849014
#> x1_b 1.9943101
#> x2_b 0.2849014
#> x3_b 0.2849014
#> x4_b 0.2849014
#> z1_binary 2.0548047
#>
#> $mean_Y
#> y
#> 10
#>
#>
#> $`icc = 0.25`
#> $r2
#> Proportion
#> Variance Within-Cluster Fixed Effects 0.1800000
#> Variance Cross-Level Interaction Effects 0.0000000
#> Variance Random Slopes 0.0000000
#> Within-Cluster Error Variance 0.5700000
#> Variance Between-Cluster Fixed Effects 0.1278739
#> Incremental Variance Level-2 Predictors 0.0380000
#> Between Variance Level-1 Covariates 0.0898739
#> Variance Random Intercepts 0.1221261
#>
#> $phi_b
#> x1_b x2_b x3_b x4_b z1_binary
#> x1_b 0.25 0.05 0.05 0.05 0.05
#> x2_b 0.05 0.25 0.05 0.05 0.05
#> x3_b 0.05 0.05 0.25 0.05 0.05
#> x4_b 0.05 0.05 0.05 0.25 0.05
#> z1_binary 0.05 0.05 0.05 0.05 0.25
#>
#> $phi_p
#> <0 x 0 matrix>
#>
#> $phi_w
#> x1_w x2_w x3_w x4_w
#> x1_w 0.75 0.15 0.15 0.15
#> x2_w 0.15 0.75 0.15 0.15
#> x3_w 0.15 0.15 0.75 0.15
#> x4_w 0.15 0.15 0.15 0.75
#>
#> $mean_Z
#> z1
#> 0.5
#>
#> $mean_X
#> x1 x2 x3 x4
#> 0 0 0 0
#>
#> $var_e
#> Value
#> Res. Var. 14.25
#>
#> $tau
#> Icept
#> Icept 3.053152
#>
#> $gammas
#> Value
#> Icept 10.0000000
#> x1_w 2.1846572
#> x2_w 0.3120939
#> x3_w 0.3120939
#> x4_w 0.3120939
#> x1_b 2.1846572
#> x2_b 0.3120939
#> x3_b 0.3120939
#> x4_b 0.3120939
#> z1_binary 2.0548047
#>
#> $mean_Y
#> y
#> 10
Having specified the target model, you next use the power_analysis() function to conduct the simulations. The function requires four inputs: the model argument specifies the parameter value object (e.g., example3), the replications input specifies the number of artificial data sets, n_between is a vector of level-2 sample sizes, and n_within is a vector of level-1 sample sizes (i.e., the number of observations per cluster). The code block below pairs four level-2 sample size values (\(J =\) 30, 60, 90, and 120) with two level-1 sample sizes (\(n_{j} =\) 15 or 30 observations per cluster), and it requests 2,000 artificial data sets for each combination.
# Set seed for replicable results
set.seed(981723)
# Run Power Analysis
powersim3 <-
power_analysis(
model = example3,
replications = 2000,
n_between = c(30, 60, 90, 120),
n_within = c(15, 30)
)
The package uses lme4 (Bates et al., 2021) for model fitting, and it defaults to an alpha level of .05 for all significance tests. Executing summary(powersim3) returns the tabular summaries of the simulation results shown below.
summary(powersim3)
#>
#> ── Power Analysis Specification ──
#>
#> Replications = 2000
#> N_Within = 15 and 30
#> N_Between = 30, 60, 90, and 120
#>
#> ── lme4 Model Syntax
#>
#> y ~ 1 + cgm(x1) + cgm(x2) + cgm(x3) + cgm(x4) + cgm(z1) + (1 | `_id`)
#>
#> ℹ cwc() = centering within cluster
#> ℹ cgm() = centering with grand mean
#>
#> ── Effect Sizes
#>
#> • WITHIN = 0.18
#> • BETWEEN = 0.038
#> • RANDOM SLOPE = 0
#> • PRODUCT = 0
#>
#> ── Power Analysis Results ──
#>
#> ── global icc = 0.1, n_within = 15, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.27 ± 0.02
#> Fixed: cgm(x3) 0.25 ± 0.02
#> Fixed: cgm(x4) 0.26 ± 0.02
#> Fixed: cgm(z1) 0.98 ± 0.01
#>
#> ── global icc = 0.25, n_within = 15, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.30 ± 0.02
#> Fixed: cgm(x3) 0.31 ± 0.02
#> Fixed: cgm(x4) 0.31 ± 0.02
#> Fixed: cgm(z1) 0.76 ± 0.02
#>
#> ── global icc = 0.1, n_within = 30, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.45 ± 0.02
#> Fixed: cgm(x3) 0.46 ± 0.02
#> Fixed: cgm(x4) 0.48 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 30
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.54 ± 0.02
#> Fixed: cgm(x3) 0.54 ± 0.02
#> Fixed: cgm(x4) 0.53 ± 0.02
#> Fixed: cgm(z1) 0.79 ± 0.02
#>
#> ── global icc = 0.1, n_within = 15, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.46 ± 0.02
#> Fixed: cgm(x3) 0.48 ± 0.02
#> Fixed: cgm(x4) 0.47 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 15, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.54 ± 0.02
#> Fixed: cgm(x3) 0.53 ± 0.02
#> Fixed: cgm(x4) 0.55 ± 0.02
#> Fixed: cgm(z1) 0.98 ± 0.01
#>
#> ── global icc = 0.1, n_within = 30, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.76 ± 0.02
#> Fixed: cgm(x3) 0.75 ± 0.02
#> Fixed: cgm(x4) 0.75 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 60
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.83 ± 0.02
#> Fixed: cgm(x3) 0.83 ± 0.02
#> Fixed: cgm(x4) 0.84 ± 0.02
#> Fixed: cgm(z1) 0.99 ± 0.00
#>
#> ── global icc = 0.1, n_within = 15, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.64 ± 0.02
#> Fixed: cgm(x3) 0.63 ± 0.02
#> Fixed: cgm(x4) 0.63 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 15, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.72 ± 0.02
#> Fixed: cgm(x3) 0.71 ± 0.02
#> Fixed: cgm(x4) 0.71 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.1, n_within = 30, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.90 ± 0.01
#> Fixed: cgm(x3) 0.90 ± 0.01
#> Fixed: cgm(x4) 0.89 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 90
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.95 ± 0.01
#> Fixed: cgm(x3) 0.95 ± 0.01
#> Fixed: cgm(x4) 0.95 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.1, n_within = 15, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.75 ± 0.02
#> Fixed: cgm(x3) 0.75 ± 0.02
#> Fixed: cgm(x4) 0.77 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 15, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.84 ± 0.02
#> Fixed: cgm(x3) 0.83 ± 0.02
#> Fixed: cgm(x4) 0.84 ± 0.02
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.1, n_within = 30, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.96 ± 0.01
#> Fixed: cgm(x3) 0.96 ± 0.01
#> Fixed: cgm(x4) 0.97 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ── global icc = 0.25, n_within = 30, n_between = 120
#> Power
#> Fixed: (Intercept) 1.00 ± 0.00
#> Fixed: cgm(x1) 1.00 ± 0.00
#> Fixed: cgm(x2) 0.98 ± 0.01
#> Fixed: cgm(x3) 0.99 ± 0.01
#> Fixed: cgm(x4) 0.98 ± 0.01
#> Fixed: cgm(z1) 1.00 ± 0.00
#>
#> ℹ Margin of error (MOE) computed as ± 1.96 * standard error