I am working on a two-level SEM with continuous observed variables loading onto a latent variable specified at Levels 1 and 2, and with the latent variable regressed on numerous binary (e.g., sex) and continuous (e.g., income) covariates at both levels.
I have several questions regarding the 95% CI and effect size calculations.
1. Does Mplus offer an option to request confidence intervals for the standardized path coefficients?
2. Can effect sizes be calculated for individual unstandardized estimates and/or the R2? If “yes”, can you point me to a formula or reference that would illustrate the appropriate steps to calculate these?
1. The CINTERVAL option applies to raw coefficients. You can use MODEL CONSTRAINT to define new parameters equal to the standardized coefficients; the CINTERVAL option will then give confidence intervals for them.
2. If you ask for STANDARDIZED in the OUTPUT command, you will get R-square. You cannot get effect sizes automatically. See any standard statistics text for how to compute these.
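As an illustration of the MODEL CONSTRAINT approach, here is a minimal single-predictor sketch. The variable names (y, x) and parameter labels (b1, vx, vres) are hypothetical, and the standardization formula shown only holds for the one-predictor case:

```
MODEL:
  y ON x (b1);     ! raw regression coefficient
  y (vres);        ! residual variance of y
  x (vx);          ! variance of x (mentioning x brings it into the model)
MODEL CONSTRAINT:
  NEW(stdb1);      ! standardized coefficient as a new parameter
  stdb1 = b1 * SQRT(vx) / SQRT(b1**2 * vx + vres);
OUTPUT:
  CINTERVAL;       ! confidence intervals now also cover stdb1
```

With more predictors, the implied variance of y in the denominator would need the full set of slope and covariance terms.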
I found a formula that explains how to calculate the effect size of a level 2 variable for multilevel analyses:
delta = (2 x B x SD of predictor) / residual SD at student level
However, I was wondering how you would calculate this from the Mplus output when the effect of interest is a cross-level interaction. The B you can get from the regression of your random slope on your class-level variable (the B for the interaction effect), but how do you get the SD of your predictor when there is no actual predictor? That is, you create a cross-level interaction by including a random slope and regressing it on your class-level variable, but you never actually create a new product predictor. Or am I thinking about this the wrong way?
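For a level-2 main effect, the quoted formula can at least be computed directly in MODEL CONSTRAINT, which also gets you a confidence interval for it. This is only a sketch with hypothetical names (y, w, b2, vwith), and the sample SD of the level-2 predictor is plugged in as a constant (here 0.8); how to choose the SD term for a cross-level interaction is exactly the open question above:

```
MODEL:
  %WITHIN%
  y (vwith);          ! student-level residual variance
  %BETWEEN%
  y ON w (b2);        ! level-2 predictor effect
MODEL CONSTRAINT:
  NEW(delta);
  delta = 2 * b2 * 0.8 / SQRT(vwith);   ! 0.8 = sample SD of w; plug in yours
```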
I am conducting a Monte Carlo study for power analysis for a 2-level multigroup model (2 treatment groups) where I have partial nesting in one of the treatment arms. To accommodate the partial nesting, I am fixing the between-level residual variance for the control group to 0, as recommended in Sterba et al. (2014). This is throwing me off a little, though, when it comes to choosing appropriate estimates for my hypothesized medium treatment effect of d = .5; I am sure my current numbers imply an implausibly high effect size. Any thoughts on how to responsibly estimate the treatment group's between residual variance and intercept, as well as the control group's between intercept? I have added the last snippet of the model so you can see the treatment effect I am trying to estimate.
--snip--
MODEL:
%WITHIN%
y*1 (1);
y ON x1*.1 (2);
y ON x2*.1 (3);
%BETWEEN%
y*.25;
[y*.5] (mu_tx);
MODEL g1:
%WITHIN%
y*1 (1);
y ON x1*.1 (2);
y ON x2*.1 (3);
Sounds like you have a cluster-randomized trial. Wouldn't you compute the effect size by taking the denominator SD from the total y variance, that is, B + W? And would you use the control group variance for that?
Thank you for the quick reply. Honestly, I just don't know how to calculate the effect size for this case. It goes beyond what I have done before. I was wondering if you might know the formula? :-) That way, I can figure out what the effect size would be for the current numbers I have in there and the primary relationship I am interested in (mu_tx-mu_c) and recalibrate as necessary.
Just to be clear, it is a partially nested design--not a cluster-randomized study. In my study, there will be individual randomization but the treatment involves group therapy whereas the control involves individual services as usual. That means there are clusters of group therapy in the treatment arm, but "clusters of one" in the control arm.
You can see this reflected in the code: that is why I am fixing the between-level residual variance to 0 for the control group only. The analysis approach for this design was recently developed by Sterba et al. (2014).
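For what it's worth, the B + W suggestion above can be written out as a MODEL CONSTRAINT check. This is only a sketch: the labels mu_c, vb, and vw for the control between-level intercept and the two variance parameters are hypothetical additions to the snippet (the snippet's numeric labels would need to be replaced with names to use them here):

```
MODEL CONSTRAINT:
  NEW(d);
  ! Cohen's d with the denominator SD taken from total variance B + W
  d = (mu_tx - mu_c) / SQRT(vb + vw);
```

If the control between intercept were 0, the snippet's starting values (between variance .25, within residual 1, mu_tx = .5) would give d = .5 / SQRT(1.25), roughly .45, ignoring the within variance contributed by x1 and x2; that gives a baseline for recalibrating toward d = .5.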