Message/Author
I am working on a 2-level SEM with continuous observed variables loading onto a latent variable specified at Levels 1 and 2, and with numerous binary (e.g., sex) and continuous (e.g., income) covariates serving as predictors of the latent variable at both levels. I have several questions regarding the 95% CI and effect size calculations. 1. Does Mplus offer an option to request confidence intervals for the standardized path coefficients? 2. Can effect sizes be calculated for individual unstandardized estimates and/or the R-square? If yes, can you point me to a formula or reference that would illustrate the appropriate steps to calculate these?
|
|
1. The CINTERVAL option is for raw coefficients. You can use MODEL CONSTRAINT to create new parameters that are the standardized coefficients, and then the CINTERVAL option will give confidence intervals for them.
2. If you ask for STANDARDIZED in the OUTPUT command, you will get R-square. You cannot get effect sizes automatically. See any standard statistics text for how to compute these.
|
|
Hello, I found a formula that explains how to calculate the effect size of a level 2 variable in multilevel analyses:

delta = 2 x B x SD(predictor) / residual variance at the student level

However, I was wondering how you would calculate this from the Mplus output when trying to compute the effect size of a cross-level interaction. The B you can get from the regression of your random slope on your class-level variable (the B for the interaction effect), but how do you get the SD of your predictor when there is no actual predictor? (That is, you create a cross-level interaction by including a random slope and regressing it on your class-level variable, but you do not actually create a new predictor.) Or am I thinking in the wrong way? Thank you for your answer!
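For what it's worth, once the pieces have been read off the output, the quoted formula is simple arithmetic. A minimal sketch (the function name and all input values are hypothetical, and the denominator is taken literally as the residual variance, exactly as the formula is quoted above):

```python
def delta_effect_size(b, sd_predictor, resid_var_student):
    # delta = 2 * B * SD(predictor) / residual variance at student level,
    # as quoted in the post above (all numbers here are made up)
    return 2 * b * sd_predictor / resid_var_student

# Hypothetical values: slope B = .3, predictor SD = .5, student-level residual variance = 1
print(delta_effect_size(b=0.3, sd_predictor=0.5, resid_var_student=1.0))  # prints 0.3
```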
|
|
I am not familiar with that formula. You can get the variance of a predictor from TYPE=TWOLEVEL BASIC; You may want to ask your question on Multilevel Net. |
|
|
Hello, I am conducting a Monte Carlo study for a power analysis of a 2-level multigroup (2 treatment groups) model where I have partial nesting in one of the treatment arms. To accommodate the partial nesting, I am fixing the between-level residual variance for the control group to 0, as recommended in Sterba et al. (2014). This is throwing me off a little, though, in putting in appropriate estimates for my hypothesized medium treatment effect of d = .5. I am sure my current numbers make for an implausibly high effect size. Any thoughts on how to responsibly estimate the treatment group's between-level residual variance and intercept, as well as the control group's between-level intercept? I just added the last snippet of the model so you can see the treatment effect I am trying to estimate. Thanks! Susan

--snip--
MODEL:
%WITHIN%
y*1 (1);
y on x1*.1 (2);
y on x2*.1 (3);
%BETWEEN%
y*.25;
[y*.5] (mu_tx);

MODEL g1:
%WITHIN%
y*1 (1);
y on x1*.1 (2);
y on x2*.1 (3);
%BETWEEN%
y@0;
[y*0] (mu_c);

model constraint:
new txeff*.5;
txeff = mu_tx - mu_c;

output: tech9;
|
|
Sounds like you have a cluster-randomized trial. Wouldn't you compute the effect size by getting the denominator SD from the total y variance, that is, B + W? And would you use the control group variance for that?
|
|
Thank you for the quick reply. Honestly, I just don't know how to calculate the effect size for this case; it goes beyond what I have done before. I was wondering if you might know the formula? That way, I can figure out what the effect size would be for the current numbers I have in there for the primary relationship I am interested in (mu_tx - mu_c) and recalibrate as necessary.

Just to be clear, it is a partially nested design, not a cluster-randomized study. In my study, there will be individual randomization, but the treatment involves group therapy whereas the control involves individual services as usual. That means there are clusters of group therapy in the treatment arm, but "clusters of one" in the control arm. You can see this reflected in the code; that is why I am fixing the between-level variance at 0 only for the control group. The analysis approach to address this is a really new one that Sterba et al. (2014) recently developed. Thank you for any advice you might have! Susan
|
|
Seems like an effect size could use (mu_tx-mu_c)/SD where SD is the control group variance of the outcome. But others may have a better view of that. |
|
|
Thank you very much, Dr. Muthen! Are you saying that I would treat the "residual variance" in Mplus as if it were the "SD" in the Cohen's d equation? So instead of (mu_tx - mu_c)/sqrt(variance), I would do (mu_tx - mu_c)/variance? Thank you for the clarification! Susan
|
|
No, I meant SD = sqrt(var), where var is the (total, not residual) variance of the outcome. |
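To make that suggestion concrete, here is a minimal numerical sketch (Python; the function name is mine), treating the within and between start values from the Monte Carlo snippet earlier in the thread as the two variance components of the outcome:

```python
import math

def cohens_d(mu_tx, mu_c, total_var):
    # d = (mu_tx - mu_c) / sqrt(var), where var is the total
    # (between + within) variance of the outcome, per the reply above
    return (mu_tx - mu_c) / math.sqrt(total_var)

# Start values from the posted snippet:
# within variance = 1, between variance = .25, mu_tx = .5, mu_c = 0
d = cohens_d(0.5, 0.0, 1.0 + 0.25)
print(round(d, 3))  # prints 0.447
```

With those start values the implied effect is about .45 rather than .5, which illustrates how the variance components can be recalibrated against the hypothesized d.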
|
|
Thank you! Susan |
|
|
Dear professors, I'm running two-level models and I'm interested in the effect of a dichotomous level 2 predictor on level 1 variables. To calculate the effect size I use the formula of Tymms (2004), which is the beta divided by the square root of the within-group variance. I want to report the standardized model results (STDYX standardization), so I took the estimates for the betas from the standardized model results to calculate the effect size. Now I wonder which within-group variances I have to use: the estimates from the standardized model results or from the (unstandardized) model results? Thank you so much in advance! Best regards, Alena
|
|
To not confuse matters, why not use the raw beta and divide by the square root of the within-group variance from the unstandardized model results?
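As a sanity check on the Tymms (2004) formula discussed above, a minimal sketch (Python; the function name and all numbers are hypothetical):

```python
import math

def tymms_effect_size(raw_beta, within_var):
    # effect size = raw beta / sqrt(within-group variance),
    # per the Tymms (2004) formula quoted in the question above
    return raw_beta / math.sqrt(within_var)

# Hypothetical values: raw beta = .4, within-group variance = .64 (SD = .8)
print(tymms_effect_size(raw_beta=0.4, within_var=0.64))  # prints 0.5
```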
|