You do a run in which you fix all the parameters to the desired values and request the RESIDUAL output option to get the estimated mean vector and covariance matrix. In this run you can use means and covariance input with dummy values, for example zeros for the means and an identity matrix for the covariance matrix.
We have a couple of pages describing the steps of the Satorra-Saris power calculation in a growth model setting, which we discuss in our courses, and we will shortly post this information on our web site. This includes how to get the power value from the noncentrality parameter (the "chi-square" value).
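That last step, going from the noncentrality parameter to a power value, can be sketched for the common single-parameter (1-df) case. This is a generic illustration, not Mplus code; the noncentrality value 7.85 below is a made-up example:

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def power_from_ncp_df1(ncp, alpha=0.05):
    """Power of a 1-df chi-square test with noncentrality parameter ncp.

    For df = 1 the noncentral chi-square variate is (Z + delta)^2 with
    delta = sqrt(ncp), so the rejection probability reduces to normal
    tail areas. Higher-df cases need a noncentral chi-square CDF
    (e.g., scipy.stats.ncx2).
    """
    z_crit = _N.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    delta = sqrt(ncp)
    return (1 - _N.cdf(z_crit - delta)) + _N.cdf(-z_crit - delta)

# Example: a "chi-square" value of 7.85 from the misspecified run
# corresponds to power of about 0.80 at alpha = .05.
print(round(power_from_ncp_df1(7.85), 2))
```

With ncp = 0 (a true null) the function returns alpha, as it should, which is a quick sanity check on the formula.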
Dear Dr. Muthen: In Muthen & Curran (1996) and Curran & Muthen (1996), parameter values in the growth model were chosen so that the difference between the treatment and control group means at the last time point, scaled by the pooled s.d. at the last time point, represented an effect size of 0.20. Could you please explain how to choose parameter values (e.g., slope mean and variance, correlation between the intercept and slope, etc.) in the first step of the Satorra-Saris power calculation for a given effect size (e.g., 0.20)? Thank you very much for your help.
Jichuan Wang, Ph.D. School of Medicine Wright State University Dayton, OH 45435
bmuthen posted on Tuesday, November 21, 2000 - 3:49 pm
You need to know how to calculate the means and variances of the outcomes given a certain growth model with a certain choice of population parameter values. Given this, the simplest way is to choose parameter values that give the desired effect size by trial and error.
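For a linear growth model the model-implied outcome means and variances follow directly from the growth factor parameters, so the trial-and-error search can be scripted. A minimal sketch; all parameter values below are arbitrary illustrations, not recommendations:

```python
def implied_moments(mu_i, mu_s, var_i, var_s, cov_is, res_var, times):
    """Model-implied outcome mean and variance at each time point for a
    linear growth model y_t = i + s*t + e_t, with intercept/slope means
    mu_i, mu_s, (co)variances var_i, var_s, cov_is, and residual
    variance res_var."""
    means = [mu_i + mu_s * t for t in times]
    variances = [var_i + t**2 * var_s + 2 * t * cov_is + res_var
                 for t in times]
    return means, variances

# Trial and error: adjust the treatment slope mean until the group mean
# difference at the last time point, scaled by the SD there, hits the
# target effect size (0.20 in the question). Values are illustrative.
times = [0, 1, 2, 3]
m_ctrl, v_ctrl = implied_moments(0.0, 0.1, 0.5, 0.1, 0.05, 0.3, times)
m_trt,  v_trt  = implied_moments(0.0, 0.2, 0.5, 0.1, 0.05, 0.3, times)
sd_last = v_ctrl[-1] ** 0.5        # variances are equal across groups here
es = (m_trt[-1] - m_ctrl[-1]) / sd_last
print(round(es, 2))                # about 0.21, close to the 0.20 target
```

Lowering the treatment slope mean slightly and rerunning would bring the effect size onto the target, which is exactly the trial-and-error loop described above.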
Hi there, I am new to Mplus and am confused about setting the start values in a Monte Carlo run for an LTA model. I would like to run a Monte Carlo study for a power analysis and am trying to use Example 8.13 as a guide.
I would like to see what power I have at N = 300, 400, and 500 to test the possibility that x (a continuous variable) is positively correlated with membership in the high-risk drinking classes c2#1 and c1#1. I want to test the x effects as small, medium, and large effects over the different sample sizes. I originally wanted to use the classic Cohen values, r = .1, .3, and .5, to represent small, medium, and large effects, respectively, but realize that this is more like a logistic regression and am thus confused as to what start values I should use to test these different effect size levels.
In Example 8.13, you suggest "c2#1 ON c1#1*.5 x*1;" but I have no idea what x*1 would actually be in terms of an effect size. Can you tell me how best to represent different effect sizes in the Mplus code?
I was also curious why "c2#1 ON x*.2;" appears under the class-specific model "%c1#1%" while it appears differently in the %OVERALL% part (i.e., c2#1 ON c1#1*.5 x*1;).
Effect size has a different meaning when the dependent variable is categorical and is not as easily settled. You are interested in an effect of x on c in the multinomial logistic regression of c on x. So instead you have to ask yourself how much the probability of being in a certain c class changes as a function of x changing, say by 1 SD. You have to decide what a small/medium/large probability change is. Note, however, that the probability change is different for a 1 SD x change at low, medium, and high x values. You can use UG Chapter 13 formulas to compute the probabilities.
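The probability calculation described here is just the multinomial logistic formula, so a quick script shows what a slope like x*1 implies in probability terms. The intercept and slope values below are arbitrary illustrations, not the Example 8.13 values:

```python
from math import exp

def class_probs(x, intercepts, slopes):
    """Multinomial logistic class probabilities P(c = k | x).
    The last class is the reference, with its logit fixed at 0."""
    logits = [a + b * x for a, b in zip(intercepts, slopes)]
    logits.append(0.0)                      # reference class
    denom = sum(exp(z) for z in logits)
    return [exp(z) / denom for z in logits]

# Two-class illustration with c#1 intercept -1 and slope 1: how much
# does P(c = 1) change when x moves from 0 to 1 (a 1 SD change if x
# is standardized)?
p_low  = class_probs(0.0, intercepts=[-1.0], slopes=[1.0])[0]
p_high = class_probs(1.0, intercepts=[-1.0], slopes=[1.0])[0]
print(round(p_low, 3), round(p_high, 3), round(p_high - p_low, 3))
```

Repeating the comparison at different baseline x values (say, x from 1 to 2 instead of 0 to 1) shows the point in the answer above: the probability change from a 1 SD shift in x is not constant across the x range.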
You may also take a look at logistic regression books to see if any counterpart to effect size appears, but I doubt it.
Regarding your last question, the overall c2#1 ON x statement applies to the last class, whereas the c1#1-specific statement refers to the first class. This is how the broken line in the model diagram is represented.
Find the example in Chapter 7 that is like your model. Then use the Monte Carlo counterpart of that example as a starting point. If you run into problems, send your output and license number to firstname.lastname@example.org.
The column labeled % Sig Coeff gives the proportion of replications for which the null hypothesis that a parameter is equal to zero is rejected at the .05 level (two-tailed test with a critical value of 1.96). The statistical test is the ratio of the parameter estimate to its standard error, an approximately normally distributed quantity (z-score) in large samples. For parameters with population values different from zero, this value is an estimate of power with respect to a single parameter, that is, the probability of rejecting the null hypothesis when it is false. For parameters with population values equal to zero, this value is an estimate of Type I error, that is, the probability of rejecting the null hypothesis when it is true.
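The % Sig Coeff calculation can be mimicked directly: generate the estimate/SE ratio over replications and count rejections. A toy sketch, where the z-ratio is simulated as approximately normal and the true-z values are invented for illustration:

```python
import random

random.seed(12345)

def sig_coeff_proportion(true_z, n_reps=10_000, crit=1.96):
    """Proportion of replications in which |estimate/SE| exceeds the
    critical value, with the z-ratio simulated as N(true_z, 1)."""
    rejections = sum(
        abs(random.gauss(true_z, 1.0)) > crit for _ in range(n_reps)
    )
    return rejections / n_reps

# true_z = 0: population value is zero, so this estimates the
# Type I error rate (about .05).
print(round(sig_coeff_proportion(0.0), 2))
# true_z = 2.8: nonzero population value, so this estimates
# power (about .80).
print(round(sig_coeff_proportion(2.8), 2))
```

This mirrors how the column is read: for parameters with nonzero population values the proportion is a power estimate, and for zero-valued parameters it should hover near the nominal .05 level.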
So when I get zero for all parameters, does that mean my null hypothesis is true but I reject it? But I already got p-values < 0.05 for some of these parameters! Does that also mean the model does not have enough power with the current sample size?