Posthoc power analysis
 jeon small posted on Friday, March 01, 2013 - 8:05 pm
How do you perform a posthoc power analysis? I have a sample of 438 girls and boys. We used 11 indicator variables to form four latent variables. The fit of the structural model was satisfactory: CFI = .954, RMSEA = .05, chi-square = 95.85, df = 40, p < .0001.
The standardized regression weights were significant and ranged in size from .65 to .03.
 Linda K. Muthen posted on Saturday, March 02, 2013 - 2:30 pm
See Example 12.7 where the values from a real data analysis are saved and used as population parameter values for a Monte Carlo simulation study that can examine power. This method has its critics. You may want to explore this issue on SEMNET.
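As a rough illustration, a minimal sketch of the two-step setup in Example 12.7 is given below, assuming a structure like the one described above (11 indicators, four latent variables, N = 438); the file names, factor structure, and structural paths are placeholders, not the poster's actual model.

Step 1 - real-data analysis with the estimates saved:

TITLE: Step 1: real-data run, estimates saved;
DATA: FILE = realdata.dat;
VARIABLE: NAMES = y1-y11;
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f3 BY y7-y9;
  f4 BY y10-y11;
  f4 ON f1 f2 f3;
SAVEDATA: ESTIMATES = estimates.dat;

Step 2 - Monte Carlo run using the saved estimates as population values:

TITLE: Step 2: Monte Carlo power study;
MONTECARLO:
  NAMES = y1-y11;
  NOBSERVATIONS = 438;
  NREPS = 500;
  SEED = 53487;
  POPULATION = estimates.dat;
  COVERAGE = estimates.dat;
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f3 BY y7-y9;
  f4 BY y10-y11;
  f4 ON f1 f2 f3;

The "% Sig Coeff" column of the Monte Carlo output then gives, for each parameter, the proportion of replications in which it was significant, i.e., the estimated power at this sample size.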
 jeon small posted on Monday, March 04, 2013 - 1:38 pm
I ran this model in Amos using maximum-likelihood estimation (for the missing data); however, I cannot get the model to converge in Mplus. What am I doing wrong? My output is below.


STRUCTURAL MODEL OF ADOLESCENT ALCOHOL USE AND PERCEPTION OF IPV

SUMMARY OF ANALYSIS

Number of groups 1
Number of observations 553

Number of dependent variables 10
Number of independent variables 0
Number of continuous latent variables 4


Estimator ML
Information matrix OBSERVED
Maximum number of iterations 1000
Convergence criterion 0.500D-04
Maximum number of steepest descent iterations 20
Maximum number of iterations for H1 2000
Convergence criterion for H1 0.100D-03

Input data file(s)
h:/M_russell/mplus10_08_12v1.csv

Input data format FREE
 Linda K. Muthen posted on Monday, March 04, 2013 - 3:37 pm
Please send the outputs and your license number to support@statmodel.com.
 Christoph Schaefer posted on Monday, January 15, 2018 - 6:07 am
Dear Professors Muthen,

I would like to estimate the power to detect that a path coefficient is different from zero and am trying to use
https://www.statmodel.com/power.shtml
as a procedure. Alas it hasn't worked so far.
A simplified version of my model looks like the following, with three IVs and three DVs; two of these are latent and four are manifest:

IV1 by
Indicator1
Indicator2
Indicator3;

DV1 by
Indicator4
Indicator5
Indicator6;

DV1 on
IV1
IV2
IV3;

DV2 on
IV1
IV2
IV3;


When I follow step 1 of the procedure on
https://www.statmodel.com/power.shtml, fixing values for the factor loadings and the path coefficients, the resulting covariance matrix of the residuals contains zeros, which is not accepted by Mplus in step 2.
Do I have to fix more in step 1 than the factor loadings and the path coefficients?

(In step 1, I have used the simplest matrix possible, in the form of:
0 0 0 0 0 0 0 0
1
0 1
0 0 1
0 0 0 1
0 0 0 0 1
0 0 0 0 0 1
0 0 0 0 0 0 1
0 0 0 0 0 0 0 1
)
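For reference, a fully specified Step 1 input along the lines sketched above might look like the following, assuming a dummy mean and covariance matrix like the one just shown (with one row and column per observed variable in the model) is stored in a file, and assuming values are supplied not only for the loadings and paths but also for the factor and residual variances. All variable names and numbers are placeholders, not a diagnosis of the problem reported here:

TITLE: Step 1: fixed population values, model-implied moments requested;
DATA: FILE = dummy.dat;
  TYPE = MEANS COVARIANCE;
  NOBSERVATIONS = 500;
VARIABLE: NAMES = ind1-ind6 iv2 iv3 dv2;
MODEL:
  iv1 BY ind1@.7 ind2@.7 ind3@.7;
  dv1 BY ind4@.7 ind5@.7 ind6@.7;
  dv1 ON iv1@.3 iv2@.2 iv3@.2;
  dv2 ON iv1@.3 iv2@.2 iv3@.2;
  iv1@1;            ! factor variance
  dv1@.6; dv2@.6;   ! residual variances of the DVs
  ind1-ind6@.51;    ! indicator residual variances
OUTPUT: RESIDUAL;   ! prints the model-estimated means and covariances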
 Bengt O. Muthen posted on Monday, January 15, 2018 - 7:57 am
Please send the output from Step 1 to Support along with your license number.
 Lara Ditrich posted on Monday, February 10, 2020 - 12:03 am
Dear Professors Muthen,

I have a question concerning the interpretation of a post-hoc power analysis following the procedure described in your 2002 paper.
Suppose I have a population estimate for a specific path of 0.345, and the Monte Carlo simulation gives a standard error average of 0.10 and a "% Sig Coeff" of 0.950.
Can I divide the estimate by the standard error average to obtain a z-value, transform this into d based on my sample size (say, N = 200), and finally conclude that my study had 95% power to detect a medium-sized path?
Or how would the results of this simulation be translated into "conventional post-hoc power analysis language"?
 Bengt O. Muthen posted on Monday, February 10, 2020 - 5:02 pm
I don't know what "d" is, but in general it is better to express the quantity you are interested in under Model Constraint as a New parameter and get the power for it the regular way.
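A minimal sketch of what this could look like in a Monte Carlo run, assuming a single predictor x and outcome y; the values and sample size are placeholders, and the New parameter here is simply the path itself (it could equally be an indirect effect or any other function of labeled parameters):

MONTECARLO:
  NAMES = y x;
  NOBSERVATIONS = 200;
  NREPS = 1000;
MODEL POPULATION:
  x@1;
  y ON x*.345;
  y*.881;              ! residual variance chosen so that var(y) = 1
MODEL:
  y ON x*.345 (b);
  y*.881;
MODEL CONSTRAINT:
  NEW(effect*.345);
  effect = b;          ! the quantity of interest, expressed as a New parameter

The power for the quantity is then read from the "% Sig Coeff" entry reported for EFFECT.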
 Lara Ditrich posted on Tuesday, February 11, 2020 - 12:11 am
Thank you for your quick reply and sorry for not being clear enough in my previous post. With "d" I meant Cohen's d as an effect size estimate.

Just to make sure I understand you correctly: I am using the parameters from my model as population estimates for the Monte Carlo simulation. Your suggestion is to define a new parameter of a certain size under "model constraint" (e.g., y on x@some value) that is not part of the original model and look at the "%Sig Coeff" to determine how large the power was to detect this effect, right?

If this is right, I have two follow-up questions:
Should I define this parameter only in the "Model"-part of the MC simulation or also in the "Model Population" part?
And how do I know whether this parameter is "small", "medium", or "large" in size in terms of conventional rules of thumb?
 Bengt O. Muthen posted on Tuesday, February 11, 2020 - 5:17 pm
Cohen's d is (m1-m2)/sd, where the m's are means for two different conditions and sd is the relevant standard deviation. You can express d that way in Model Constraint and thereby get the power for the d. I don't know, however, what your two conditions are.
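For instance, in a hypothetical two-group setup where the group means of y are labeled and a common variance is imposed, d could be expressed along these lines (all names are placeholders); in a Monte Carlo run, the "% Sig Coeff" entry for d would then give its power:

VARIABLE: GROUPING = g (1 = g1 2 = g2);
MODEL:
  [y] (m1);
  y (v);               ! same label in both groups: equal variances
MODEL g2:
  [y] (m2);
MODEL CONSTRAINT:
  NEW(d);
  d = (m1 - m2)/SQRT(v);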
 Lara Ditrich posted on Wednesday, February 12, 2020 - 12:26 am
Thank you again for your helpful reply. It made me realize that expressing the effects as ds does not make much sense in my case, as I don't have a condition variable in my path model.

I assume it would be more appropriate to use r as an effect size in my case and then to include for example

Model constraint:
Y WITH X@.3

into the "Model" part of my MC simulation to check how high the power was to detect a medium sized relation. Is that correct?
 Bengt O. Muthen posted on Wednesday, February 12, 2020 - 4:54 pm
I would simply choose parameter values that make the variances of Y and X one. Then Y ON X will be in a standardized metric where I think the Cohen standards for small, medium, and large effects would be relevant.
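In a Monte Carlo sketch like the one earlier in this thread, this amounts to choosing population values so that both variances equal one, for example (placeholder values, repeated under MODEL as well):

MODEL POPULATION:
  x@1;                 ! var(x) = 1
  y ON x*.3;           ! a "medium" standardized slope
  y*.91;               ! residual variance = 1 - .3^2, so var(y) = 1

The "% Sig Coeff" entry for Y ON X then estimates the power to detect a standardized effect of .3 at the chosen sample size.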
 Lara Ditrich posted on Monday, February 17, 2020 - 2:29 am
If I understand your reply correctly, this means that I should not use the parameter estimates from my model as input for the MC simulation, but should generate a mean vector and covariance matrix that ensures the variances of my variables are 1 as the input for Step 2. Is that correct?

P.S.: Just to be sure - does your reply mean that I can interpret the STDYX estimates in terms of Cohen's standards for d?
 Lara Ditrich posted on Tuesday, February 18, 2020 - 1:46 am
Sorry for posting again. I just wanted to check back with you whether a solution I came up with is in line with what you suggest:
I run Step 1 with z-standardized variables (which makes all their variances 1) and then feed the estimates into Step 2. I then interpret the column "%Sig Coeff" as my study's power to detect an effect of the estimate's size (which represents a beta-weight).
 Bengt O. Muthen posted on Tuesday, February 18, 2020 - 5:06 pm
Or, you could use the standardized solution and use those estimates for the simulation.