Yao Wen posted on Sunday, March 29, 2015 - 8:45 pm
I generated two-level SEM data with random factor loadings in SAS and estimated the model in Mplus. All estimates were good except the variance of the random factor loadings: the estimated variances were near 0 (around .001), whereas the true values were around 0.13.
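For reference, a minimal Python sketch of this kind of data-generating process (the original data were generated in SAS; all parameter values here are illustrative, with the random-loading variance set to 0.13 to match the true value above):

```python
import numpy as np

rng = np.random.default_rng(7)
n_clusters, n_per = 100, 20               # illustrative cluster structure
lam = np.array([1.0, 0.8, 0.9, 0.7])      # average within-level loadings (illustrative)
lam_b = np.array([1.0, 0.8, 0.9, 0.7])    # between-level loadings (illustrative)

data = []
for j in range(n_clusters):
    # cluster-specific loadings: variance 0.13 around the average loadings
    lam_j = lam + rng.normal(0, np.sqrt(0.13), size=4)
    eta_b = rng.normal()                   # between factor score (lit_b, variance 1)
    eta_w = rng.normal(size=n_per)         # within factor scores (lit_w, variance 1)
    eps = rng.normal(0, 0.5, size=(n_per, 4))   # within residuals
    data.append(lam_b * eta_b + np.outer(eta_w, lam_j) + eps)

data = np.concatenate(data)   # 2000 rows x 4 indicators
```

With many clusters, the sample variance of the simulated cluster-specific loadings will be close to 0.13, which is the quantity the Mplus run underestimates.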
I tried to add priors to the variances but got errors no matter what I tried. I first tried a beta distribution, which failed (not recognized by Mplus). Then I switched to the inverse-Wishart distribution, but Mplus reported "THE PRIOR SPECIFICATION FOR PARAMETER 21 IS NOT AVAILABLE."
Any suggestions are welcome! Thanks a lot!
Here is the Mplus model I used to estimate:
VARIABLE: NAMES ARE id y1-y4;
    USEVARIABLES ARE y1-y4;
    CLUSTER = id;
ANALYSIS: TYPE = TWOLEVEL RANDOM;
    ESTIMATOR = BAYES;
    PROCESSORS = 2;
    BITERATIONS = (10000);
MODEL:
    %WITHIN%
    s1-s4 | lit_w BY y1-y4*;
    lit_w@1; [lit_w@0];
    %BETWEEN%
    s1-s4 (Var1-Var4);
    y1-y4; [y1-y4];
    lit_b BY y1-y4*;
    lit_b@1; [lit_b@0];
MODEL PRIORS: Var1-Var4 ~ IW(1,5);
OUTPUT: TECH1 TECH8;
Yao Wen posted on Sunday, March 29, 2015 - 8:49 pm
I'm familiar with running Bayesian models in WinBUGS, but I couldn't find a list of the distribution names that Mplus uses for priors. I'm not sure whether the way I set up the priors is correct. Any suggestions or comments would be greatly appreciated!
The multivariate version of the inverse gamma is the inverse Wishart, and it would apply if the random loadings were correlated. By default, with the setup above, the random loadings are uncorrelated, which is why you need to replace IW with IG.
The list of available priors is on page 698 of the User's Guide. For these parameters the available priors are:
    Inverse Gamma
    Gamma
    Uniform
    Lognormal
    Normal
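Following the advice above, the IW prior in the original input would be replaced with univariate inverse-gamma priors on the loading variances, for example (the hyperparameter values here are illustrative, not a recommendation):

```
MODEL PRIORS: Var1-Var4 ~ IG(1, 5);
```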
Unless the number of clusters is quite small, the prior is not the issue. Try generating the data within Mplus to rule out any data-related issues. If you still get poor results, send the Mplus Monte Carlo run to firstname.lastname@example.org. Take a look at this example in the Mplus installation directory:
\Program Files\Mplus\Mplus Examples\Monte Carlo Counterparts\mcex9.19.inp
Yao Wen posted on Monday, March 30, 2015 - 7:12 pm
Thank you so much, Tihomir. I will dig more before I bother you again.
I have been experimenting with the random factor loadings model with a single indicator variable given in the Mplus FAQs:
MODEL:
    %WITHIN%
    sigma | f BY y;
    f; y@0;
    %BETWEEN%
    [sigma@1]; sigma; y;
    sigma WITH y;
    sigma y ON x;
I noticed that mixing appears to be better, and convergence faster, when the variables in the model are rescaled to small values. In general, mixing seems better when the variance of f is fairly small, preferably below 1.0 (as long as it is not close to 0). Assuming I am not observing this by chance, can you tell me why that would be the case?
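The single-indicator data-generating process above can be sketched in Python to experiment with how rescaling affects the variance components (all parameter values are hypothetical, and the sigma WITH y covariance is omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 200, 30
x = rng.normal(size=n_clusters)   # cluster-level covariate

# Between level: sigma and the intercept of y both regressed on x.
# [sigma@1] fixes the mean of sigma at 1; slopes and residual SDs are illustrative.
sigma_j = 1.0 + 0.3 * x + rng.normal(0, 0.3, size=n_clusters)
y_bar_j = 0.5 * x + rng.normal(0, 0.5, size=n_clusters)

# Within level: y = sigma_j * f with no within-level residual (y@0).
rows = []
for j in range(n_clusters):
    f = rng.normal(size=n_per)    # within factor, variance 1 here
    rows.append(y_bar_j[j] + sigma_j[j] * f)

y = np.concatenate(rows)          # 6000 observations in 200 clusters
```

Dividing `y` by a constant before fitting mimics the rescaling described above: it shrinks the factor variance, which is the setting the post reports as mixing better.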