
Leanne Magee posted on Wednesday, October 06, 2004  1:31 pm



I am attempting to conduct confirmatory factor analyses using AMOS software on a data set collected with a 5-point scale in which there is neither univariate nor multivariate normality. Realizing AMOS is not sufficient for these analyses, we considered Mplus. However, my sample size is too small for the weighted least squares (WLS) categorical methods in Mplus, and the methods for continuous data are inappropriate given the level of measurement of the item responses. We have considered fitting the model using polychoric correlations and unweighted least squares (ULS) in Mplus, because ULS might do better with a small sample than the otherwise preferable WLS methods. What would you suggest we do? 


I don't know how small your sample is but the WLSMV estimator has been shown to work well in small samples for some models. You can request the following reference from burnett@gseis.ucla.edu: Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Accepted for publication in Psychometrika. (#75) 
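For reference, a minimal Mplus input sketch for a CFA with ordinal items and the WLSMV estimator — the data file name, item names, and factor structure below are hypothetical placeholders, not taken from the poster's study:

```
TITLE:    CFA with 5-point ordinal items;
DATA:     FILE = items.dat;       ! hypothetical file name
VARIABLE: NAMES = u1-u10;
          CATEGORICAL = u1-u10;   ! triggers polychoric-based estimation
MODEL:    f1 BY u1-u5;            ! hypothetical factor structure
          f2 BY u6-u10;
ANALYSIS: ESTIMATOR = WLSMV;      ! robust weighted least squares
```

Putting the items on the CATEGORICAL list is what tells Mplus to treat them as ordinal; the same setup applies with ESTIMATOR = ULS for the very-small-sample case discussed below.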

Leanne Magee posted on Thursday, October 07, 2004  10:00 am



Thank you for your prompt response. As it turns out, I am conducting the CFA in three samples: one has 110 participants, another 55, and the third 31. I have requested the article by contacting the given email address, but wanted to know if you had any opinion regarding the actual sizes of my samples before I am able to read the article. Thank you! 


With these very small samples, ULS is most likely the best approach. 


When using ordinal items in CFA models (samples >= 250), it seems that a best practice would be to use the raw items and the WLSMV estimation procedure. However, I have seen some investigators use a polychoric correlation matrix as the data input and the ML estimation procedure. While I assume the two methods should produce very similar results, shouldn't the former approach produce more precise model results? Any references on this topic would be appreciated. 


If you use maximum likelihood with a polychoric correlation matrix, you will obtain consistent parameter estimates, but the standard errors and chi-square will not be correct. It is also often the case that polychoric correlation matrices are not positive definite. 


Thank you Linda. And so, you'd recommend using raw items as input with WLSMV as a better approach than ML? 


Yes. Or maximum likelihood with raw data. Mplus has both estimators for categorical outcomes. 
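For concreteness, both options are selected through the ANALYSIS command. A minimal sketch, with hypothetical item names u1-u8:

```
VARIABLE: CATEGORICAL = u1-u8;   ! declare items as ordinal
ANALYSIS: ESTIMATOR = WLSMV;     ! limited-information WLS on polychorics
! Alternatively, full-information maximum likelihood with raw data:
! ANALYSIS: ESTIMATOR = MLR;     ! uses numerical integration
```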


I am conducting an EFA with 10 categorical indicators (some binary, some with 5 categories) on a sample of 1,085. The first model I ran used the ULS estimator, and I obtained a 2-factor solution that seemed quite interpretable and made sense in terms of previous work. After doing some more reading, I discovered that WLSMV was considered to be a better estimator. When I ran the analysis using WLSMV, I obtained a different solution, and one that is less interpretable/useful. I still obtained a 2-factor solution; it's just that I have more items loading on both factors and, overall, a less clear picture of how the items hang together. Am I justified in using ULS? Why would the solutions be so different from one another? 


I don't think the two analyses should be that different. Please send the two outputs and your license number to support@statmodel.com. 

Lim Jie Xin posted on Thursday, July 17, 2014  3:19 am



Dear Muthen, Referring to your previous post (dated May 04, 2007) regarding FIML and polychoric correlation, I am interested in the nonlinear CFA (e.g., Example 5.7 in the manual) with categorical data. I understand that LMS uses FIML. Does declaring the data as categorical produce inaccurate SEs as well? 


No. 


I have two latent constructs, bullying and victimization, each composed of four binary indicators (coded 0,1) that were measured at 2 time points with a sample of over 700. I ran a CFA for each construct at each time point as well as factorial invariance across time for each construct. However, my model fit statistics in many cases appear too good to be true (e.g., CFI 1.000 and RMSEA 0 or very close to 0). I used WLSMV as the estimator because of the categorical nature of the indicators. I just read that a polychoric correlation matrix should be included as the data input. However, I am not sure what this is, how to include it (syntax), or how it may influence the models. Any advice or suggestions would be greatly appreciated! Brett 


If you have ordinal variables, Mplus analyzes a polychoric correlation matrix. You do not need to provide this. Mplus uses the raw data to compute it. 
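To make the idea concrete, here is an illustrative two-step maximum-likelihood estimate of a single polychoric correlation in Python (numpy/scipy): thresholds are set from the marginal proportions, and the latent correlation is then chosen to maximize the likelihood of the observed contingency table under a bivariate normal. This is a sketch of the quantity Mplus computes internally from raw data, not Mplus's actual algorithm; the simulated data at the bottom are purely for demonstration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal, norm

def polychoric(x, y):
    """Two-step ML polychoric correlation of two ordinal integer-coded vectors."""
    def thresholds(v):
        # Step 1: thresholds from marginal cumulative proportions
        # (+-8 stands in for +-infinity on the latent normal scale).
        cum = np.cumsum([np.mean(v == c) for c in np.unique(v)])[:-1]
        return np.concatenate(([-8.0], norm.ppf(cum), [8.0]))

    tx, ty = thresholds(x), thresholds(y)
    xs, ys = np.unique(x), np.unique(y)
    counts = np.array([[np.sum((x == a) & (y == b)) for b in ys] for a in xs])

    def negloglik(rho):
        # Step 2: cell probabilities under a bivariate normal with correlation rho.
        F = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]]).cdf
        ll = 0.0
        for i in range(len(xs)):
            for j in range(len(ys)):
                p = (F([tx[i + 1], ty[j + 1]]) - F([tx[i], ty[j + 1]])
                     - F([tx[i + 1], ty[j]]) + F([tx[i], ty[j]]))
                ll += counts[i, j] * np.log(max(p, 1e-12))
        return -ll

    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x

# Demo on simulated data with a true latent correlation of 0.5.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=2000)
x = np.digitize(z[:, 0], [-1.0, 0.0, 1.0])  # cut into 4 ordered categories
y = np.digitize(z[:, 1], [-1.0, 0.0, 1.0])
rho_hat = polychoric(x, y)
```

With a reasonably large sample, rho_hat should land close to the true latent correlation of 0.5, which is the point of the polychoric: it recovers the correlation of the underlying continuous variables rather than the attenuated Pearson correlation of the category codes.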


Thanks for the quick response Linda! So this polychoric correlation matrix is produced automatically if the indicators are labeled as categorical and an appropriate estimator is used (e.g., WLSMV)? Do you have any idea why my model fit indices appear to be so good? For example, here are the model fit indices for the victimization construct at one time point (similar results were found at the second time point):

Number of Free Parameters                   21

Chi-Square Test of Model Fit
          Value                         8.763*
          Degrees of Freedom                15
          P-Value                       0.8896

*  The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used for chi-square difference testing in the regular way. MLM, MLR and WLSM chi-square difference testing is described on the Mplus website. MLMV, WLSMV, and ULSMV difference testing is done using the DIFFTEST option.

RMSEA (Root Mean Square Error Of Approximation)
          Estimate                       0.000
          90 Percent C.I.          0.000 0.016
          Probability RMSEA <= .05       1.000

CFI/TLI
          CFI                            1.000
          TLI                            1.012 


Sometimes an overly good fit occurs because the correlations are low, making it difficult to reject the model. 


Okay. Do you think it would be appropriate to publish findings with fit statistics like these? Thanks! Brett 


If you explain why the fit is so good. 


Thanks! 
