

I am evaluating measurement invariance with categorical indicators, following the procedures in chapter 14 of the user manual for model specification. To examine partial invariance, I am using modification indices and EPCs with the WLSMV estimator, and TECH2 output with the MLR estimator. I wonder why, given the same model (aside from constraints for identification), with the estimator as the only difference, the suggested parameter changes disagree markedly. I had assumed the results would be somewhat similar. Is my assumption incorrect?


TECH2 gives the first-order derivatives. These correspond to the numerators of the MIs; that is, they are not scaled properly to be in a chi-square metric.
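A hedged one-parameter sketch (an illustration, not Mplus internals) of why the two quantities live on different scales: for a single parameter, the MI scales the squared first-order derivative by a curvature (information) term, which puts it in a 1-df chi-square metric, while the raw derivative reported by TECH2 has no such scaling.

```python
def modification_index(gradient, information):
    """One-parameter illustration: MI = g^2 / I, i.e. the squared
    first-order derivative scaled by an information (curvature) term.
    TECH2 reports only the gradient g, so its values are not in a
    chi-square metric and need not rank parameters the same way MIs do."""
    return gradient ** 2 / information

# Two parameters with identical derivatives but different curvature
# get very different MIs -- comparing raw derivatives can mislead.
g = 4.0
print(modification_index(g, information=2.0))  # 8.0
print(modification_index(g, information=8.0))  # 2.0
```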


Understood, thank you. Is there any way to obtain information using either estimator such that a similar decision can be made regarding which parameter changes are most prudent? As it stands, WLSMV leads to markedly different conclusions regarding partial invariance than MLR, and I am wondering if more agreement might be obtained.


We'll add modification indices for MLR in this case to our list for a future version. As it currently stands, you would have to use MLR for one item loading at a time and do two runs (with and without the equality constraint), then take 2 times the loglikelihood difference as a chi-square test.
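The two-run procedure above can be sketched numerically. The loglikelihood values below are hypothetical; note also that with MLR a scaling correction factor applies to the loglikelihood difference, while plain ML uses the simple difference shown here.

```python
def lr_chisquare(loglik_constrained, loglik_free):
    """Likelihood-ratio test statistic: 2 * (LL_free - LL_constrained),
    referred to a chi-square distribution with df equal to the number
    of constraints freed (here, 1 loading equality)."""
    return 2.0 * (loglik_free - loglik_constrained)

# Hypothetical loglikelihoods from two runs for one item loading:
ll_equal = -4321.75  # loading held equal across groups
ll_free = -4318.90   # loading freed
chi2 = lr_chisquare(ll_equal, ll_free)
print(round(chi2, 2))  # 5.7 -- exceeds the 1-df .05 cutoff of 3.84
```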


P.S. You can also use MLR in a model where the grouping variable is a covariate. Then look for lack of intercept invariance by regressing each item on the covariate, one item at a time.


Thank you for including that request. I will explore both options. 

Martina Gere posted on Thursday, February 09, 2012  4:10 am



Dear Prof. Muthen, I get different parameters when using WLSMV and MLR too, but in a different context. I am running a model with two independent latent variables f1 and f2 (using ordinal item indicators) and a dependent latent variable f3 (all continuous indicators). I am testing f3 ON f1 f2. I used WLSMV for model fit. The regression coefficients were as expected (f3 ON f1 significant, f3 ON f2 not significant). In addition, I ran MLR to get the loglikelihood estimate (because in the next step I want to compare loglikelihoods with a moderator model, where I add f3 ON f1xf2 using XWITH). However, with MLR, the regression coefficients are the opposite (f3 ON f1 not significant, but f3 ON f2 significant).
Q1: What is the most appropriate estimator for my model: WLSMV, MLR, or something else?
Q2: Can I use ML instead of MLR as the estimator in the models with and without the interaction (to compare loglikelihoods)? ML leads to regression coefficients similar to WLSMV and as expected.
Q3: Why does MLR lead to different regression coefficients than ML or WLSMV?


If you have a lot of missing data and fewer than four factors, I would recommend maximum likelihood. If you don't have missing data and have many factors, I would recommend weighted least squares. Weighted least squares uses probit regression. Maximum likelihood uses logistic regression as the default but can also use probit regression. ML and MLR produce the same parameter estimates. The parameter estimates for ML and MLR should differ from WLSMV unless you are using the probit link with ML and MLR. I would need to see the outputs and your license number at support@statmodel.com to explain the differences.
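The probit/logit distinction mentioned above is one reason raw estimates differ across estimators. A minimal sketch of the two link functions: they have very similar shapes but different scales, so logit coefficients are roughly 1.6-1.8 times the corresponding probit coefficients and are not directly comparable.

```python
import math

def logistic_cdf(x):
    """Logistic link (the ML/MLR default for categorical outcomes)."""
    return 1.0 / (1.0 + math.exp(-x))

def probit_cdf(x):
    """Probit link (used by WLSMV): the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Rescaling the probit argument by ~1.7 gives nearly the same predicted
# probabilities as the logit -- similar curves, different coefficient scale.
for x in (-1.0, 0.0, 1.0):
    print(round(logistic_cdf(1.7 * x), 3), round(probit_cdf(x), 3))
```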


Do you mind if your comments such as the first paragraph of "Linda K. Muthen posted on Thursday, February 09, 2012  12:50 pm" are cited in papers? Thank you, Scott 


I don't think this is in a paper. But it draws on the principles behind the two estimators. As we describe in the UG, WLSMV does not handle MAR, but ML does.


Okay, thank you. The UG is a good citation for missing data. My understanding is that if you have missing data, you should basically use ML or MLR. As far as the number of categories in the observed variables, Beauducel and Herzberg (2006) suggest WLSMV outperforms ML; is that your experience? Finally, as far as the number of factors goes, I've not been able to find a citation that ML should be used when you have fewer than 4 factors and WLSMV for more than 5 factors. Are you aware of one? Thank you for your help. Scott


Sorry, I meant that Beauducel and Herzberg (2006) suggest WLSMV outperforms ML when the outcome variables have 2 or 3 categories.


I did find a reference in Beauducel and Herzberg (2006, p. 201) and Dolan (1994). It appears that:
- WLSMV outperforms ML on sample size
- WLSMV outperforms ML with variables that had 2 or 3 categories
- Other than chi-square, there appears to be no difference in model fit statistics
- ML underestimates the size of loadings when variables have only two or three categories
- Factor loadings and standard errors are not affected by the number of factors
- Even with a small sample, a large model, and moderate loadings, WLSMV is better
- 5 categories appears to be the minimum for ML
- "It is clear that a method like WLSMV, which was designed to deal with categorical variables, cannot outperform ML estimation when the number of categories is very large."
Thus it seems to me that, at least with binary or 3-category indicators, WLSMV is the recommended approach unless you have missing data, in which case ML is likely better as it handles MAR. However, this seems contrary to Dr. Muthen's Feb 9, 2012 posting: "If you don't have missing data and have many factors, I would recommend weighted least squares. Weighted least squares uses probit regression." Does this sound correct?


Yes, you can use the user's guide as a citation. You can also use multiple imputation to generate data and analyze it with WLSMV. You can have as many factors as you want with categorical indicators and maximum likelihood estimation. It may not be feasible to wait for more than four factors, because each factor adds a dimension of numerical integration. It sounds to me like the paper compares weighted least squares treating the variables as categorical with maximum likelihood treating the variables as continuous. This does not compare estimators but compares treating variables as categorical versus continuous. The important and valid comparison would be treating the variables as categorical with both weighted least squares and maximum likelihood.
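The "wait" with many factors comes from numerical integration: the number of integration points, and hence the computation per loglikelihood evaluation, grows exponentially with the number of factors (integration dimensions). A rough illustration, assuming on the order of 15 quadrature points per dimension:

```python
# Assumption for illustration: 15 quadrature points per dimension.
# Total points grow exponentially with the number of factors, which is
# why ML run times become impractical beyond a few factors.
points_per_dim = 15
for factors in range(1, 7):
    print(factors, points_per_dim ** factors)
# By 4 factors there are already 50,625 points; by 6, over 11 million.
```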


Great. Thank you for clarifying. The Beauducel and Herzberg (2006) paper (in the SEM Journal) does compare categorical variables with WLSMV and ML. It does appear that the WLSMV estimator is the better option, at least for binary variables...and as you say, if the data is MAR, imputing the data first takes care of the need for the ML estimator. Thanks again, Scott 


We got the article. When he says ML, he means treating the variables as continuous. That is not maximum likelihood estimation with categorical variables; it is simply using the continuous-variable maximum likelihood fitting function, whereas true categorical ML would use another fitting function. Many people are under the false impression that ML cannot be used with categorical dependent variables. This is a common mistake. The clue here is that he presents chi-square values, which are not available when ML treats variables as categorical.
