

Dear Mplus team, what would you consider to be the "best" estimator for a 2-level CFA with categorical observations? I'd like to do model evaluation and comparisons as well. So could you please give me a short briefing on which of the three available estimators is the appropriate one concerning ordinal data, model comparison, and of course speed? Thank you very much. 


I would recommend MLR which is the default. Speed would be approximately the same because all available estimators are maximum likelihood. You would have to use the scaling factor which is provided in the output to do model comparisons. How to do this is described on the website. 
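The scaling-factor difference test mentioned here works out to a short computation. Below is a minimal sketch in Python of the Satorra-Bentler scaled chi-square difference formula as described on the statmodel.com website; the function name is mine, not an Mplus feature:

```python
def scaled_chisq_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test.

    Model 0 is the nested (more restricted) model, so df0 > df1.
    T0, T1 : robust (e.g. MLR) chi-square values of the two models
    c0, c1 : their scaling correction factors from the Mplus output
    Returns the scaled difference statistic and its degrees of freedom.
    """
    # scaling correction for the difference test
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    # scaled difference statistic, referred to chi-square with df0 - df1 df
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, df0 - df1
```

For example (hypothetical numbers), with T0 = 100, df0 = 10, c0 = 1.2 and T1 = 80, df1 = 8, c1 = 1.1, the scaled difference comes out to about 20.0 on 2 df.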


Dear Linda, I already thought about using this test but I can't find the scaling factor in the output :( I'm using, as I mentioned, categorical data containing missing values and an analysis specification like this:

ANALYSIS: TYPE = TWOLEVEL MISSING;
ESTIMATOR = MLR;
ALGORITHM = INTEGRATION;
INTEGRATION = GAUSSHERMITE(5);
ADAPTIVE = ON;
CONVERGENCE = 0.1;

In the output, the tests of model fit section looks like this:

TESTS OF MODEL FIT
Loglikelihood
  H0 Value                  -42189.823
Information Criteria
  Number of Free Parameters         26
  Akaike (AIC)               84431.645
  Bayesian (BIC)             84612.051
  Sample-Size Adjusted BIC   84529.428

After that the results are already reported, but there's no scaling factor at all. Perhaps I'm overlooking something? Thanks for your advice. 


Sorry. You don't get a chi-square for TWOLEVEL with maximum likelihood, so you don't get a scaling factor. If you want to test two nested models, use -2 times the loglikelihood difference. 


Thank you for that information. So the test would be computing 2*(LL2 - LL1) (χ²) with df being the difference in the number of free parameters (as in Snijders and Bosker, 1999)? Thank you for your patience. 


Yes. 
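In code form, the test just discussed might look like the following minimal sketch; the helper name and the numbers in the example are hypothetical:

```python
def loglik_ratio_test(ll_restricted, k_restricted, ll_full, k_full):
    """-2 times the loglikelihood difference for two nested ML models.

    ll_restricted, ll_full : loglikelihoods of the nested and full models
    k_restricted, k_full   : their numbers of free parameters
    The statistic is referred to a chi-square distribution with
    df = k_full - k_restricted degrees of freedom.
    """
    stat = -2.0 * (ll_restricted - ll_full)
    df = k_full - k_restricted
    return stat, df

# Hypothetical example; compare the statistic to the chi-square critical
# value for the df (5.99 at the 5% level for df = 2).
stat, df = loglik_ratio_test(-42200.0, 24, -42189.823, 26)
```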


I've got several further questions on the topic of "estimator". Since my ordinal 2-level data are unbalanced (cluster size varies from 9 to 390), are the results I obtain using MLR reliable? What happens to the estimated parameters when cluster size varies? (I'd expect SEs to be larger on the within level.) What limitations do I have to report when writing about a factor structure obtained in this way? Thank you very much. Florian Fiedler. 


MLR does not require cluster sizes to be the same. I cannot think of any limitations to report in this regard. 


Regarding the scaling factor of MLR: McDonald & Ho (2002) wrote that it's useful to obtain a separate chi-square statistic for the structural part (path model) of a full SEM by subtracting the maximum likelihood chi²/df of the measurement model from the chi²/df of the full SEM. Is this also reasonable with MLR and TYPE=COMPLEX? Many thanks for your thoughts! McDonald, R.P. & Ho, M.H.R. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7(1), 64-82. 


Yes, this would also work for MLR and TYPE=COMPLEX. 


OK, thanks a lot. Yet another question: I would like to estimate an LV interaction in combination with TYPE=COMPLEX. Because of the XWITH command, there is no chi² and no scaling correction factor, but the SEs are different from the ones I obtain from an ML analysis with TYPE=GENERAL. Does that mean that MLR still adjusts the SEs for the degree of non-independence? In addition, does MLR also adjust for non-normality in this case, since the analysis of the indicator distribution seems to be part of the LMS approach (Klein & Moosbrugger, 2000) for LV interactions itself? 


With TYPE=COMPLEX, MLR adjusts for non-independence and non-normality. With TYPE=GENERAL, it adjusts for non-normality. ML does not adjust for non-normality. 


OK, maybe I don't understand this correctly, but is there not a conflict between MLR (adjusting for non-independence and non-normality) and XWITH (analyzing the non-normality and taking it explicitly into account)? Thanks for your patience! 


It is an interesting question. Consider a case without TYPE=COMPLEX (no complex survey data): if non-normality is due only to the latent variable interaction handled by XWITH, I don't think the sandwich estimator used by MLR to compute SEs would necessarily do any better than ML. But MLR would probably not do worse. If other parts of the model have non-normal outcomes, MLR might do better than ML. With TYPE=COMPLEX there is no choice but to use MLR, given the need for the sandwich estimator to take care of the non-independence. 


So you are saying that MLR would probably do no harm in combination with XWITH. There is one more question: does adding an LV interaction via XWITH change the identification status of the model, since one more parameter has to be estimated? As far as I remember, the Klein & Moosbrugger (2000) article doesn't mention this topic. Many thanks for your insights; they are really invaluable. 


Yes, this could affect identification. 


Do I have to rely on the usual signs of underidentification (very high SEs, negative variance estimates, etc.), or is there a way to check the identification status? 


If the model is not identified, you should be notified by the program. 


I have a related concern, as my reviewers question my identification status given the complexity of a model with an interaction and some constructs with fewer than 3 items. The model converges fine and meets the "t-rule" suggested by Bollen (1989); however, how can I rule out empirical underidentification? I found additional rules for establishing model identification for models with fewer than three indicators (O'Brien, 1994), but they do not discuss interaction models specifically. So the question is: does a model with an interaction change the identification requirements in Mplus? 


This has not been studied as far as I know. With latent variable interactions, not only is the regular information from means, variances, and covariances used, but also higher-order moments. My conjecture is that (1) a model that is identified without the interaction is typically identified also with the interaction, whereas (2) a model that is not identified without the interaction cannot be identified when adding the interaction. For (1), there might still be cases of non-identification, but hopefully the Mplus non-identification check using the singularity check of the sums of squares and cross-products of first-order derivatives (the "MLF check") will flag such a model as non-identified. A good empirical way to study identifiability is to do a Monte Carlo study and see if the parameter estimates can be recovered well and if the SEs are estimated well. For more information on this topic, you may want to contact Andreas Klein. 
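The Monte Carlo idea can be illustrated outside Mplus with a deliberately simple toy model. This is only a sketch of the logic (simulate with known parameters, re-estimate in each replication, check recovery), using an ordinary regression slope rather than a latent variable interaction:

```python
import random
import statistics

def monte_carlo_recovery(beta=0.5, n=200, reps=500, seed=1):
    """Toy parameter-recovery check: simulate data with a known slope,
    re-estimate it in each replication, and summarize the estimates.
    Good recovery (mean near beta, stable spread) is evidence that the
    parameter is empirically identified; wild estimates suggest trouble."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [beta * x + rng.gauss(0.0, 1.0) for x in xs]
        # closed-form OLS slope through the origin
        sxy = sum(x * y for x, y in zip(xs, ys))
        sxx = sum(x * x for x in xs)
        estimates.append(sxy / sxx)
    return statistics.mean(estimates), statistics.stdev(estimates)
```

With the defaults, the mean estimate lands close to the true value 0.5 and the empirical SE is close to the theoretical 1/sqrt(n), about 0.07. In Mplus itself the same logic is what a MONTECARLO study automates.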


Thank you so much; this is very useful. I am, however, running a single-level model with one interaction, and thus it seems that the MLF estimator is not applicable here, is it? I did run a Monte Carlo study and the results seem robust. I guess my main question now is: is there another command/test I can use for a single-level study with an interaction to test for model identification? The model is identified without the interaction, and I would like to be able to say that Mplus did not flag the model as non-identified. 


The MLF check is done irrespective of which ML estimator you use: ML, MLR, MLF. 

RDU posted on Wednesday, December 03, 2008 - 10:30 am



Hello. I am trying to perform a series of CFA models with ordinal indicators. The data are nested (e.g., students within schools). The sample size is around 600. Based on the articles I've read and from what I've seen on the Mplus discussion board, I was wondering if it is appropriate to use TYPE=COMPLEX in conjunction with the MLR estimator, since Dr. Muthen stated earlier that MLR adjusts for non-independence. Thus, if you have TYPE=COMPLEX, MLR adjusts for non-independence and non-normality. Here is a copy of my Mplus code. Thanks loads.

TITLE:
DATA: FILE IS G:\FactorAnalysis\ascii.dat;
VARIABLE: NAMES = school var1 var2 var3 var4;
  USEVARIABLES = school var1 var2 var3 var4;
  MISSING ARE ALL .;
  CATEGORICAL = var1 var2 var3 var4;
  CLUSTER = school;
ANALYSIS: TYPE = COMPLEX;
MODEL: F BY var1 var2 var3 var4; 


A factor model is not an aggregatable model, so I would use TYPE=TWOLEVEL to account for clustering, not TYPE=COMPLEX. 

RDU posted on Wednesday, December 03, 2008 - 12:03 pm



To make sure that I understand: you are saying that for nested data with continuous latent variables, one must use a multilevel model where the within- and between-level variance are disaggregated. In other words, a sandwich estimator cannot be used in this case, and only a multilevel or random effects model using the command TYPE=TWOLEVEL can be used (as opposed to looking at an aggregated model using TYPE=COMPLEX). Is this correct? Thanks. You've been incredibly helpful. 


The topic of aggregatability was discussed for factor analysis in Muthen & Satorra (1995), Sociological Methodology. 

RDU posted on Thursday, December 04, 2008 - 8:25 am



I'm sorry to keep at this, but I am still a bit confused. Muthen and Satorra (1995) state that the 2 methods for dealing with SEM/CFA models with complex sample data are 1) aggregated analysis and 2) disaggregated analysis (i.e., multilevel CFA/SEM). Furthermore, Ch. 9 of the User's Guide states that Muthen and Satorra (1995) discuss these 2 approaches, where the first, aggregated approach corresponds to using the TYPE=COMPLEX command and the second, disaggregated approach uses TYPE=TWOLEVEL. Since my aim is to correct the standard errors of my categorical CFA models, and not to look at models for both the student and school levels of my data, I do not understand why the TYPE=COMPLEX command was not recommended earlier. Perhaps I am not understanding everything, so could you please clarify this for me? Also, if it is all right to use the TYPE=COMPLEX command, I was also wondering whether it is advisable to use MLR estimation for a categorical CFA model in conjunction with TYPE=COMPLEX. I believe the default estimator for this is WLS. Thank you. 


Using TYPE=COMPLEX is better than ignoring the nested nature of the data, giving better SEs. It is, however, taking an "aggregated" approach to the modeling, which implies that the parameter estimates may be a bit distorted relative to those of a "disaggregated" approach using TYPE=TWOLEVEL. This is discussed a bit in Muthen & Satorra (1995) on pages 290-291. The discussion says that if a two-level model with equal factor loading matrices on the within and between levels holds, then the aggregated approach is correct. But if within and between have different numbers of factors, which is often the case, the aggregated approach is distorted to some degree. Say that a simple-structure 2-factor model holds for Sigma_W and a 1-factor model holds for Sigma_B. This does not result in a simple-structure factor model for Sigma_T, Sigma_T being the covariance matrix in the aggregated approach. Often, however, the distortion is not large. And again, the aggregated approach of TYPE=COMPLEX is better than ignoring the nesting. 


Hi, I currently ran a very simple path analysis model using complex survey data, just to test it out in Mplus, as I will then have a much more complicated model. I have covariates (linked to both of my predictor variables), 2 predictor variables, one mediating variable, and 1 outcome variable. My predictor, mediating, and outcome variables are all continuous. My model ran well and actually had a good fit. However, I wasn't sure what the difference is between specifying the estimator to be "MLR" and not specifying it? Given that my next step will be to test this same model as a multiple group path analysis model (males and females), I'm wondering how specifying particular estimators will work in terms of the chi-square difference test. Any suggestions would be helpful! Thank you, Kristine 


Each analysis situation has a default estimator. If you specify ESTIMATOR=MLR when it is not the default, it overrides the default. For ML and WLS, regular difference testing is used. For estimators ending in MV, the DIFFTEST option is used. For estimators ending in M and for MLR, a scaling correction factor is used in difference testing. This is described on the website. 


Hi Linda, Thank you so much for your response. I guess I'm confused then as to whether I should leave out "ESTIMATOR=MLR" after:

STRATIFICATION IS SESTRAT;
CLUSTER IS NSECLUTR;
WEIGHT IS NEWWEIGHT;
ANALYSIS: TYPE=COMPLEX;

I'm guessing that because I am running a multiple group analysis with complex data, I would have to use MLR? Would I still be able to do the regular difference testing using the default estimator (if I leave MLR out), or is MLR the default for this type of analysis? From what I read in the User's Guide, it says "for all types of outcomes, robust estimation of standard errors and robust chi-square tests of model fit are provided. These procedures take into account non-normality of outcomes and non-independence of observations due to cluster sampling." So I'm just not sure what the default estimator for TYPE=COMPLEX is and whether I have to specify a specific one when using complex samples with multiple-group analysis. Thanks! 


You never need to specify an estimator. Just leave the option out and the default will be used. The defaults are shown on pages 482-483. You don't give enough information for me to know what the default would be for you. 


Dear Mplus team, I am trying to compare two groups on measurement invariance. I am using the MLM estimator, as some of my variables are not normally distributed. But with the statistics of the unconstrained base model being chi-square = 0 and df = 0, I am not sure how to apply the scaling factor to the data. The statistics for my constrained model are as follows: chi-square = 11.82, df = 6, p-value = .07, scaling correction = 1.05. Your help would be greatly appreciated. 


Please send the two outputs showing the constrained and unconstrained models and your license number to support@statmodel.com. 

Hans Leto posted on Friday, April 13, 2012 - 9:45 am



Dr. Muthen, I am testing different interaction effects. My data do not have multivariate normality, so I am using a robust estimator (MLM). When I test the model, Mplus gives me the following error: "THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-POSITIVE DEFINITE FISHER INFORMATION MATRIX. CHANGE YOUR MODEL AND/OR STARTING VALUES. THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH..." suggesting that the results are presented for the MLF estimator. I used "ESTIMATOR=MLF" in the model command and got nice results without the error. My question is: can I use the MLF estimator in this analysis (non-normality)? Thank you in advance. 


MLF is not robust to nonnormality. You can try MLR. 


Dr. Muthen, My research team and I are considering using MLR vs. WLSMV. After reading some articles and going through your discussion board posts, we think that WLSMV would be sufficiently robust, but we are having a hard time understanding why, and whether there are any citations we can use. Our SEM analysis is multilevel with both categorical and continuous variables, and the data are assumed MAR and non-normal. Our DV is a latent construct made up of 9 variables that are scaled 1-3 based on frequency (not at all, occasionally, frequently). Thank you ahead of time for your guidance. 


WLSMV requires MCAR. I would use MLR. 


Thank you for your quick reply. Since it's difficult to confirm MCAR, what would be the inherent consequences to our output if we use WLSMV vs. MLR? 


If MCAR does not hold, but MAR does, WLSMV estimates will be biased and MLR estimates ok. If it is computationally not too heavy (not too many latent variables), MLR is better than WLSMV due to using full information. Another alternative is Bayes, which is as good as MLR, but can handle more latent variables. Just to be sure since you use the word "robust", when you say MLR I hope that you mean treating the variables as categorical just like WLSMV does. Sometimes people say ML (or MLR) when they really mean treating the variables as continuous. 


Our DV is being treated as a continuous latent variable. Actually, all of our latent variables (four total) are treated as continuous. Then we have both continuous and categorical observed variables (about 20 total), and the categorical variables are identified as such in our syntax. When you say that MLR is "better," would it be possible to still justify the use of WLSMV? One hesitation that we have with MLR is how long it's taking to run given our large sample, and we also understand that using MLR precludes us from deriving the indirect effect estimates. Thank you! 


Also, we did try running MLR, but it did not provide fit indices. 


WLSMV may give a good approximation to MLR/Bayes if there isn't that much missing data. Note that MLR can give indirect effects when they are defined, say as a*b, in MODEL CONSTRAINT. You can compare WLSMV and MLR (and Bayes) estimates on a key model to make sure your analysis is good. 


Thank you. Last question: is there a citation we can reference regarding WLSMV being a good approximation when there isn't too much missing data? 


I don't recall that having been studied. It is just my conjecture. 


My colleague tried rerunning using MLR but wasn't given fit indices beyond chi-square. Is there an additional command to get the RMSEA? 


It sounds like you are getting chi-square values for the frequency table of your categorical outcomes. This is not the chi-square that compares the unrestricted H1 and the H0 models. That chi-square and the related fit statistics are not available unless means, variances, and covariances are sufficient statistics for model estimation, which is not the case with maximum likelihood and categorical outcomes. 


When it was run using MLR, it was done as a single-level model. Can you get fit indices in this situation? 


Not with categorical outcomes and maximum likelihood. No absolute fit measures are available. 


How does one assess whether the data fit the model when no fit indices are available using MLR? 


When absolute fit indices are not available, nested models can be tested using -2 times the loglikelihood difference, which is distributed as chi-square. 
