

Hello, I'm reporting SEM results that were calculated with the MLR estimator. While this may seem trivial, I haven't been able to find what the MLR acronym stands for. I'd guess that it stands for "Maximum Likelihood Robust", but I want to ensure that I cite it properly. If you could please let me know, I'd appreciate it. Thanks! 


I don't think MLR is an acronym. It is an Mplus option for maximum likelihood estimation with robust standard errors. 


I am running a two-level multilevel SEM. I have slightly non-normal continuous data, and from what I understand, a Satorra-Bentler chi-square with robust standard errors should be used. Mplus has this under the MLM estimator. However, in a two-level analysis, MLM is not available, but MLR is. In the manual, MLR also provides robust standard errors. My question is: how is MLR related to MLM (in short, how do I write this up aside from saying that I used a maximum likelihood estimator with robust standard errors)?


MLM – maximum likelihood parameter estimates with standard errors and a mean-adjusted chi-square test statistic that are robust to non-normality. The MLM chi-square test statistic is also referred to as the Satorra-Bentler chi-square. MLR – maximum likelihood parameter estimates with standard errors and a chi-square test statistic (when applicable) that are robust to non-normality and non-independence of observations when used with TYPE=COMPLEX. The MLR standard errors are computed using a sandwich estimator. The MLR chi-square test statistic is asymptotically equivalent to the Yuan-Bentler T2* test statistic. See the Yuan and Bentler paper referenced in the user's guide. MLR is an extension of MLM that can include missing data.
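For a two-level analysis like the one asked about, a minimal input sketch might look as follows (the cluster variable name schoolid is hypothetical):

```
VARIABLE:
  CLUSTER = schoolid;   ! hypothetical level-2 grouping variable

ANALYSIS:
  TYPE = TWOLEVEL;
  ESTIMATOR = MLR;      ! MLM is not available with TYPE = TWOLEVEL
```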


Thanks for the info. I have a follow-up question. I am using Mplus 5.2 and it displays the two-tailed p-value. How is it possible that in the unstandardized output it is non-significant (p > .05) and then in the standardized results it is significant (p < .05)? I am modeling achievement (ACHW and ACHB) defined by reading and math at two levels (student and school level), and I am using the presence of basic facilities at the school level as a predictor (i.e., presence of electricity, 1 = yes, 0 = no).

Unstandardized:
ACHB ON ELECTRIC 1.438 0.864 1.664 0.096
STDY Standardization:
ACHB ON ELECTRIC 1.185 0.585 2.028 0.043
StdYX Standardization:
ACHB ON ELECTRIC 0.488 0.241 2.027 0.043

Thank you.


The unstandardized and standardized values have different sampling distributions and can give somewhat different z values. 


If that is the case, which one should be 'trusted' and interpreted? 


I would go with the tests for the unstandardized coefficients, but I haven't seen this studied. It could be a good methods research project, simulating data to see for which type of coefficient the z tests behave best at different sample sizes. 


I was just wondering: if you use MLR as the estimator on a regression or path analysis, is it still helpful to center explanatory variables?


I don't see that the MLR choice and centering choice are related. 


Thanks for your quick reply. 


Hello, if you use the estimator MLR without using the TYPE=COMPLEX option, can you still get standard errors that are robust to non-normality and non-independence of observations? Thanks, Wayne


No, without TYPE=COMPLEX MLR is robust only to non-normality.
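A sketch of the setup that makes the standard errors robust to both non-normality and non-independence (the cluster variable name classid is hypothetical):

```
VARIABLE:
  CLUSTER = classid;   ! grouping that induces non-independence

ANALYSIS:
  TYPE = COMPLEX;
  ESTIMATOR = MLR;     ! sandwich standard errors
```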


Hello, is the TYPE=COMPLEX option required in the case of missing data (MCAR or MAR) or not? Thanks, Alex


All missing data estimation using maximum likelihood assumes MAR. 

Till posted on Tuesday, September 13, 2011  11:47 am



Dear Mrs. or Mr. Muthén, I'm running a latent growth curve analysis. This is the input:

Variable:
  names = g1 e1 n1 g2 e2 n2 g3 e3 n3 l01 l02 l03 l04 l05 l06 l07 l08 l09;
  usevariables = all;
  missing = all(99);
Model:
  i s | l01@0 l02 l03 l04 l05@1 l06 l07 l08 l09;
  F1 by n1 n2 n3;
  F2 by e1 e2 e3;
  F3 by g1 g2 g3;
  i s on F1 F2 F3;
Analysis:
  Estimator = MLR;
Output:
  samp standardized tech4;

I would like to use the MLR estimator because the Mardia coefficient shows me that I can't assume a multivariate normal distribution for my data. Is the use of the MLR estimator appropriate here, or do I have to use the normal ML? Thank you in advance, Till


Dear Dr. Bengt and Dr. Linda, in my model I have 41 variables. 4 of them have kurtosis values > 3 (3.6, 3.6, 5.6, and 6.8). Do I need to run my model using the MLM or MLMV estimators? What is the rule of thumb for using MLM/MLMV instead of ML? What is the difference between MLM and MLMV? Thanks


There are three estimators that are robust to non-normality. Following are brief descriptions. Only MLR is available with missing data. This is what I would recommend.
• MLM – maximum likelihood parameter estimates with standard errors and a mean-adjusted chi-square test statistic that are robust to non-normality. The MLM chi-square test statistic is also referred to as the Satorra-Bentler chi-square.
• MLMV – maximum likelihood parameter estimates with standard errors and a mean- and variance-adjusted chi-square test statistic that are robust to non-normality.
• MLR – maximum likelihood parameter estimates with standard errors and a chi-square test statistic (when applicable) that are robust to non-normality and non-independence of observations when used with TYPE=COMPLEX. The MLR standard errors are computed using a sandwich estimator. The MLR chi-square test statistic is asymptotically equivalent to the Yuan-Bentler T2* test statistic.


Hi Linda, I just followed some of your previous suggestions about using WLSMV for a combination of continuous and categorical variables with non-normally distributed data. However, I'm not sure how to interpret the results, as I didn't get RMSEA, CFI, TLI, or SRMR as I used to see with the MLR estimator. Thanks for your help! Susana


WLSMV gives you chi-square, RMSEA, CFI, and TLI. Perhaps you have an older version, or perhaps the run had a problem.


Hi Linda, thanks for your response. I'm trying another version of the program, but I had this warning message:

*** ERROR in VARIABLE command
The CATEGORICAL option is used for dependent variables only. The following variable is an independent variable in the model. Problem with: NSE
*** ERROR in VARIABLE command
The CATEGORICAL option is used for dependent variables only. The following variable is an independent variable in the model. Problem with: EDUCA

I'm using some demographics such as education, sex, and age to predict physical activity levels in my model. I don't understand why categorical variables can only be dependent variables. Many many thanks for your help! Susana


It is not that categorical variables can only be dependent variables. It is that the scale is only an issue for dependent variables. In regression, covariates can be binary or continuous. In all cases, they are treated as continuous and the model is estimated conditioned on them so that no distributional assumptions are made about them. 


But I didn't get any results with this data, only the warning message. Does that mean that I should treat my categorical variables as continuous? In that sense, use MLR and not introduce them as categorical? Many thanks! Susana


If you remove the CATEGORICAL option that has covariates on it, I think you will then get results. You cannot use WLSMV if you have no categorical dependent variables. Then you should use MLR. 
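A sketch of the corrected setup, assuming a hypothetical continuous physical-activity outcome named pa: the CATEGORICAL option is dropped for the covariates (they are treated as continuous), and MLR is used because there are no categorical dependent variables.

```
VARIABLE:
  USEVARIABLES = pa educa nse sex age;
  ! no CATEGORICAL statement: educa, nse, sex, age are covariates

ANALYSIS:
  ESTIMATOR = MLR;

MODEL:
  pa ON educa nse sex age;
```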

Daniel Lee posted on Thursday, September 29, 2016  10:54 am



Hi, I am conducting a latent growth model with 4 time points. At each time point, the observed variables are skewed and depart from normality. In such a case, would you recommend I use MLR instead of ML? Thank you!


Yes. 


Dear Linda and Bengt, my sample is 508, and I am running a full mediation model: P > B > D. When estimating an indirect effect, bootstrapping cannot be used with MLR. Since it is known that MLR is robust to non-normality, I was wondering if MLR or MLM are also robust to the non-normality of the product term (PB * BD) in this case? In other words, can I be confident that a p-value associated with an indirect effect is accurate when MLR or MLM are used? If so, is there any reference to back this up? Thanking you in advance, Alex


To get bootstrapped SEs and CIs, you should use Estimator = ML. The acronym MLR is reserved for another type of SEs. 


Dear Linda and Bengt, for my PhD I am performing multi-group analysis, as I want to test a model in two different countries. The dependent variable of my model is categorical. First, I checked the invariance for each construct, and now I am examining the invariance of regression coefficients and of one mediation. Because my data are not normal and I have some missing values, for checking the invariance of my independent variables (which are continuous) I used the MLR estimator, and I used WLSMV for the dependent variable. Could you please tell me if I can use MLR with individual data (and not only with mixture models, where it is the default estimator)? Secondly, I also checked the full SEM model in each group, and I used WLSMV and the bootstrap (because I have a mediation). Thus, could you please tell me if it is correct to use MLR for examining the measurement invariance of each independent variable, and then, in a later stage of my analysis, use the bootstrap procedure (for assessing whether the mediation is significant) together with another estimator? I am asking this because I know that the MLR estimator cannot be run with bootstrap. Once again, many thanks for your help,


You can use MLR. MLR can also be used with bootstrapping unless you have multilevel data or sampling weights. 


Thank you so much Bengt. 


Hi Bengt, I have a quick follow-up question to your response to Filipa. I am trying to run a mediated GMM with a 3-class nominal outcome. I'm using dummy variables to account for school-level clustering, as I only have 7 total clusters. When I try to add bootstrap to my ANALYSIS command, I get an error that "Bootstrap is not available for estimators MLM, MLMV, MLF and MLR", but your comment above makes it sound like it is? Any help would be wonderful. Best, Katie


Use ML. The other estimators have the same ML parameter estimates. ML with bootstrap gives you ML parameter estimates and bootstrapped standard errors. 
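A minimal sketch of this bootstrap setup for an indirect effect, using the P > B > D variable names from the earlier post (the number of draws is illustrative):

```
ANALYSIS:
  ESTIMATOR = ML;
  BOOTSTRAP = 1000;       ! illustrative number of bootstrap draws

MODEL:
  B ON P;
  D ON B;

MODEL INDIRECT:
  D IND P;                ! indirect effect of P on D via B

OUTPUT:
  CINTERVAL(BOOTSTRAP);   ! bootstrap confidence intervals
```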

Jesus Garcia posted on Saturday, December 01, 2018  8:33 am



Hi Dr. Muthen, I am trying to execute my model to determine the existing correlation between the 9 factors (80 variables, Likert scale 0-5) and other factors measured in % referring to the use of transport modes, but the solution does not converge. I would like to know if the input is correct or what I should change so that my model fits. This is the input:

USEVARIABLES ARE act4 act5 act6 act9 act10 act11 act14 act15 act20 act26 act29 act30 act31 act34 act35 act36 act39 act40 act44 act45 act46 act49 act50 act54 act55 act56 act57 act59 act60 act64 act65 act66 act67 act68 act69 act70 act72 act73 act74 act75 act76 act77 act78 act79 act80 act152 act153 act154 act155 act156;
MISSING = ALL (99);
ANALYSIS:
  TYPE = COMPLEX;
  ESTIMATOR = MLR;
MODEL:
  F1 BY act6 act11 act26 act31 act36 act46 act56 act66 act76;
  F4 BY act4 act9 act14 act29;
  F5 BY act5 act10 act15 act20 act30;
  F9 BY act34 act39 act44 act49 act54;
  F10 BY act35 act40 act45 act50 act55;
  F12 BY act57 act67 act72 act77;
  F13 BY act68 act73 act78;
  F14 BY act59 act64 act69 act74 act79;
  F15 BY act60 act65 act70 act75 act80;
  COCHE BY act152;
  COCHE_C BY act153;
  TTP BY act154;
  BICI BY act155;
  PIE BY act156;
  COCHE ON F1;
OUTPUT:
  STDYX; TECH4; MODINDICES(ALL);

Thank you!


You have single-indicator factors: TTP BY act154; BICI BY act155; PIE BY act156; This gives you a factor variance and a residual variance for the indicator, creating a non-identification because you have only 1 observed variance. Either fix the residual variance to zero or don't create these factors but work directly with the observed variables. If this doesn't help, send your output to Support along with your license number.
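The first fix mentioned above, sketched for one of the single-indicator factors: the indicator's residual variance is fixed at zero so that only the factor variance is estimated.

```
MODEL:
  TTP BY act154;
  act154@0;        ! fix the indicator's residual variance to zero
                   ! so the single-indicator factor is identified
```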

rgm smeets posted on Sunday, December 16, 2018  1:39 pm



Dear, could I use the MLF estimator on a dataset of more than 10,000 observations with skewed variables?

rgm smeets posted on Monday, December 17, 2018  3:08 am



In addition to my last question, can the MLF estimator also deal with missing data?


Yes, the whole ML family deals with missing data in the standard way, assuming MAR ("FIML").

rgm smeets posted on Tuesday, December 18, 2018  12:09 am



Dear Mister Muthen, I changed the estimator from MLR to MLF as I received the following warning:

WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH. AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE. THE CONDITION NUMBER IS xxx. THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES OR BY USING THE MLF ESTIMATOR. THE MODEL ESTIMATION TERMINATED NORMALLY.

I have a dataset with more than ten thousand observations, and some variables are skewed. If I use the MLF estimator, the statistical indicators and the model results are exactly the same as running the analysis with the MLR estimator (except for the SEs). What would you advise me to do? Use the MLF estimator? Or stick to the MLR estimator and ignore the warning?


I would stick with the MLR estimator. The message may appear when convergence is not totally complete. You can sharpen Mconvergence to see if the logL improves and the message goes away. If not, ignore the message and stay with MLR. 
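Sharpening the convergence criterion can be sketched as follows (the value shown is illustrative, tighter than the default):

```
ANALYSIS:
  ESTIMATOR = MLR;
  MCONVERGENCE = 0.0000001;   ! tighter than the default; check whether
                              ! the logL improves and the warning goes away
```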

rgm smeets posted on Wednesday, December 19, 2018  10:08 am



Dear Mister Muthen, thank you for this useful advice. How should I sharpen this MCONVERGENCE? I read somewhere that I have to use mconv = 0.0000001?

rgm smeets posted on Wednesday, December 19, 2018  11:27 am



I decreased the value of MCONVERGENCE by a factor of 10 two times. An MCONVERGENCE of 0.00000001 makes the saddle point warning disappear, but the logL and the BIC are worse. I assume that I now best stick with MLR and ignore the saddle point warning?


We need to see your output  send to Mplus Support along with your license number. 


Hello Mplus team, 1Q) What is the difference between the "Convergence criterion" in MLR and the "Convergence criteria" in the EM algorithm?


Please send your output to Support along with your license number. 


Dear Sir, I have not started the data analysis yet. Before analyzing the data, I wanted to understand these terms. Could you suggest some articles or textbooks where I can learn about terms such as:
1) Convergence criterion of MLR vs. EM
2) Iteration vs. steepest descent iteration
3) Loglikelihood change vs. relative loglikelihood change
4) Derivative


I recommend the book: Skrondal, A. and Rabe-Hesketh, S. (2004). Generalized Latent Variable Modeling: Multilevel, Longitudinal and Structural Equation Models. Boca Raton, FL: Chapman & Hall/CRC.


Thank you, Sir. 

Ryan Veal posted on Tuesday, August 27, 2019  12:12 am



Dr. Prof. Muthen, am I correct in thinking that MLR and MLM are for continuous variables only and should not be used with ordinal indicators? That is, only WLSMV should be used? My indicators have only a 3-point ordinal scale. Thanks


No, that's incorrect but a common misunderstanding: ML(R) is also suitable for categorical outcomes. For an overview of estimator choices with categorical outcomes, see my FAQ on our website: Estimator choices with categorical outcomes.

Ryan Veal posted on Thursday, August 29, 2019  1:37 am



Thank you very much for your reply. I have read your article. Can you confirm that observed variables with only three levels on an ordinal scale, (scores of which, with so few levels, therefore cannot be normally distributed) are suitable as categorical variables using MLR? Sorry to be precise, but as you point out, this seems to be contrary to what is common in many texts. Thanks. 


Yes, I can confirm that. An example is the large field of Item Response Theory, where ordinal outcomes are common and are almost always handled by ML or MLR. I'd be interested in which texts show that misunderstanding, quotes included.

Ryan Veal posted on Sunday, September 01, 2019  3:52 am



Thank you very much for clarifying, Professor Muthen. An example text I often refer to is: Byrne, B. M. (2012). Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming. New York: Routledge. "Thus far in the book, analyses have been based on ML estimation and MLM estimation. An important assumption underlying both of these estimation procedures is that the scale of the observed variables is continuous." (pp. 126-128). Byrne then makes the case that an ordinal scale with fewer than five categories cannot be considered continuous.

Ryan Veal posted on Sunday, September 01, 2019  3:55 am



Neumann, C. S., Kosson, D. S., & Salekin, R. T. (2017). Exploratory and Confirmatory Factor Analysis of the Psychopathy Construct: Methodological and Conceptual Issues. In H. Herve & J. C. Yuille (Eds.), The Psychopath: Theory, Research, and Practice (2nd ed., pp. 79-104). Mahwah, NJ: Routledge. The above reference is specific to the three-level ordinal scale I am researching using CFA and SEM. Neumann et al. (2007/2017) reported:

Ryan Veal posted on Sunday, September 01, 2019  3:55 am



"Finally, one of the most underrecognized problems in applied factor analytic research concerns the nature of the items used to assess various constructs. In many areas of psychology and psychiatry, individuals are assessed with instruments using symptom or trait ratings that are based on an ordinal rather than an interval scale. For example, psychopathy ratings usually involve a determination of whether a trait is absent (0), present but below a clinical threshold (1), or present at a clinically significant level (2)... 

Ryan Veal posted on Sunday, September 01, 2019  3:56 am



...In such cases, there is no clear interval scale between the ordered ratings. Of course, it may be that each trait being rated is continuously distributed in the population, and, thus, ratings are not conceptualized as categorical variables. Instead, such ordinal ratings arise from thresholding an underlying continuous variable (Everitt & Dunn, 2001). Nevertheless, such ordinal variables are not ideally suited to factor analysis, which relies on the maximum likelihood procedure (Everitt & Dunn, 2001; West, Finch, & Curran, 1995). Attenuation of parameter estimates can occur when few ordinal categories are used (fewer than five), and such variables are skewed (West et al., 1995). In particular, factor loadings and factor correlations will be underestimated to the extent that the ordinal variables are skewed (Everitt & Dunn, 2001; West et al., 1995). Of note is the fact that error variance parameters may be severely biased, and spurious correlations may occur between variables whose error variances reflect a similar degree of skewness (West et al., 1995). As a consequence, attenuation of parameters and spurious correlations will probably contribute to model misspecification and adversely affect model fit." (p. 90) 

Ryan Veal posted on Sunday, September 01, 2019  3:56 am



The authors then illustrate the difference by testing a model using both ML and RWLS in which the ML method was said to have underestimated relative fit and loadings. Please let me know if you believe I am misinterpreting the texts. Thanks again for your time. 


Byrne's book is wrong about ML being limited to continuous variables (MLM is, however, for continuous variables only). The Neumann et al. book also does not make it clear that ML can be used for non-continuous variables. I think you have interpreted the books correctly. I think the issue is that authors talk about analyzing correlation or covariance matrices, which are suitable only for continuous variables, and then conflate that with ML analysis, not making it clear that ML is a general estimation approach that can be used for any type of outcome. It is a common problem in portraying estimators and outcome types. For how we write about estimator choices with categorical outcomes, see our FAQ: Estimator choices with categorical outcomes.

Ryan Veal posted on Monday, September 02, 2019  6:59 pm



Very well. Thank you very much for your time, Professor Muthen. Cheers, Ryan 

Ryan Veal posted on Monday, September 09, 2019  10:08 pm



Hello Professor Muthen, as a follow-up question to my previous ones on the most appropriate estimator choice for categorical indicators: your FAQ "Estimator choices with categorical outcomes" reports that the use of ULSMV can be advantageous over WLSMV in small samples. Am I correct in thinking that when specifying ULSMV in Mplus with categorical indicators, Mplus automatically controls for non-normality and provides model fit information robust to non-normality? Thank you, Ryan


Right. 

Ryan Veal posted on Wednesday, September 11, 2019  3:22 pm



Thank you very much. 


Dear Drs. Muthen, I am running a latent growth curve model with sibling data, so I want robust standard errors to account for the non-independence of the data. Therefore, I decided to use the MLR estimator. I am also using multiple imputation to help with the missing values in my dataset. Is it okay to use both the MLR estimator and imputation? I ask because, unless I am misunderstanding, MLR uses FIML, which should be helping with missing values. However, if I run the analysis without imputing, the covariance coverage is too low and I cannot run the analysis. Thank you for your help!


I would recommend TYPE=COMPLEX and MLR. If you have multiply imputed data, you don't have any missing data, so FIML is not relevant. If you have problems with covariance coverage, multiple imputation merely hides the problem; it does not resolve the fact that you have too much missing data for trustworthy analysis results.
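A sketch combining multiply imputed data with TYPE=COMPLEX and MLR, assuming hypothetical names: implist.dat is a text file listing the imputed data sets, and famid identifies sibling clusters.

```
DATA:
  FILE = implist.dat;      ! file listing the imputed data sets
  TYPE = IMPUTATION;

VARIABLE:
  CLUSTER = famid;         ! sibling/family identifier

ANALYSIS:
  TYPE = COMPLEX;
  ESTIMATOR = MLR;
```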
