Questions about MLR
 Anonymous posted on Monday, March 19, 2012 - 11:17 am
I've got a few questions about the MLR output:

The Mplus output for MLR includes, if I am correct,

• Regression coefficient estimates using maximum likelihood estimation
• Robust standard errors computed with Huber-White 'sandwich' estimator
• Robust chi-square test of model fit using an extension of the Yuan-Bentler T2 test statistic
• MLR uses Full Information Maximum Likelihood Estimation to handle missing data
• Hypotheses are tested by computing the ratio of the estimate to its standard error ("Est./S.E.") and the corresponding p-value. If I am correct, this would be my t-test/z-test? ª

ª If this is my t-test/z-test, would it be more appropriate to report it as a t-test or a z-test (n = 326)?

Does this seem accurate, and is there anything I should probably know from this output?

Also, just to clarify, does this method still count as 'multiple linear regression'? I saw someone comment in the forum that it used logistic regression, so now I am doubting whether I should use this in the first place.

I would appreciate clarification/confirmation on this, please.

Thanks in advance,

JL
 Linda K. Muthen posted on Monday, March 19, 2012 - 1:29 pm
The ratio of the parameter estimate to its standard error is a z-test in large samples.

MLR estimates linear regression if the dependent variable is continuous and logistic regression if the dependent variable is categorical.
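
To make the z-test concrete, here is a minimal sketch (in Python, with made-up numbers rather than real Mplus output) of what the "Est./S.E." column and its p-value amount to:

```python
from scipy.stats import norm

# Hypothetical values standing in for one row of Mplus output
estimate = 0.45   # parameter estimate
se = 0.12         # robust (sandwich) standard error under MLR

z = estimate / se            # the "Est./S.E." column
p = 2 * norm.sf(abs(z))      # two-sided p-value from the standard normal
print(f"z = {z:.3f}, p = {p:.4f}")
```

With n = 326, the large-sample normal reference is the conventional choice, which is why reporting it as a z-test rather than a t-test is appropriate here.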
 Anonymous posted on Monday, March 19, 2012 - 4:43 pm
Ah I see, that's genius (how it knows which one to use). z-test it is then.

So is it safe to say I would still be able to use this regression equation, Y = i + aX + bM + cXM + E?
 Linda K. Muthen posted on Monday, March 19, 2012 - 5:00 pm
Yes.
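
For readers who want to see the same moderated regression, Y = i + aX + bM + cXM + E, outside Mplus, here is a rough sketch using simulated data and statsmodels. The heteroskedasticity-consistent (HC3) covariance option is only an analogue of MLR's sandwich standard errors for a single-equation linear model, not a reproduction of what Mplus computes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-ins for the real variables (all names and values are placeholders)
rng = np.random.default_rng(0)
n = 326
X = rng.normal(size=n)
M = rng.normal(size=n)
Y = 0.5 + 0.3 * X + 0.2 * M + 0.15 * X * M + rng.normal(size=n)
df = pd.DataFrame({"Y": Y, "X": X, "M": M})

# "Y ~ X * M" expands to intercept + X + M + X:M, i.e. Y = i + aX + bM + cXM + E
fit = smf.ols("Y ~ X * M", data=df).fit(cov_type="HC3")
print(fit.summary())
```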
 Michael posted on Tuesday, May 29, 2012 - 4:29 am
Dear Professor/s,

I'm new to Mplus. I have used the MLR estimator because of the non-normality of the indicator variables.
Now I'm trying to find some guidelines for the cutoff criteria for the fit indices. Is it OK to use the suggestions from "Hu & Bentler (1999). Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives, SEM 6(1), 1-55"? They appear to be only for the ML estimator.
In other words: are the recommended cutoff criteria for the ML estimator the same as for MLR?

Many Thanks in advance

Michael
 Linda K. Muthen posted on Tuesday, May 29, 2012 - 8:39 am
I don't believe there have been any studies specific to MLR. The Hu and Bentler cutoffs are probably the best you can do.
 Michael posted on Tuesday, May 29, 2012 - 10:46 am
Thank you very much for the quick response!
 Marketa Krenek  posted on Sunday, March 09, 2014 - 8:55 am
Hi there,

I'm running a series of bivariate latent change score models and am wondering whether to use ML or MLR estimation. My n = 200. My dependent variables are non-normally distributed (skewness around 2-3, kurtosis between 2 and 13) and data are MAR (12 missing data patterns, lowest covariance coverage is .861). Is MLR recommended in this case, or is ML adequate?

Thank you!
 Linda K. Muthen posted on Sunday, March 09, 2014 - 3:32 pm
I would use MLR. It is robust to non-normality. ML is not.
 Wim Beyers posted on Thursday, December 03, 2015 - 10:05 am
I'm running a simple model (regression with covariates) on a small sample (n = 38). When using different estimators, I get the same parameter estimates (as expected, since they are all calculated on an ML basis), but very different p-values. Why?

Just one example: b = .238
- ML p = .006
- MLR p = .056
- MLM p = .053
- bootstrap ML p = .068

Is this large difference between ML and the other estimators due to the fact that the latter three make corrections for non-normality and other irregularities in the data (which probably occur in such a small sample), whereas ML does not? Or is there another reason?
 Linda K. Muthen posted on Thursday, December 03, 2015 - 5:54 pm
The p-value is for the z-score that is the ratio of the parameter estimate to the standard error. All four estimators give different standard errors. Given that ML is the most different, I would guess that the dependent variable is not normally distributed.
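
To see how much the standard errors must differ to produce those p-values, one can back out the implied SE for each estimator from b = .238. A small sketch (treating every p as a two-sided normal p-value, which is only approximate for the bootstrap result):

```python
from scipy.stats import norm

b = 0.238
p_values = {"ML": 0.006, "MLR": 0.056, "MLM": 0.053, "bootstrap ML": 0.068}

for name, p in p_values.items():
    z = norm.ppf(1 - p / 2)   # |z| implied by a two-sided p-value
    se = b / z                # implied standard error for the same estimate
    print(f"{name:>12}: |z| = {z:.2f}, implied SE = {se:.3f}")
```

The ML standard error implied this way is roughly 30 percent smaller than the other three, which fits the non-normality explanation.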
 Wim Beyers posted on Thursday, December 03, 2015 - 11:35 pm
Thanks for confirming what I suspected!
 Mike Zyphur posted on Sunday, September 04, 2016 - 8:45 pm
Hi Linda and Bengt,
As you know, some researchers use LL values to compute pseudo-R2 as a ratio involving null and alternate models. I am using this approach to examine pseudo-R2 changes with blocks of predictors in an SEM with something like the McFadden formula:

R2 = 1 - LLF/LLI

However, I am using MLR. Therefore, it seems that adjusting for scaling correction factors is important. Without doing so, I can't see how the LL values or the ratios have comparable meanings. Any thoughts on this approach are greatly appreciated.

Thanks for your time and help!
Mike
 Bengt O. Muthen posted on Monday, September 05, 2016 - 3:55 pm
It seems like McFadden's R2 formula is driven only by how the parameter estimates change the loglikelihood, whereas scaling correction factors are related to inference (chi-square and, indirectly, SEs). The logL is the same with ML and MLR. So I don't see a rationale for changing the R2 formula - but I may be wrong. I don't know off-hand how one would investigate it, though.
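
A minimal sketch of the McFadden-style calculation under discussion, with hypothetical loglikelihood values. Since the loglikelihood itself is the same under ML and MLR (only the scaling correction factor, which feeds into the chi-square and SEs, differs), the ratio below is unaffected by the choice of estimator:

```python
# Hypothetical H0 loglikelihoods (placeholders, not from any real analysis)
ll_baseline = -1250.0   # LLI: intercept-only / null model
ll_full = -1175.0       # LLF: model including the block of predictors

pseudo_r2 = 1 - ll_full / ll_baseline   # McFadden: R2 = 1 - LLF / LLI
print(f"McFadden pseudo-R2 = {pseudo_r2:.3f}")
```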
 Mike Zyphur posted on Monday, September 05, 2016 - 4:19 pm
Thanks Bengt,
I see your point. Perhaps simply looking at SRMR would also be useful here.

Cheers,
Mike
 Javed Ashraf posted on Wednesday, May 16, 2018 - 12:32 pm
Hi
I am confused about how to test and justify model fit for a CFA with categorical observed variables using MLR as the estimator. Can you please explain and provide a simple reference for this?
Thanks
 Bengt O. Muthen posted on Wednesday, May 16, 2018 - 1:43 pm
Use TECH10 information from bivariate frequency tables.
 Jeremy Saenz posted on Friday, June 08, 2018 - 3:27 pm
Hi all, could someone help me understand the pros and cons of using MLR versus the default ML?
 Bengt O. Muthen posted on Friday, June 08, 2018 - 5:10 pm
The short story is that they give the same parameter estimates but the MLR SEs are robust against deviation from normality of the outcomes and are also robust to certain forms of model misspecification. MLR SEs are required with Type=Complex to account for clustering.
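
As an illustration of the clustering point (not of Mplus itself), here is a sketch with simulated clustered data where conventional OLS standard errors understate the uncertainty and a cluster-robust (sandwich) covariance, loosely analogous to MLR with TYPE = COMPLEX, corrects for it:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clustered data (all names and values are placeholders)
rng = np.random.default_rng(1)
n_clusters, per_cluster = 40, 10
cluster = np.repeat(np.arange(n_clusters), per_cluster)
# Both the predictor and the error have a cluster-level component,
# so ignoring clustering tends to understate the standard error of the slope.
x = rng.normal(size=n_clusters)[cluster] + rng.normal(size=n_clusters * per_cluster)
u = rng.normal(scale=0.8, size=n_clusters)[cluster]
y = 0.4 * x + u + rng.normal(size=n_clusters * per_cluster)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

naive = smf.ols("y ~ x", data=df).fit()                         # conventional SEs
robust = smf.ols("y ~ x", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster"]})     # cluster-robust SEs

print("conventional SE:  ", round(naive.bse["x"], 4))
print("cluster-robust SE:", round(robust.bse["x"], 4))
```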
 Jeremy Saenz posted on Friday, June 08, 2018 - 5:47 pm
Thank you for your answer. Is there a pro to using the default instead of MLR?
 Tihomir Asparouhov posted on Monday, June 11, 2018 - 11:37 am
First note that MLR is the Mplus default in most situations; see the table on pages 666-667 of the User's Guide. ML is the default only for the most introductory models. ML uses simpler LRT testing for nested models and in some situations is a useful alternative to MLR, sort of as a back-up estimator in case MLR has a problem. Generally there are no big advantages to the ML estimator.
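
On the LRT point: with MLR, a nested-model comparison is usually carried out with a scaled (Satorra-Bentler-type) difference test that uses the loglikelihoods, the numbers of free parameters, and the scaling correction factors reported by Mplus. A sketch of that calculation, with made-up values in place of real output:

```python
from scipy.stats import chi2

def scaled_lr_test(ll0, p0, c0, ll1, p1, c1):
    """Scaled loglikelihood difference test for MLR.

    ll0, p0, c0: loglikelihood, number of free parameters, and MLR scaling
                 correction factor for the nested (more restrictive) model.
    ll1, p1, c1: the same quantities for the comparison model.
    """
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)   # scaling correction for the difference
    trd = -2 * (ll0 - ll1) / cd            # scaled chi-square difference
    df = p1 - p0
    return trd, df, chi2.sf(trd, df)

# Hypothetical values standing in for two Mplus runs
trd, df, p = scaled_lr_test(ll0=-2450.3, p0=18, c0=1.30,
                            ll1=-2441.7, p1=21, c1=1.25)
print(f"scaled chi-square difference = {trd:.2f}, df = {df}, p = {p:.4f}")
```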