Anonymous posted on Monday, March 19, 2012 - 11:17 am
I've got a few questions about the MLR output:
The Mplus output for MLR includes, if I am correct,
• Regression coefficient estimates using maximum likelihood estimation
• Robust standard errors computed with the Huber-White 'sandwich' estimator
• A robust chi-square test of model fit using an extension of the Yuan-Bentler T2 test statistic
• Full Information Maximum Likelihood estimation to handle missing data
• Hypotheses tested by computing the ratio of the estimate to its standard error ("Est./S.E."), with a corresponding p-value. If I am correct, this would be my t-test/z-test? ª
ª If this is my t-test/z-test, would it be more appropriate to report it as a t-test or a z-test (n = 326)? (A small numerical sketch of this question follows this post.)
Does this seem accurate, and is there anything I should probably know from this output?
Also, just to clarify, does this method still count as 'multiple linear regression'? I saw a comment in the forum saying that it used logistic regression, so now I am doubting whether I should be using this in the first place.
I would appreciate clarification/confirmation on this, please.
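For illustration only, a minimal sketch in Python of the z-test versus t-test question, assuming the Est./S.E. ratio is referred either to the standard normal or to a t distribution. The ratio value and the degrees of freedom (df = n - 2) are hypothetical assumptions, not actual Mplus output; the point is simply that with n = 326 the two reference distributions give essentially the same p-value.

# Hypothetical Est./S.E. ratio; with n = 326 the z- and t-based p-values
# are nearly identical, so the distinction matters little at this sample size.
from scipy.stats import norm, t

ratio = 2.10                          # hypothetical Est./S.E. value
n = 326
p_z = 2 * norm.sf(abs(ratio))         # two-tailed p from the standard normal (z-test)
p_t = 2 * t.sf(abs(ratio), df=n - 2)  # two-tailed p from a t distribution (df assumed)
print(f"z-based p = {p_z:.4f}, t-based p = {p_t:.4f}")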
I'm new to Mplus. I have used the MLR estimator because of the non-normality of the indicator variables. Now I'm trying to find guidelines for the cutoff criteria for the fit indexes. Is it OK to use the suggestions from Hu & Bentler (1999), "Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives," SEM 6(1), 1-55? They appear to be only for the ML estimator. In other words: are the recommended cutoff criteria for the ML estimator the same as for MLR?
I'm running a series of bivariate latent change score models and am wondering whether to use ML or MLR estimation. My n = 200. My dependent variables are non-normally distributed (skewness around 2-3, kurtosis between 2 and 13) and data are MAR (12 missing data patterns, lowest covariance coverage is .861). Is MLR recommended in this case? Or is ML adequate?
I would use MLR. It is robust to non-normality. ML is not.
Wim Beyers posted on Thursday, December 03, 2015 - 10:05 am
Running a simple model (regression with covariates) on a small sample (n = 38). When using different estimators, I get the same parameter estimates (as expected, since they are all calculated on an ML basis), but very different p-values. Why?
Just one example: b = .238; ML p = .006; MLM p = .053; MLR p = .056; bootstrap ML p = .068.
Is this huge difference between ML and the other estimators due to the fact that the latter three apply corrections for non-normality and other data issues (which are probably present in such a small sample), whereas ML does not? Or is there another reason?
The p-value is for the z-score that is the ratio of the parameter estimate to the standard error. All four estimators give different standard errors. Given that ML is the most different, I would guess that the dependent variable is not normally distributed.
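As a minimal sketch of that computation in Python, assuming the ratio is referred to the standard normal: the estimate matches the one quoted in the post above, but the two standard errors are hypothetical illustration values, chosen only to show how the same estimate with different standard errors yields different p-values.

from scipy.stats import norm

estimate = 0.238                      # same point estimate under every estimator
for label, se in [("estimator A", 0.086), ("estimator B", 0.125)]:
    z = estimate / se                 # the "Est./S.E." ratio
    p = 2 * norm.sf(abs(z))           # two-tailed p-value from the standard normal
    print(f"{label}: z = {z:.3f}, p = {p:.4f}")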
Wim Beyers posted on Thursday, December 03, 2015 - 11:35 pm
Thanks for confirming what I suspected!
Mike Zyphur posted on Sunday, September 04, 2016 - 8:45 pm
Hi Linda and Bengt, As you know, some researchers use LL values to compute pseudo-R2 as a ratio involving null and alternate models. I am using this approach to examine pseudo-R2 changes with blocks of predictors in an SEM with something like the McFadden formula:
R2 = 1 - LLF/LLI
However, I am using MLR. Therefore, it seems that adjusting for scaling correction factors is important. Without doing so, I can't see how the LL values or the ratios have comparable meanings. Any thoughts on this approach are greatly appreciated.
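A minimal sketch in Python of the pseudo-R2 computation described above, assuming the McFadden form R2 = 1 - LL_full / LL_null; the helper function and the two loglikelihood values are hypothetical placeholders, not output from any actual model.

def mcfadden_pseudo_r2(ll_full, ll_null):
    # Pseudo-R2 from the fitted-model and intercept-only (null) loglikelihoods.
    return 1.0 - ll_full / ll_null

ll_null = -1250.4   # hypothetical intercept-only (null) model loglikelihood
ll_full = -1112.7   # hypothetical loglikelihood after adding a block of predictors
print(mcfadden_pseudo_r2(ll_full, ll_null))  # about 0.11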
It seems like McFadden's R2 formula is driven only by how the parameter estimates change the loglikelihood, whereas the scaling correction factors are related to inference (chi-square and, indirectly, SEs). The logL is the same with ML and MLR. So I don't see a rationale for changing the R2 formula - but I may be wrong. I don't know off-hand how one would investigate it, though.
Mike Zyphur posted on Monday, September 05, 2016 - 4:19 pm
Thanks Bengt, I see your point. Perhaps simply looking at SRMR would also be useful here.