Bootstrap or MLR
Mplus Discussion > Structural Equation Modeling >
 Meike Slagt posted on Tuesday, January 13, 2015 - 1:06 pm
Dear Dr. Muthén,

I am in doubt about whether to choose the MLR estimator or a bootstrap procedure for my SEM model. I know that both can provide standard errors that are robust to nonnormality, and bootstrapping can even provide asymmetric confidence intervals. What are the pros and cons of each? Are there situations in which it is better to use one rather than the other?

I have now used the MLR estimator, but one of my reviewers says I should bootstrap to check whether my relatively small sample size (N=176) has limited the power to detect significant findings. I am not confident, though, that bootstrapping can solve the power problems that accompany a small sample size.

Thank you for your view on this!
 Bengt O. Muthen posted on Tuesday, January 13, 2015 - 3:10 pm
As opposed to MLR, bootstrapping offers non-symmetric confidence intervals, which can be important for parameter estimates that have non-normal sampling distributions, such as variances and indirect effects, particularly in small samples.

I don't recall papers making direct comparisons between MLR and bootstrap. Anyone else?

There is also the possibility to do Bayesian estimation which has the same advantage as bootstrap. You may want to try all 3 approaches to get a feeling for the range of results.
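To make the comparison concrete, the three approaches differ mainly in the ANALYSIS and OUTPUT settings. A minimal sketch (the MODEL command itself is omitted, and 5000 bootstrap draws is a common choice rather than a requirement):

```
! (1) MLR: robust but symmetric standard errors
ANALYSIS:  ESTIMATOR = MLR;

! (2) ML with bootstrap: asymmetric percentile CIs
ANALYSIS:  ESTIMATOR = ML;
           BOOTSTRAP = 5000;
OUTPUT:    CINTERVAL(BOOTSTRAP);

! (3) Bayes: credibility intervals from the posterior
ANALYSIS:  ESTIMATOR = BAYES;
```

CINTERVAL(BCBOOTSTRAP) can be requested instead of CINTERVAL(BOOTSTRAP) if bias-corrected rather than plain percentile bootstrap intervals are wanted.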
 Margarita  posted on Tuesday, March 10, 2015 - 4:46 pm
Dear Dr. Muthén,

I compared the results of a serial mediation model from ML with bootstrapping vs. MLR (N = 360). One of the paths was significant when MLR was used but non-significant with bootstrapping. Given the discrepancy in the results, I am not sure which of the two methods to use. I would appreciate any input on this.

Thank you!
 Bengt O. Muthen posted on Wednesday, March 11, 2015 - 9:51 am
Bootstrapping may be more accurate, but perhaps conservative. Try Estimator = Bayes as well to adjudicate. It may fall somewhere in between.
 Margarita  posted on Wednesday, March 11, 2015 - 12:52 pm
When I use more bootstrap samples (e.g. 5000) then my results are similar to the ones from the MLR. So, I will probably use bootstrap with 5000 and also try the Bayes estimator. Thank you for your help!
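For a serial mediation model of this kind, the bootstrap request might look like the sketch below (x, m1, m2, and y are hypothetical variable names; 5000 is the number of bootstrap draws mentioned above):

```
ANALYSIS:        ESTIMATOR = ML;
                 BOOTSTRAP = 5000;
MODEL:           m1 ON x;
                 m2 ON m1 x;
                 y  ON m2 m1 x;
MODEL INDIRECT:  y IND m2 m1 x;
OUTPUT:          CINTERVAL(BOOTSTRAP);
```

The MODEL INDIRECT statement requests the specific serial indirect effect x -> m1 -> m2 -> y, with its bootstrap confidence interval printed in the CINTERVAL section of the output.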
 Reem posted on Tuesday, September 19, 2017 - 12:43 am
Dear Professor Muthen,

I am running a regression using both MLR and bootstrapping, to compare results. I am also looking at diagnostic statistics: the OUTLIERS option to examine outliers and influential cases, the TECH10 option to request standardized residuals, and plots of standardized residuals against standardized predicted values to test for homoscedasticity.

However, I obtained the following warning messages with bootstrapping (summarized below):

*** WARNING in SAVEDATA command
SAVE settings LOGLIKELIHOOD, COOKS, INFLUENCE, and MAHALANOBIS
are not available with bootstrap. Requests for SAVE will be ignored.
*** WARNING in PLOT command
The OUTLIERS option is not available with BOOTSTRAP.
Requests for OUTLIERS will be ignored.
*** WARNING in OUTPUT command
TECH10 option is available only with estimators ML, MLF, and MLR.
Request for TECH10 is ignored.
5 WARNING(S) FOUND IN THE INPUT INSTRUCTIONS

Is this because bootstrapping corrects for bias from outliers/influential cases, whereas MLR does not?
Also, are there any other diagnostic statistics that I should check when using MLR and/or bootstrapping?

Thanks for your help!
 Bengt O. Muthen posted on Tuesday, September 19, 2017 - 6:00 pm
Some facts may be helpful:

- MLR parameter estimates are the same as ML and ML using bootstrap

- Bootstrap influences only SEs

- both bootstrap and MLR improve the SEs compared to ML SEs when outliers are present

- Outlier diagnostics are not defined/available with bootstrap
 Reem posted on Monday, September 25, 2017 - 2:30 am
Thanks a lot Professor Muthen. I'd like to kindly ask two follow-up questions.
a) Does MLR correct for non-normality of errors and heteroscedasticity?
b) Does Mplus produce a measure for testing independence of errors, such as the Durbin-Watson statistic?
Many thanks
 Bengt O. Muthen posted on Monday, September 25, 2017 - 5:39 pm
a) Yes.

b) No, but if you can model the non-independence, e.g. by autocorrelations, then you can estimate it.
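For example, with an outcome measured at three hypothetical waves (y1-y3), non-independence could be modeled either with autoregressive paths or with correlated residuals. A sketch, not a recommendation:

```
MODEL:  y2 ON y1;    ! autoregressive (lag-1) paths
        y3 ON y2;

        ! alternatively, correlated residuals:
        ! y1 WITH y2;
        ! y2 WITH y3;
```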
 DtB posted on Monday, November 27, 2017 - 5:02 am
Dear Professor Muthen,
I have 3 questions:
a) Is there any difference in how MLR and bootstrapping+ML handle missing data?
b) I have a sample size of N=90 at T1 and N=80 at T2 (MAR). I use bootstrapping to evaluate the confidence intervals of the indirect effects in the model. However, I also noticed other differences between the two methods. The fit is better when I use bootstrapping+ML; however, some strong (e.g., .80) estimates of direct effects become non-significant with the bootstrap+ML method. Would this be an indication that the bootstrap method is too conservative in my case? The Bayes estimator yields p-values more similar to MLR.
c) In this case, would you advise reporting the fit-statistics and p-values of estimates (direct effects) from the bootstrapping+ML or from the MLR method?

Thanks,
 Bengt O. Muthen posted on Monday, November 27, 2017 - 5:06 pm
Send the relevant outputs to Support along with your license number.