Bootstrapping and confidence intervals
 Sarah Vogel posted on Tuesday, January 28, 2020 - 10:52 am
I am working on a multiple mediation analysis using structural equation modelling. I have two latent variables (L1 and L2) mediating the association between two observed variables (main predictor S and outcome E). When I run the model without bootstrapping, the model fit is good, my direct effects are all as expected, and the indirect effect of S on E is significant through L1 but not through L2.

However, when I run the same analysis using bootstrapping to calculate confidence intervals, the p-values for some of my direct effects and all of my indirect effects change substantially, and many of my findings are no longer significant. Yet the confidence intervals for these effects do not include zero, which would indicate to me that the results are significant. The standard errors in many cases increase by an order of magnitude or more. The fit statistics for the model are still good.

What is the best practice here? Are the confidence intervals more trustworthy than the p-values?
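
For concreteness, a setup like the one described could be specified roughly as follows (a minimal sketch only; the data file and the indicator names x1-x6 are placeholders, not the actual variables):

DATA:      FILE = mydata.dat;          ! placeholder file name
VARIABLE:  NAMES = s e x1-x6;          ! x1-x6 stand in for the latent-variable indicators
ANALYSIS:  BOOTSTRAP = 5000;           ! bootstrap draws used for the confidence intervals
MODEL:     L1 BY x1-x3;                ! first latent mediator
           L2 BY x4-x6;                ! second latent mediator
           L1 L2 ON s;                 ! predictor -> mediators
           e ON L1 L2 s;               ! mediator effects plus the direct effect on the outcome
MODEL INDIRECT:
           e IND L1 s;                 ! indirect effect of s on e through L1
           e IND L2 s;                 ! indirect effect of s on e through L2
OUTPUT:    CINTERVAL(BOOTSTRAP);       ! bootstrap (percentile) confidence intervals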
 Bengt O. Muthen posted on Wednesday, January 29, 2020 - 5:28 pm
Q1: Bootstrapping is the best practice although Bayes can also be used because it also gives the desired non-symmetric confidence intervals. An effect is significant if the confidence interval does not include zero.
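
For example, a Bayes specification that gives non-symmetric intervals for the indirect effects could look roughly like this (a sketch only; the indicator names and the parameter labels a1, b1, etc. are illustrative):

ANALYSIS:   ESTIMATOR = BAYES;
            BITERATIONS = (10000);     ! minimum number of MCMC iterations
MODEL:      L1 BY x1-x3;
            L2 BY x4-x6;
            L1 ON s (a1);              ! label the paths from the predictor to the mediators
            L2 ON s (a2);
            e ON L1 (b1)               ! label the paths from the mediators to the outcome
                 L2 (b2)
                 s;                    ! direct effect
MODEL CONSTRAINT:
            NEW(ind1 ind2);
            ind1 = a1*b1;              ! indirect effect through L1
            ind2 = a2*b2;              ! indirect effect through L2
OUTPUT:     CINTERVAL;                 ! credibility intervals for all parameters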

Q2: Yes - at least if all bootstraps have converged properly. Very large SEs might indicate that some bootstraps had problems.
 Sarah Vogel posted on Thursday, January 30, 2020 - 8:46 am
Dear Dr. Muthen,

Thank you for your reply. I don't think my bootstraps are all converging properly, which is resulting in very large standard errors. I'm requesting 5000 bootstraps and getting 4496 completed bootstraps. Do you have any idea why this might be happening and what I can do to correct the issue?

Thank you for your help.
 Tihomir Asparouhov posted on Thursday, January 30, 2020 - 1:17 pm
This usually happens when the data set is small or not diverse enough (i.e., similar observations in the data do not contribute additional information to the estimation). Every bootstrap draw is less diverse than the original data, so if you start out with little information you can end up with critically little.

The problem is not that some of the 5000 draws did not converge - the problem is that the SEs are too large. If anything would help, it is having more draws classified as non-converged.

Here are some things you might try, but if these don't work I would suggest abandoning the bootstrap altogether.

1. Decrease the MCONVERGENCE and CONVERGENCE options (see the sketch at the end of this list).

2. Change the estimator (especially if you are using WLSMV; switching to ML has a good chance of fixing the problem).

3. Look for weak spots in the model that could be improved, for example factors with only two indicators, categories with very few observations, or loadings that have large SEs and may switch signs across the different bootstrap draws.
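
As a rough sketch of points 1 and 2 (the particular criterion values and the estimator choice are only illustrative):

ANALYSIS:   ESTIMATOR = ML;            ! point 2: switch from WLSMV to ML
            BOOTSTRAP = 5000;
            CONVERGENCE = 0.000001;    ! point 1: tighten the convergence criteria so that
            MCONVERGENCE = 0.000001;   ! problematic draws are flagged as non-converged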