Is it possible in version 3 to get the per-run output from a bootstrap? I'm interested in a bootstrap CI for a difference between two total effects, and I'm not seeing any good way to get that. Thanks.
bmuthen posted on Thursday, September 02, 2004 - 5:12 pm
No. Perhaps there is some modeling trick to parameterize the difference between two total effects but I couldn't think of one today (while keeping the car straight on the road).
Uh-oh -- better be careful what I ask during commuting times! Thanks.
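For readers wanting something like the per-draw output outside Mplus: a percentile bootstrap CI for a difference between two quantities can be computed by hand. A minimal sketch with made-up data, where two sample means stand in for the two total effects (all names and numbers are illustrative, not from any Mplus run):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: two outcome variables measured on the same n cases.
n = 200
y1 = rng.normal(loc=1.0, scale=1.0, size=n)
y2 = rng.normal(loc=0.5, scale=1.0, size=n)

B = 2000
diffs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)   # resample cases with replacement
    # The two means stand in for the two total effects here.
    diffs[b] = y1[idx].mean() - y2[idx].mean()

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% percentile CI for the difference: ({lo:.3f}, {hi:.3f})")
```

In a real application, the per-draw statistic would be the difference of the two model-estimated total effects, refit on each resampled data set.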
Michael Eid posted on Tuesday, February 22, 2005 - 12:01 am
Is it possible to get bootstrap confidence intervals for r-square in mplus?
Thuy Nguyen posted on Wednesday, February 23, 2005 - 11:15 am
No, bootstrap confidence intervals for R-square are not available in Mplus.
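Outside Mplus, a percentile bootstrap CI for R-square is straightforward to compute by hand. A minimal sketch with simulated regression data (illustrative only; numpy, OLS with intercept):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression data (illustrative only).
n = 150
x = rng.normal(size=(n, 2))
y = x @ np.array([0.6, -0.4]) + rng.normal(scale=1.0, size=n)

def r_square(x, y):
    """R-square from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

B = 2000
r2 = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)   # resample cases with replacement
    r2[b] = r_square(x[idx], y[idx])

lo, hi = np.percentile(r2, [2.5, 97.5])
print(f"R-square = {r_square(x, y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```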
Mary Campa posted on Tuesday, March 15, 2005 - 6:53 am
I am running a multilevel path analysis with a mix of categorical and continuous variables with an interaction included. I am also estimating indirect effects and am having trouble getting the bootstrapping to work. I get the following error messages.
*** FATAL ERROR THE WEIGHT MATRIX PART OF VARIABLE EVREPGD IS NON-INVERTIBLE. THIS MAY BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE. PROBLEM INVOLVING THE REGRESSION OF EVREPGD ON U_T. THE PROBLEM MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.
THE WEIGHT MATRIX PART OF VARIABLE BIRTH19 IS NON-INVERTIBLE. THIS MAY BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE. PROBLEM INVOLVING THE REGRESSION OF BIRTH19 ON U_T. THE PROBLEM MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.
The variables in question are dichotomous and as such cannot be collapsed any further. The U_T variable is an interaction of two other dichotomous variables (neither of which is causing the error).
Is there any way to get the bootstrapping to work despite this problem?
It's hard to know what is happening from the information provided. Please send the full output and data if possible to firstname.lastname@example.org.
Anonymous posted on Friday, April 01, 2005 - 7:18 am
Is it possible to bootstrap the RMSEA in Mplus? I want to replicate a fit statistics study that Nevitt and Hancock wrote a few years back in Structural Equation Modeling. More to the point, I would like to be able to demonstrate that the results I have found are stable, without moving directly to a Monte Carlo format.
No, this is not possible in the current version of Mplus. Only standard errors can be bootstrapped at this time.
Lily posted on Saturday, April 26, 2008 - 12:14 am
Dear Dr. Muthen,
It says in the Mplus User's Guide (page 496) that standard bootstrapping is available for the ML, WLS, WLSM, WLSMV, ULS, and GLS estimators.
However, when I tried to do bootstrapping on my IRT model (a single factor with 5 categorical dependent variables) using the ML estimator, which requires numerical integration, I got the following error message: "BOOTSTRAP is not allowed with ALGORITHM = INTEGRATION"
Are there any tricks to get Mplus to do bootstrapping for my model?
Hello, I am testing a path model with 3 time points. My sample is not very large (N = 112), so I am running my model with bootstrapped standard errors. I have two questions/problems:
1. When I run the model with bootstrapped standard errors, the CFI value is 0.00! Other values are normal and indicative of a good model (e.g., RMSEA, all expected important paths are significant, etc.). When I run the model without the bootstrapped standard errors command, the CFI is .96. Why is this?
2. The model fit and the paths' significance levels are much better if I leave out Time 2 (post) scores and just use Time 1 (pre) and Time 3 (follow-up) scores. Once I add the Time 2 scores, the important paths are no longer significant and the fit indices become worse. Shouldn't the model be better if I used more information? Is it okay to leave out Time 2 scores, given that I am in fact interested in the Time 3 scores and not so much the Time 2 scores?
Thank you.
1. This should not happen. Please send the full outputs and your license number to email@example.com.
2. It's not possible to say much about this. When the model changes, you expect different results. You need to try to understand why your particular changes occur.
I'm running a SEM and included "BOOTSTRAP = 1000 (RESIDUAL);".
In my output file, I can see that only 788 (of 1000) bootstrap draws were completed. In TECH9, I can see that this is mainly due to convergence problems ("NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.").
How reliable are the results I get? For example, my bootstrap p-value is 0.1967; can I trust this value?
I'm using the bias-corrected bootstrap in the analysis of my latent growth model. My decision to use bootstrap resampling is based on recommendations made by Mackinnon and others for the analysis of indirect effects, and also because the continuous outcomes and observed variables in my model are not well behaved from a distributional point of view. My sample comprises 413 individuals and the bootstrap is based on 10,000 draws.
Firstly, it is interesting to note that the standard errors of the model parameters obtained under the bootstrap approach are much larger than those obtained using the 'usual' ML analysis, with a number of previously significant effects becoming non-significant when using the bootstrap; this probably indicates that the smaller standard errors previously obtained were a result of the poor distributional properties of the data.
My question relates to the asymmetric confidence intervals generated from the BC bootstrap: two of the four intervals for specific indirect effects that I've examined do not contain zero, therefore providing evidence of a significant effect; however, the corresponding probability values are non-significant. For example, the 95% CI for one specific indirect effect is (-2.885, -0.232), with a p-value of 0.085. Should I rely on the confidence intervals or the p-values for a bias-corrected bootstrap? Thank you for your time. Susy Harrigan
Bootstrap standard errors are larger than maximum likelihood standard errors in most cases. I think if you use the MLR estimator you will find the standard errors closer to the bootstrap standard errors.
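The disagreement between the BC interval and the p-value is also expected: the p-value comes from a symmetric normal approximation (estimate over SE), while the BC interval shifts the percentile endpoints of a possibly skewed bootstrap distribution by a bias-correction factor z0. A minimal sketch of Efron's bias-corrected percentile interval, using only the Python standard library and a simulated (skewed) bootstrap distribution in place of real Mplus draws:

```python
from statistics import NormalDist
import random

nd = NormalDist()
random.seed(2)

# Illustrative skewed bootstrap distribution of an indirect effect.
est = -1.2                                  # point estimate from the full sample
boots = [-random.gammavariate(2.0, 0.6) for _ in range(5000)]

def bc_interval(est, boots, alpha=0.05):
    """Bias-corrected (BC) percentile interval."""
    boots = sorted(boots)
    prop = sum(b < est for b in boots) / len(boots)
    z0 = nd.inv_cdf(prop)                   # bias-correction factor
    lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
    hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    lo = boots[int(lo_p * (len(boots) - 1))]
    hi = boots[int(hi_p * (len(boots) - 1))]
    return lo, hi

lo, hi = bc_interval(est, boots)
print(f"BC 95% CI: ({lo:.3f}, {hi:.3f})")
```

When the bootstrap distribution is skewed, such an interval can exclude zero even though the symmetric normal-theory p-value exceeds .05, which matches the pattern described in the question.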
The bootstrap employs sampling with replacement by definition.
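Concretely, each bootstrap draw samples the original n cases with replacement, so some cases repeat while others are omitted; on average only about 63% of the cases appear in any one draw. A quick numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1000
# One bootstrap draw: case indices 0..n-1, sampled with replacement.
sample = rng.integers(0, n, size=n)
unique = np.unique(sample).size

print(f"{unique} of {n} cases appear in this draw "
      f"({unique / n:.1%}; expected about {1 - np.exp(-1):.1%})")
```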
Cecily Na posted on Friday, August 10, 2012 - 9:28 pm
Hello, Professors, I'm working on a large survey data set with 500 bootstrap (replicate) weights. Obviously, Mplus only takes 500 variables. I'm wondering if I can use only part of the 500 weights, say 80? How might that affect the standard error estimates? Thanks!
Kofan Lee posted on Wednesday, December 05, 2012 - 12:04 pm
I have run CINTERVAL (BCBOOTSTRAP) in the OUTPUT command and obtained the confidence intervals of the estimates. I also used SAVEDATA to create another file. The output only shows the numbers, and I am not sure how to interpret them. Do you have any suggestions?
Similar to Maren Winkler above, I have had a situation where the bootstrap algorithm does not complete the samples requested but stops around 1139 (out of 5000 requested). The reason, from the TECH9 output, appears to be that one of the variables has 0 frequencies for a category, so the sample is discarded.
In this case, would it be acceptable to increase the number of bootstrap samples requested in order to obtain a larger number of valid samples (e.g., those without a cell count of 0 for a category), until the number of obtained bootstrap samples is 5000?
I have shortened the variable in a way that does not lose the meaning: Basically collapsing two of the smallest levels of the variable. This hasn't made much of an improvement. I could dichotomise to improve it further, but fear I would lose information in the process.
Would forcing 5000 bootstrap samples be problematic, in that it would bias the confidence intervals in favour of samples that contain at least 1 instance of the problem category?
Furthermore I have three outcomes in my mediation model. When I run the analysis with only 1 outcome variable the bootstrap completes 5000 draws without a problem. Would it be acceptable to run separate analyses, 1 for each DV, to avoid the bootstrap issue?
I think it's because of a combination of things: small sample size and ordinal outcome variables that are restricted in range at the lower end. It may be that I will have to live with a lower than desired number of bootstrap samples. I was thinking about using the Bayes estimator to deal with the skew in the DVs, but that prevents bootstrapping because of the numerical integration requirement.
I thought that bootstrapping allowed one to compensate for bias arising from small sample size. I still don't understand why one shouldn't or couldn't ramp up the bootstraps to achieve a higher number of samples and ignore the invalid draws. Apart from "it takes longer to run", are there any known statistical reasons why one should definitely not do this?
Bayes does not need bootstrapping in that it both allows non-normal distributions for estimates and works well with small samples. Bayes does not use numerical integration.
No statistical reasons against using many bootstraps. But there is the practical issue of having many chances of drawing "difficult" samples in cases where the data-model relationship makes "difficult" more likely.
1. That depends on so many factors that you are best off doing a Monte Carlo simulation in Mplus to decide it for your particular setting. See UG chapter 12.
2. Ignoring the invalid samples does pose a danger of distorting the results.
3. I would try to figure out why some bootstrap samples have problems. This will clarify why the data-model situation is difficult.
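The distortion from discarding invalid draws can be seen in a toy simulation: if draws with an empty category are thrown away, the retained draws over-represent the rare category, biasing its estimated proportion upward (all numbers below are illustrative, not from the poster's data):

```python
import numpy as np

rng = np.random.default_rng(4)

p, n, sims = 0.05, 30, 20000               # rare category, small sample
counts = rng.binomial(n, p, size=sims)     # category count in each simulated draw
props = counts / n

all_mean = props.mean()                    # unbiased: close to p = 0.05
kept_mean = props[counts > 0].mean()       # after discarding empty-cell draws

print(f"mean over all draws:  {all_mean:.4f}")
print(f"mean over kept draws: {kept_mean:.4f}  (biased upward)")
```

With p = .05 and n = 30, roughly a fifth of draws have an empty cell, and conditioning on "at least one case in the category" pushes the average estimated proportion from about .05 up toward .064.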
Tracy Witte posted on Thursday, April 11, 2013 - 7:56 am
Similar to Winkler's 2010 inquiry above, I am finding that chi-square and all other fit indices are identical with BOOTSTRAP (RESIDUAL), compared to the ML results. I thought that the BOOTSTRAP (RESIDUAL) command gives us the Bollen-Stine transformed bootstrap, which adjusts chi-square to reflect the fact that chi-square is not centrally distributed under bootstrapping. Is this an anomaly with my dataset, or is it typical for the ML fit statistics to be identical to the bootstrapped fit statistics?