Is it possible in version 3 to get the per-run output from a bootstrap? I'm interested in a bootstrap CI for a difference between two total effects, and I'm not seeing any good way to get that. Thanks.
bmuthen posted on Thursday, September 02, 2004 - 5:12 pm
No. Perhaps there is some modeling trick to parameterize the difference between two total effects but I couldn't think of one today (while keeping the car straight on the road).
Uh-oh -- better be careful what I ask during commuting times! Thanks.
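For readers wanting this outside Mplus: a minimal percentile-bootstrap sketch in Python for the difference between two total effects. The data, model (one mediator, two predictors), and variable names here are hypothetical illustrations, not the poster's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: predictors x1, x2; mediator m; outcome y.
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
m = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)
y = 0.4 * m + 0.2 * x1 + 0.1 * x2 + rng.normal(size=n)

def te_difference(x1, x2, m, y):
    # OLS via lstsq; total effect of each x = direct effect + a*b indirect.
    Xm = np.column_stack([np.ones_like(x1), x1, x2])
    a = np.linalg.lstsq(Xm, m, rcond=None)[0]          # m on x1, x2
    Xy = np.column_stack([np.ones_like(x1), m, x1, x2])
    b = np.linalg.lstsq(Xy, y, rcond=None)[0]          # y on m, x1, x2
    te1 = b[2] + a[1] * b[1]
    te2 = b[3] + a[2] * b[1]
    return te1 - te2

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                        # resample cases with replacement
    boot.append(te_difference(x1[idx], x2[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"difference of total effects, 95% percentile CI: ({lo:.3f}, {hi:.3f})")
```

The same idea extends to any parameter contrast: compute the contrast in each resampled dataset and take percentiles of the resulting distribution.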
Michael Eid posted on Tuesday, February 22, 2005 - 12:01 am
Is it possible to get bootstrap confidence intervals for r-square in mplus?
Thuy Nguyen posted on Wednesday, February 23, 2005 - 11:15 am
No, bootstrap confidence intervals for R-square are not available in Mplus.
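As a workaround, an R-square bootstrap CI can be computed by hand outside Mplus. A minimal sketch in Python, assuming simple OLS with hypothetical simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)         # hypothetical data

def r_square(x, y):
    # In-sample R-square from a simple OLS fit.
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(r_square(x[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R-square {r_square(x, y):.3f}, 95% percentile CI ({lo:.3f}, {hi:.3f})")
```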
Mary Campa posted on Tuesday, March 15, 2005 - 6:53 am
I am running a multilevel path analysis with a mix of categorical and continuous variables with an interaction included. I am also estimating indirect effects and am having trouble getting the bootstrapping to work. I get the following error messages.
*** FATAL ERROR THE WEIGHT MATRIX PART OF VARIABLE EVREPGD IS NON-INVERTIBLE. THIS MAY BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE. PROBLEM INVOLVING THE REGRESSION OF EVREPGD ON U_T. THE PROBLEM MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.
THE WEIGHT MATRIX PART OF VARIABLE BIRTH19 IS NON-INVERTIBLE. THIS MAY BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE. PROBLEM INVOLVING THE REGRESSION OF BIRTH19 ON U_T. THE PROBLEM MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.
The variables in question are dichotomous and as such cannot be collapsed any further. The U_T variable is an interaction of two other dichotomous variables (neither of which is causing the error).
Is there any way to get the bootstrapping to work despite this problem?
It's hard to know what is happening from the information provided. Please send the full output and data if possible to email@example.com.
Anonymous posted on Friday, April 01, 2005 - 7:18 am
Is it possible to bootstrap the RMSEA in Mplus? I want to replicate a fit statistics study that Nevitt and Hancock wrote a few years back in Structural Equation Modeling. More to the point, I would like to be able to demonstrate that the results I have found are stable, without moving directly to a Monte Carlo format.
No, this is not possible in the current version of Mplus. Only standard errors can be bootstrapped at this time.
Lily posted on Saturday, April 26, 2008 - 12:14 am
Dear Dr. Muthen,
It says on page 496 of the Mplus User's Guide that standard bootstrapping is available for the ML, WLS, WLSM, WLSMV, ULS, and GLS estimators.
However, when I tried to bootstrap my IRT model, a single factor with 5 categorical dependent variables using the ML estimator (which requires numerical integration), I got the following error message: "BOOTSTRAP is not allowed with ALGORITHM = INTEGRATION"
Are there any tricks to get Mplus to do bootstrapping for my model?
Hello, I am testing a path model with 3 time points. My sample is not very large (N = 112), so I am running my model with bootstrapped standard errors. I have two questions/problems:
1. When I run the model with bootstrapped standard errors, the CFI value is 0.00! Other values are normal and indicative of a good model (e.g., the RMSEA, and all expected important paths are significant). When I run the model without the bootstrapped standard errors command, the CFI is .96. Why is this?
2. The model fit and the paths' significance levels are much better if I leave out Time 2 (post) scores and just use Time 1 (pre) and Time 3 (follow-up) scores. Once I add the Time 2 scores, the important paths are no longer significant and the fit indices become worse. Shouldn't the model be better if I used more information? Is it okay to leave out the Time 2 scores, given that I am in fact interested in the Time 3 scores and not so much the Time 2 scores? Thank you.
1. This should not happen. Please send the full outputs and your license number to firstname.lastname@example.org. 2. It's not possible to say much about this. When the model changes, you expect different results. You need to try to understand why your particular changes occur.
I'm running a SEM and included "BOOTSTRAP = 1000 (RESIDUAL);".
In my output file, I can see that only 788 (of 1000) bootstrap draws were completed. In TECH9, I can see that this is mainly due to convergence problems ("NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.").
How reliable are the results I get? For example, my bootstrap p-value is 0.1967; can I trust this value?
I'm using the bias-corrected bootstrap in the analysis of my latent growth model. My decision to use bootstrap resampling is based on recommendations made by Mackinnon and others for the analysis of indirect effects, and also because the continuous outcomes and observed variables in my model are not well behaved from a distributional point of view. My sample comprises 413 individuals and the bootstrap is based on 10,000 draws.
Firstly, it is interesting to note that the standard errors of the model parameters obtained under the bootstrap approach are much larger than those obtained using the 'usual' ML analysis, with a number of previously significant effects becoming non-significant when using the bootstrap; this probably indicates that the smaller standard errors previously obtained were a result of the poor distributional properties of the data.
My question relates to the asymmetric confidence intervals generated from the BC bootstrap; two of the four intervals for specific indirect effects that I've examined do not contain zero, therefore providing evidence of a significant effect; however, the corresponding probability values are non-significant. For example, the 95% CI for one specific indirect effect is (-2.885, -0.232), with a p-value of 0.085. Should I rely on the confidence intervals or the p-values for a bias-corrected bootstrap? Thank you for your time. Susy Harrigan
Bootstrap standard errors are larger than maximum likelihood standard errors in most cases. I think if you use the MLR estimator you will find the standard errors closer to the bootstrap standard errors.
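The CI-versus-p-value disagreement above arises because the bias-corrected interval is asymmetric while the reported p-value rests on a symmetric normal approximation, so they can conflict near the 0.05 threshold. A sketch of the bias-corrected (BC, no acceleration term) interval in Python; the bootstrap draws of the indirect effect `a*b` are simulated here and purely illustrative:

```python
import numpy as np
from statistics import NormalDist

def bc_interval(boot, est, level=0.95):
    """Bias-corrected (BC) bootstrap interval (no acceleration term)."""
    nd = NormalDist()
    boot = np.asarray(boot)
    z0 = nd.inv_cdf((boot < est).mean())        # median-bias correction
    z_alpha = nd.inv_cdf((1.0 - level) / 2.0)
    lo_p = nd.cdf(2.0 * z0 + z_alpha)
    hi_p = nd.cdf(2.0 * z0 - z_alpha)
    return np.percentile(boot, [100.0 * lo_p, 100.0 * hi_p])

# Hypothetical bootstrap draws of an indirect effect a*b: the product of two
# normals is skewed, so the BC interval comes out asymmetric around a*b.
rng = np.random.default_rng(3)
a = rng.normal(0.5, 0.10, 10_000)
b = rng.normal(0.4, 0.15, 10_000)
est = 0.5 * 0.4                                 # point estimate of a*b
lo, hi = bc_interval(a * b, est)
print(f"BC 95% CI for the indirect effect: ({lo:.3f}, {hi:.3f})")
```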
The bootstrap employs sampling with replacement by definition.
Cecily Na posted on Friday, August 10, 2012 - 9:28 pm
Hello, Professors, I'm working on a large survey data set with 500 bootstrap weights (replicate weights). Obviously, Mplus only takes 500 variables. I'm wondering if I can use only part of the 500 weights, say 80? How might that affect the standard error estimates? Thanks!
Kofan Lee posted on Wednesday, December 05, 2012 - 12:04 pm
I have run CINTERVAL (BCBOOTSTRAP) in the OUTPUT command and obtained the confidence intervals of the estimates. I also use SAVEDATA to create another file. The output only shows the numbers, and I am not sure how to interpret them. Do you have any suggestions?
Similar to Maren Winkler above, I have had a situation where the bootstrap algorithm does not run the number of samples requested but stops around 1139 (out of 5000 requested). The reason, from the TECH9 output, appears to be that one of the variables has 0 frequencies for a category, so the sample is discarded.
In this case, would it be acceptable to increase the number of bootstrap samples requested in order to obtain a larger number of valid samples (i.e., those without a cell count of 0 for a category), until the number of obtained bootstrap samples is 5000?
I have recoded the variable in a way that does not lose its meaning, basically collapsing the two smallest levels of the variable. This hasn't made much of an improvement. I could dichotomize to improve it further, but I fear I would lose information in the process.
Is forcing 5000 bootstrap samples problematic, in that it would bias the confidence intervals in favour of samples that contain at least one instance of the problem category?
Furthermore I have three outcomes in my mediation model. When I run the analysis with only 1 outcome variable the bootstrap completes 5000 draws without a problem. Would it be acceptable to run separate analyses, 1 for each DV, to avoid the bootstrap issue?
I think it's because of a combination of things: a small sample size and ordinal outcome variables that are restricted in range at the lower end. It may be that I will have to live with the lower-than-desired number of bootstrap samples. I was thinking about using the Bayesian estimator to deal with the skew in the DVs, but that prevents bootstrapping because of the numerical integration requirement.
I thought that bootstrapping allowed one to compensate for bias arising from small sample size. I still don't understand why one shouldn't or couldn't ramp up the bootstraps to achieve a higher number of samples, and ignore the invalid draws. Apart from 'it takes longer to run' are there any known statistical reasons why one should definitely not do this?
Bayes does not need bootstrapping in that it both allows non-normal distributions for estimates and works well with small samples. Bayes does not use numerical integration.
No statistical reasons against using many bootstraps. But there is the practical issue of having many chances of drawing "difficult" samples in cases where the data-model relationship makes "difficult" more likely.
1. That depends on so many factors that you are best off doing a Monte Carlo simulation in Mplus to decide it for your particular setting. See UG chapter 12.
2. Ignoring the invalid samples does pose a danger of distorting the results.
3. I would try to figure out why some bootstrap samples have problems. This will clarify why the data-model situation is difficult.
Tracy Witte posted on Thursday, April 11, 2013 - 7:56 am
Similar to Winkler's 2010 inquiry above, I am finding that chi-square and all other fit indices are identical with bootstrapping (residual) compared to the ML results. I thought that the bootstrapping (residual) command gives us the Bollen and Stine transformed bootstrap, which adjusts chi-square to reflect the fact that chi-square is not centrally distributed under bootstrapping. Is this an anomaly with my dataset? Or is it typical for the ML fit statistics to be identical to the bootstrapped fit statistics?
From which output should I report the estimates of the direct and total effects? From the output with bootstrap (estimates and confidence intervals), or from the output without bootstrap, given that the bootstrap only changes the standard errors?
A quick question re bootstrapping in full SEM: is it possible to specify the sample size in bootstrapping?
I am testing a particular model on two different samples (with different characteristics), and would like to compare the fit stats & factor loadings. However, they are not proportionate in terms of N, and I need to make a like-for-like comparison. Hence, I want to run the bootstrapping with the same N specified on each sample. Just wondering if this was possible.
I am hoping to conduct bootstrapping for a TYPE = Twolevel random multilevel model.
I have read in the discussion that bootstrapping is not possible in a TWOLEVEL random model when assessing indirect effect for mediation, but I am wondering if this is possible for a general two level random Multi-level model?
No twolevel bootstrapping (yet). But if you are concerned with non-normally distributed indirect effects, you can use Bayes. The confidence (credibility) intervals of Bayes are also non-symmetric, taking into account the non-normal distribution of the indirect effect.
I assume you are using the BOOTSTRAP option because ESTIMATOR=BOOTSTRAP is not a valid command. If you use BOOTSTRAP with the default CINTERVAL, you obtain symmetric confidence intervals using bootstrapped standard errors.
jml posted on Friday, February 26, 2016 - 10:15 am
I'm using residual bootstrapping [BOOTSTRAP = 1000(RESIDUAL)] to evaluate model rejection rates in a simulation. I'm trying to decide whether it's better to use residual bootstrapping or standard bootstrapping in evaluating the bias of the parameters' standard errors. Can someone please point me to a reference that explains the difference in how Mplus handles the two methods computationally? Also, is there a recommended type of bootstrapping to use in evaluating the estimated standard errors as opposed to the p-values?
Is there a way to export the results (in particular, the unstandardized coefficients) for each bootstrapped sample if one runs a multinomial regression?
The background of my inquiry: I would like to calculate predicted probabilities and their confidence intervals for multinomial regressions (and later also for SEM multinomial regressions).
Calculation of the predicted probabilities is easy, based on the coefficients.
However, I am having problems with the confidence intervals for the predicted probabilities.
I initially wanted to calculate the confidence intervals of the predicted probabilities by computing the standard error from the covariance matrix of the parameters. However, I couldn't figure out how to configure the vector.
Now I would like to use the bootstrapped unstandardized coefficients to calculate the predicted probabilities for each bootstrapped sample, so that I could then calculate bootstrapped confidence intervals.
Thank you in advance for your feedback, Zsofia S. Ignacz
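The second step of this plan, turning a set of bootstrapped multinomial-logit coefficients into percentile CIs for the predicted probabilities, can be sketched in Python. The coefficient array below is simulated and purely hypothetical (reference category last, logit 0); in practice it would be replaced by the exported per-draw coefficients:

```python
import numpy as np

def predicted_probs(coefs, x):
    """Multinomial-logit probabilities for covariate vector x.

    coefs: (K-1, p) array of coefficients, reference category last.
    """
    eta = coefs @ x
    expd = np.exp(np.append(eta, 0.0))       # reference category has eta = 0
    return expd / expd.sum()

# Hypothetical bootstrapped coefficients: 1000 draws, 2 non-reference
# categories, 2 predictors (intercept + one covariate).
rng = np.random.default_rng(2)
boot_coefs = rng.normal([[0.5, 1.0], [-0.2, 0.3]], 0.1, size=(1000, 2, 2))
x = np.array([1.0, 0.8])                     # intercept and covariate value

probs = np.array([predicted_probs(c, x) for c in boot_coefs])
lo, hi = np.percentile(probs, [2.5, 97.5], axis=0)
for k in range(probs.shape[1]):
    print(f"category {k}: 95% CI ({lo[k]:.3f}, {hi[k]:.3f})")
```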
Rick Borst posted on Friday, July 29, 2016 - 3:11 pm
Hello, I also have the problem that only 671 of 1000 draws were completed. My sample size is 9500, so that cannot be the problem. Can you give some advice? Thank you!
It's because the model estimation did not converge for 329 of the 1000 draws. This is an indication that your model is "fragile", that is, it may not be easily replicated in a new sample. You may want to modify the model.
Chang Liu posted on Wednesday, June 07, 2017 - 8:59 pm
Dear Professors Muthen & Muthen,
Hope you are doing well! Is there an option in Mplus to save the bootstrap samples (for future use)?
Chang Liu posted on Thursday, June 08, 2017 - 7:59 am
Thank you for the quick reply! A quick follow-up question: I am doing mediation analysis but have missingness that I would like to handle with multiple imputation, and I would also like to use the bootstrap to get the CI for the indirect effect. Is there a way to simultaneously request multiple imputation and bootstrapping? If not, what would you recommend instead?
Send your output to Support along with your license number.
Mike Nelson posted on Saturday, May 09, 2020 - 11:54 pm
Greetings, I am attempting to run a multinomial logistic regression that controls for clustering with TYPE = COMPLEX and I am having one severe inconsistency between p values and confidence intervals when I use bootstrap confidence intervals (specifically with CHANGE#1 ON PDCHNG). When I bootstrap with 10,000 draws I receive the following values that seem incongruent (BCBCIs are to the right of p values):
The rest of the values in the model seem appropriate. Also, when I attempt to run the model without bootstrapping, the p values change drastically and are more in line with the results of the bootstrap confidence intervals. Should I trust the confidence intervals in this case?
The p-value that you see in the output is the p-value for the symmetric bootstrap confidence interval (it simply uses the bootstrap SE to form a z-score, estimate/SE, and reports the normal-distribution-based p-value). We currently don't compute the p-value for the asymmetric confidence interval. Clearly, the BCB p-value is less than 0.05. If you need the exact value, follow the instructions given at the bottom of this thread: http://www.statmodel.com/discussion/messages/11/628.html?1581358643
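The mechanics of that symmetric p-value can be reproduced by hand. A short Python sketch; the estimate and SE below are made-up numbers, not from the poster's model:

```python
from statistics import NormalDist

def symmetric_boot_p(estimate, boot_se):
    # Normal-approximation two-tailed p-value: z = estimate / SE.
    z = estimate / boot_se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: estimate 0.30 with a bootstrap SE of 0.17.
p = symmetric_boot_p(0.30, 0.17)
print(f"symmetric bootstrap p-value: {p:.4f}")
```

This is why a bias-corrected interval excluding zero can coexist with p > 0.05: the p-value ignores the asymmetry of the bootstrap distribution.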
I conducted bootstrapping on some potentially overfit regression models. But, this only appears to resample the standard errors. We are more interested in getting stable estimates of the beta coefficients than we are concerned with their significance values. I'm not sure if it is possible to bootstrap the coefficients themselves in mplus or if there is an alternative resampling procedure we could conduct.
The primary way to identify overfitting regression coefficients is to look at their standard errors. If a coefficient is not significant, it can certainly be removed. Our most effective tool for discovering weakly identified parameters is the standard errors produced by the MLF estimator. Typically those would be relatively large for such parameters. You might find this useful: https://statmodel.com/download/ConditionNumber.pdf