Bootstrapping
 Patrick Malone posted on Thursday, September 02, 2004 - 8:45 am
Is it possible in version 3 to get the per-run output from a bootstrap? I'm interested in a bootstrap CI for a difference between two total effects, and I'm not seeing any good way to get that. Thanks.
 bmuthen posted on Thursday, September 02, 2004 - 5:12 pm
No. Perhaps there is some modeling trick to parameterize the difference between two total effects but I couldn't think of one today (while keeping the car straight on the road).
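In later versions of Mplus, one way to parameterize such a difference is to label the relevant paths, define the two total effects and their difference as NEW parameters in MODEL CONSTRAINT, and request bootstrap confidence intervals. A minimal sketch, assuming a single mediator m, two predictors x1 and x2, and an outcome y (all names and the path structure are hypothetical):

ANALYSIS: BOOTSTRAP = 5000;
MODEL:
  m ON x1 (a1)
       x2 (a2);
  y ON m (b)
       x1 (c1)
       x2 (c2);
MODEL CONSTRAINT:
  NEW(tot1 tot2 diff);
  tot1 = a1*b + c1;      ! total effect of x1 on y
  tot2 = a2*b + c2;      ! total effect of x2 on y
  diff = tot1 - tot2;    ! the quantity of interest
OUTPUT: CINTERVAL(BOOTSTRAP);

The bootstrap confidence interval for diff is then read from the CINTERVAL section of the output.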
 Patrick Malone posted on Thursday, September 02, 2004 - 5:50 pm
Uh-oh -- better be careful what I ask during commuting times! Thanks.
 Michael Eid posted on Tuesday, February 22, 2005 - 12:01 am
Is it possible to get bootstrap confidence intervals for R-square in Mplus?
 Thuy Nguyen posted on Wednesday, February 23, 2005 - 11:15 am
No, bootstrap confidence intervals for R-square are not available in Mplus.
 Mary Campa posted on Tuesday, March 15, 2005 - 6:53 am
Hello.

I am running a multilevel path analysis with a mix of categorical and continuous variables with an interaction included. I am also estimating indirect effects and am having trouble getting the bootstrapping to work. I get the following error messages.

*** FATAL ERROR
THE WEIGHT MATRIX PART OF VARIABLE EVREPGD IS NON-INVERTIBLE. THIS MAY
BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK
YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE.
PROBLEM INVOLVING THE REGRESSION OF EVREPGD ON U_T. THE PROBLEM
MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.

THE WEIGHT MATRIX PART OF VARIABLE BIRTH19 IS NON-INVERTIBLE. THIS MAY
BE DUE TO ONE OR MORE CATEGORIES HAVING TOO FEW OBSERVATIONS. CHECK
YOUR DATA AND/OR COLLAPSE THE CATEGORIES FOR THIS VARIABLE.
PROBLEM INVOLVING THE REGRESSION OF BIRTH19 ON U_T. THE PROBLEM
MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION.


The variables in question are dichotomous and as such cannot be collapsed any further. The U_T variable is an interaction of two other dichotomous variables (neither of which is causing the error).

Is there any way to get the bootstrapping to work despite this problem?

Thank you.
 Linda K. Muthen posted on Tuesday, March 15, 2005 - 8:24 am
It's hard to know what is happening from the information provided. Please send the full output and data if possible to support@statmodel.com.
 Anonymous posted on Friday, April 01, 2005 - 7:18 am
Is it possible to bootstrap the RMSEA in Mplus? I want to replicate a fit statistics study that Nevitt and Hancock wrote a few years back in Structural Equation Modeling. More to the point, I would like to be able to demonstrate that the results I have found are stable, without moving directly to a Monte Carlo format.
 Linda K. Muthen posted on Saturday, April 02, 2005 - 8:29 pm
No, this is not possible in the current version of Mplus. Only standard errors can be bootstrapped at this time.
 Lily posted on Saturday, April 26, 2008 - 12:14 am
Dear Dr. Muthen,

The Mplus User's Guide (page 496) says that standard bootstrapping is available for the ML, WLS, WLSM, WLSMV, ULS, and GLS estimators.

However, when I tried to do bootstrapping on my IRT model (a single factor with 5 categorical dependent variables, using the ML estimator, which requires numerical integration), I got the following error message: "BOOTSTRAP is not allowed with ALGORITHM = INTEGRATION"

Are there any tricks to get Mplus to do bootstrapping for my model?

Many thanks.
 Linda K. Muthen posted on Saturday, April 26, 2008 - 6:13 am
I'm afraid not. If you need numerical integration, bootstrapping is not available.
 krisitne amlund hagen posted on Tuesday, December 01, 2009 - 6:45 am
Hello,
I am testing a path model with 3 time points. My sample is not very large (N = 112) so I am running my model with bootstrapped standard errors. I have two questions/problems:
1. When I run the model with bootstrapped standard errors, the CFI value is 0.00! Other values are normal and indicative of a good model (e.g., the RMSEA; all expected important paths are significant; etc.). When I run the model without the bootstrapped standard errors command, the CFI is .96. Why is this?
2. The model fit and the paths' significance levels are much better if I leave out the Time 2 (post) scores and just use the Time 1 (pre) and Time 3 (follow-up) scores. Once I add the Time 2 scores, the important paths are no longer significant and the fit indices become worse. Shouldn't the model be better if I used more information? Is it okay to leave out the Time 2 scores, when I am in fact interested in the Time 3 scores and not so much the Time 2 scores?
Thank you.
 Linda K. Muthen posted on Tuesday, December 01, 2009 - 9:14 am
1. This should not happen. Please send the full outputs and your license number to support@statmodel.com.
2. It's not possible to say much about this. When the model changes, you expect different results. You need to try to understand why your particular changes occur.
 Maren Winkler posted on Monday, January 25, 2010 - 1:59 am
Hi,

I'm running a SEM and included "BOOTSTRAP = 1000 (RESIDUAL);".

In my output file, I can see that only 788 (of 1000) bootstrap draws were completed. In TECH9, I can see that this is mainly due to convergence problems ("NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.").

How reliable are the results I get? E.g. my Bootstrap P-Value is 0.1967 - can I trust this value?

Thanks for your help!
 Maren Winkler posted on Tuesday, January 26, 2010 - 5:56 am
I have an additional question after comparing results with and without bootstrapping.

Whereas chi-square and RMSEA are the same in both models, CFI and TLI are higher (indicating better fit) in the model including bootstrapping. Why does that happen?

Thanks for your help!
 Linda K. Muthen posted on Tuesday, January 26, 2010 - 10:12 am
Please send the two outputs, bootstrap and not, your data, and your license number to support@statmodel.com.
 Susy Harrigan posted on Monday, September 27, 2010 - 6:27 am
I'm using the bias-corrected bootstrap in the analysis of my latent growth model. My decision to use bootstrap resampling is based on recommendations made by Mackinnon and others for the analysis of indirect effects, and also because the continuous outcomes and observed variables in my model are not well behaved from a distributional point of view. My sample comprises 413 individuals and the bootstrap is based on 10,000 draws.

Firstly, it is interesting to note that the standard errors of the model parameters obtained under the bootstrap approach are much larger than those obtained using the 'usual' ML analysis, with a number of previously significant effects becoming non-significant when using the bootstrap; this probably indicates that the smaller standard errors previously obtained were a result of the poor distributional properties of the data.

My question relates to the asymmetric confidence intervals generated from the BC bootstrap. Two of the four intervals for specific indirect effects that I've examined do not contain zero, providing evidence of a significant effect; however, the corresponding probability values are non-significant. For example, the 95% CI for one specific indirect effect is (-2.885, -0.232), with a p-value of 0.085. Should I rely on the confidence intervals or the p-values for a bias-corrected bootstrap? Thank you for your time. Susy Harrigan
 Linda K. Muthen posted on Monday, September 27, 2010 - 10:20 am
Bootstrap standard errors are larger than maximum likelihood standard errors in most cases. I think if you use the MLR estimator you will find the standard errors closer to the bootstrap standard errors.

I would use the confidence interval results.
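For reference, asymmetric bias-corrected intervals of the kind described above are requested with, for example:

ANALYSIS: BOOTSTRAP = 10000;
OUTPUT: CINTERVAL(BCBOOTSTRAP);

whereas the printed p-values are based on the symmetric normal approximation using the bootstrap standard error.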
 Rebecca Fisher posted on Monday, April 18, 2011 - 3:14 am
Hi,

Why are there no p-values for the standardized models when using bootstrapping?

Thanks
 Linda K. Muthen posted on Monday, April 18, 2011 - 9:07 am
We don't give standard errors for the standardized solution when the BOOTSTRAP option is used. Therefore, no p-values are given for it.
 Nidhi Talwar posted on Tuesday, August 02, 2011 - 2:03 pm
Hi,

If I am inputting a covariance matrix (and not individual raw data), is it possible to get confidence intervals for indirect effects through bootstrapping (perhaps Monte Carlo)?

Thanks!
 Linda K. Muthen posted on Tuesday, August 02, 2011 - 2:17 pm
I don't believe this is possible.
 gibbon lab posted on Tuesday, March 20, 2012 - 11:49 am
Using the following command, is the bootstrap resampling with replacement or without replacement in Mplus 6.12? Thanks.

Analysis:
bootstrap = 10000;
 Bengt O. Muthen posted on Tuesday, March 20, 2012 - 12:01 pm
The bootstrap employs sampling with replacement by definition.
 Cecily Na posted on Friday, August 10, 2012 - 9:28 pm
Hello, Professors,
I'm working on a large survey data set with 500 bootstrap weights (replicate weights). Obviously, Mplus only takes 500 variables. I'm wondering if I can use only part of the 500 weights, say 80? How might that affect the standard error estimates? Thanks!
 Linda K. Muthen posted on Saturday, August 11, 2012 - 10:27 am
This restriction was removed in Version 6.12.
 Kofan Lee posted on Wednesday, December 05, 2012 - 12:04 pm
Linda,

I have run CINTERVAL(BCBOOTSTRAP) in the OUTPUT command and obtained the confidence intervals of the estimates. I also used SAVEDATA to create another file, but that file only shows numbers and I am confused about how to interpret it. Do you have any suggestions?

Thanks

Kofan
 Linda K. Muthen posted on Wednesday, December 05, 2012 - 12:18 pm
See the end of the output where the saved data are described.
 Kofan Lee posted on Wednesday, December 05, 2012 - 4:29 pm
Linda,

Thanks. Here is the information I get; it basically shows the file name I created. I wonder if I omitted some steps.
SAVEDATA INFORMATION


Estimates

Save file
C:\Users\koflee\Desktop\120312\rq1m4bootstrap
Save format Free

Thank you

kofan
 Linda K. Muthen posted on Wednesday, December 05, 2012 - 5:39 pm
Please send the full output and your license number to support@statmodel.com.
 Volker Patent posted on Thursday, March 07, 2013 - 2:18 am
Similar to Maren Winkler above, I have had a situation where the bootstrap algorithm does not run all of the samples requested but stops at around 1139 (out of 5000 requested). The reason, from the TECH9 output, appears to be that one of the variables has zero frequency in one of its categories in some draws, and those samples are discarded.

In this case, would it be acceptable to increase the number of bootstrap samples requested in order to obtain a larger number of valid samples (i.e., those without a cell count of 0 for any category), until the number of obtained bootstrap samples is 5000?

Thanks in advance for your reply.
 Linda K. Muthen posted on Thursday, March 07, 2013 - 8:41 am
I would instead collapse the category of the variable that has 0 frequencies.
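For example, a sparse top category could be merged into the one below it with a DEFINE statement placed before the analysis; the variable name and category codes here are hypothetical:

DEFINE:
  IF (catvar == 5) THEN catvar = 4;   ! merge the empty/sparse category 5 into category 4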
 Volker Patent posted on Thursday, March 07, 2013 - 1:21 pm
Thanks Linda,

I have shortened the variable in a way that does not lose the meaning, basically collapsing the two smallest levels of the variable. This hasn't made much of an improvement. I could dichotomise to improve it further, but fear I would lose information in the process.

Would forcing 5000 bootstrap samples be problematic, as it would bias the confidence intervals in favour of samples that contain at least one instance of the problem category?

Furthermore, I have three outcomes in my mediation model. When I run the analysis with only one outcome variable, the bootstrap completes 5000 draws without a problem. Would it be acceptable to run separate analyses, one for each DV, to avoid the bootstrap issue?
 Linda K. Muthen posted on Friday, March 08, 2013 - 9:24 am
I would think running separate analyses would be okay. But I wonder why putting them together causes a problem. You might want to investigate this further.
 Volker Patent posted on Friday, March 08, 2013 - 1:52 pm
Thanks Linda,

I think it's because of a combination of things: small sample size and ordinal outcome variables that are restricted in range at the lower end. It may be that I will have to live with a lower than desired number of bootstrap samples. I was thinking about using the Bayes estimator to deal with the skew in the DVs, but that prevents bootstrapping because of the numerical integration requirement.

I thought that bootstrapping allowed one to compensate for bias arising from small sample size. I still don't understand why one shouldn't or couldn't ramp up the bootstraps to achieve a higher number of samples and ignore the invalid draws. Apart from 'it takes longer to run', are there any known statistical reasons why one should definitely not do this?

V
 Bengt O. Muthen posted on Friday, March 08, 2013 - 6:27 pm
Bayes does not need bootstrapping in that it both allows non-normal distributions for estimates and works well with small samples. Bayes does not use numerical integration.

No statistical reasons against using many bootstraps. But there is the practical issue of having many chances of drawing "difficult" samples in cases where the data-model relationship makes "difficult" more likely.
 Volker Patent posted on Saturday, March 09, 2013 - 2:01 am
OK. That's useful to know.

Last questions on this subject I hope.

1. What sample size would be required for Bayes estimation, considering I have one IV, one mediator, three DVs, and one moderator of the x-to-m path?

2. As regards the practical issue of many bootstraps and ignoring the invalid samples: would there be no effect on the accuracy of the bias correction, given large numbers of draws?

3. Just to clarify, when I get large numbers of invalid samples, is the practical problem about not getting enough of the difficult samples in a situation where difficult is more likely?

Thanks for the dialogue, btw. I don't know any other stats software that has such prompt statistical support. Mplus gets 10 stars out of 10 for that. :-)
 Bengt O. Muthen posted on Saturday, March 09, 2013 - 11:32 am
1. That depends on so many factors that you are best off doing a Monte Carlo simulation in Mplus to decide it for your particular setting. See UG chapter 12 and the sketch at the end of this post.

2. Ignoring the invalid samples does pose a danger of distorting the results.

3. I would try to figure out why some bootstrap samples have problems. This will clarify why the data-model situation is difficult.
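As an illustration of point 1, here is a minimal Monte Carlo sketch for a simple single-mediator model; the sample size, population values, and variable names are hypothetical placeholders and should be replaced by values plausible for the application at hand:

MONTECARLO:
  NAMES = x m y;
  NOBSERVATIONS = 150;
  NREPS = 500;
MODEL POPULATION:
  [x@0]; x@1;
  m ON x*.4;
  m*.84;
  y ON m*.3 x*.1;
  y*.85;
MODEL:
  m ON x*.4;
  y ON m*.3 x*.1;
OUTPUT: TECH9;

The proportion of replications in which a parameter is significant (the "% Sig Coeff" column) is then used to judge whether the chosen sample size gives adequate power.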
 Tracy Witte posted on Thursday, April 11, 2013 - 7:56 am
Similar to Winkler's 2010 inquiry above, I am finding that chi-square and all other fit indices are identical with bootstrapping (residual) compared to the ML results. I thought that the BOOTSTRAP (RESIDUAL) option gives the Bollen and Stine transformed bootstrap, which adjusts chi-square to reflect the fact that chi-square is not centrally distributed with bootstrapping. Is this an anomaly with my dataset? Or is it typical for the ML fit statistics to be identical to the bootstrapped fit statistics?
 Linda K. Muthen posted on Thursday, April 11, 2013 - 8:59 am
We only bootstrap standard errors.
 Tracy Witte posted on Thursday, April 11, 2013 - 9:11 am
Thanks for your quick response! If you do not bootstrap the fit statistics, then what is the difference between the Bollen & Stine (residual) bootstrapping and standard bootstrapping?
 Linda K. Muthen posted on Thursday, April 11, 2013 - 1:21 pm
We use the same approach for the standard errors as Bollen & Stine use for chi-square. See the following for a brief description:

http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29#Resampling_residuals
 USF Laboratories posted on Wednesday, May 15, 2013 - 9:06 am
USEVARIABLES ARE Age gender AUDstat druguse DMQsoc DMQcop DMQen QF;
Nominal IS AUDstat;
ANALYSIS: BOOTSTRAP = 1000;
MODEL: AUDstat QF on DMQcop DMQen;
AUDstat on DMQsoc druguse gender age;
MODEL INDIRECT:
AUDstat IND QF DMQcop;
AUDstat IND QF DMQen;
OUTPUT: CINTERVAL;

I am using the above syntax to attempt to run bootstrapped mediation in Mplus 5, but keep getting the "not allowed with ALGORITHM = INTEGRATION" error, even though I don't have ALGORITHM = INTEGRATION in my syntax.
 Linda K. Muthen posted on Wednesday, May 15, 2013 - 11:03 am
Your nominal variable requires numerical integration. ALGORITHM=INTEGRATION is the default in this case.
 USF Laboratories posted on Wednesday, May 15, 2013 - 11:33 am
Okay, thank you, Dr. Muthen. Is there any way to get this model to run using a dichotomous outcome?
 Linda K. Muthen posted on Wednesday, May 15, 2013 - 11:36 am
If your variable is dichotomous, use the CATEGORICAL option instead of the NOMINAL option. Then you can use WLSMV.
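Assuming AUDstat is in fact dichotomous, the earlier input might be adapted along these lines (a sketch only; note that the IND statements presuppose a path from QF to AUDstat, added here with a comment, and CINTERVAL(BCBOOTSTRAP) requests bias-corrected bootstrap intervals):

USEVARIABLES ARE Age gender AUDstat druguse DMQsoc DMQcop DMQen QF;
CATEGORICAL IS AUDstat;
ANALYSIS: ESTIMATOR = WLSMV;
  BOOTSTRAP = 1000;
MODEL: AUDstat QF ON DMQcop DMQen;
  AUDstat ON QF;              ! mediator path assumed by the IND statements
  AUDstat ON DMQsoc druguse gender age;
MODEL INDIRECT:
  AUDstat IND QF DMQcop;
  AUDstat IND QF DMQen;
OUTPUT: CINTERVAL(BCBOOTSTRAP);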
 Kathrin Dehmel posted on Wednesday, April 16, 2014 - 10:09 am
Dear Linda,

from which output do I have to report the estimates of the direct and total effects: the output with bootstrap (estimates and confidence intervals) or the output without bootstrap, given that the bootstrap only changes the standard errors?

Thank you!
 Bengt O. Muthen posted on Wednesday, April 16, 2014 - 5:46 pm
Please send your output and highlight the two places you are looking at.
 Kathrin Dehmel posted on Wednesday, April 23, 2014 - 12:32 am
Dear Bengt O.,

thank you for your answer. How can I post my output here with highlighted elements?
 Linda K. Muthen posted on Wednesday, April 23, 2014 - 6:34 am
Send it along with your license number to support@statmodel.com.
 Selin Kudret posted on Monday, March 02, 2015 - 11:10 am
Dear Linda,

A quick question re bootstrapping in full SEM: is it possible to specify the sample size in bootstrapping?

I am testing a particular model on two different samples (with different characteristics), and would like to compare the fit statistics and factor loadings. However, the samples differ in N, and I need to make a like-for-like comparison. Hence, I want to run the bootstrapping with the same N specified for each sample. Just wondering if this is possible.

Many thanks.
 Linda K. Muthen posted on Monday, March 02, 2015 - 11:16 am
No, there is no option to do this.
 Holly Andrewes posted on Wednesday, August 12, 2015 - 7:21 pm
Dear Drs Muthens,

I am hoping to conduct bootstrapping for a TYPE = TWOLEVEL RANDOM multilevel model.

I have read in this discussion that bootstrapping is not possible in a TWOLEVEL RANDOM model when assessing an indirect effect for mediation, but I am wondering if it is possible for a general TWOLEVEL RANDOM multilevel model?

Thanks for your help,

Best wishes,

Holly
 Bengt O. Muthen posted on Wednesday, August 12, 2015 - 7:28 pm
No twolevel bootstrapping (yet). But if you are concerned with non-normally distributed indirect effects, you can use Bayes. The confidence (credibility) intervals of Bayes are also non-symmetric, taking into account the non-normal indirect effect distribution.
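A minimal sketch of the Bayes alternative for a two-level model with a within-level indirect effect; the variable names, the WITHIN list, and the random-intercepts-only between part are hypothetical and would need to match the actual model:

VARIABLE: CLUSTER = clus;
  WITHIN = x;
ANALYSIS: TYPE = TWOLEVEL;
  ESTIMATOR = BAYES;
MODEL:
  %WITHIN%
  m ON x (a);
  y ON m (b)
       x;
  %BETWEEN%
  m y;                ! random intercepts only
MODEL CONSTRAINT:
  NEW(indw);
  indw = a*b;         ! within-level indirect effect
OUTPUT: CINTERVAL;    ! Bayesian credibility intervals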
 Holly Andrewes posted on Wednesday, August 12, 2015 - 8:46 pm
Hi Bengt,

Thanks for your quick reply. Just a further query into this issue.

When running a general TYPE = TWOLEVEL RANDOM MLM that includes "ESTIMATOR = BOOTSTRAP;" in conjunction with "OUTPUT: CINTERVAL;", the output does deliver confidence intervals.

How do I interpret these? Rather than being bootstrapped confidence intervals, are they instead frequentist confidence intervals or Bayesian credibility intervals?

Thanks again for your help,

Holly
 Linda K. Muthen posted on Thursday, August 13, 2015 - 9:50 am
I assume you are using the BOOTSTRAP option, because ESTIMATOR=BOOTSTRAP is not a valid command. If you use BOOTSTRAP with the default CINTERVAL, you obtain symmetric confidence intervals using bootstrapped standard errors.
 jml posted on Friday, February 26, 2016 - 10:15 am
Hi,

I'm using residual bootstrapping [BOOTSTRAP = 1000(RESIDUAL)] to evaluate model rejection rates in a simulation. I'm trying to decide whether it's better to use residual bootstrapping or standard bootstrapping in evaluating the bias of the parameters' standard errors. Can someone please point me to a reference that explains the difference in how Mplus handles the two methods computationally? Also, is there a recommended type of bootstrapping to use in evaluating the estimated standard errors as opposed to the p-values?

Thanks!
 Tihomir Asparouhov posted on Friday, February 26, 2016 - 3:06 pm
Which bootstrapping to use is summarized well here:
http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/which-bootstrap-when.pdf
That is, if you are confident in the model, the residual bootstrap is expected to yield smaller SEs.

The residual bootstrap is explained here

https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29#Resampling_residuals

Bollen, K.A., & Stine, R.A. (1992). Bootstrapping goodness-of-fit measures in structural equation models. Sociological Methods & Research, 21, 205-229.

Enders, C.K. (2002). Applying the Bollen-Stine bootstrap for goodness-of-fit measures to structural equation models with missing data. Multivariate Behavioral Research, 37, 359-377.
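For reference, the two variants are requested in the ANALYSIS command as follows (the number of draws is arbitrary):

ANALYSIS: BOOTSTRAP = 1000;             ! standard (nonparametric) bootstrap of the raw data
ANALYSIS: BOOTSTRAP = 1000 (RESIDUAL);  ! residual (Bollen-Stine type) bootstrap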
 jml posted on Saturday, February 27, 2016 - 2:20 pm
Thanks Tihomir
 Zsofia Ignacz posted on Friday, July 29, 2016 - 7:20 am
Dear Professors Muthen & Muthen, dear Mplus Community,

Is there a way to export the results (in particular the unstandardized coefficients) for each bootstrapped sample if one runs a multinomial regression?

The background of my inquiry: I would like to calculate predicted probabilities and their confidence intervals for multinomial regressions (and later also for SEM multinomial regressions).

Calculation of the predicted probabilities is easy, based on the coefficients.

However, I am having problems with the confidence intervals for the predicted probabilities.

I initially wanted to calculate the confidence intervals of the predicted probabilities by calculating the standard error from the covariance matrix of the parameters. However, I couldn't figure out how to configure the vector.

Now I would like to calculate, based on the bootstrapped unstandardized coefficients, the predicted probabilities for each of the bootstrapped samples, so I could then calculate the bootstrapped confidence intervals.

Thank you in advance for your feedback,
Zsofia S. Ignacz
 Rick Borst posted on Friday, July 29, 2016 - 3:11 pm
Hello, I also have the problem that only 671 draws of 1000 were completed. My sample size is 9,500, so that cannot be the problem. Can you give some advice? Thank you!
 Bengt O. Muthen posted on Friday, July 29, 2016 - 3:27 pm
Zsofia:

No, there is not a way to export estimates for each bootstrap sample.

Why not instead express the predicted probabilities in MODEL CONSTRAINT and ask for bootstrapping and CINTERVAL(BOOTSTRAP) in the OUTPUT command?
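A sketch of that suggestion for a three-category nominal outcome u regressed on a single covariate x, evaluating the probabilities at x = 1; the variable names, number of categories, and evaluation point are hypothetical:

VARIABLE: NOMINAL = u;
ANALYSIS: BOOTSTRAP = 1000;
MODEL:
  u#1 ON x (b1);
  u#2 ON x (b2);
  [u#1] (a1);
  [u#2] (a2);
MODEL CONSTRAINT:
  NEW(p1 p2 p3);
  ! predicted probabilities at x = 1; the last category is the reference
  p1 = EXP(a1 + b1*1) / (1 + EXP(a1 + b1*1) + EXP(a2 + b2*1));
  p2 = EXP(a2 + b2*1) / (1 + EXP(a1 + b1*1) + EXP(a2 + b2*1));
  p3 = 1 - p1 - p2;
OUTPUT: CINTERVAL(BOOTSTRAP);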
 Bengt O. Muthen posted on Friday, July 29, 2016 - 3:29 pm
Rick:

It's because the model estimation did not converge for 329 of the 1000 draws. This is an indication that your model is "fragile", that is, it may not be easily replicated in a new sample. You may want to modify the model.
 Chang Liu posted on Wednesday, June 07, 2017 - 8:59 pm
Dear Professors Muthen & Muthen,

Hope you are doing well! Is there an option in Mplus to save the bootstrap samples (for future use)?

Many thanks!
Chang
 Linda K. Muthen posted on Thursday, June 08, 2017 - 6:08 am
This option is not available.
 Chang Liu posted on Thursday, June 08, 2017 - 7:59 am
Thank you for the quick reply! A quick follow-up question: I am doing a mediation analysis but have missingness that I would like to handle with multiple imputation, and I would also like to use the bootstrap to get the CI for the indirect effect. Is there a way I can simultaneously request multiple imputation and bootstrapping? If not, what would you recommend instead?
 Bengt O. Muthen posted on Thursday, June 08, 2017 - 5:55 pm
I would recommend not using multiple imputation when bootstrapping is called for. Instead, use ML. Or use Bayes (bootstrapping not needed).
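A minimal sketch of the ML route, in which the missing data are handled directly by full-information maximum likelihood and the interval for the indirect effect comes from the bootstrap; the variable names and the missing-data flag are hypothetical:

VARIABLE: MISSING = ALL(-99);
ANALYSIS: ESTIMATOR = ML;
  BOOTSTRAP = 5000;
MODEL:
  m ON x;
  y ON m x;
MODEL INDIRECT:
  y IND m x;
OUTPUT: CINTERVAL(BCBOOTSTRAP);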
 Chang Liu posted on Thursday, June 08, 2017 - 7:14 pm
Thank you so much for the suggestion! I will try both ML and Bayes :-)
 fatimahassan posted on Thursday, May 31, 2018 - 12:33 pm
If the bootstrap CI is (-.30, -.01), with B = -.16, p < .05, for a tested moderation, is it a significant moderation?
 Bengt O. Muthen posted on Friday, June 01, 2018 - 3:19 pm
Right.
 JIANAN ZHOU posted on Tuesday, June 26, 2018 - 3:14 pm
Dear Prof Muthen & Muthen,

I'm running an SEM, but I'm confused about why the number of bootstrap draws completed is 0 in the output. Also, if I delete a WITH statement from the model, all the bootstrap draws are completed.

Thank you very much!
 Bengt O. Muthen posted on Tuesday, June 26, 2018 - 3:27 pm
Send your output to Support along with your license number.
 Mike Nelson posted on Saturday, May 09, 2020 - 11:54 pm
Greetings, I am attempting to run a multinomial logistic regression that controls for clustering with TYPE = COMPLEX, and I am having one severe inconsistency between p-values and confidence intervals when I use bootstrap confidence intervals (specifically with CHANGE#1 ON PDCHNG). When I bootstrap with 10,000 draws I receive the following values, which seem incongruent (BCB CIs are to the right of the p-values):

CHANGE#1 ON
VISCHNG 0.548 0.466 -0.970 0.332 [-1.262, .003]
PDCHNG 0.467 1.841 -0.290 0.772 [-1.772, -.155]

The rest of the values in the model seem appropriate. Also, when I attempt to run the model without bootstrapping, the p values change drastically and are more in line with the results of the bootstrap confidence intervals. Should I trust the confidence intervals in this case?
 Tihomir Asparouhov posted on Monday, May 11, 2020 - 2:41 pm
The p-value that you see in the output is the p-value for the symmetric bootstrap confidence interval (this just uses the bootstrap SE to form a z-score of estimate/SE and reports a normal-distribution-based p-value). We currently don't compute the p-value for the asymmetric confidence interval. Clearly the BCB p-value is less than 0.05. If you need the exact value, follow the instructions given at the bottom of this thread:
http://www.statmodel.com/discussion/messages/11/628.html?1581358643
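In other words, the printed p-value is p = 2*(1 - Phi(|estimate| / bootstrap SE)), where Phi is the standard normal distribution function, whereas the asymmetric BCBOOTSTRAP interval is read directly from the CINTERVAL section of the output.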
 Bengt O. Muthen posted on Wednesday, May 13, 2020 - 3:24 pm
We will have a FAQ posted on this tomorrow called Bootstrap P-Value Computation.
 Dennis Reidy posted on Friday, June 26, 2020 - 7:31 am
I conducted bootstrapping on some potentially overfit regression models, but this only appears to affect the standard errors. We are more interested in getting stable estimates of the beta coefficients than we are in their significance values. I'm not sure whether it is possible to bootstrap the coefficients themselves in Mplus, or whether there is an alternative resampling procedure we could conduct.

Thanks
 Tihomir Asparouhov posted on Monday, June 29, 2020 - 12:08 pm
The primary way to identify overfitting regression coefficients is to look at their standard errors. If a coefficient is not significant, it can certainly be removed. Our most effective tool for discovering weakly identified parameters is the standard errors produced by the MLF estimator. Typically those will be relatively large for such parameters. You might find this useful:
https://statmodel.com/download/ConditionNumber.pdf
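For example, a quick check along those lines, with the rest of the input unchanged:

ANALYSIS: ESTIMATOR = MLF;   ! compare the S.E. column with the ML/MLR run;
                             ! unusually large values flag weakly identified parameters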
 Dennis Reidy posted on Monday, June 29, 2020 - 7:04 pm
Got it. Thanks.