

'Wild' SEs and CIs when bootstrapping... 

Message/Author 


Hi Bengt, Linda,

I am looking for some advice on a problem I'm having generating bootstrapped SEs and CIs for parameter estimates and indirect effects in a model. The model has a number of binary indicators of latent variables that are used as predictors, along with some binary observed predictors. Some of these have low frequencies of endorsement (~10% in a sample of just under 200). The bootstrapped SE estimates for most parameters are quite close to the normally estimated values, but some are wildly inflated. This applies to the unstandardized parameters; the standardized ones seem reasonable. My guess is that some bootstrap samples have an even lower prevalence than this and that, although the model converges, some parameter estimates are highly questionable. The problem doesn't occur when I treat the binary variables as continuous.

I'd like to use bootstrapped CIs, particularly for the indirect effects, so the question is what I can do about this, given that I can't see any way of weeding out resamples that yield improper results and I'm not sure whether I can rely on the results for the standardized outcomes. The variables in question are important to the colleagues guiding the substantive aspects of the model, so I'm not at liberty to produce an instant fix by deleting the problem variables!

Thanks, Andrew
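The suspicion in the post can be checked directly: when a binary item is endorsed by only ~10% of a sample of ~200, bootstrap resampling with replacement regularly produces resamples with noticeably fewer endorsements. A minimal simulation sketch in Python, using the approximate figures from the post (everything else, including the seed and number of resamples, is illustrative):

```python
import numpy as np

# Illustrative only: ~10% endorsement in n = 200, as described in the post.
rng = np.random.default_rng(0)
n, n_boot = 200, 10_000

# Original sample: 20 of 200 cases endorse the binary item.
x = np.zeros(n, dtype=int)
x[:20] = 1

# Draw bootstrap resamples (sampling cases with replacement) and count
# how many endorsements each resample contains.
counts = np.array([rng.choice(x, size=n, replace=True).sum()
                   for _ in range(n_boot)])

print("mean endorsements per resample:", counts.mean())
print("minimum endorsements in any resample:", counts.min())
print("share of resamples with <= 10 endorsements:", (counts <= 10).mean())
```

A small fraction of resamples ends up with roughly half the original endorsement count or fewer; in those samples the threshold and loading estimates for that item rest on very few observations, which is consistent with a handful of extreme replicate estimates inflating the bootstrap SE.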


Yes, some bootstrap samples are probably causing the problem. Not much one can do. But you could try Bayes instead; see Chapter 9 of our new book, where Bayes is applied to mediation.


Thanks for confirming my suspicion. I experimented briefly with Bayesian estimation, but it appears this may not be so straightforward for this model. I need to do some more reading and upskill so I can try it more confidently. Andrew


Ok. 


