Specifying burn-in for Bayes
 Trang Q. Nguyen posted on Monday, December 08, 2014 - 9:24 pm
I understand the default in Mplus is to discard half of the iterations as burn-in. Could I specify something different, such as a specific number of iterations or a different ratio (e.g., 1/10)? I am working on a model that converges very fast, but some of the parameters have high autocorrelation, so I would like to use a large thinning factor and run long chains. This means the burn-in I need is only a small fraction of the total number of iterations. Thanks!
 Tihomir Asparouhov posted on Tuesday, December 09, 2014 - 10:36 am
Autocorrelations are not in principle a reason for concern. I would recommend not altering the Mplus convergence decision and posterior distribution. If the autocorrelations are extremely high, the MCMC sequence cannot converge fast. I would recommend using THIN as you have done, but leave the rest as is.

You can save all parameters from all MCMC iterations with the BPARAMETERS option and summarize the posteriors any way you want from that file.
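
For example, here is a minimal post-processing sketch in Python. It assumes the saved file is whitespace-delimited with the chain and iteration numbers in the first two columns and one column per parameter after that; check this layout against your own output, and note that the file name is hypothetical.

import numpy as np

# Assumed layout: chain number, iteration number, then one column per parameter.
draws = np.loadtxt("bparams.dat")   # hypothetical file name
iteration = draws[:, 1]
params = draws[:, 2:]

# Apply a custom burn-in and thinning instead of the default "discard half".
burnin, thin = 500, 10
keep = params[iteration > burnin][::thin]

# Summarize the posteriors any way you want, e.g. medians and 95% intervals.
for j in range(keep.shape[1]):
    lo, med, hi = np.percentile(keep[:, j], [2.5, 50, 97.5])
    print(f"parameter {j + 1}: median {med:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")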
 Trang Q. Nguyen posted on Wednesday, December 10, 2014 - 12:18 pm
Thank you, Tihomir.
 Trang Q. Nguyen posted on Wednesday, December 10, 2014 - 5:03 pm
Hi Tihomir,

I just wanted to share a bit more. I used four chains and set BCONVERGENCE = 0.001, and convergence happens within a few hundred iterations. This is very clear from the trace plots, where the four chains quickly converge and mixing is very good. Most of the parameters have low autocorrelation, except for the binary variables' thresholds and the regression coefficients relating them to a latent mediator. Autocorrelation is not a concern except that it reduces the effective sample size, and I was trying to reach effective sample sizes I feel comfortable with (a few thousand).
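
As a rough illustration (a generic sketch, not Mplus's internal computation), the effective sample size for one parameter's chain can be approximated from the empirical autocorrelations:

import numpy as np

def effective_sample_size(x):
    # ESS = n / (1 + 2 * sum of positive-lag autocorrelations), truncating
    # the sum when the autocorrelation first drops below zero.
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    acf = acov / acov[0]
    rho_sum = 0.0
    for rho in acf[1:]:
        if rho < 0:
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

# e.g. effective_sample_size(draws_for_one_parameter)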

Yes, I have saved all the iterations and discarded only part of the first half. I was just wondering whether there was a way to specify a burn-in option so you don't have to do this manually.

Thank you.
 Trang Q. Nguyen posted on Wednesday, December 10, 2014 - 5:16 pm
PS: Sorry, I misspoke. In that model I have an observed continuous mediator, which is why it converges very fast. I have another model with a latent mediator underlying an ordinal variable; that one takes longer to converge.
 Tihomir Asparouhov posted on Monday, December 15, 2014 - 9:50 am
Trang

Use the FBITER command to specify whatever number of iterations you want.

BCONVERGENCE = 0.001 is a very strict criterion, and I have no doubt the model has converged.

Models with categorical variables generally take longer to converge and have higher autocorrelations.

Tihomir
 Jonathan L. Helm posted on Tuesday, April 26, 2016 - 4:31 pm
If the BCONVERGENCE criterion is set to .001, what is the PSR boundary for convergence?

I know that the PSR boundary is based on the number of parameters estimated within the model. Within this note:
http://www.statmodel.com/download/Bayes2.pdf

on page 8, the text indicates that convergence is reached when the PSR values for all parameters are less than 1 + e, where e = f*c. Here c is set by the user (via BCONVERGENCE = c), but what is f? How can we determine f?

Thanks!
 Tihomir Asparouhov posted on Wednesday, April 27, 2016 - 11:34 am
You should use FBITER if the automatic setup is not working for your models; the automatic setup should work for most standard models.

In Excel:

f = NORMINV(0.95^(1/p), 0, 1) / 1.64485362695147

where p is the number of model parameters. Thus if p = 1, then f = 1.
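
The same computation as a Python sketch (the constant 1.64485362695147 is the 0.95 quantile of the standard normal, so f is a ratio of two normal quantiles):

from scipy.stats import norm

def psr_boundary(p, c=0.001):
    # Convergence boundary 1 + f*c for a model with p parameters, mirroring
    # the Excel formula: f = NORMINV(0.95^(1/p), 0, 1) / NORMINV(0.95, 0, 1).
    f = norm.ppf(0.95 ** (1.0 / p)) / norm.ppf(0.95)
    return 1.0 + f * c

print(psr_boundary(1))     # 1.001, since f = 1 when p = 1
print(psr_boundary(100))   # the boundary loosens slowly as p grows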
 Daniel Lee posted on Thursday, January 10, 2019 - 10:08 am
Hi Tihomir,

Can you very briefly explain why categorical variables have higher autocorrelations than continuous variables? Thank you so much!
 Tihomir Asparouhov posted on Friday, January 11, 2019 - 3:29 pm
I assume your question is why you get a smaller correlation when a categorical variable is treated as continuous. This is well known in the literature and is generally referred to as attenuation. It happens not just in time-series models but also in cross-sectional models. It was first documented here:
http://www.statmodel.com/bmuthen/articles/Article_036.pdf
It is also discussed under Table 9 in this recent article:
http://www.statmodel.com/download/CenteredMediation.pdf
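
A small simulation illustrating the attenuation (a didactic sketch of my own, not a computation from either paper): dichotomizing a continuous variable shrinks its observed correlation with another variable.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal(n)
y = 0.7 * z + np.sqrt(1 - 0.7 ** 2) * rng.standard_normal(n)  # corr(z, y) = 0.7
u = (z > 0).astype(float)  # binary version of z, treated as continuous

print(np.corrcoef(z, y)[0, 1])  # about 0.70
print(np.corrcoef(u, y)[0, 1])  # noticeably smaller: the attenuation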
 Daniel Lee posted on Monday, January 14, 2019 - 9:33 am
Thank you!
 Yanling Li posted on Friday, May 01, 2020 - 6:36 pm
Hi Tihomir,

I wonder whether Mplus adapts the samplers during the burn-in phase. That is, can the first-half samples be treated as independent samples drawn from the posterior distribution? I am running a long chain and do not want to discard the whole first half, so I have kept some iterations from the first half for estimation; but if these are not independent samples, I should not use them for estimation.

Thanks!
 Tihomir Asparouhov posted on Monday, May 04, 2020 - 10:18 am
In some estimations we adapt the sampler during the burn-in phase; this happens only up to iteration 1000. We do not recommend using the first half of the iterations, as we cannot guarantee that convergence has been achieved at that point and the estimates could still be moving toward the posterior distribution.
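
A toy illustration of why the first half is not safe to keep (a generic MCMC caricature, not the Mplus sampler): a chain started far from its stationary distribution, where first-half averages are still contaminated by the initial transient.

import numpy as np

rng = np.random.default_rng(1)
n, phi = 1000, 0.95
x = np.empty(n)
x[0] = 100.0                       # poor starting value
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()  # stationary mean is 0

print(x[: n // 2].mean())   # first half: biased by the initial transient
print(x[n // 2:].mean())    # second half: near the stationary mean of 0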