I am analysing data using a Bayesian mixture model. I am checking the autocorrelation of the generated chains and wondering whether all chains must meet the criterion of autocorrelation less than .1, or whether one chain is enough.
Hard to say; perhaps twice as long as it took to get the PSR down to convergence.
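To make the autocorrelation criterion concrete, here is a minimal sketch (plain Python/NumPy, not Mplus output) that computes the lag-k sample autocorrelation for each chain separately; the point is that the check applies per chain, since any one poorly mixing chain signals a problem. The chains and the 0.95 AR(1) coefficient are toy assumptions for illustration.

```python
# Hypothetical illustration: lag-k autocorrelation of MCMC draws,
# checked separately for each chain.
import numpy as np

def autocorr(chain, lag):
    """Sample autocorrelation of a 1-D chain at the given lag."""
    chain = np.asarray(chain, dtype=float)
    c = chain - chain.mean()
    return float(np.dot(c[:-lag], c[lag:]) / np.dot(c, c))

rng = np.random.default_rng(0)
# Two toy "chains": one nearly independent, one strongly autocorrelated AR(1).
chain_a = rng.normal(size=5000)
chain_b = np.empty(5000)
chain_b[0] = 0.0
for t in range(1, 5000):
    chain_b[t] = 0.95 * chain_b[t - 1] + rng.normal()

print(autocorr(chain_a, 10) < 0.1)  # True: this chain passes the .1 criterion
print(autocorr(chain_b, 10) < 0.1)  # False: needs thinning or more iterations
```

A chain like `chain_b` would need thinning or many more iterations before its draws can be treated as approximately independent.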
For label switching, see Section 8.2 of our technical report on our web site:
Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3.
as well as the handout for Topic 9 on our web site, slides 82-85, especially the trace plot on slide 84, showing a low and a high set of estimates corresponding to two different solutions. The label switching topic needs to be well understood before doing Bayes mixture analyses.
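The trace-plot pattern described above (two chains sitting at different sets of estimates for the same model) can be sketched numerically. This is a generic illustration of label switching and of one common remedy, an ordering (identifiability) constraint; it is not a description of how Mplus handles relabeling internally, and all numbers are made up.

```python
# Hypothetical sketch of label switching: two chains explore the same
# two-class mixture, but with the class labels swapped, so the raw
# per-class traces sit at different modes even though the solution is
# identical. An ordering constraint relabels each draw so that class 1
# always has the smaller mean.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Draws of the two class means from chain 1; chain 2 has labels swapped.
chain1 = np.column_stack([rng.normal(-1, 0.05, n), rng.normal(2, 0.05, n)])
chain2 = chain1[:, ::-1].copy()  # same solution, labels switched

def relabel(draws):
    """Sort each draw's class means ascending (ordering constraint)."""
    return np.sort(draws, axis=1)

# Before relabeling, the chains' "class 1" traces disagree badly;
# after relabeling, the chains agree draw for draw.
print(abs(chain1[:, 0].mean() - chain2[:, 0].mean()) > 1)  # True
print(np.allclose(relabel(chain1), relabel(chain2)))       # True
```

Without such a constraint, averaging the raw per-class draws mixes the two modes and produces meaningless class-specific estimates.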
Carolyn Hou posted on Monday, February 28, 2011 - 6:18 pm
I have several questions regarding Bayes mixture modeling. 1. When "Data imputation: Plausible=" commands are used, is the algorithm still MCMC by default? 2. What is the difference conceptually between parameter estimates with and without using this command? 3. Which set of parameter estimates should be used for more accuracy, those with this command, or those without it?
1. Yes. 2-3. There is no difference. Plausible values, like factor scores, are computed after model estimation.
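A small sketch of the conceptual point in this answer: plausible values are random draws from each subject's posterior distribution of the latent variable, generated after estimation, so requesting them cannot change the parameter estimates. The Normal posterior and its mean and SD below are toy assumptions, not values from any Mplus run.

```python
# Hedged illustration: plausible values as post-estimation draws from a
# subject's posterior latent-variable distribution. The point estimate
# (factor score) is the posterior mean; plausible values scatter around it.
import numpy as np

rng = np.random.default_rng(3)
post_mean, post_sd = 0.5, 0.2          # toy posterior for one subject
plausible_values = rng.normal(post_mean, post_sd, size=5)  # e.g., 5 draws
factor_score = post_mean               # the usual point estimate

# The plausible values vary around the factor score but are centered on it;
# nothing about the fitted model itself changes.
print(abs(plausible_values.mean() - factor_score) < 3 * post_sd)  # True
```

Using several plausible values per subject (rather than the single factor score) propagates the uncertainty in the latent variable into secondary analyses.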
Tim Stump posted on Friday, September 28, 2012 - 7:56 am
I am using estimator=bayes to run LCA of 11 binary items. My question is about how to best know when the process has converged. I set fbiterations to a large value, say 25,000. Some of my models conclude with PSR<1.1, but another one I ran had PSR hovering between 8 and 10 and never getting close to 1. Do you have suggestions on what you might do in a case like this? If the model is mis-specified, would this be a characteristic you see in the model fitting process? I'm curious about a general strategy for model fitting/checking for convergence using bayes for lca.
Tim Stump posted on Friday, September 28, 2012 - 8:19 am
Just a follow-up to my previous post...let's say you hypothetically have a PSR that hovers between 2 and 4 for a large number of fbiterations, not as extreme as a PSR between 8 and 10. What might be a next step to figure out how to obtain convergence?
To do Bayesian estimation of mixture models takes some studying because it is not a straightforward analysis. A key issue is label switching. This is discussed in the 6/1/11 Topic 9 course video and handout on our web site. The topic of using PSR also needs to be studied, and issues like premature PSR convergence and non-identification are discussed in the new Utrecht video and handout on our web site.
PSR values of 2-4 are not really any different from 8-10 in that both settings indicate problems in the "mixing" (convergence), due either to label switching or to non-identification.
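The PSR values discussed above can be sketched with the Gelman-Rubin potential scale reduction computed from multiple chains: when between-chain variance is large relative to within-chain variance (for example, chains stuck in different label-switched modes), the PSR sits far above 1, exactly the 2-10 range described. This is a generic textbook-style sketch, not the exact formula Mplus implements.

```python
# Hedged sketch of the potential scale reduction (PSR / Gelman-Rubin)
# diagnostic for a (num_chains, num_draws) array of posterior draws.
import numpy as np

def psr(chains):
    """Gelman-Rubin PSR: sqrt of pooled variance over within-chain variance."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(2)
mixed = rng.normal(0, 1, size=(2, 2000))         # chains sample the same mode
stuck = np.vstack([rng.normal(-2, 1, 2000),      # chains stuck in two modes,
                   rng.normal(2, 1, 2000)])      # as under label switching

print(psr(mixed) < 1.1)  # True: converged by the usual PSR < 1.1 rule of thumb
print(psr(stuck) > 2)    # True: no convergence, like the PSR of 2-10 above
```

Running the chains longer does not fix the second case; the chains must first be brought to the same mode (e.g., by resolving label switching or the non-identification).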
I am trying to fit a multiple-group latent Markov chain model using known classes. While I was able to run the model using MLR with random starts, I would like to know whether this type of analysis is possible using Bayesian estimation. For the moment I keep getting this error: "Analysis with more than one categorical latent variables is not allowed with ESTIMATOR=BAYES."
I mean doing 3-step "manually" as described in the web note.
Alden Gross posted on Wednesday, August 28, 2013 - 2:56 pm
Thank you! I replicated the webnote and adapted it for my purposes.
Ted Fong posted on Thursday, March 27, 2014 - 8:10 am
I have conducted a Bayes mixture model with one categorical latent variable and 2 continuous latent variables. I would like to ask:
1) Is 'Label switching' still an issue of concern in Bayes mixture modeling in Mplus 7?
2) It seems Tech11 and Tech13 are not available with the Bayes estimator. The output file did not show any results for Tech12 and Tech14. Are these two also not available with Bayes?
3) With Bayes, all of the auxiliary options (R3STEP/DU3STEP/E/DCAT/DCON) for comparing classes are not available, right?
4) There are no BIC/DIC values shown for my Bayes mixture model; I can only find the posterior predictive p-value. In this case, do you have any suggestions on what other tools to use to determine the fit of competing models?
Earlier in this thread (March 7, 2014) I see that it is not possible to use the (automatic?) auxiliary variable options with the Bayes estimator. From my attempts it seems this is still the case in Version 8, but I'm wondering if I can implement the manual 3-step approaches as mentioned in the webnotes (specifically for the BCH and DCAT methods)?
I have attempted to do this, and get the overall model to converge just fine, but whenever I specify SAVE = bchweights, I get the following error:
SAVE=BCHWEIGHTS is not available with ALGORITHM=INTEGRATION
Elsewhere I have seen discussion of this error and it was said that integration is needed for data with missingness - so I tried with complete data and got the same error.
I do think that manual BCH and manual 3-step can in principle be conducted with the Bayes estimator; however, we have not done it yet, and a number of the steps would require a bit more work than with the ML estimator. Manual Bayes 3-step would be the easier of the two for you. Manual Bayes BCH would not be easy at all, and Mplus-Bayes will not accept any weights (even BCH weights).