John Woo posted on Wednesday, October 20, 2010 - 4:54 pm
Hi, I have a model involving two latent categorical variables, cd (4 classes) and ch (5 classes).
I am analyzing five imputed datasets using TYPE=IMPUTATION, but I can't seem to avoid the class-label switching problem.
The only way to avoid it appears to be fixing the parameters at the best estimates from the individual imputed datasets.
But fixing the parameters at the average of the best estimates from the individual datasets does not seem to give the lowest BIC in the TYPE=IMPUTATION runs. That is, there appear to be many other parameter values that would produce a lower BIC.
Since I can't use something like STARTS = 500 20 with TYPE=IMPUTATION (because of label switching), what would be the best way to make sure that my result is the global maximum in the situation described above?
Perhaps you are not using Mplus 6.01. In Mplus 6.01, the ML label-switching problem with TYPE=IMPUTATION data is handled by applying the random starts only to the first data set and then using the final estimates from that data set as starting values for the subsequent data sets, with no random starts applied to them.
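The carry-forward idea described above is easy to sketch outside Mplus. Below is a minimal Python illustration using scikit-learn's GaussianMixture as a stand-in mixture model; scikit-learn, the toy two-class data, and all variable names here are my own assumptions for illustration, not anything Mplus does internally. The point is the strategy: many random starts on the first imputed data set only, then that solution reused as fixed starting values for the remaining data sets, which keeps the class labels aligned.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def make_dataset(rng, n=300):
    # Toy stand-in for one imputed data set: two well-separated latent classes.
    a = rng.normal(loc=[-3.0, -3.0], scale=1.0, size=(n, 2))
    b = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(n, 2))
    return np.vstack([a, b])

datasets = [make_dataset(rng) for _ in range(5)]

# Step 1: random starts applied to the FIRST data set only.
first = GaussianMixture(n_components=2, n_init=20, random_state=0).fit(datasets[0])

# Step 2: carry the first solution forward as starting values for the
# remaining data sets, with no random starts, so class labels stay aligned.
fits = [first]
for X in datasets[1:]:
    gm = GaussianMixture(
        n_components=2,
        n_init=1,                                   # no random starts
        weights_init=first.weights_,
        means_init=first.means_,
        precisions_init=np.linalg.inv(first.covariances_),
    ).fit(X)
    fits.append(gm)

# With labels aligned, pooling parameter estimates across imputations
# (the point-estimate part of Rubin's rules) is a simple average.
pooled_means = np.mean([f.means_ for f in fits], axis=0)
```

Without step 2, each data set's classes could come out in a different order, and averaging estimates class-by-class across data sets would mix parameters from different classes.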
John Woo posted on Wednesday, October 20, 2010 - 5:59 pm
I meant 6.1, which was released the other day. There is no 6.01; we went straight to 6.1 given some new developments.
milan lee posted on Friday, May 30, 2014 - 12:20 pm
Hi Dr. Muthen, I just happened to notice the label-switching problem and would like your further explanation of it. I am analyzing growth mixture models based on 10 imputed datasets using TYPE=IMPUTATION. Tueller et al. (2011) used a switched-label detection algorithm implemented in R. But I wonder how Mplus works to detect this problem? Thank you!
Are you referring to label switching over MCMC iterations in Bayes, or in ML over datasets? I assume the latter, in which case you detect it by seeing poor parameter recovery; there is no warning given by Mplus.
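Since Mplus gives no warning, a hand-rolled check in the spirit of Tueller et al.'s R routine is straightforward to code: for each imputed data set, find the permutation of classes whose estimated means best match a reference solution; if the best permutation is not the identity, the labels have switched. Here is a minimal Python sketch; the function name and the toy numbers are mine, not from any published code.

```python
import numpy as np
from itertools import permutations

def detect_label_switch(ref_means, est_means):
    """Return the row permutation of est_means closest to ref_means.

    If the best-matching permutation is not the identity, the class
    labels in est_means have switched relative to the reference.
    """
    k = ref_means.shape[0]
    best = min(
        permutations(range(k)),
        key=lambda p: np.sum((ref_means - est_means[list(p)]) ** 2),
    )
    return best, best != tuple(range(k))

# Reference class means (e.g. from the first imputed data set).
ref = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
# Estimates from another imputed data set, with classes 1 and 2 swapped.
est = np.array([[0.1, -0.1], [9.9, 0.2], [5.2, 4.8]])

perm, switched = detect_label_switch(ref, est)
# perm → (0, 2, 1), switched → True
```

Brute-force matching over all k! permutations is fine for the class counts typical of mixture models (4 or 5 classes); for larger k one would switch to an assignment algorithm such as Hungarian matching.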