Hello! We have a 3-class LCA model with high entropy (0.81) and good model fit, and we wish to fit a model with distal outcomes. After unsuccessfully trying to fit the model using the 3-step procedure (more than 20% of the observations moved to a different class between Step 1 and Step 3), we fit the model using the DCON and DCAT commands. We noticed that the results are drastically different from those of the classify-analyze approach, which could be attributed to the fact that Mplus is less biased. The problem, however, is that the Mplus results appear to be misleading (e.g., we included a validity check where one of the outcomes is English-language vocabulary, and the model predicts that Spanish speakers have higher English vocabulary than English speakers). The classify-analyze results actually appear to be in line with the validity checks. There also seems to be some label switching in the Mplus output, but not in a consistent way. In sum, we strongly suspect that the results we are obtaining with the DCAT and DCON commands are not trustworthy. At the same time, we would like to use the best available approach and are aware of the shortcomings of the classify-analyze approach.
What do you recommend we do? Are there any model diagnostic tests you could recommend? Thank you in advance!
There is no label-switching issue with DCON/DCAT (that is, no switch from Step 1 to Step 3). If you think you are experiencing that, you can send us outputs showing it.
Try the new BCH option for the continuous distals to see if that makes a difference. We have not found any problems with BCH for continuous distals or with DCAT for categorical distals. For which option to use when, see Tables 6 and 7 of our new paper on our website:
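As a sketch only (the variable names u1-u8, y1, and the 3-class setup are placeholders, not your model), requesting BCH for a continuous distal in the Mplus input looks like:

```
VARIABLE:
  NAMES = u1-u8 y1;
  USEVARIABLES = u1-u8;
  CLASSES = c(3);
  AUXILIARY = y1 (BCH);    ! continuous distal outcome via BCH
  ! For a categorical distal, use instead: AUXILIARY = y1 (DCAT);
ANALYSIS:
  TYPE = MIXTURE;
```

Because the distal is handled through the AUXILIARY option, the latent class measurement model is left unchanged, so you can compare the BCH results against your DCON/DCAT runs on the same class solution.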
Asparouhov, T., & Muthén, B. (2014). Auxiliary variables in mixture modeling: Using the BCH method in Mplus to estimate a distal outcome model and an arbitrary secondary model. Mplus Web Notes: No. 21.