Guido Biele posted on Tuesday, January 28, 2014 - 6:22 am
Hello, I am running mixture CFA models and seem to get nonsensical results. Each model has 18 ordinal indicator variables:
- 9 for LV A
- 5 for LV B
- 4 for LV C
The 11 different models allow 2-12 latent classes. Pairwise model comparisons between all models, using the chi-square difference test for MLR, mostly result in p = 0, with the exception of comparisons with one model, where the result is always p = 1, presumably due to a high LL correction factor.
My questions:
1) What could cause this problem?
2) Could my large sample size (over 10,000) be the reason that ever more classes lead to ever lower BICs?
3) Which alternative estimator to MLR would you recommend? Could I use Bayes?
Below are key model statistics and an exemplary model.
What does the 1-7 in your first column refer to? Going from a high to a low number of classes? Typically, when BIC keeps improving and never shows an optimum, the model type is not quite right for the data. For example, your model assumes that the intercepts of the factor indicators are invariant across classes. You can instead fix all factor means to zero and let all intercepts be free. That makes for a more complex model, but it may fit much better.
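A minimal Mplus sketch of this free-intercepts setup, shown for two classes (the factor and indicator names a1-a9, b1-b5, c1-c4 and fA-fC are hypothetical; loadings stated only in %OVERALL% are held invariant across classes, while each class-specific section fixes the factor means at zero and frees the item thresholds):

```
MODEL:
  %OVERALL%
  fA BY a1-a9;              ! loadings invariant across classes
  fB BY b1-b5;              ! (stated only in the overall model)
  fC BY c1-c4;

  %c#1%
  [fA@0 fB@0 fC@0];         ! factor means fixed at zero
  [a1$1 a1$2];              ! free each item's thresholds in this
  [a2$1 a2$2];              ! class (one entry per threshold;
                            ! continue for the remaining items)
  %c#2%
  [fA@0 fB@0 fC@0];
  [a1$1 a1$2];
  [a2$1 a2$2];
```

The number of thresholds per item depends on the number of ordinal categories (categories minus one), so the threshold lists must be adjusted to your data.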
Guido Biele posted on Wednesday, January 29, 2014 - 9:09 am
Thanks for the quick response Bengt!
(The column # in the table can be ignored, it is an arbitrary ID used to distinguish models with different number of latent classes).
I tried to implement your suggestion to fix the factor means to zero and to free the (indicator) intercepts. However, I seem to be using the wrong syntax and can't figure out what is wrong with this:
which results in the error message: The following MODEL statements are ignored: * Statements in Class 1: [ ATT1 ] [ ATT2 ] [ ATT3 ] ...
1) Can you tell me what's wrong with the following syntax and/or where I could find helpful examples?*
2) Would the model as you describe it ensure that the factor loadings are still invariant across classes?
Thanks in advance! Guido
*(I looked in Chapters 5 & 7 of the user guide, but could not use the info there to get the model right.)
Perhaps your items are categorical, in which case you should say [att1$1] etc.
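In other words, for categorical items the intercept parameters are thresholds and must be referred to with the $ notation; plain [ATT1] statements are ignored, which matches the error message above. A sketch of the corrected class-specific part (the factor name f1 is hypothetical, and two thresholds per ATT item are assumed for illustration):

```
  %c#1%
  [f1@0];                   ! factor mean fixed at zero
  [att1$1 att1$2];          ! free both thresholds of att1 in this class
  [att2$1 att2$2];
  [att3$1 att3$2];
```

The same block, with the thresholds mentioned again, is then repeated for each remaining class so that the thresholds are class-specific.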
This model setup makes the factor loadings invariant across groups. This is often good enough.
Guido Biele posted on Thursday, February 13, 2014 - 5:29 am
Fixing the latent means to 0 and freeing the indicator intercepts helped. Comparing BICs now allows me to identify a model with a reasonable number of latent classes.
So here are my (hopefully) two last questions:
1) Is there a way that I can draw class profiles on the level of the latent variables from a mixture CFA with invariant factor loadings, latent means fixed to 0, and freed indicator intercepts?*
2) Should I be able to implement the same analysis as a Bayesian analysis?
Thanks in advance! Guido
* I thought about using the thresholds to calculate class means or medians of my (ordered) indicators and then using the factor loadings to calculate average profiles for the classes. However, a colleague told me that the model I have set up implies a violation of scale invariance, and that I could thus not compute class-wise means of the latent variables that can be compared.
The issue with the free intercepts model is that you don't have measurement invariance for the factor across the classes. So factor levels cannot be compared across classes - if that is what you are asking.
Bayes analysis is more difficult with mixtures given the label-switching issue.