Steps in mixture modeling
 Kiki van Broekhoven posted on Friday, May 19, 2017 - 5:05 am
Dear Profs. Muthen,
I have a question about the steps to follow in mixture modeling. I started with LCGAs and then determined whether I needed to estimate any growth factor variances (instead of fixing them at 0, as in LCGA).

For my model there was a lot of within-class variation in each class, so I ran models with freely estimated variances in all classes. These models did not converge, so I eventually ended up with models with a freely estimated intercept variance in each class and the slope variance fixed at 0 in each class.

However, I now wonder whether I should have tried an "in between" solution, i.e., GMMs with freely estimated variances that are held equal across classes. I find it very difficult to decide, based on the graphs, whether the classes can do with the SAME variance across classes or need class-SPECIFIC variances.
(Yesterday I tried to quickly run a 3-class model with freely estimated variances held equal across classes, with STARTS = 20 4; this model ran for 7 hours straight and I got the message that the best loglikelihood was not replicated, so maybe these models are not even suitable for my data...)

Or could I just stick with my current model, i.e., a GMM with freely estimated intercept variance and the slope variance fixed at 0? Your help would be greatly appreciated!
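For reference, a minimal Mplus input sketch of the LCGA starting point described above; the data file name, variable names, time scores, number of classes, and start values are illustrative, not taken from the thread:

    TITLE:    LCGA baseline (all growth factor variances fixed at 0)
    DATA:     FILE = mydata.dat;       ! hypothetical file name
    VARIABLE: NAMES = y1-y4;
              CLASSES = c(3);
    ANALYSIS: TYPE = MIXTURE;
              STARTS = 200 50;         ! random starts; check that the best logL replicates
    MODEL:
      %OVERALL%
      i s | y1@0 y2@1 y3@2 y4@3;
      i@0 s@0;                         ! LCGA: no within-class variation in the growth factors
    OUTPUT:   TECH11;                  ! LMR likelihood ratio test for deciding on the number of classes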
 Bengt O. Muthen posted on Friday, May 19, 2017 - 1:37 pm
I would try to get the Mplus GMM default to replicate the best logL value. The Mplus default is free but equal variances. And often only the intercept variance needs to be free (and equal). I would go by BIC to get the best model. I don't know why your 3-class GMM with equal intercept variance would take so long.
 Kiki van Broekhoven posted on Friday, May 19, 2017 - 2:09 pm
Thank you for your response. The 3-class GMM with free variances was for both the intercept and the slope, so maybe that's why it took so long.

(1) Just to check that I understand you fully: I should try models with the intercept variance free (but equal across classes) and with the slope and quadratic variances fixed at zero (I also have a quadratic effect), and then compare BIC values with that of a pure LCGA model (so all growth factor variances fixed at 0), and not look at models with free growth factor variances that also differ BETWEEN classes. Am I correct?

(2) Are you aware of any publications that explain/state why often only the intercept variance needs to be free (and equal)?

(3) As a last question: in my current models, one class does not have a significant quadratic term. Should I just stick with that model, or should I follow up with a model in which I restrict that class to intercept + slope only?
 Bengt O. Muthen posted on Saturday, May 20, 2017 - 12:34 pm
(1) Yes, that's what I meant. Except you can also compare to the BIC you get from the model with free and unequal intercept variance.

(2) I think I mentioned something about this in my early GMM papers but perhaps not - check under Papers on our website. In any case, the intercept typically represents the largest variance contribution to the outcomes and is therefore the first thing you would want to free the variance of.

(3) Just stick to that model and report the non-significance. I am not much for "model trimming".
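For the BIC comparison suggested in point (1), a sketch of the model with free and unequal intercept variances: mentioning the intercept variance in the class-specific parts of the MODEL command lets it differ across classes. The quadratic growth factor q reflects the quadratic effect mentioned earlier; time scores remain illustrative:

    MODEL:
      %OVERALL%
      i s q | y1@0 y2@1 y3@2 y4@3;
      i; s@0; q@0;                     ! slope and quadratic variances fixed at 0
      %c#1%
      i;                               ! repeating i here frees its variance separately in class 1
      %c#2%
      i;
      %c#3%
      i;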
 Kiki van Broekhoven posted on Monday, May 22, 2017 - 12:58 am
Thank you, that's clear then. I compared the BIC from the model with free and unequal intercept variances as well; the BIC for the model with equal variances is slightly better than for the model with unequal variances. Apart from that, the two models are very much alike in every way. So I will report results from the model with equal intercept variances. Thank you for your help.

I will definitely check those papers.
 Kiki van Broekhoven posted on Monday, May 22, 2017 - 1:34 am
I have another question about my final model.
GMMs are clearly a much better fit to my data than the simpler LCGA models. Model fit statistics point to the 3-class GMM with equal intercept variance as the most suitable model. However, starting from the 2-class model, the GMMs give rather small classes (e.g., 3-class model: N = 41, N = 76, and a very large class of N = 1308). Is this problematic? I read the following in a paper of yours from 2000:

[...] Only one GMM class has class size less than 50. It is doubtful that classes with so few individuals allow a trustworthy generalization.

So would this be a problem?
 Bengt O. Muthen posted on Monday, May 22, 2017 - 2:15 pm
This depends on how many parameters are specific to the small class. In your case it may be only the 3 means for i, s, and q, in which case N = 41 would seem okay.