Comparing nested GMMs
Message/Author
 Johnny Wu posted on Wednesday, November 15, 2006 - 4:54 pm
Dr. Muthen,

Let's say I have a 2-class GMM with an i and s, and a set of covariates.

My saturated model (SM) would free all unidirectional paths. Thus, it would have the highest LL and the most parameters.

My null model (NM) would fix all unidirectional paths to zero. Thus, it would have the lowest LL and the fewest parameters.

My hypothesized model (HM) would fall somewhere in between (i.e., free some paths and constrain others), so its LL and parameter count would fall between those of the SM and NM.

To find the best model, should I:

1) Test my HM against SM and see if the LL difference is significant. If yes, it suggests that my HM fits significantly worse than the SM. Thus, I would free more paths until the LL difference is no longer significant.

OR

2) Test my HM against NM and see if the LL difference is significant. If yes, it suggests that my HM fits significantly better than the NM. Thus, I would constrain more paths until the LL difference is no longer significant.

Thank you,

johnnywu
 Linda K. Muthen posted on Thursday, November 16, 2006 - 7:40 am
I would test my hypothesized model against a restricted version of it.
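The nested-model comparison discussed here is a chi-square difference (likelihood-ratio) test: twice the log-likelihood difference is referred to a chi-square distribution with degrees of freedom equal to the difference in free parameters. A minimal sketch is below; the log-likelihoods and parameter counts are hypothetical numbers for illustration, and the `chi2_sf` and `lr_test` helpers are my own, not Mplus output:

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) for a chi-square variable with EVEN df.

    For even df the survival function has a closed form,
        sf(x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!,
    which avoids needing an incomplete-gamma routine.
    """
    if df % 2 != 0:
        raise ValueError("this closed form requires even df")
    h = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= h / i
        total += term
    return math.exp(-h) * total

def lr_test(ll_restricted, ll_full, df):
    """Chi-square difference test for two nested models fit by ML.

    ll_restricted: log-likelihood of the more constrained model (e.g., HM)
    ll_full:       log-likelihood of the less constrained model (e.g., SM)
    df:            difference in the number of free parameters
    """
    stat = 2.0 * (ll_full - ll_restricted)
    return stat, chi2_sf(stat, df)

# Hypothetical values: HM with 14 free parameters vs. SM with 18 -> df = 4.
stat, p = lr_test(ll_restricted=-2431.6, ll_full=-2417.3, df=4)
print(f"chi-square = {stat:.1f}, df = 4, p = {p:.2e}")
```

A significant p here would mean the restrictions worsen fit, i.e., the less constrained model is preferred. Note that this test is appropriate for nested parameter restrictions within a model with a fixed number of classes; it is not valid for comparing models with different numbers of classes.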
 Dustin Pardini posted on Tuesday, September 03, 2013 - 2:17 pm
I am running a GMM with annually collected ordinal marijuana use data from early adolescence to young adulthood (11 waves total). An unconditional growth model indicates that adding a cubic term improves model fit, and the variance of the cubic factor is significant. However, including this level of complexity in a GMM leads to serious convergence problems when I attempt to add a second latent class with only a random intercept.

Even when I simplify the overarching model to include only a quadratic term and then run a two-class model, with all slope factor variances freed in one class and only the intercept variance freed in the second, the solution places just 7.9% of participants in the second class, which follows a dramatic inverted U-shape. This solution seems strange to me, especially since an LCGA supports up to 5 latent classes based on various tests (e.g., BIC, BLRT). I know the two methods often yield different solutions, but the GMM results seem to suggest that a two-class solution provides no improvement in model fit.

Do you have any thoughts or suggestions regarding what may be going on here?
 Bengt O. Muthen posted on Wednesday, September 04, 2013 - 3:18 pm
When you have that many growth factors, you may want to fix the variances of, say, the quadratic and cubic factors at zero, and then use many random starts. Then go with BIC and try not only 2 but more classes. You may also want to avoid specifying a class with only a random intercept; if such a class exists, you can let the estimates show that.
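The "go with BIC" step above amounts to computing BIC = -2*LL + p*ln(n) for each class solution and preferring the lowest value. A minimal sketch, where the log-likelihoods, parameter counts, and sample size are hypothetical numbers for illustration and the `bic` helper is my own:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: -2*LL + p*ln(n).

    Lower values indicate a better trade-off between fit and complexity.
    """
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fits for 2- vs. 3-class models on n = 500 cases:
models = {
    "2-class": bic(log_likelihood=-3105.4, n_params=12, n_obs=500),
    "3-class": bic(log_likelihood=-3089.7, n_params=16, n_obs=500),
}
best = min(models, key=models.get)
print(f"preferred by BIC: {best}")
```

In this made-up example the extra class improves the log-likelihood enough to offset the parameter penalty, so BIC favors the 3-class solution; with real data the comparison should be repeated across the full range of class counts considered.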