Unconditional vs. conditional GMM
Mplus Discussion > Growth Modeling of Longitudinal Data
 socrates posted on Saturday, November 11, 2006 - 2:30 am
Dear Dr. Muthén

With an unconditional GMM, I identified five latent classes in a longitudinal dataset. These trajectories agree with theoretical expectations.
Subsequently, I entered time-invariant covariates to check whether these variables help predict growth parameter variance within the latent classes. While I found some significant predictors with this procedure, some of the resulting trajectories look quite different from those of the unconditional GMM. How should I interpret this?
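For reference, entering time-invariant covariates turns the unconditional GMM into a conditional one by adding ON statements in the %OVERALL% part of the MODEL command. A minimal sketch (the outcome names y1-y4 and covariate names x1-x2 are hypothetical placeholders, not from the original post):

```
VARIABLE:  NAMES ARE y1-y4 x1 x2;
           USEVARIABLES ARE y1-y4 x1 x2;
           CLASSES = c(5);
ANALYSIS:  TYPE = MIXTURE;
MODEL:     %OVERALL%
           i s | y1@0 y2@1 y3@2 y4@3;
           i s ON x1 x2;   ! covariates predicting the growth factors
           c ON x1 x2;     ! covariates predicting class membership
```

Because the covariates influence both the growth factors and class membership, the estimated class proportions and trajectory shapes can shift relative to the unconditional model.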

Thank you very much for your help!
 Linda K. Muthen posted on Saturday, November 11, 2006 - 8:32 am
The following paper discusses this issue. It can be downloaded from the website. See Recent Papers.

Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.
 Tracie B posted on Monday, May 11, 2009 - 9:37 am
I have 2 questions on this topic:

1. I am using unconditional GMM to generate classes, exporting them, and then examining covariates as well as outcomes, i.e., I am treating 'class' like any other categorical variable. Many posts warn that this 'leads to distorted results', but intuitively this approach makes sense to me: use the naturally occurring patterns (regardless of who populates them), then explore who is in each class and what the consequences of membership are. Is there a problem with this approach, i.e., estimating unconditionally and then dealing with covariates subsequently?
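For context, the export step described here is typically done with the Mplus SAVEDATA command, which writes out each person's posterior class probabilities and most likely class membership for use in another program. A sketch, assuming a hypothetical output file name:

```
SAVEDATA:  FILE IS classes.dat;
           SAVE = CPROBABILITIES;   ! saves posterior class probabilities
                                    ! and most likely class membership
```

Note that analyses using the saved most likely class treat the assignment as known, ignoring the classification uncertainty the posterior probabilities express.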

2. Unrelated to the above: when I use GMM (linear), I get the expected number of classes (4) and patterns, in line with a priori theory; if I fix the variance, the results are still similar. If I fix the variance AND the intercept, I no longer get the patterns I anticipated, just more and more parallel lines. I know that there is a lot of within-person variation, and my data have 20 time points. So it works well with GMM but not with LCGA. Is this a problem? My feeling is that I have to allow within-class variation in order for the classes to emerge.
 Tracie B posted on Monday, May 11, 2009 - 9:42 am
To clarify the above: I meant fixing the variance of the slope, and then of both the slope and the intercept (i-s@0).
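For reference, fixing both growth factor variances to zero in the %OVERALL% model is what turns a GMM into an LCGA. A minimal sketch (outcome names and time scores are illustrative, not from the original post):

```
MODEL:  %OVERALL%
        i s | y1@0 y2@1 y3@2 y4@3;
        i-s@0;   ! fix intercept and slope variances to zero (LCGA);
                 ! i@0 alone fixes only the intercept variance
```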
 Michael Spaeth posted on Tuesday, May 12, 2009 - 8:25 am
1.) If you export class membership to another program, assignment is based on "most likely" class, and fuzzy class boundaries (that is, class membership based on posterior probabilities) are not taken into account in further analyses. In contrast, if you model the covariates within your GMM in Mplus, the effects of covariates are adjusted for this classification uncertainty. Another point is that in the latter kind of model you can reduce the potential for misspecification (often you need direct effects of covariates on the indicators).
However, if your classification quality is very good (entropy above .90, average posterior probabilities likewise), you can probably stick with saving class membership based on most likely class, because class uncertainty is very low in this case. I also often check whether there are very few "borderline cases", i.e., individuals with nearly the same posterior probability of being assigned to each class.

2.) Interesting issue. However, I always do an LCGA first, then free the growth factor variances in a stepwise fashion (first the intercept, then the slope), and finally I try letting these variances differ across classes. If your growth factor variances are significant within classes, I would leave them in the model, because this is closer to reality. Additionally, in my experience, one tends to overestimate the number of classes with restricted growth factor variances.
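The final step of this stepwise strategy, letting the growth factor variances differ across classes, can be requested by repeating the variance statements in the class-specific parts of the model. A sketch for two of the classes (variable names are hypothetical):

```
MODEL:  %OVERALL%
        i s | y1@0 y2@1 y3@2 y4@3;
        %c#1%
        i s;   ! variances of i and s estimated freely in class 1
        %c#2%
        i s;   ! ...and estimated separately in class 2
```

Without the class-specific statements, Mplus holds the growth factor variances equal across classes by default.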
 Amir Sariaslan posted on Tuesday, May 12, 2009 - 8:39 am
Tracie,

For the first part of your post, you should read the following paper:

Clark, S. & Muthén, B. (2009). Relating latent class analysis results to variables not included in the analysis. Submitted for publication.

http://www.statmodel.com/download/relatinglca.pdf

Sincerely,
Amir
 Linda K. Muthen posted on Tuesday, May 12, 2009 - 8:56 am
For the second part, if GMM makes more sense both statistically and substantively because you have variation within classes, I would use GMM.
 Tracie B posted on Tuesday, May 12, 2009 - 7:46 pm
Thank you very much, this was a great help!
 Youngoh Jo posted on Thursday, December 15, 2011 - 8:51 pm
Using unconditional models I found 3 groups. When I use conditional models, I specify the following commands:
data: file is "E:\data\w1-w6.csv";
variable: NAMES ARE ID SEX sc1-sc6 pa1-pa5 mo1-mo5 ab1-ab5 ta1-ta5 dp1-dp5 ne2-ne5;
USEVARIABLES ARE SEX sc2-sc6;
MISSING ARE ALL (999);
classes = c (3);
ANALYSIS: TYPE = MIXTURE;
starts = 20 2;
model: %overall%
i s | sc2@0 sc3@1 sc4@2 sc5@3 sc6@4;
i s on sex;
c on sex;

OUTPUT: tech1 tech8;

and I got the following error message:

*** ERROR in Model command
Unknown variable(s) in an ON statement: C

What's wrong with this?

Thanks in advance.
 Linda K. Muthen posted on Friday, December 16, 2011 - 6:14 am
Please send the full output and your license number to support@statmodel.com.