Regressing growth factors on covariates
Mplus Discussion > Latent Variable Mixture Modeling
 Anthony Mancini posted on Friday, March 06, 2009 - 11:07 am
I am working on an LGMM with 4 time points and 5 covariates. A model with intercept, slope, and quadratic parameters provides the best fit.

Covariates predict class membership in theoretically relevant ways. However, when I regress the intercept, slope, and quadratic latent variables on the covariates, it perturbs the solution in strange ways. For example, the estimates for the means of the latent growth factors change dramatically even though the figure is very similar. Also, the means differ and don't match the figure (e.g., the I for the low class is .85 in the output and -.35 in the figure).

How can I understand this? Should I omit regressing i, s, and q on the covariates and instead only predict class membership?
 Bengt O. Muthen posted on Sunday, March 08, 2009 - 11:49 am
If you are comparing

(1) c on x;

with

(2) i s q on x;

you should expect a large difference. If on the other hand you are comparing (1) with

c on x;
i s q on x;

then if the "i s q on x" slopes are significant that would be the solution you should pay attention to.

Note also that when you add "i s q on x" you no longer see the means of i, s, q in the output (unless you request TECH4), but you see their intercepts in the regressions on x.
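Put together, the combined specification corresponds to an input along the lines of the sketch below (the variable names, five covariates, two classes, and equidistant time scores are assumptions for illustration, not from the thread):

VARIABLE:   NAMES = y1-y4 x1-x5;
            USEVARIABLES = y1-y4 x1-x5;
            CLASSES = c(2);
ANALYSIS:   TYPE = MIXTURE;
MODEL:
  %OVERALL%
  ! quadratic growth model for the four time points
  i s q | y1@0 y2@1 y3@2 y4@3;
  ! covariates predict both class membership and the growth factors
  c ON x1-x5;
  i s q ON x1-x5;
OUTPUT:     TECH4;   ! TECH4 reports the model-estimated means of i, s, q

With "i s q ON x1-x5" in the model, the output shows regression intercepts for i, s, and q rather than their means; requesting TECH4 gives the model-estimated means back.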
 Anthony Mancini posted on Tuesday, March 10, 2009 - 1:34 pm
As usual, thank you indeed.

I am still getting a curious estimate for the intercept means when I run the LGMM with the ISQ regressed on the covariates.

It is curious because the plot produces very different estimates for the intercept means than the output indicates. For example, as mentioned above, the plot shows a mean I value of -.35 for the low class whereas the output has a mean I value of .99. It occurs to me that because I am using Monte Carlo integration, perhaps the means are being translated into the logit scale. Curiously, if you subtract the threshold estimate for the lone categorical predictor (1.33) from the mean estimates of the intercept for each class, it appears to result in exactly the mean values depicted in the figure. I am unsure what is causing this or how I might produce the correct mean estimates for the intercept. I hope this is making sense. Thanks again in advance.
 Anthony Mancini posted on Tuesday, March 10, 2009 - 2:09 pm
I think I now understand that the mean estimates are not included in the output, only the estimates of the intercepts for the regressions are included. Therefore, the above post is obviated. Sorry about that.
 linda beck posted on Wednesday, October 14, 2009 - 7:08 am
Hi there!
I have a binary grouping variable "bg" and a binary outcome "bo". The two bg groups are approximately equivalent with respect to the binary outcome "bo" at T1 (I have T1-T5 data). Now, is it possible to have a slight but significant effect of bg on the latent class "C" of "bo" in a two-class LGMM (together with other covariates), but no effect of bg on either class's intercept "iu" (centered at T1)?
Background: The two LGMM classes of "bo" differ slightly (but significantly) in their intercepts but more markedly in their linear and quadratic growth (flat vs. steeply increasing). I wonder whether it is possible to have a weak effect of bg on class membership while having no, or mutually balancing, effects on the two class intercepts (given the T1 equivalence of "bo" across the bg groups, together with the slightly higher iu mean of one LGMM class).
 Bengt O. Muthen posted on Wednesday, October 14, 2009 - 5:32 pm
The fact that bg influences c doesn't mean that [iu] is different across the bg groups. The c classes may be defined by [su qu] differences across class rather than [iu] differences.
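One way to see this in syntax: the iu mean can even be held equal across classes while the su and qu means remain class-specific, so the classes are separated only by growth shape. A sketch of the class-specific part of the MODEL command, using the poster's factor names (the equality label (1) is illustrative):

%c#1%
[iu] (1);     ! iu mean constrained equal across classes
[su qu];      ! class-specific slope and quadratic means
%c#2%
[iu] (1);
[su qu];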