Single-class LGMM vs. LGM & correlated errors
 Michael J. Zyphur posted on Monday, June 06, 2005 - 6:06 pm
Dear Profs. Muthen,
I have two questions:

1) I fitted an LGM and an identical single-class LGMM. The single-class LGMM had significantly lower AIC/BIC values and slightly different parameter estimates. Can you please tell me why? Also, if I'd like to compare an LGMM to an LGM, would it be advisable to build my initial model as a single-class LGMM to keep future model comparisons within a similar model estimation technique?

2) I have correlated errors in a censored LGM and Mplus works fine. I have correlated errors in an uncensored LGMM and Mplus works fine. However, I cannot specify correlated errors in a censored LGMM. Can you please tell me why? Also, because of this, when fitting an initial model as an LGM (for example, to identify an appropriate error structure) before moving toward an LGMM, should I just skip that step if I'm going to have to drop the correlated errors in my censored LGMM anyway?
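For concreteness, the kind of censored LGM I mean looks something like this (variable names are just placeholders):

  VARIABLE:  NAMES = y1-y5;
             CENSORED = y1-y5 (b);           ! outcomes censored from below
  MODEL:     i s | y1@0 y2@1 y3@2 y4@3 y5@4;
             y1 WITH y2;  y2 WITH y3;        ! adjacent residual covariances
             y3 WITH y4;  y4 WITH y5;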

Thanks!
 bmuthen posted on Monday, June 06, 2005 - 6:45 pm
1. You probably have covariates in the model. LGMM is a mixture model where the likelihood is evaluated for y|x, not including the marginal x part. This difference between the LGM and LGMM approaches does not have any substantive effect. LogL and AIC/BIC values will be on different scales in the two approaches, but you get the same comparison across models within each approach. So, yes, if you want to compare LGM and LGMM indices to each other, do it all in the mixture track. The parameter estimates should be exactly the same, at least if you sharpen the convergence criteria.
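For example, a single-class model in the mixture track can be set up along these lines (hypothetical variable names):

  VARIABLE:  NAMES = y1-y5 x;
             CLASSES = c (1);                ! one latent class
  ANALYSIS:  TYPE = MIXTURE;
  MODEL:     %OVERALL%
             i s | y1@0 y2@1 y3@2 y4@3 y5@4;
             i s ON x;                       ! likelihood is evaluated for y given x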

2. With LGMM, that is, when doing mixture modeling, ML estimation for censored outcomes has to be handled by numerical integration; the computations are kept feasible by requiring conditional independence of the outcomes given the growth factors. Residual correlations violate that conditional independence, and to take them into account you would have to add extra dimensions of integration, which would slow computations down. So that is not directly allowed. But you could add correlated residuals if you do it in a "disciplined" way, for example having a single "methods factor" influencing the outcomes at all time points. Such a methods factor would correlate the residuals and add only one dimension of integration. If the residual correlations are important, I would not ignore them. Be sure to fix the mean of such a factor @0.
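A minimal sketch of such a methods factor, inside the %OVERALL% part of the mixture model (hypothetical names; identification against the growth factors should be checked, as the follow-up below cautions):

  %OVERALL%
  i s | y1@0 y2@1 y3@2 y4@3 y5@4;
  m BY y1-y5;          ! methods factor; first loading fixed at 1 by default
  [m@0];               ! fix the factor mean at zero
  m WITH i@0;          ! keep it uncorrelated with the growth factors
  m WITH s@0;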
 Michael J. Zyphur posted on Monday, June 06, 2005 - 10:12 pm
That makes a lot of sense, thanks for the response.

Creating a factor for every pair of error variances (e.g., 1 and 2, 2 and 3, etc.) really does ruin the computer's ability to compute a solution at any reasonable speed. However, how good a solution is a single factor informed by all error variances (I know this would likely vary greatly across datasets)? I've never seen a discussion of a single-factor error solution for LGMs, have you?

Also, aside from fixing such a factor's mean @0, is it safe to assume that fixing its variance to 1 and freeing the first loading (rather than fixing it at 1) is a good idea?
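In Mplus terms, I mean something like (hypothetical names):

  m BY y1* y2-y5;      ! free the first loading instead of fixing it at 1
  m@1;                 ! fix the factor variance to 1
  [m@0];               ! and the factor mean to 0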

Thanks again.
 bmuthen posted on Tuesday, June 07, 2005 - 6:27 pm
On second thought, the single methods factor idea for correlating residuals was probably too hasty a suggestion. It doesn't allow for correlations of different signs, doesn't allow for diminishing correlations as the time distance grows, and probably competes with the intercept factor. So if you find residual correlations in the LGM, hopefully you don't have significant ones for all adjacent time points. Or you can check the sensitivity of the LGM results to leaving them out. Or add time-varying covariates that may explain away a large portion of them.
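To illustrate the last alternative, time-varying covariates enter as direct effects on the outcomes at each occasion, something like (a1-a5 hypothetical):

  i s | y1@0 y2@1 y3@2 y4@3 y5@4;
  y1 ON a1;  y2 ON a2;  y3 ON a3;
  y4 ON a4;  y5 ON a5;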