1) Is it possible to estimate LCA with a distal outcome in a single step in Mplus (along the same lines as one does with LCGA or GMM)? (Are there examples of this anywhere?)
2) Would I obtain the same results if I used a two-step procedure whereby, first, I estimated conditional latent classes with covariates (while also identifying my distal outcomes using the AUXILIARY command), exported this to another statistical package like Stata, and second, ran regular regressions of my distal outcomes on class probabilities in Stata?
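For the export step of a procedure like that, one option (a sketch; the file name is hypothetical) is to have Mplus save the posterior class probabilities, which can then be read into Stata:

```
SAVEDATA:
  FILE = classprobs.dat;
  SAVE = CPROBABILITIES;  ! adds posterior class probabilities and most likely class
```

The saved file also contains the analysis variables, so the distal outcomes can be merged and regressed on the probabilities directly in Stata.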
I am running an LCA with several covariates and distal outcomes. I would like to control for the distal outcome scores in the fall, so I have specified "outcome on fallscore" in my overall model. Because of this, I get intercepts for my distal outcomes instead of means. I have two questions:
1) What is the best way to request the class means for the outcomes? TECH7?
2) What is the best way to compare my 3 classes on the distal outcomes? I have used MODEL TEST but isn't that comparing the class intercepts and not the means in my particular model?
I'm aiming to use LCA to predict both categorical and continuous distal outcomes. From what I see in Example 8.6, it seems like the current recommendation is to basically do the equivalent of including it as a covariate (c on x). Is that the correct interpretation?
If so, when I do that, the sizes of my 3 classes shift more than I'd expect or like them to (even when I've specified the start values for each of the latent class indicators). My understanding is that this indicates that the model may be unstable or not replicable, and that the number of classes may not be correct. The 2- and 3-class models have very similar fit indices:
The 3-class model is a better fit theoretically/substantively. What, then, is the best way to estimate the association between the latent classes and the distal outcome? Can I trust the results I get when the classes are shifting?
The key is that u is on the NAMES list, so it is an analysis variable. If not all variables on the NAMES list are analysis variables, u would need to be on the USEVARIABLES list as well. The same holds for a continuous distal outcome. It needs only to be on NAMES, or on both NAMES and USEVARIABLES.
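A minimal sketch of that setup (variable names hypothetical), where u is the distal outcome and id is a variable in the data file that is not analyzed:

```
VARIABLE:
  NAMES = id u1-u4 u;       ! everything in the data file
  USEVARIABLES = u1-u4 u;   ! id is dropped, so u must be listed here
  CATEGORICAL = u1-u4 u;
  CLASSES = c(3);
```

If every variable on NAMES were an analysis variable, the USEVARIABLES line could be omitted entirely.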
Keri Jowers posted on Thursday, September 22, 2011 - 5:49 pm
Right, I've got the NAMES and USEVARIABLES piece. Perhaps a better way of posing my question is this: when the continuous distal is on the USEVARIABLES list and I then set the start values for my LC indicators (not the u) using my previously obtained thresholds to try to preserve my classes, the output provides class-specific means for the distal, based on the re-estimated model I mentioned above. Are these means and their associated p-values intended to be interpreted as the association between the latent class and the distal? This seems counterintuitive to me.
The relationship between the categorical latent variable and the distal is reflected in how the means of the distal vary across classes. The question you want to ask is whether these means are the same across classes or different. You can use MODEL TEST to answer this question.
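A minimal sketch of that MODEL TEST comparison for a 3-class model (variable names hypothetical): label the class-specific means of the distal y and test their equality jointly:

```
MODEL:
  %c#1%
  [y] (m1);
  %c#2%
  [y] (m2);
  %c#3%
  [y] (m3);
MODEL TEST:
  m1 = m2;
  m2 = m3;    ! joint Wald test that all three class means are equal
```

Pairwise comparisons can be obtained by listing only one equality at a time in MODEL TEST.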
Thanks so much! One final question -- how concerned should I be that the class sizes change drastically when compared to when the distal is not included in the USEVAR statement? Not only are the class proportions very different (below), but the sample proportions within each class are very different:
We have estimated a six-class solution in LPA and are interested in using the LPA class membership (which was estimated at Time 1) to predict a distal outcome (at Time 2) while controlling for various attributes at Time 1. I recently attended a conference and heard a presentation that discussed "distal-as-consequence," in which class membership is treated as missing data in the regression of the distal outcome on class membership. Multiple imputations based on the posterior class probabilities (obtained from the estimated growth mixture model without the distal outcome included) are then used to estimate the association between class membership and distal outcomes.
I have searched the Mplus archive and reviewed papers posted as well as the manual for more information/examples on this. Thus far, I have not found any. Any suggestions? Thank you.
Section 4 of this paper on our web site discusses plausible values for latent class variables obtained by multiple imputation and how those plausible values can be used:
Asparouhov, T. & Muthén, B. (2010). Plausible values for latent variables using Mplus. Technical Report.
C. Gantz posted on Tuesday, February 03, 2015 - 1:48 am
I have read with great interest the many posts on using LCA to predict distal outcomes. I understand this is a complex topic.
In my analysis, I would like to use a 3-class solution at T1 to predict a variety of continuous T2 outcomes. I would additionally like to control for the T1 values of those T2 outcomes. In the first step of this analysis, a three-class solution was best based on AIC, BIC, and the Lo-Mendell-Rubin test, with entropy of .89. These three classes also make a lot of sense theoretically.
My question is as follows:
I understand that the one-step approach is often preferable here. However, I read the Clark & Muthén (2009) piece, and it seems that when entropy is high, it is acceptable to use the most likely class membership. When I included the outcome in the class estimation, this significantly changed the formation of the latent classes in a way that no longer made theoretical sense. Am I right to interpret the Clark & Muthén (2009) paper as saying that in this case, given the high entropy of my 3-class results, I would be justified in assigning most likely class membership and using it in follow-up analyses?
I think you can refer to the paper below for this logic:
Asparouhov, T. & Muthén, B. (2014). Auxiliary variables in mixture modeling: Three-step approaches using Mplus. Structural Equation Modeling: A Multidisciplinary Journal, 21:3, 329-341. The posted version corrects several typos in the published version. An earlier version of this paper was posted as web note 15. Appendices with Mplus scripts are available here.
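In current Mplus versions, the automatic three-step procedures from that paper are requested through the AUXILIARY option. For a continuous distal outcome, a sketch might look like this (variable names hypothetical):

```
VARIABLE:
  NAMES = u1-u5 y;
  USEVARIABLES = u1-u5;
  CATEGORICAL = u1-u5;
  CLASSES = c(3);
  AUXILIARY = y (DU3STEP);  ! automatic 3-step distal-outcome procedure
ANALYSIS:
  TYPE = MIXTURE;
```

Because y enters only through AUXILIARY, it does not influence the formation of the latent classes, which addresses the class-shifting concern raised above.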
Yueqi Yan posted on Tuesday, April 28, 2015 - 3:42 am
Is there any way to examine effect size when comparing the mean differences of the distal outcomes among classes?
You can divide the mean difference by the standard deviation of the distal.
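If the class-specific means and the variance of the distal are labeled in the MODEL command, that standardized difference can be computed directly in MODEL CONSTRAINT (a sketch, names hypothetical; the variance is held equal across classes here by using the same label):

```
MODEL:
  %c#1%
  [y] (m1);
  y (v);       ! variance of the distal, equal across classes via shared label
  %c#2%
  [y] (m2);
  y (v);
MODEL CONSTRAINT:
  NEW(d);
  d = (m1 - m2) / SQRT(v);   ! Cohen's d-type effect size
```

The NEW parameter d appears in the output with a standard error, so its significance can be inspected as well.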
Yueqi Yan posted on Wednesday, April 29, 2015 - 6:10 pm
Thanks Bengt! So how can I obtain the standard deviation of the distal? My distal is a latent variable. I used the fixed-factor-loading method and could not directly obtain the variance of the latent distal from the output. The output gives only a standard error for the mean difference and a residual variance of the distal outcome for each latent class. Should I run a separate model without the latent classes to see the variance of the distal outcome? Thanks again!