I ran a GMM with a distal outcome with three categories. The model ran well and I received the expected output, including the latent class indicator model results in probability scale. Now I have computed odds ratios. How can I tell whether an odds ratio comparing a given class to the reference class is significant? Where would I get the standard errors to compute a confidence interval? I'd appreciate your help greatly, as this is a question I anticipate a reviewer asking.
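For reference, once you have a log-odds estimate and its standard error, a Wald confidence interval for the odds ratio is obtained by exponentiating the endpoints of the interval on the log scale; the OR is significant at the 5% level if the interval excludes 1. A minimal sketch with hypothetical values:

```python
import math

def or_ci(log_odds, se, z=1.96):
    """Point estimate and Wald 95% CI for an odds ratio,
    from a log-odds estimate and its standard error."""
    lo = math.exp(log_odds - z * se)
    hi = math.exp(log_odds + z * se)
    return math.exp(log_odds), lo, hi

# hypothetical estimate: log-odds difference of 0.80 with SE 0.25
est, lo, hi = or_ci(0.80, 0.25)
# significant at the 5% level because the CI (lo, hi) excludes 1
```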
It seems that many people use LCA, and there are studies that include covariates and distal outcomes in the same model together with the LCA. In different papers you can find various interpretations of such models (mediation, moderation, risk factors and outcomes). Basically it looks like a mediation model, even though it is hard to interpret that way. How would you interpret such a model, if there is a way to name it? Say you have a model where children's temperament at age 3 (the covariate) predicts parents' behavior (that would be the LCA part), and both of them predict a distal outcome, satisfaction with marriage at age 35. So children's temperament is the covariate and satisfaction with marriage is the distal outcome. In this model I have 3 classes of parents' behavior, and for each class I estimate the distal outcome. How should I interpret this model, especially if the link between the predictor and the latent class is significant and, in the same class, the mean value of the predictor is significantly higher compared to the other classes? Is it mediation? And how should it be interpreted if there is a significant link between the covariate and the distal outcome for only one class but not the others? I would be grateful for your advice.
In a model where the covariate x influences the categorical latent variable c which then influences a distal outcome u and you are interested in the relationship between x and u, I would call this mediation. This type of model has not been written about much in the latent class literature.
I want to use a binary distal outcome to assess the predictive validity of my LCA results (4 classes; 6 categorical indicators).
I want to fix the class-specific item probabilities (i.e., fix the thresholds at the values from the 4-class model without covariates) to ensure the means are estimated based on the classes I already have (I am referring to the strategy presented in K. Nylund's dissertation).
My model runs, but I am not sure of my syntax. Here is what I have: I fixed the thresholds for the 6 categorical variables and did nothing for the distal outcome. The distal outcome just shows up in the USEVAR ARE and CATEGORICAL statements.
I see. This is one way of doing this. I would recommend not fixing the classes in this way. Although substantively you may think of the distal outcome differently than the latent class indicators, statistically the distal outcome is another latent class indicator. If this dramatically changes the classes, the reason for this should be investigated.
Hi again, I have an interpretation question. My 4 latent classes represent health status. I have age and gender as covariates, and my distal outcome is mortality. In interpreting the ORs for my distal outcome, is it correct to say that I've controlled for age and gender? For instance, the OR comparing class 1 and class 2 is 4.5. Does it control for the age and gender differences between these classes, as a multinomial regression would?
It may sound like a Stats 101 question, but I am not sure how Mplus handles the covariates in this case.
I'm using multiple distal outcomes in a GMM, allowing the means (thresholds) of the two distal outcomes to vary across classes as follows:
%c#1%
[anychron$1];
[welfare$1];
When looking at the effects of class membership on the distal outcomes (in the output), do the effects of class membership on one outcome control for the effects on the other outcome (i.e. are they adjusted effects)?
I'm doing an LCA with a binary distal outcome and I'm not sure I understand how to interpret the output. The output excerpt below is from the 'Latent Class Odds Ratio Results' section. Am I correct in stating, based on this excerpt, that compared to Class 3, Class 1 is 2.179 times more likely to have an F1EVERDO score greater than category 1 than they are to have a score in category 2?
It sounds like you have a mixture model for which a binary distal has a threshold parameter estimate for each class. The thresholds are related to the probability of the distal. The threshold differences can be tested for significance and so can the probabilities - you would use Model Constraint to do this.
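As a side note on how the thresholds relate to the probabilities: under the logit parameterization Mplus uses for a binary item, the class-specific probability is P(u=1 | class) = 1 / (1 + exp(threshold)), so the probabilities and the odds ratio between two classes can be recovered directly from the thresholds. A sketch with hypothetical threshold values:

```python
import math

def prob_from_threshold(tau):
    """P(u=1 | class) under the logit parameterization logit P = -tau."""
    return 1.0 / (1.0 + math.exp(tau))

# hypothetical class-specific thresholds for a binary distal outcome
tau1, tau3 = -0.50, 0.75
p1, p3 = prob_from_threshold(tau1), prob_from_threshold(tau3)

# odds ratio comparing class 1 to class 3;
# algebraically this equals exp(tau3 - tau1)
odds_ratio = (p1 / (1 - p1)) / (p3 / (1 - p3))
```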
Thank you so much for your explanation - that the thresholds, and thus the probability of the distal outcome, vary significantly between the two classes. Would Model Constraint tell me only that the two classes have significantly different probabilities, or would it also tell me which class is higher (and the magnitude)?
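For context, the probability difference that Model Constraint would compute can be sketched by hand: the sign of the difference shows which class is higher, its size gives the magnitude, and a delta-method standard error gives the significance test. A rough sketch with hypothetical thresholds and SEs, assuming (for simplicity) that the two threshold estimates are uncorrelated:

```python
import math

def prob(tau):
    # P(u=1 | class) under the logit parameterization logit P = -tau
    return 1.0 / (1.0 + math.exp(tau))

# hypothetical class-specific thresholds and their standard errors
tau1, se1 = -0.50, 0.20
tau2, se2 = 0.75, 0.30

p1, p2 = prob(tau1), prob(tau2)
diff = p1 - p2  # sign shows which class has the higher probability

# delta method: d p / d tau = -p (1 - p); covariance assumed zero here
se_diff = math.sqrt((p1 * (1 - p1) * se1) ** 2 + (p2 * (1 - p2) * se2) ** 2)
z = diff / se_diff  # Wald z; |z| > 1.96 means significant at the 5% level
```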