Sung Kim posted on Tuesday, October 02, 2007 - 1:41 am
My CFA mixture model without covariates preferred four classes. However, after the factors and the latent class variable were regressed on three covariates, the fit indices indicated that a 2-class model is preferred. Is this because "factor mixture models are estimated conditional on covariates (Lubke & Muthen, 2005, p. 31)"? If that's true, do I need to try an unconditional model first without covariates and then a conditional model with covariates? Or do I need to include the covariates from the beginning and fit only a conditional model?
One more question: when calculating factor mean difference across classes, which one should be used: unstandardized or standardized values?
I would first settle on the number of classes without using covariates. Then I would add covariates but not be surprised if the needed number of classes changes. There shouldn't be a change unless there are direct effects from covariates to the outcomes. If you need such direct effects, the number of classes should be determined when such direct effects are included.
Unstandardized values should be used.
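In Mplus terms, such a direct effect is just an additional ON statement for the item in the overall model. A minimal sketch, assuming hypothetical names (one factor f, items y1-y45, covariates gender, race, clinic):

```
MODEL:
  %OVERALL%
  f BY y1-y45;
  f ON gender race clinic;   ! factor regressed on covariates
  c ON gender race clinic;   ! latent class regressed on covariates
  y42 ON gender;             ! direct effect from a covariate to an item
```

If a direct effect like y42 ON gender is needed, class enumeration should be redone with it in the model.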
Sung Kim posted on Thursday, October 04, 2007 - 2:29 pm
If there are direct effects (Path 5 in the article) like that, it means that "the measurement model ... is not the same across classes" (Lubke & Muthen, 2005, p. 29). Is that right?
May I ask about how to detect the direct effects from covariates to the outcome variables? For example, I have three covariates of gender, race, and clinical status and 45 items measuring either one or three factors related to psychological well-being. What I'm interested in is how to know which items are regressed on which covariates. Could you let me know?
You can then ask for modification indices and see where you may have measurement non-invariance.
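Modification indices are requested in the OUTPUT command. A sketch of a single-group CFA with covariates (variable names are placeholders), where large MIs for statements like y42 ON gender would flag measurement non-invariance:

```
MODEL:
  f1 BY y1-y15;
  f2 BY y16-y30;
  f3 BY y31-y45;
  f1-f3 ON gender race clinic;
OUTPUT:
  MODINDICES(10);   ! print modification indices of 10 or larger
```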
Sung Kim posted on Tuesday, October 09, 2007 - 8:22 am
Since modification indices suggested a direct effect from the three covariates to one outcome variable (i.e., y42), I modeled that in the next step. However, although the fit improved (the loglikelihood increased, and AIC, BIC, etc. decreased), the regression coefficients linking the covariates and the outcome variable were much smaller than the standardized E.P.C.s for the direct effects estimated in the previous step, and were nonsignificant.
Is it expected or did I do something wrong?
Also, which should be used for a standardized E.P.C.: Std or StdYX?
The modification indices reflect what would happen if one parameter is changed, not three. I would look at the change in chi-square rather than at the E.P.C. values.
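The two models are nested, so the improvement from adding the three direct effects can be tested with a likelihood-ratio chi-square, where L0 is the model without the direct effects, L1 the model with them, and q0 and q1 their numbers of free parameters:

```latex
\Delta\chi^{2} = 2\left(\log L_{1} - \log L_{0}\right), \qquad df = q_{1} - q_{0}
```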
Sung Kim posted on Tuesday, October 09, 2007 - 9:18 am
I did look at the MIs; the values for three direct effects were greater than 100, and some were over 300. In addition, the standardized (stdYX) EPCs for them were over .50, and one EPC was close to 1. That's why I included them in the model at the next step.
I expected the regression coefficients in the model that included the direct effects to be close to the EPCs estimated in the previous step, but the coefficients were much smaller. Why is that?
A model with one class is the same as a regular model. If you don't get the same results, something must be different. It may be that you don't have means in the regular model but you do in the mixture model.
Sung Kim posted on Friday, October 19, 2007 - 1:19 pm
I included "TYPE = MEANSTRUCTURE" in the regular model as you suggested, but I got the same loglikelihood value, which is still different from the single class mixture model. I'd like to know why there is a difference.
However, I see your point that means are included in a mixture model but not in a regular CFA model.
In a conditional model, you are fixing the intercept to zero.
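That is, once the factor is regressed on covariates, Mplus fixes the factor intercept at zero by default for identification. Written out explicitly (variable names are placeholders):

```
MODEL:
  f BY y1-y5;
  f ON x1-x3;
  [f@0];   ! factor intercept fixed at zero for identification
```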
Sung Kim posted on Sunday, November 11, 2007 - 2:20 pm
In a conditional model, comparing factor means across classes is not free from error, that is, from the factor residuals. Therefore, if I want to compare classes on something error-free, I need to compare them on both the intercepts and the regression weights of the covariates. Is that right?
Jon Elhai posted on Friday, November 07, 2008 - 9:38 am
Linda/Bengt, I'm having trouble specifying a CFA mixture model, using 4 factors for 17 observed continuous items, and testing 2 classes. The syntax from the UG's example 7.17 shows this for a one-class, one-factor model:

%OVERALL%
f BY y1-y5;
%c#1%
[f*1];
But if I add more factors and a second class, I'm wondering what additional syntax to write under %c#1% and %c#2% - the above syntax gives the single factor's mean a starting value of 1 in class 1. Do you have any examples?
Jon Elhai posted on Friday, November 07, 2008 - 9:50 am
To follow-up on my previous message. I just realized the example 7.17 does test two classes but specifies that class 1 has a fixed factor mean. So if I intend my model to be the same across classes, do I just specify my factor model under the %OVERALL% syntax, with no specification of individual classes below it? When doing this, I am receiving some error messages.
This implies class-invariant loadings, intercepts, residual variances, and factor variances. The factor means change over classes with one class having the mean fixed at zero as the default for identification purposes.
If that doesn't help and the error message for your problematic run doesn't help, please send your license number and the input, data, and output for the problematic run that you mention.
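A two-class version of UG example 7.17 extended to four factors, with everything but the factor means held equal across classes, could be sketched as follows (the assignment of the 17 items to factors is a placeholder):

```
VARIABLE:
  NAMES = y1-y17;
  CLASSES = c(2);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  f1 BY y1-y5;
  f2 BY y6-y10;
  f3 BY y11-y14;
  f4 BY y15-y17;
  ! no class-specific sections: loadings, intercepts, residual
  ! variances, and factor variances are class-invariant; factor
  ! means are free in one class and fixed at zero in the other
```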
I am running a CFA mixture model with 3 factors with 2 items on the first factor, 3 on the second and 3 on the third. The factors are non-invariant based on a binary variable (0 and 1). I decided to first determine the number of classes without the binary covariate. I have allowed factor loadings, factor variances, residual variances to vary across classes as I have no reason to believe they are invariant across classes. I have only set the number of classes to 2.
I continue to get the message to increase the number of miterations, but I am now at 50,000 miterations and still getting the message. Variances of the indicators are less than 2 so they aren't way out.
How many miterations should I realistically have for this?
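For reference, the iteration limit is set in the ANALYSIS command; increasing the number of random starts is often more effective than raising MITERATIONS further (the values below are illustrative, not recommendations):

```
ANALYSIS:
  TYPE = MIXTURE;
  MITERATIONS = 10000;
  STARTS = 400 100;   ! 400 initial-stage starts, 100 final-stage optimizations
```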
See the following article which is available on the website:
Clark, S.L., Muthén, B., Kaprio, J., D’Onofrio, B.M., Viken, R., Rose, R.J., Smalley, S. L. (2009). Models and strategies for factor mixture analysis: Two examples concerning the structure underlying psychological disorders.
Fiona Shand posted on Wednesday, August 11, 2010 - 9:24 pm
I have identified a 2 class, 1 factor mixture model as the best fit to my data. I have then run the model again with covariates. A reviewer has commented that he/she is skeptical of such post hoc comparisons when not weighted by group membership probabilities. I had thought that FMM with covariates incorporated posterior class membership probabilities when calculating the regression model. Can you please tell me if this is so? Thanks in advance.
It sounds like the reviewer's comment relates to doing the analysis in two steps, that is, saving most likely class membership and regressing it on a set of covariates. It sounds like you are doing the analysis in one step in which case I don't understand the critique.
Adam Meade posted on Monday, September 19, 2011 - 12:47 pm
I have a factor mixture model with 5 continuous indicators represented by a single factor. I have two latent classes. I need to compare 9 covariates to see which best differentiate those two latent classes. Some of my covariates are highly correlated. Should I compare these covariates simultaneously in a single model, or in different models given the collinearity? Also, by what criterion is it best to evaluate the covariates in terms of differentiating classes? Thank you!
I don't know that there is any best strategy for this. You could first see which single covariate is most strongly related to the latent classes. Then work with pairs of covariates, etc. "Most strongly related" needs to be defined, which is easy with 2 classes (which covariate changes the class probability the most when, say, it is changed by one SD), but with more than 2 classes a covariate can be good for discriminating between two of the classes but not between two others.
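With two classes, the regression of c on a covariate is a logistic regression, so "most strongly related" can be quantified as follows (a sketch: x_k is covariate k with mean \bar{x}_k and standard deviation s_k, other covariates held at their means):

```latex
P(c=1 \mid x_k) = \frac{\exp(\alpha + \beta_k x_k)}{1 + \exp(\alpha + \beta_k x_k)},
\qquad
\Delta_k = P(c=1 \mid \bar{x}_k + s_k) - P(c=1 \mid \bar{x}_k)
```

The covariate with the largest |Delta_k| is then the one that shifts the class-1 probability most per SD.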