Tim Stickle posted on Wednesday, May 27, 2009 - 9:52 am
In one of Bengt's papers, within-class item R-squares for a factor mixture model were labeled "reliabilities." Such R-squares in related models (e.g., CFA) are typically considered validity coefficients. Did I misread the paper or are within-class item R-squares interpreted differently for FMM? I am particularly interested in the case of factors with continuous indicators. If they are considered reliability coefficients, would you kindly provide a reference or brief explanation?
There is no difference between CFA and FMM in this regard.
Consider a 1-factor model with
y_j = lambda_j*eta + e.
The R-square for y_j is
lambda_j^2*V(eta) / T,
where T is the total variance,
T = lambda_j^2*V(eta) + V(e).
The R-square is the reliability of y_j by the conventional definition, namely the ratio of the variance due to the factor to the total variance.
If the total variance is 1 and the factor variance is 1, this is also the square of the correlation between the item and the factor, where the correlation is sometimes called validity (although I don't place much confidence in such a definition of validity).
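The formula above can be checked numerically. Here is a minimal sketch; the loading, factor variance, and residual variance values are hypothetical, not taken from any fitted model:

```python
# Within-class R-square (reliability) of item y_j in a 1-factor model,
# y_j = lambda_j*eta + e, following the definition in the post above.

def item_r_square(lam, factor_var, resid_var):
    """R-square = lambda_j^2 * V(eta) / (lambda_j^2 * V(eta) + V(e))."""
    explained = lam ** 2 * factor_var
    return explained / (explained + resid_var)

# Hypothetical values: loading 0.8, factor variance 1, residual variance 0.36.
r2 = item_r_square(lam=0.8, factor_var=1.0, resid_var=0.36)
print(round(r2, 2))  # 0.64
```

With a standardized factor (V(eta) = 1) and total variance 1, the R-square of 0.64 is also the square of the item-factor correlation (0.8), matching the remark above.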
Tim Stickle posted on Thursday, May 28, 2009 - 12:54 pm
Hello, I'm running some factor mixture models with continuous observed factor indicators and no observed predictors in Mplus. The R-square values printed in the standardized results look to be within-class R-squares, i.e., the proportion of variance in the observed variables accounted for by the latent factor. What I'd like to know in addition is how to work out the total variance accounted for by both the latent factor and the categorical latent class variable. I thought this might just be the weighted average of the within-class R-squares, but that doesn't seem right: might the latent class variable not account for variance that is not accounted for by the factors? Anyway, any help appreciated. Nick
Yes, the R-square is the within-class R-square. And yes, the variation in the outcomes is also due to variation in the latent class variable. I can't recall seeing a summary of both sources; maybe a topic for exploration.
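One possible way to combine the two sources, not taken from the posts above, is the law of total variance: Var(y) = E[Var(y | class)] + Var(E[y | class]). Within each class, the factor explains lambda^2*V(eta) of the within-class variance; the class variable contributes the between-class variance of the item means. A sketch with entirely hypothetical class proportions and parameters:

```python
# Sketch: total proportion of variance in one item explained jointly by
# the factor and the latent class variable, via the law of total variance:
#   Var(y) = sum_k p_k * Var(y | class k)     (within-class part)
#          + sum_k p_k * (mu_k - mu_bar)^2    (between-class part)
# All numbers below are hypothetical, not Mplus output.

p = [0.6, 0.4]             # class proportions
mu = [0.0, 1.0]            # class-specific item means
lam = [0.8, 0.5]           # class-specific factor loadings
factor_var = [1.0, 1.0]    # class-specific factor variances
resid_var = [0.36, 0.75]   # class-specific residual variances

within_var = [l ** 2 * fv + rv for l, fv, rv in zip(lam, factor_var, resid_var)]
mu_bar = sum(pk * mk for pk, mk in zip(p, mu))

total_var = (sum(pk * wv for pk, wv in zip(p, within_var))
             + sum(pk * (mk - mu_bar) ** 2 for pk, mk in zip(p, mu)))

# Only the within-class residual variance is unexplained; the rest
# (factor variance within classes + mean differences between classes)
# is attributed to the factor and the class variable together.
unexplained = sum(pk * rv for pk, rv in zip(p, resid_var))
total_r2 = 1 - unexplained / total_var
print(round(total_r2, 3))  # 0.584
```

Note this total R-square (0.584) differs from the weighted average of the within-class R-squares (0.6*0.64 + 0.4*0.25 = 0.484), because the class variable also explains between-class mean differences, consistent with the point raised in the question.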
I have just a quick follow-up question related to this topic. I have a simple LPA model with one DV regressed on one IV, along with four indicators of class membership, so that the regression coefficients, slopes, R-squares, etc. are allowed to vary across latent classes.
When interpreting the LPA output across classes, if I have non-significant R-square values for a particular latent class, should I ignore the other parameter estimates, particularly the slope, even if they are significant? Does this mean that the model does not explain a significant portion of variance in the criterion for that class and therefore only the classes for which the R-square values are significant should be interpreted any further?