Anonymous posted on Wednesday, August 10, 2005 - 12:10 am
Hi, I've just started using Mplus. I'm testing a CFA model and I'd like to fix the CORRELATION between the two factors to one. "f1 WITH f2@1" seems to refer to the covariance, which causes a bad model fit in my case. How can I refer to the correlation between the two factors?
If you free the first factor loadings of f1 and f2 and set the metric of the factors by fixing the factor variances to one (f1@1; f2@1;), then f1 WITH f2@1; will refer to a correlation.
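A minimal input sketch of this setup (the indicator names y1–y6 are hypothetical placeholders, not from the original post):

```
MODEL:
  f1 BY y1* y2 y3;   ! asterisk frees the first loading
  f2 BY y4* y5 y6;
  f1@1; f2@1;        ! fix factor variances to one
  f1 WITH f2@1;      ! now fixes the correlation at one
```

With both factor variances fixed at one, the covariance parameter is on a correlation metric, so the @1 constraint now fixes the correlation rather than the covariance.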
Anonymous posted on Wednesday, August 10, 2005 - 8:54 am
Thank you very much !!!
anonymous posted on Tuesday, January 16, 2007 - 8:29 pm
I have performed a CFA with 7 factors. These factors represent dimensions of a larger construct, which I am not testing; I want to say that these 7 factors represent multiple dimensions of that construct. The CFA results show that the 7 factors are correlated. I am then using them to predict a binary outcome. In my logit/probit model, should I force these factors not to correlate? What would happen if I did?
If, besides factor 1 and a regression on that factor, I add a second factor that correlates with factor 1 but is not included in the regression, the regression coefficients change. This seems logical, but the exact interpretation is not clear to me. Could you clarify what happens there?
If I save the factor scores and use them in the exact same regression in SPSS, I get slightly different coefficients, p-values, and explained variance, with some variables gaining a significant effect as a result. Do you know what causes these differences? (I use WLSMV, as the factor items are categorical.)