

How to constrain the two factors to b... 


Yi Ou posted on Sunday, October 17, 2010  4:32 pm



I use CFA to test the discriminant validity of a five-factor measure. One common way is to combine two factors and show that the five-factor model fits better than the resulting four-factor model. Another way (which my professor suggests) is to equate one factor with another, meaning constraining the covariance of the two factors to be equal to 1. However, I tried this and the model didn't converge. What should I do? Below is the syntax I used:

    title: CFA self 5 factor model
    data: file is cfa data.dat;
      format is free;
      type is individual;
    variable: names are id hum1-hum34 lgo1-lgo5 cse1-cse12 val1-val21
      mod1-mod13 narc1-narc14 sd1-sd10 huma1-huma34 humb1-humb34;
      missing = all (9);
      usevariables are hum1 hum4 hum5 hum7-hum12 hum16-hum18
      hum23-hum25 hum27-hum29 hum31 hum32 hum34;
    analysis: type = general;
      estimator = ML;
      iterations = 10000;
    model: hum_s by hum1 hum4 hum5 hum7 hum8;
      hum_o by hum9-hum12;
      hum_l by hum16-hum18;
      hum_p by hum23 hum24 hum25;
      hum_c by hum27 hum28 hum29 hum31 hum32 hum34;
      hum_l with hum_p@1;
    output: sampstat modindices (0) standardized tech1;


Linda K. Muthen replied:

Fixing the factor covariance at one most likely makes the model fit poorly, resulting in convergence problems. Instead, use MODEL TEST to test whether the covariance is one. See the user's guide.
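A minimal sketch of this suggestion for the hum_l/hum_p pair from the syntax above: label the covariance in MODEL rather than fixing it with @1, then use MODEL TEST for a Wald test of whether it equals one (the label c_lp is illustrative, not from the thread):

```
model:      hum_l by hum16-hum18;
            hum_p by hum23 hum24 hum25;
            hum_l with hum_p (c_lp);  ! label the covariance instead of @1
model test: 0 = c_lp - 1;             ! Wald test of H0: covariance = 1
```

Because the covariance is left free, the model should estimate normally, and the Wald test in the output indicates whether fixing it at one would be rejected.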

Jon Elhai posted on Monday, October 18, 2010  3:32 pm



Linda: When using MODEL TEST to test whether the difference between two correlations (i.e., correlations between factors) is zero, I find some unusual results. In one MODEL TEST, I find correlations (using STDYX) of .56 (between factors A and C) and .58 (between factors A and D), which the Wald test shows to be significantly different (p = .002). But in another MODEL TEST (using the same dataset), I find correlations (STDYX) of .57 (between factors B and C) and .66 (between B and E), and this difference is not statistically significant (p = .93). So the difference in correlations for the second Wald test appears to be greater than in the first case, yet without statistical significance. Could this be because of the use of STDYX values instead of the unstandardized estimates?


Linda K. Muthen replied:

It is not only the difference between the estimates but also the standard errors of the estimates that determine significance. If you have not standardized the coefficients in MODEL CONSTRAINT, you may be testing covariances, not correlations.
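One way to carry out this advice is to define the correlations explicitly in MODEL CONSTRAINT from labeled variances and covariances, and then compare those in MODEL TEST. A hedged sketch for the A/C/D comparison (factor names fa, fc, fd, indicators y1-y9, and all labels are illustrative, not from the thread):

```
model:            fa by y1-y3;
                  fc by y4-y6;
                  fd by y7-y9;
                  fa (va);                     ! label factor variances
                  fc (vc);
                  fd (vd);
                  fa with fc (c_ac);           ! label factor covariances
                  fa with fd (c_ad);
model constraint: new(r_ac r_ad);
                  r_ac = c_ac / sqrt(va*vc);   ! model-implied correlations
                  r_ad = c_ad / sqrt(va*vd);
model test:       0 = r_ac - r_ad;             ! Wald test of equal correlations
```

Testing r_ac against r_ad compares correlations on a common metric; testing c_ac against c_ad would instead compare covariances, whose difference also depends on the factor variances.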


