I am running a CFA with 9 binary indicators and one hypothesized factor. Because the indicators are binary, I'm using WLSMV, which first estimates tetrachoric correlations and then fits the CFA to that matrix. My model fit statistics are pretty solid: the chi-square p-value is 0.453, CFI is 0.999, TLI is 0.999, and RMSEA is 0.003 (with p = 1 for the close-fit test).
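For context on that first stage: a tetrachoric correlation treats each binary item as a coarsely dichotomized latent normal variable and finds the bivariate-normal correlation that best reproduces the observed 2x2 table. A rough maximum-likelihood sketch in Python (illustrative only — not the algorithm your SEM software actually uses):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import minimize_scalar

def tetrachoric(table):
    """ML estimate of the tetrachoric correlation from a 2x2 table.

    table[i, j] = count of observations with (x = i, y = j), i, j in {0, 1}.
    Assumes each binary item arises by thresholding a latent standard
    normal variable.
    """
    n = table.sum()
    # Thresholds recovered from the marginal proportions.
    tau_x = norm.ppf(table[0, :].sum() / n)  # P(x = 0) = P(z_x <= tau_x)
    tau_y = norm.ppf(table[:, 0].sum() / n)  # P(y = 0) = P(z_y <= tau_y)

    def negll(rho):
        # Cell probabilities under a bivariate normal with correlation rho.
        p00 = multivariate_normal.cdf([tau_x, tau_y],
                                      mean=[0.0, 0.0],
                                      cov=[[1.0, rho], [rho, 1.0]])
        p01 = norm.cdf(tau_x) - p00
        p10 = norm.cdf(tau_y) - p00
        p11 = 1.0 - p00 - p01 - p10
        probs = np.clip([[p00, p01], [p10, p11]], 1e-12, 1.0)
        return -(table * np.log(probs)).sum()

    res = minimize_scalar(negll, bounds=(-0.99, 0.99), method="bounded")
    return res.x
```

WLSMV then fits the factor model to the matrix of such correlations (with a robust weight matrix), rather than to the raw Pearson correlations, which would be attenuated for binary items.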
The indicator loadings are also reasonable, with most between 1 and 1.3 and the smallest at 0.6. So far, so good: evidence of an underlying latent trait.
When I look at the indicator error variances, however, things get fishy. They range from 0.6 to 0.96, with over half of the indicators above 0.8. Just to check, I ran an EFA (off the tetrachoric matrix) and got similar results, with very small communalities.
Relatedly, with 9 indicators, reliability is low: Cronbach's alpha (which of course will underestimate reliability for a congeneric model) is 0.52, and McDonald's omega, a better statistic, is only 0.53.
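For a congeneric one-factor model, omega follows directly from the standardized loadings and error variances, so the low value is consistent with the high error variances. A minimal sketch with made-up loadings (mine, not the actual estimates) chosen so the error variances span the 0.6–0.96 range described above:

```python
import numpy as np

# Hypothetical standardized loadings, chosen so that the error variances
# (1 - loading^2) fall roughly in the 0.6-0.96 range from the question.
loadings = np.array([0.63, 0.45, 0.40, 0.35, 0.30, 0.30, 0.25, 0.25, 0.20])
errors = 1 - loadings**2  # standardized error variances (uniquenesses)

# McDonald's omega for a congeneric one-factor model:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
omega = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())
print(round(omega, 2))  # ≈ 0.56 for these made-up loadings
```

So an omega around 0.5 is exactly what communalities this small imply, regardless of how well the one-factor structure fits.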
So how should I interpret these results? My current understanding is that I have a latent variable, but my indicators are poor measures of it, as evidenced by the high error variances / low communalities and the low reliability.
Binary indicators can have low reliability, and that in turn can contribute to an apparently well-fitting model, because weakly correlated items leave little power to detect misfit. The real question is how useful the model is: for instance, does the factor give a significant slope when predicting a later outcome?
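To make the low-power point concrete: the chi-square statistic is roughly N − 1 times the discrepancy between the sample and model-implied correlation matrices, so when all the item correlations are small, even a structurally wrong model leaves only tiny residuals. A hypothetical sketch (my numbers, not the poster's data): fit the best one-factor model to a population matrix actually generated by two correlated factors, once with strong loadings and once with weak ones:

```python
import numpy as np
from scipy.optimize import minimize

def make_R(loading, phi=0.3):
    """Population correlation matrix for 6 items generated by TWO
    correlated factors (3 items each) — so a one-factor model is wrong."""
    L = np.zeros((6, 2))
    L[:3, 0] = loading
    L[3:, 1] = loading
    Phi = np.array([[1.0, phi], [phi, 1.0]])
    R = L @ Phi @ L.T
    np.fill_diagonal(R, 1.0)
    return R

def one_factor_max_resid(R):
    """Least-squares fit of a one-factor model to the off-diagonal
    correlations; returns the largest absolute residual correlation."""
    off = ~np.eye(6, dtype=bool)
    loss = lambda lam: ((R - np.outer(lam, lam))[off] ** 2).sum()
    lam_hat = minimize(loss, np.full(6, 0.5), method="BFGS").x
    return np.abs((R - np.outer(lam_hat, lam_hat))[off]).max()

strong = one_factor_max_resid(make_R(0.8))  # strong loadings: large residuals
weak = one_factor_max_resid(make_R(0.3))    # weak loadings: tiny residuals
print(strong, weak)
```

With weak loadings the misspecified one-factor model leaves residuals several times smaller than in the strong-loading case, so the same structural error produces a far smaller chi-square at any given N — i.e., low inter-item correlations mean low power to reject the model.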
Unfortunately I am not able to look at prediction at this point, but eventually I hope to be able to do that.
I'm not sure I follow how low power can lead to a well-fitting model. Is that because you cannot pick up real departures in fit through significance testing, say via the chi-square test? Would that affect fit indices like CFI and TLI as well?