I ran a CFA on 14 items, specifying a one-factor model. Fit was not very good (χ² = 118.165, df = 77, p = 0.0018, CFI = 0.845, RMSEA = 0.043, SRMR = 0.051), and one item displayed a very low factor loading (0.071, p = 0.298). When I fixed this loading to zero, fit barely changed (χ² = 119.167, df = 78, p = 0.0019, CFI = 0.845, RMSEA = 0.042, SRMR = 0.052). Only when I excluded the item did fit improve substantially (χ² = 88.988, df = 65, p = 0.026, CFI = 0.903, RMSEA = 0.035, SRMR = 0.047).
Why does fit not improve when the loading is fixed to zero, but does improve when the item is excluded?
How should one interpret a factor loading fixed to zero, as opposed to excluding the item? I would say that in both cases the item contributes nothing to the common factor. In another CFA I ran (on different items), fit did improve when I fixed low factor loadings to zero.
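One way to see that the two options are not equivalent is to count the degrees of freedom, which match the values reported above. This is a minimal sketch, assuming the factor variance is fixed to 1, so the free parameters are the loadings plus the residual variances: fixing a loading to zero removes one parameter but keeps all of the item's variances and covariances in the fitted matrix, whereas excluding the item shrinks the set of sample moments itself.

```python
# df for a one-factor CFA: unique sample moments minus free parameters.
# Assumes the factor variance is fixed to 1, so the free parameters are
# p loadings (minus any fixed to zero) plus p residual variances.

def cfa_df(n_items, n_fixed_loadings=0):
    moments = n_items * (n_items + 1) // 2                 # unique (co)variances
    free_params = (n_items - n_fixed_loadings) + n_items   # loadings + residuals
    return moments - free_params

print(cfa_df(14))     # full model            -> 77
print(cfa_df(14, 1))  # loading fixed to zero -> 78
print(cfa_df(13))     # item excluded         -> 65
```

With the loading fixed to zero, the model is still asked to reproduce the item's 13 covariances with the other items (which a zero loading cannot do), while excluding the item drops those 14 moments from the analysis entirely, so the two χ² values are computed on different data.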
Thank you for your reply. I see now that the data differ between the two options.
Maybe I should ask differently: why does fixing a very low factor loading to zero improve fit in one CFA, while doing the same in another dataset (for an equally low loading, with the same number of items) does not improve fit at all?
Is it good practice to fix the loading to zero in the former case and to exclude the item in the latter?