I am using SEM to model the relative influence of local- and landscape-level habitat variables on wetland plant community richness. I am comparing models built from landscape-level data collected at a nested series of spatial scales in order to select the optimal measurement scale. When I compare the models fit with the different landscape-level data sets, at some spatial scales one manifest variable receives a negative error-variance estimate (a Heywood case).
1) Can measures of model adequacy (e.g., RMSEA, chi-square) and comparative fit (e.g., AIC) be trusted if one or more manifest variables has a negative estimated error variance, or should the model be considered inadmissible?
2) If you are performing confirmatory modeling and obtain a negative error-variance estimate for one or more manifest variables, is fixing that error variance to 0 (or, alternatively, constraining it and all other indicators of the same latent variable to have equal error variances) considered a modification of the model, and does it therefore push you into exploratory analysis? It is not a change to the structural model, and I am more interested in the direct and indirect path coefficients of the structural model than in the paths of the measurement model.
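For concreteness, either constraint can be written directly in the Mplus MODEL command without touching the structural paths; the factor and indicator names below are hypothetical placeholders, not from my actual model:

```
MODEL:
  f1 BY x1 x2 x3;    ! hypothetical measurement model
  x1@0;              ! fix the offending error variance at 0
  ! ...or instead impose equal error variances:
  ! x1 x2 x3 (1);    ! shared label (1) = equality constraint
```

(In Mplus, `@` fixes a parameter at a value, and parameters given the same parenthesized label are constrained equal.)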
Further to this, I have identified that the negative error variance is the consequence of a single outlier. An alternative to fixing the error variance would be to eliminate the outlier site, which would be justifiable on theoretical grounds because it represents an extreme case. However, would excluding the outlier count as modifying the model, i.e., would it put me out of confirmatory and into exploratory mode?
If you have determined that you have an outlier using the OUTLIER options available in Mplus, I would delete the outlier. In my opinion, this does not constitute leaving confirmatory modeling. Others might disagree.
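For readers without Mplus, one standard multivariate outlier screen is the squared Mahalanobis distance of each case from the sample centroid. This is a generic sketch, not necessarily the exact diagnostic Mplus computes, and the variable names and planted outlier are hypothetical:

```python
import numpy as np

def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each row of X from the
    sample mean, using the sample covariance matrix."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    # d2_i = (x_i - mean)' S^{-1} (x_i - mean) for each row i
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

# Hypothetical example: 50 sites, 3 habitat variables, one extreme site.
rng = np.random.default_rng(0)
sites = rng.normal(size=(50, 3))
sites[10] = [8.0, -8.0, 8.0]   # planted outlier
d2 = mahalanobis_d2(sites)
suspect = int(np.argmax(d2))   # index of the most extreme site
```

Sites whose d2 is extreme relative to a chi-square distribution with p (here 3) degrees of freedom are candidates for the kind of influential case that can drive an error-variance estimate negative.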
However, just to sate my curiosity, would you consider fixing a negative (say, -0.002) error variance to 0 to be a modification of the model? I.e., if I hadn't been able to resolve the problem by deleting an outlier, and had resorted to fixing the error variance at 0, would I have left confirmatory modeling?