


Hello, I am doing a set of comparisons of two nested SEM models in which a second predictor of a latent criterion variable is added to a model specifying only a single predictor. Specifically:

Model 1: Predictor 1: common factor across 3 rating sources
Model 2: Predictor 1: common factor across 3 rating sources; Predictor 2: residual variance in 1 of the rating sources

The goal is to test whether what 1 of the raters uniquely perceives predicts the criterion above and beyond what is predicted by the common variance in ratings. For some of these comparisons, the estimated variance explained in the criterion is lower in Model 2 than in Model 1. This is at odds with the way I think about R-squared values in OLS, where adding a second predictor should yield an R-squared at least equal to that of the single predictor alone. Do you have any insight about what might be causing this?
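To make the OLS intuition above concrete: in ordinary least squares, adding a predictor can never lower R-squared, because the larger model nests the smaller one (it can always set the new coefficient to zero). A minimal sketch with simulated data (all variable names here are hypothetical, not from the poster's models):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

def r_squared(predictors, y):
    # OLS fit via least squares; R^2 = 1 - SSE/SST
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sst = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / sst

r1 = r_squared([x1], y)       # one predictor
r2 = r_squared([x1, x2], y)   # two predictors
# In OLS, R^2 is non-decreasing as predictors are added
assert r2 >= r1
```

The SEM case differs because the R-squared for a latent criterion is derived from model-implied (co)variances under the model's constraints, not from an unconstrained least-squares fit, so the nesting argument above no longer guarantees monotonicity.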


Adding a predictor can result in a misspecified model, in which case anything can happen to R-squared.


