Keemia posted on Thursday, August 10, 2017 - 3:00 pm
We have a paper using basic hierarchical regression to test whether X2 predicts Y over and above X1. We have missing data, so we've been using Amos and Mplus to get FIML results.
Approach 1: At first, we ran these analyses by: (a) running a model with X1 and getting the R-square value for that (Y = X1), (b) running a model with X1 and X2 and getting the R-square value for that (Y = X1 + X2), and (c) computing the difference between those R-square values (R-square change).
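For anyone trying to reproduce Approach 1, a minimal Mplus sketch of run (a) might look like the following. The file name mydata.dat, the missing-value code -99, and the variable names are placeholders for your own setup, not part of the original post:

```
! Run (a): Y = X1 only
DATA:      FILE = mydata.dat;      ! hypothetical file name
VARIABLE:  NAMES = Y X1 X2;
           USEVARIABLES = Y X1;
           MISSING = ALL (-99);    ! hypothetical missing-value code
ANALYSIS:  ESTIMATOR = ML;         ! FIML under MAR
MODEL:     Y ON X1;
           X1;                     ! mention X1 so its missingness is handled by FIML
OUTPUT:    STANDARDIZED;           ! R-SQUARE is printed with standardized output

! Run (b): Y = X1 + X2 -- change USEVARIABLES to Y X1 X2 and the MODEL to:
! MODEL:   Y ON X1 X2;
!          X1 X2;
```

Note that because run (a) drops X2 from the analysis entirely, the two runs need not draw on the same information under FIML, which is one reason the R-square difference computed this way is not guaranteed to be non-negative.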
The problem is in a couple of cases this produced *negative* R-square change values.
Approach 2: After extensive searching, we came across a brief reference to an alternative approach to this analysis on an Mplus FAQ site. The alternative approach is to have both X1 and X2 in both models but fix the X2 regression coefficient to 0 in the first model. So: (a) Y = X1 + X2 (but X2 coefficient fixed to 0), (b) Y = X1 + X2 (X2 coefficient freely estimated), and (c) computing the difference between the R-square values.
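The fixed-to-zero constraint described here can be written with Mplus's @ notation. A sketch, again with a hypothetical file name, missing-value code, and variable names:

```
! Run (a): both predictors included, X2 slope fixed at 0
DATA:      FILE = mydata.dat;      ! hypothetical file name
VARIABLE:  NAMES = Y X1 X2;
           MISSING = ALL (-99);    ! hypothetical missing-value code
ANALYSIS:  ESTIMATOR = ML;         ! FIML under MAR
MODEL:     Y ON X1
                X2@0;              ! slope of X2 fixed to zero
           X1 X2;                  ! keep both predictors in the likelihood
OUTPUT:    STANDARDIZED;           ! prints R-SQUARE for Y

! Run (b): free the X2 slope
! MODEL:   Y ON X1 X2;
!          X1 X2;
```

Because both runs involve the same set of variables, FIML works from the same likelihood and the same cases in each, which helps keep the two R-square values on a common footing.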
This actually worked well - we got a similar overall pattern of results but all R-square change values are now positive.
Question: Have you ever heard of this type of approach (Approach 2) for regression or have any thoughts or recommendations for other resources?
Comparing R-squares across models like this is, I'm afraid, a bit problematic in my view when x1 and x2 are latent. It is not only a question of how R-square changes when one of them is included or excluded. Because they are latent, model fit is also at issue: one alternative may not fit at all, because different restrictions are imposed on the covariances between the DV and the indicators of the x1 and x2 factors.
I know of no references.
Keemia posted on Wednesday, August 23, 2017 - 7:29 am
My apologies for the mix-up: there are NOT any latent variables in the model. Does this change your comment?