ben pelzer posted on Sunday, February 02, 2014 - 2:41 pm
Dear forum, I would like to test whether adding a control variable Z significantly changes the regression effect of a predictor X. So, the equations are:
1) y = b00 + b10*X + e1
2) y = b01 + b11*X + b21*Z + e2
The hypothesis is: H0: b10 = b11
What I tried is to copy the dependent variable y, with the copy named ycopy, and then estimate the following model:
model: y on x; ycopy on x z;
Running this model resulted in an error message (not a syntax error, but a message that Mplus was unable to estimate the model). Is it impossible to estimate the above model because the two dependent variables are identical? Should I add some constraint first?
Could I compare the fit of the above model with the fit of another model, in which the effect of x is constrained to be equal in both equations, as a test of H0?
Is the following correct? One could estimate the equations separately, but as the dependents are equal, the error terms correlate. So the two equations are related, and hence "seemingly unrelated regressions" would have to be applied, allowing the error terms in (1) and (2) to be correlated. Am I right here?
Anyway, the hypothesis to test is quite simple, but how to do it in Mplus? What is more, my "real" equations are two-level. Would it still be possible to test H0 for a two-level problem?
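For readers following along, the basic setup can be sketched outside Mplus with ordinary OLS. This is a hypothetical illustration on simulated data (the variable names and data-generating values are made up, not taken from the poster's problem); it just fits equations (1) and (2) and shows how the coefficient of x changes when z is added:

```python
# Hedged sketch: fit equations (1) y on x and (2) y on x, z by OLS on
# simulated data where z partially mediates x. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)           # z depends on x (mediator)
y = 1.0 + 0.3 * x + 0.4 * z + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), x])       # equation (1): y on x
X2 = np.column_stack([np.ones(n), x, z])    # equation (2): y on x and z

b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

b10, b11 = b1[1], b2[1]
print(b10, b11)   # b10 absorbs the indirect path via z, so b10 > b11 here
```

In this setup b10 estimates the total effect (about 0.3 + 0.4*0.5 = 0.5) while b11 estimates the direct effect (about 0.3), which is exactly the difference H0 asks about.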
ben pelzer posted on Wednesday, February 05, 2014 - 2:21 am
Apparently, I was completely wrong. Starting from the idea that the residuals of y and ycopy should correlate (which intuitively made sense to me) is what caused the identification problem. Thanks for helping me out!
ben pelzer posted on Wednesday, February 05, 2014 - 8:30 am
After following your suggestion, the model could be estimated and a Wald test could be done to test the equality of the two b-coefficients: H0: b10 - b11 = 0. The resulting statistic was 0.054, which was insignificant with 1 df (p = 0.895).
In contrast, testing the indirect effect:
x1 ---> x2 ---> x3
resulted in a highly significant statistic 4.895, with p=0.000.
Now I would like to (roughly) understand how this extreme difference in test results can arise. In a paper by MacKinnon et al. (2002), a number of test procedures for mediation are discussed, some of which are based on the difference b10 - b11. I worked out one of these, and indeed, the difference is highly significant and in line with the Mplus result for the indirect effect. What does this say about the test that I carried out, using the "trick" with y and ycopy? Is this test simply not a good idea (or maybe even a very bad idea) for testing mediation?
Thanks for any explanation you could offer!!!
ben pelzer posted on Wednesday, February 05, 2014 - 1:08 pm
Sorry for the unclear formulation in my previous mail: the indirect model should read:
x ---a---> z ---b21----> y
For the product a*b21 of the two effects shown in the diagram we can write:
a * b21 = b10 - b11
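As an aside, this identity holds exactly for OLS estimates (it is the standard omitted-variable decomposition: the total effect equals the direct effect plus the indirect path), and it can be checked numerically. The sketch below uses simulated data with invented values; the identity holds regardless of the true data-generating model:

```python
# Numerical check of the OLS identity a * b21 = b10 - b11: the product of
# the a-path (z on x) and b-path (z's coefficient in y on x, z) equals the
# change in the x coefficient. Simulated data; the identity is exact in OLS.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
z = 0.7 * x + rng.normal(size=n)
y = 0.2 * x + 0.5 * z + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b10 = ols(np.column_stack([ones, x]), y)[1]            # y on x
b11, b21 = ols(np.column_stack([ones, x, z]), y)[1:]   # y on x, z
a = ols(np.column_stack([ones, x]), z)[1]              # z on x

print(a * b21, b10 - b11)  # identical up to floating-point error
```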
Hence, I tested for the significance of the indirect effect:
H0: a * b21 = 0
and compared this result with the one obtained earlier for
H0: b10 - b11 = 0
As I said, the results disagree strongly, and this is difficult for me to understand. That these two tests are not exactly equal (in terms of p-values) would not have surprised me, but the difference in significance is very large.
Thanks for any guidance as to what may cause this big difference in p-values.
Looks like the problem is that in the model with the copy the two parameters are considered independent, but they are not. So that method can't really provide accurate SE estimates.
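This point can be illustrated numerically. The sketch below (simulated data, invented values) compares the standard Sobel standard error for the product a*b21, which respects the structure of the problem, against a naive standard error that treats b10 and b11 as if they came from independent samples, the way the ycopy setup implicitly does. The naive SE ignores the strong positive covariance between b10 and b11 and so comes out far too large:

```python
# Sketch of why the ycopy Wald test is too conservative: the naive SE
# sqrt(se10^2 + se11^2) pretends b10 and b11 are independent, while the
# Sobel SE for a*b21 (which equals b10 - b11 in OLS) does not. Simulated
# data; standard OLS and first-order (Sobel) delta-method formulas.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
z = 0.4 * x + rng.normal(size=n)
y = 0.3 * x + 0.5 * z + rng.normal(size=n)

def ols_with_se(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    return b, se

ones = np.ones(n)
b, se = ols_with_se(np.column_stack([ones, x]), y)       # y on x
b10, se10 = b[1], se[1]
b, se = ols_with_se(np.column_stack([ones, x, z]), y)    # y on x, z
b11, b21, se11, se21 = b[1], b[2], se[1], se[2]
b, se = ols_with_se(np.column_stack([ones, x]), z)       # z on x
a, se_a = b[1], se[1]

sobel_se = np.sqrt(b21**2 * se_a**2 + a**2 * se21**2)
naive_se = np.sqrt(se10**2 + se11**2)   # treats b10, b11 as independent
print(sobel_se, naive_se)               # naive SE is much larger
```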
ben pelzer posted on Tuesday, February 11, 2014 - 2:05 am
Dear prof. Muthen,
Thanks a lot for your reply. I realised the independence you mentioned and therefore thought it would be necessary to have the correlation between the residuals of the original and copy variable estimated. But this caused the convergence issue.
The last thing I tried, after your reply, was to 'fix' the covariance of these residuals to the covariance value obtained after estimating both equations separately.
So I guess I simply have to drop the whole idea of testing mediation this way, or more generally, testing the change of a regression effect in two different equations. That's a pity! Thanks for your help and best regards,