

Why slope variance changes in growth ... 


George Howe posted on Wednesday, October 03, 2012  1:52 pm



Hi folks (congrats on getting 7 up and out, by the way!). I'm modeling growth on two separate variables simultaneously: three time points, using TSCORES for varying times of measurement. Runs fine. I had run a separate growth model using only one of the variables, and I noticed that the estimated variance for the slope of that variable was quite different when it was run alone compared to when it was run in the parallel growth model. The parallel model allows the slopes and intercepts to correlate, but nothing is regressed on anything, so I can't figure out why the variance estimate would differ. I did notice that the N is slightly different (421 vs. 423), so I also ran a simple growth model for the first variable but included the three indicators of the second growth variable only as measured variables, allowing them to correlate with each other (this made the Ns equivalent to those in the parallel growth model). This gave the same slope variance estimate as in the simple growth model, so it doesn't look like different Ns are causing the different variance estimates. This does lead to substantive differences: in the parallel growth model the slope variance is over twice as large, and robustly significant, while in the single growth model it is smaller and nonsignificant. Any thoughts as to why this might be happening? Thanks, George 


It sounds like it may have to do with the restrictions imposed by the parallel process model, namely that correlations between the outcomes of the two processes have to be channeled through the growth factors. You can try to relax that by, for instance, correlating the concurrent residuals of the outcomes of the two processes, and then see whether the growth factor estimates are sensitive to those residual correlations being allowed or not. 
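In Mplus syntax, relaxing that restriction might look like the following sketch of a parallel process model with individually varying times of observation. All variable names (y1-y3, x1-x3, time scores t1-t3) are hypothetical placeholders, not taken from the posts above:

```
VARIABLE:  NAMES = y1-y3 x1-x3 t1-t3;
           TSCORES = t1-t3;
ANALYSIS:  TYPE = RANDOM;
MODEL:
  iy sy | y1 y2 y3 AT t1 t2 t3;    ! growth process for first variable
  ix sx | x1 x2 x3 AT t1 t2 t3;    ! growth process for second variable
  iy sy WITH ix sx;                ! cross-process growth factor covariances
  y1 WITH x1;                      ! concurrent residual covariances,
  y2 WITH x2;                      ! freed so outcome correlations need not
  y3 WITH x3;                      ! be channeled through the growth factors
```

One could then compare the slope variance estimates with and without the three WITH statements for the residuals to see how sensitive they are.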

George Howe posted on Wednesday, October 03, 2012  4:52 pm



Bengt, You're right, although the results are more complex. In the parallel growth model, if I keep the correlations among slopes and intercepts but also include correlated residuals between concurrent measures of the two variables, the slope variance decreases some but is still greater than that in the single-variable growth model. However, if I simply force the cross-variable correlations between slopes and intercepts to zero (but keep the within-variable slope-intercept correlations free), the variance estimate is almost identical to that in the single-variable growth model. So it does seem to be due to the cross-process correlations. However, I'm still not clear why this is the case, or what the implication is. These findings would make sense if I were somehow independently accounting for residual variance in the first variable's process, allowing for a more sensitive test of the slope variance by removing error, but that's not the case with this model. The correlated parallel process model indicates that there is systematic slope variance worth exploring, which I like, but if I hadn't included the other process this wouldn't be the case. Bottom line: am I justified in testing covariate associations with that slope, keeping the parallel process in the model, even though I'm doing so only because the slope variance remains significant there? Or is that tantamount to cherry-picking the model that serves my goals? Thanks, George 


You may want to check how things look if you include a set of key time-invariant covariates. That sometimes changes the variance assessment for the growth factors; perhaps results are more similar then when comparing the single- and parallel-process models. If you have key time-varying covariates, those may also change the picture. I assume that your parallel-process model fits the data well. 
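In the same hypothetical notation as a parallel process setup, adding time-invariant covariates means regressing all growth factors on them. A minimal sketch, assuming two covariates w1 and w2 (names are placeholders):

```
MODEL:
  iy sy | y1 y2 y3 AT t1 t2 t3;
  ix sx | x1 x2 x3 AT t1 t2 t3;
  iy sy ix sx ON w1 w2;    ! regress growth factors on hypothetical
                           ! time-invariant covariates w1, w2
```

The growth factor variances then become residual variances conditional on the covariates, which is why the variance assessment can change.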


I have a variable of main interest and I want to correlate its growth factors with the growth factors of three covariates (5 measurement points in total). To do this, I have modeled three separate bivariate growth models. I would like to compare the growth factor correlations (especially the "between-processes" correlations) across these models, but what makes this difficult is the issue of correlated concurrent residuals, which is more or less prevalent in each of the three bivariate models. My strategy was to estimate the concurrent residual correlations in each bivariate model first and then to fix to zero those correlations that proved nonsignificant (p > .05). In one bivariate model, for instance, there were no significant concurrent residual correlations (thus, all 5 correlations were fixed to zero), and in another bivariate model there were three significant correlations (thus, I fixed only 2 concurrent residual correlations to zero). The result is that the "between-processes" correlations are higher in the first model than in the second (and I'm afraid that this is mostly due to the concurrent residual correlations being fixed to zero in the first model). Is my approach nevertheless appropriate? 


Maybe my question was a bit too difficult to answer without the data, sorry. Easier: would you estimate all correlations between contemporaneous residuals by default in bivariate growth models (even if some of them are nonsignificant), or would you estimate only the statistically significant correlations between concurrent residuals (p < .05)? My observation is that the two procedures make quite a big difference (with respect to the strength and significance of the growth factor correlations between the processes), although both approaches appear acceptable at first glance. 


I would take an "a priori" approach and free all concurrent residual covariances, and let them stay in the model even if some are nonsignificant. 
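With that a priori approach, all five concurrent residual covariances in a five-wave bivariate model can be freed in a single statement using PWITH, which pairs the lists elementwise. A sketch with placeholder names y1-y5 and x1-x5:

```
MODEL:
  iy sy | y1 y2 y3 y4 y5 AT t1 t2 t3 t4 t5;
  ix sx | x1 x2 x3 x4 x5 AT t1 t2 t3 t4 t5;
  y1-y5 PWITH x1-x5;   ! frees all five concurrent residual covariances
                       ! (y1 with x1, y2 with x2, ..., y5 with x5)
```

Keeping the same residual structure in all three bivariate models also makes their between-processes growth factor correlations directly comparable.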


