Anonymous posted on Saturday, March 03, 2001 - 2:23 pm
Dear Bengt and Linda
In linear LGM, the two factors indicate initial status and growth rate. In quadratic LGM, I'd like to know the interpretation of the quadratic factor. If its variance is significant, what does that mean? I know it means that there are individual differences in the quadratic parameter values. Any better interpretations? If time-invariant covariates are positively related with the linear and quadratic factors, how can I interpret that? Thanks.
Simply put, the quadratic factor describes the upturn or downturn over time beyond what is predicted by the linear factor. However, the quadratic and linear factors are to some extent confounded. This is especially clear with time scores for the linear factor slopes of 0, 1, 2, ..., where the linear and quadratic factors describe the same development from time 1 to time 2 (time 2 scores are 1 also for the quadratic). I therefore find it hard to give a separate interpretation of effects of covariates for the linear and the quadratic factors. Interpretation may be easier with piece-wise growth modeling and with centering at different time points. On the other hand, an interpretation of covariate effects may not be needed; perhaps it is sufficient to include the effects so the model is correctly specified.
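The confounding Bengt describes can be seen directly from the loading vectors. A small numpy sketch (the time scores 0-3 are just an illustrative assumption):

```python
import numpy as np

# Linear and quadratic loadings for time scores 0, 1, 2, 3
t = np.array([0, 1, 2, 3])
linear = t
quadratic = t ** 2

# For the first two occasions both loading vectors are 0 and 1,
# so the two factors describe the same time 1 to time 2 change.
print(linear[:2], quadratic[:2])

# And over all occasions the two loading vectors are highly collinear:
r = np.corrcoef(linear, quadratic)[0, 1]
print(round(r, 3))
```

The near-perfect correlation of the raw loading vectors is what makes separate covariate effects on the two factors so hard to interpret.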
Harry Garst posted on Monday, February 18, 2002 - 7:48 am
Dear Linda and Bengt,
My model consists of two separate growth curves based on latent factors (a multivariate or cross-domain model using two separate second-order models - one for each growth curve - because the measurement model is included). This is the first time I have used quadratic factors (in this model: two quadratic factors, one for each growth curve). The results are rather disappointing: the correlations between the slope factors and quadratic factors are almost perfectly negative (r < -.92), and the asymptotic variances of the parameter estimates are huge for the quadratic factor variances. The asymptotic correlations of the parameter estimates of the slope/quadratic covariance are also huge, resulting in a high SE (even a moderate covariance is not significant). My conclusion is that either this model is misspecified (although the goodness-of-fit measures are superior to the linear growth model) or there is too little information to estimate the parameters of this quadratic model (collinearity). Do you agree? Are these problems typical for quadratic growth curve models? Is there an alternative way? I have heard of orthogonal polynomials. Can I use these? As you can understand, the term orthogonal sounds very promising to me. I don't want to use nonlinear models based on freed factor loadings, because those models have their own problems. Thanks.
bmuthen posted on Monday, February 18, 2002 - 11:55 am
Quadratic growth modeling does not always give these types of results. However, it sounds as if it might be worthwhile for you to try to center at the mean time. That is, use

y_it = a_i + b_i * (x_t - xbar) + c_i * (x_t - xbar)^2 + e_it

where a, b, and c are the growth factors and xbar is the average time value. So your loading values are x_t - xbar and their squares. This defines the intercept growth factor a as the growth curve value at the mean time xbar. Typically, this makes parameter estimates less highly correlated.
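To see the effect of this centering numerically, here is a small numpy sketch (five equally spaced occasions are an assumption for illustration):

```python
import numpy as np

t = np.arange(5.0)          # hypothetical time scores 0..4

# Raw parameterization: linear loadings t and quadratic loadings t^2
r_raw = np.corrcoef(t, t ** 2)[0, 1]

# Centered at the mean time xbar = 2: loadings (t - xbar) and (t - xbar)^2
tc = t - t.mean()
r_centered = np.corrcoef(tc, tc ** 2)[0, 1]

# Centering removes the collinearity between the two loading vectors
print(round(r_raw, 3), round(r_centered, 3))
```

With equally spaced occasions, the centered linear and quadratic loadings are exactly uncorrelated; the growth factors themselves can of course still correlate, but typically much less strongly.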
We are conducting a cohort sequential design in which we have two cohorts that are each measured at three waves. Given the nature of our cohort sequential design and our longitudinal data collection plan, we'll get data at 8 time points (e.g., fall 7th grade, spring 7th grade, fall 8th grade, etc.). Although each participant is only measured 3 times will we be able to test for quadratic and higher order trends given our extra time points or are we bound by the number of data points we have for each participant?
Hi Linda, I have a question about fitting a quadratic model. I have a linear model and I want to know whether a quadratic model would fit better. So I changed the s (slope) to a q (quadratic) in the input, but the results stay the same... I plotted the result, but it is not a quadratic curve. Thank you for telling me what is wrong with my input:
variable:  names are id y1-y3 a1-a3;
           usevariables are y1-y3;
           auxiliary are a1-a3;
           classes = c(6);
           missing = .;
analysis:  type = mixture missing;
           starts = 500 20;
q is a significant growth factor, but i s ON x gives a better model fit than i s q ON x. Should I eliminate q from this regression, or is q really necessary here for the overall interpretation of the model?
If you have a significant mean for your quadratic factor, I find it hard to understand how a linear model can fit better. Please send your input, data, output, and license to firstname.lastname@example.org if you have further questions.
I also have some problems with this quadratic issue. Here they are:
a. I have an unconditional quadratic growth model where my quadratic factor has no significant variance. Does it make sense to retain that variance (not setting it to zero) when it comes to conditional models and predicting q with the covariates? I've read this in a paper, but it is a bit counterintuitive for me.
b. In addition, the linear factor of my quadratic model has a significant variance but no significant mean. What does that mean for predicting that growth factor? Am I predicting nothing (no increase of that factor)?
c. What does it mean in general when I have an effect of covariates on the linear factor but not on the quadratic factor in the quadratic model?
a. I retained the variance but there were no effects of my covariates on this quadratic factor. In addition, some effects on the linear slope of the quadratic model faded away, as compared to the former "q=0 model". The residual variance of q was insignificant. In this case, would it be better to set the q-variance to zero?
b. In terms of a prevention effect on the development of aggression: how could one formulate this outcome in one sentence? E.g., "X buffered the early increase of aggression"? (But for this statement, one needs a significant linear factor mean in the quadratic model, in my understanding.) Btw, is there a chance to detect significance of this linear factor mean in a conditional model (more power, as with the variances), and what is the indicator for that?
c. O.k., I adopted your interpretation as you can see above :-). Does this interpretation also hold for alternative parameterizations? I use one in which the means of the linear and quadratic factors describe the increase between the first and the last measurement point (5 time points).
a. No, keep it and report it as is. No need for "trimming" the model.
b. What you need for this is a significant slope in the regression of the growth slope factors (linear and/or quadratic) on x. The means of the growth slope factors do not need to be significant. When x is included you do have a conditional model.
This is a big topic - come and learn about it at our August growth modeling course at Hopkins (see home page).
b. Just to clarify: so one could in fact speak of a buffering effect of the treatment, despite the insignificant mean? I only need a significant regression coefficient (on the slope) to state such an interpretation? It's hard to imagine, because I'm always thinking of this zero mean....
c. Yes, I'm still looking for some funding to get the opportunity to visit one of your courses! But do you have a short comment available regarding this special parameterization and quadratic + linear factors? It would be very helpful!
b. That's right. Think of problem behavior development with x = 0 for Ctrls and 1 for Treatment. The mean slope of the development can be zero for Ctrls (say high, but flat problematic development) and the mean for Treatment negative. This happens if the slope in the regression of the growth factor on x is negative. Just like in regular linear regression with a dummy x variable shifting the mean by shifting the intercept.
c. I don't understand your specific parameterization under c.
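The dummy-variable logic in point b can be checked with a toy regression; the slope values below are invented purely for illustration:

```python
import numpy as np

# Invented individual slope values: Ctrls (x=0) average zero,
# Treatment (x=1) clearly negative
slopes = np.array([0.1, -0.1, 0.0, -0.4, -0.6, -0.5])
x = np.array([0, 0, 0, 1, 1, 1])

# OLS of slope on the dummy: intercept = Ctrl mean slope,
# coefficient = the shift in the mean slope for Treatment
X = np.column_stack([np.ones_like(x), x])
intercept, coef = np.linalg.lstsq(X, slopes, rcond=None)[0]
print(round(intercept, 3), round(coef, 3))
```

The Ctrl mean slope is zero (flat development) while the negative coefficient shifts the Treatment mean slope down to -0.5: exactly the situation where a treatment effect exists even though the slope mean itself may be nonsignificant.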
I switched from the common parameterization for a quadratic development, "0 1 9 25" (using my particular time points), to "0 0.04 0.36 1" ("0 0.20 0.60 1" would be the linear trend), where the means of both growth factors (linear and quadratic) represent the average increase between the first and the last measurement point. My question is whether your interpretation of the quadratic factor (later development) still holds for this kind of parameterization.
OK. But could this rescaling of time scores be an issue in mixture modeling? The funny thing is, I get "different" solutions when using one or the other time score parameterization in mixture modeling. The solutions differ with regard to the run in which one particular group is extracted. The 3-group and 5-group solutions have the same groups and sizes under both time score scalings. But in a 4-group run, Mplus extracts a group (under one kind of time score scaling) that under the other kind of scaling is only extracted in the 6-group run, and vice versa. So the groups and the point at which they are extracted are mixed up a little, depending on the time score scaling. Anyway, the solution with my rescaled time scores fits my theory better. So the question is clear: is it in principle possible to use this sort of time score scaling (mean of slope is average change between first and last measurement point) in mixture modeling?
It is hard to say much without seeing more information. Rescaling should not result in different results. Please send the two outputs where the two scalings are shown and your license number to email@example.com.
Vincent May posted on Saturday, August 09, 2008 - 7:07 am
Hello! I have much the same problem as in the post of April 14, 11:27 am. I'm a little bit confused, and maybe it is not a big problem and I'm thinking too much.
My treatment is coded 0 = intervention and 1 = control. I have a treatment effect (coefficient sign is positive) on the linear slope of substance abuse, but the slope mean is negative and insignificant (I have tested significance of the slope mean in an unconditional model).
Does that suggest that the intervention has an iatrogenic effect on the slope of substance abuse or is it a positive and desirable effect? How would you characterize this effect? Many thanks!!!!
Positive effect means "higher" slope. Think of the growth line as a hand on a clock, where the hand can turn counterclockwise or clockwise. A positive effect on the slope implies a counterclockwise turn (no matter where the hand started from). So, yes, an iatrogenic effect is suggested.
Vincent May posted on Sunday, August 10, 2008 - 6:56 am
Thank you, very good example. But my treatment variable is coded treatment = 0 and control = 1, so belonging to the control group means that you have a "higher" slope?! Is this correct? I was irritated by the negative, though insignificant, slope. But this plays no role, as I understand your posting.
On an earlier post, it was noted that > 3 time points are required to include a quadratic growth factor. Why is this? For example, I have three points of data and the data look quadratic (e.g., time 1 = 3, time 2 = 7, time 3 = 2). Perhaps I am using the term incorrectly? I would like to run an analysis like Example 6.13, growth model for two parallel processes for continuous outcomes, but did not think I could because of the way the data look.
With three time points, the unrestricted H1 model has 9 free parameters: three means and 6 variances/covariances. The H0 quadratic growth model has 12 parameters: three means for the intercept, slope, and quadratic growth factors; three variances for the intercept, slope, and quadratic growth factors; three covariances among the growth factors; and three residual variances of the outcome. For three time points, at least three restrictions would have to be imposed to identify the model.
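Linda's parameter count can be written out as a small sketch (just the counting rule, nothing Mplus-specific):

```python
# Free parameters of the unrestricted (H1) model with p time points:
# p means plus p*(p+1)/2 variances/covariances
def h1_params(p):
    return p + p * (p + 1) // 2

# Free parameters of the quadratic growth (H0) model:
# 3 growth factor means + 3 variances + 3 covariances + p residual variances
def h0_quadratic_params(p):
    return 3 + 3 + 3 + p

print(h1_params(3), h0_quadratic_params(3))   # 9 vs 12: 3 restrictions needed
print(h1_params(4), h0_quadratic_params(4))   # 14 vs 13: identified at 4 waves
```

With four time points, H0 has fewer parameters than H1, which is why quadratic growth models become identifiable at four waves (as in Example 6.9 of the Mplus User's Guide, mentioned below).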
Kirsten Bank posted on Thursday, September 25, 2008 - 12:29 pm
I have a conditional quadratic growth model with a) a positive slope and a negative quadratic factor for self-concept (5 measurement points) and b) a positive effect of a covariate (achievement) on the slope and a negative effect of the same covariate on the negative quadratic trend. How do I interpret that? Is it right that a) means that there is first an increase in self-concept and later a decrease? Or does the negative quadratic trend mean that the increase is later not as fast as at the beginning, but with no decrease? My interpretation of b) is that higher achievement has a positive effect on the increase of self-concept. Can I say so? And what about the negative effect on the negative quadratic factor? Does it mean that the braking / decrease of the self-concept trajectory is less strong if you have higher achievement? Or that it is even stronger (maybe because of a ceiling effect)? Thanks.
Regarding "that a) means that there is first an increase in self-concept and later a decrease? Or does the negative quadratic trend mean that the increase later is not as fast as at the beginning but no decrease?" - this choice cannot be determined unless you plot the estimated mean growth curve.
Regarding "My interpretation of b) is that a higher achievement has a positive effect on the increase of self-concept. Can I say so?" - the answer is yes.
Regarding "And what about the negative effect on the negative quadratic factor?" - that means that the quadratic factor value is even more negative as achievement increases.
You would need to impose three restrictions on a quadratic model for three time points for it to be identified. I do not think this is a good idea because you have no way to test whether the restrictions are valid. I would not fit a quadratic model with fewer than four time points. I know of no reference for this issue.
Dear Dr. Muthén: I'm running growth curve analyses with continuous variables (5 time points, from grade 7 to 11). One of my curves is best represented by a quadratic slope. When I fix the time points to 0, 1, 2, 3, and 4, the correlation between the linear and quadratic factors is quite large (-.87). I read on the Mplus discussion site that one way to reduce the collinearity between the linear and quadratic slopes is to center at the mean time by using this formula: y_it = a_i + b_i * (x_t - xbar) + c_i * (x_t - xbar)^2 + e_it. Here, this would result in using -2, -1, 0, 1, and 2 as time points. The problem is that when I try to predict outcomes with this growth curve, there are significant associations with the slope when I fix the growth parameters from 0 to 4. However, when I fix them from -2 to 2, the results are no longer significant. Interestingly, when I move the zero point to the left, my results remain significant (-1, 0, 1, 2, and 3), but when I move it to the right (-2, -1, 0, 1, 2; -3, -2, -1, 0, 1), they are not. Is it normal that when changing the growth parameters, the results change quite dramatically (from significant to nonsignificant)? Can this highlight a major problem with collinearity in the data? Thank you very much for your time and help.
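What moves around here is the meaning of the linear factor: under a quadratic model centered at time k, the linear factor becomes b + 2ck, the instantaneous rate of change at occasion k rather than the overall trend. A numeric check (the curve values a = 2, b = 1, c = -0.3 are invented for illustration):

```python
import numpy as np

# A hypothetical quadratic mean curve y = a + b*t + c*t^2
a, b, c = 2.0, 1.0, -0.3
t = np.arange(5.0)                      # time scores 0..4
y = a + b * t + c * t ** 2

# Refit the same curve with time centered at each occasion k in turn
linear_at = {}
for k in range(5):
    tk = t - k
    X = np.column_stack([np.ones_like(tk), tk, tk ** 2])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    linear_at[k] = coefs[1]             # linear coefficient after recentering
    print(k, round(coefs[1], 3), round(b + 2 * c * k, 3))
```

So changing significance as the zero point slides is expected: each centering asks a different question (is the curve still rising at that occasion?) rather than signaling a collinearity problem in the data.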
Hi! I have an associated LGM and let the growth factors correlate between the growth curves. However, I have two problems.
1) In one of the growth curves there is a significant q mean, but no significant variance. To simplify the model I set the q variance to zero, and I get the impression that this small variance is (partly) soaked up by the linear factor "s" (its variance became bigger). Am I right? How do I interpret the covariance of this "altered" s-factor (after the q variance is set to zero) with growth factors from the associated LGMs?
2) The mean of the s-factor is not significant (however, the variance is significant). How do I interpret significant positive covariances with other growth factors? Does your statement "b" from April 11, 2008, 13:30h still apply, also to correlations between growth factors?
1) If there is a small but not negative q variance I would keep it in the model. I don't think you want to enter into interpretations about altered s variance.
2) The average trend can be zero while still having individual variation around that trend. So a positive correlation means that the higher a person's slope value, the higher the person's ....
nina chien posted on Wednesday, November 18, 2009 - 5:49 pm
I have four timepoints, and tried to fit a quadratic growth model.
I encountered an error message, "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 4."
So I fixed one of my parameters (s@0) and then the quadratic model ran fine.
Question 1) To fit a quadratic growth curve model with only 4 time points, do I always have to impose one (or more) restrictions? I had thought not, because the example in the Mplus manual (6.9) has only 4 time points for a quadratic growth curve model.
Question 2) If I do have to impose restrictions, is there a recommendation for a good place to impose such a restriction? Or should I always inspect the model output before deciding which parameter to fix?
We are interested in the co-development of anxiety, depression, and ODD in adolescent females. We have 4 waves of data (ages 12-24) and are using time scores. The univariate models indicate that all three processes are quadratic (significant quadratic means). Therefore, we ran a quadratic parallel process model. We could use some help interpreting this quadratic parallel process model. We are interested in how these constructs are related over time, so we want to know the intercept-intercept covariance, slope-slope covariance, and the residual-residual covariance. We are aware of how to interpret this in a linear parallel model (it was linear for males in our sample). However, we are unsure how to interpret the model with the addition of a quadratic slope, because we know that the linear and quadratic slopes are confounded with one another. Do you have any suggestions on how we should be interpreting this model? Thanks, Kara
That's hard to separate out as you say. How about instead doing a series of runs where you center the time scores at each of the 4 waves? So a variation on the Muthen-Muthen (2000) theme. The intercept then captures the systematic part of the growth at that time point. Then you only have to look at the intercept covariation.
Thank you for your quick response. Could you please clarify further how interpreting the intercept-intercept covariance at each time point helps us get around the confounded slope-slope covariances?
Does a signficant intercept-intercept covariance at each time point suggest that these two constructs "travel" together, or only that they are related at each time point? Does doing this allow us to say anything about how these constructs change together over time?
Take a look at Muthen-Muthen (2000) on our website.
The approach I suggest says how much they are related at different time points. It is hard to show that the processes travel together - certainly with a quadratic model where you have this confounding of effects. Orthogonal polynomials can be attempted, but interpretations are not easy.
Some researchers try using an underlying slope factor - a second-order factor - that lies behind the slopes of all the processes.
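Orthogonal polynomial time scores, mentioned a few times above, can be built with a QR decomposition; a minimal numpy sketch for five equally spaced occasions (an assumption for illustration):

```python
import numpy as np

# Constant, linear, and quadratic columns for time scores 0..4
t = np.arange(5.0)
X = np.column_stack([np.ones_like(t), t, t ** 2])

# QR orthogonalizes the columns: Q[:, 1] and Q[:, 2] serve as orthogonal
# polynomial linear and quadratic loadings
Q, _ = np.linalg.qr(X)
lin_orth, quad_orth = Q[:, 1], Q[:, 2]

# Their inner product is zero (up to floating point), removing the
# built-in collinearity of t and t^2
print(float(lin_orth @ quad_orth))
```

The catch, as noted above, is interpretation: the orthogonalized quadratic loadings mix the raw linear and quadratic trends, so the factors no longer carry the simple slope/curvature meaning.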
I have 4 quarters of household water consumption data. The data are fairly consistent with seasonal periods (i.e., Quarter 1 - Winter; Quarter 2 - Spring; Quarter 3 - Summer; Quarter 4 - Fall). Given temperature and rainfall variations across seasons, the consumption means rise from Winter to Summer and then decrease to a level in Quarter 4 that lies between consumption in Quarters 1 and 2. I ran an LGC model in which the slope loading for Quarter 4 was free. The loadings for Quarters 1 to 3 were 0, 1, 2, respectively. The model fit was good. I then ran another LGC model which added a quadratic factor, and I fixed the Quarter 4 loading on the slope factor to equal 3.
The model fit was poor. The theta and psi matrices were both not positive definite. I'm interpreting this to mean that the quadratic model doesn't fit well and that I should go with the original optimal curve model.
I also ran both models with covariates and the outcome was the same. The two-factor model fitted well and the three-factor (quadratic) model output reported non-positive definite matrices.
I was trying to model a multi-group growth curve with a quadratic term.
Out of the 5 groups, I got warnings for two of the groups, as follows.
When I get these warnings for only some of the groups, what should I do?
========================================== 1) WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP CODA IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE S.
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP EARLYBABY IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE S.
I'm running a growth curve model that fits best with a quadratic factor included, to predict an outcome. The collinearity between the linear slope and the quadratic factor is leading to largely inflated estimates in this prediction. I've considered 1) setting the quadratic variance to 0 (however, the variance estimate is significant) or 2) recentering at the middle time point, which fixes the collinearity issue but leads to problems in interpreting change over time as a predictor of the outcome (i.e., the slope is no longer a significant predictor). Any thoughts/alternative suggestions would be greatly appreciated.
Thanks Linda. Having looked at our model and recentered the intercept at each time point as suggested in Muthén & Muthén (2000), it looks as if the early time points are driving prediction.
If I'm following along correctly from the 2000 paper as well as from Bengt's post from May 05, 2009 above, would I be correct to state that having a quadratic growth model but only using the intercept and/or slope as predictors in the regression portion of the model would be okay (i.e., not including the quadratic term in the regression part of the model)? Thanks again.
Yes, although that simplifies the question being answered. The intercept represents the systematic part of the growth (that is, minus the time-specific residual) at the time point where the time score is zero. So you are predicting from the level where the person is at that point, not from how he got there (so not a function of the slope, steep or not steep).
If you want to predict from the shape of how a person got to where he is, it is probably better to use growth mixture modeling and predict from the latent class variable - that's what I have chosen to do.
We are constructing a parallel growth model with quadratic time scores in two growth processes (with continuous response variables). The quadratic model seems to fit the data better than the linear model. We have some problems interpreting the results.
1) There is a negative residual variance for the first observed variable that is not statistically significant. Is it ok to fix it to zero?
2) For one of the processes, the linear and quadratic slope factor variances are negative and non-significant. Should these as well be fixed to zero?
3) There are some statistically non-significant covariances between intercept and rate factors. Should these be set to zero? And does it matter whether these are between or within the two growth processes?
4) Should we try orthogonal polynomials (we have six unequally measured time scores) to potentially fix the above mentioned concerns?
5) Our current model has 39 free parameters, but we only have 212 subjects. Are we overfitting here?
Thank you! I'd like to ask a follow-up question relating to my previous question no. 3. So, you would not recommend fixing non-significant covariances to zero? I'm currently reading Byrne's book "SEM with Mplus", and there, for the sake of parsimony, non-significant covariances between the intercept and slope factors of a parallel LGC model were fixed to zero. I'm thus a bit confused about what would be the best strategy. The current model doesn't fit the data well, so I suppose some modifications are in order (adding residual covariances?)...
Thank you for your helpful answers. I still have a few more follow-up questions.
1. What is the interpretation of the intercept when the first one or two time points for a subject are missing? Does FIML handle the missing values so that the intercept is still the value at time 0?
2. I want to modify the growth model to make it fit better to the data. In which order should I proceed?
a. Set residual variances of the time points to be equal?
b. Estimate residual covariances between different time points?
3. Based on AIC values a quadratic growth model is clearly better than a linear one. However, a plot of sample and estimated means shows a very much linear relation, and the fit indices of the quadratic model are not great. Is it ok to use a linear model?
4. I am trying to model an interaction between factor intercept and linear growth factor, which seems to mean that I have to use numerical integration. However, model fit indices are not available. Is there a way of assessing absolute model fit of the model to the data?
Thank you once again for your prompt answers. I have yet again a few questions regarding the model I am constructing.
1. Since I am using numerical integration, and cannot assess the absolute fit of the model, should I first attempt to modify the latent growth model portion of the model, which does not require numerical integration (the interaction is not used here) or assess the relative fit of the whole model to other candidate models?
2. In order to improve the fit of our model to the data I modified the model on the basis of modification indices by allowing error covariances between some adjacent time points. However, the modification indices suggested some error covariances that were far apart in terms of time (for example between the first and last time points, I had a total of six time points). Is it ok to estimate these covariances even though their interpretation does not make sense to me?
3. Related to the above question: by estimating the suggested covariances, at some point the slope factor variance changes from being significantly positive to significantly negative. I know from a random regression model that there is significant variation between subjects. What's going on?
There seems to be a general linear trend in growth in my data, except that there is a slight upward hump between two time points in the middle of our data. I have freed one time score as shown in the code below:
2. I'm also interested in how to interpret a parallel growth curve model where both growth processes include free time scores. Given e.g. the code above (both processes modelled in the same way), does the covariance/regression coefficient between the two slope growth factors indicate a linear association between the processes (although the individual processes have also non-linear parts)?
The means, variances, and covariances of the growth factors apply only to the first and last time points. Using so many free time scores makes for a difficult-to-interpret growth model. I would free time scores sparingly.
I have two questions: (1) When estimating a quadratic model, a linear latent slope factor is also estimated. Is it allowed to estimate the model without the linear slope factor? If not, in which other cases do I also need a linear slope factor (cubic? inverse? exponential?)?
(2) When I estimate a curvilinear model, is it possible to estimate the individual breakpoint of the curve? To be more precise, I would like to predict the different breakpoints by several covariates.
If I have three time points and I see that my graph looks like a roof, because the values at T2 are always higher than at T1 and T3 (e.g., means T1: 420 ms, T2: 470 ms, T3: 400 ms), then I could specify my parameters as follows:
The model fits well, but I noticed that the output for the quadratic growth factor goes like this: t1=0 t2=0 t3=0 t4=.001 t5=.002 ... t15=.020 t16=.023 t17=.026 t18=.029
That doesn't seem right to me, as it is not progressing quadratically - does this seem incorrect, and could it be due to the rescaling?
I have tried the model with normal time scaling (i.e., xt2@1 ...) and the fit statistics are identical, but the estimates for the s and q parameters and covariances are different. Can you not rescale like this for quadratic models?
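A linear rescaling of the time scores (with the quadratic loadings as their squares) spans exactly the same {1, t, t^2} space as the original parameterization, which is why the fit statistics are identical while the s and q estimates differ. A numpy sketch, using the times 0, 1, 3, 5 behind the "0 1 9 25" parameterization from earlier in the thread:

```python
import numpy as np

t = np.array([0.0, 1.0, 3.0, 5.0])
A = np.column_stack([np.ones(4), t, t ** 2])    # loadings "0 1 9 25"
s = t / t.max()                                 # rescale time to run 0..1
B = np.column_stack([np.ones(4), s, s ** 2])    # loadings "0 0.04 0.36 1"

# Each column of B is an exact linear combination of the columns of A,
# so both loading matrices imply the same set of mean curves
coef, *_ = np.linalg.lstsq(A, B, rcond=None)
print(np.allclose(A @ coef, B))
```

So rescaling is legitimate in principle; in mixture modeling, differing solutions across scalings usually reflect local optima (try more random starts) rather than the scaling itself.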