Anonymous posted on Wednesday, October 30, 2002 - 12:25 pm
One important advantage of LGM is that it allows one to examine inter-individual differences in intra-individual growth in longitudinal studies. These inter-individual differences are captured by the variances of the growth factors.
To my understanding, if the variance of a slope factor is not statistically significant, I would say that there are no significant inter-individual differences with regard to change in the outcome measure under study.
In my recent data analysis, I found that the variance of a slope factor, without covariates, was not statistically significant. This indicates that the changes over time in the outcome measure did not differ significantly across individuals. However, once I added covariates to the LGM, I found that two covariates (ethnicity and age) had significant effects on the slope factor, and the R-square was 0.42 (i.e., about 42% of the variation in the slope factor was explained by the covariates). How should I interpret these results? Your help will be appreciated!
If the slope growth factor mean is significant, it is significantly different from zero, which means that there is development over time on average. If the slope growth factor variance is significant, not all individuals grow at the same rate; there is significant variability in their growth rates. If the slope growth factor variance is not significant, there is no evidence that individuals differ in their growth rates. Even if the slope growth factor variance is not statistically significant without covariates, including covariates often shows that they have significant influence on the slope, so that the slope does vary (as a function of the covariates). These seemingly conflicting results may be due to higher power to detect slope variability when covariates are included.
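For readers new to this discussion, the parameters being talked about come from an unconditional growth model. A minimal sketch in Mplus might look like the following (the variable names y1-y4 and the equally spaced time scores are hypothetical); the means and variances of i and s referred to above appear in the standard output:

```
! Hypothetical sketch: unconditional linear growth model
! with four repeated measures y1-y4.
MODEL:
  i s | y1@0 y2@1 y3@2 y4@3;
! The output reports the means of i and s (average starting
! point and average growth rate) and their variances
! (inter-individual differences in level and in growth).
```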
Anonymous posted on Tuesday, August 10, 2004 - 7:18 am
I have a similar situation in a simultaneous process model. I have a non-significant mean slope with non-significant variance. Yet, this slope factor is significantly correlated with the slope factor of the second growth process. There are no regression parameters in the model, just the intercept and slope parameters for the two processes. How should I interpret this? Thank you,
bmuthen posted on Friday, August 13, 2004 - 4:57 pm
I think it could be possible that you cannot reject a zero variance but can reject a zero covariance. It may just be a matter of power. On the other hand, the non-zero covariance between the growth factors may mask a model misspecification where the outcomes of the two processes need to correlate - and the correlation between the slopes helps them do that - but the correct way to represent the outcome correlation may be between contemporaneous residuals.
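As a sketch of the alternative specification mentioned above (all variable names are hypothetical), the correlation between the two outcome series can be carried by contemporaneous residual covariances rather than by the slope covariance:

```
! Hypothetical parallel process model where the outcome
! correlation is represented by contemporaneous residual
! covariances instead of the slope-slope covariance.
MODEL:
  i1 s1 | y1@0 y2@1 y3@2 y4@3;
  i2 s2 | z1@0 z2@1 z3@2 z4@3;
  y1 WITH z1;  y2 WITH z2;
  y3 WITH z3;  y4 WITH z4;
```

Comparing the fit of this specification with the slope-covariance specification can show which representation the data prefer.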
Jungmeen Kim posted on Thursday, February 21, 2008 - 2:28 pm
We ran parallel process growth models (between attention and anger) using standardized scores (since we had different reporters and slightly different questions for the variables, we had to standardize the scores to make composites), so we know that there will be no significant mean changes over time. We then found that the intercept of the attention variable was significantly predictive of the slope of the anger variable. The anger slope had a significant variance. How can we interpret this significant and negative regression path? Since the mean of the anger slope is not significant, does this mean that a higher attention intercept is predictive of smaller (since it is negative) "variances" of anger?
Thank you for your attention and guidance in advance!
No, a negative influence on a slope implies that as the predictor increases, the slope decreases; it has nothing to do with the slope variance. Because your anger slope mean is not significantly different from zero, this means that the slope becomes negative as the predictor value increases.
But you should not do growth modeling on standardized scores - see the dangers described in the Seltzer article "The Metric Matters".
I am doing a twin-singleton comparison (with grouping option) on latent growth curves of externalizing problem behavior. I want to investigate if the growth factor variances of the twins are different from the growth factor variances of the singletons. I tested this by using equality constraints, like: i (1). Then, I did a chi-square diff test by comparing the chi-sq of this model with the chi-sq of the unconstrained model.
My question is: is this a correct approach to test for variance differences in growth factors? I believe I should do an F-test instead of a chi-square diff test, but how can I do this in Mplus?
A related question: is it okay to test for group differences in growth factor MEANS using chi-sq diff testing, or do I also need another test for this?
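The chi-square difference approach described above might be set up as follows (the group labels and variable names are hypothetical). With ML estimation, the difference between the chi-square of the constrained model and that of the unconstrained model is itself chi-square distributed, with degrees of freedom equal to the number of constraints:

```
! Hypothetical two-group setup: growth factor variances
! held equal across twins and singletons via labels.
VARIABLE:
  GROUPING = status (1 = twin 2 = singleton);
MODEL:
  i s | y1@0 y2@1 y3@2 y4@3;
  i (1);   ! intercept variance held equal across groups
  s (2);   ! slope variance held equal across groups
! Compare this model's chi-square with that of the model
! without the (1) and (2) labels; the difference is
! chi-square with 2 degrees of freedom.
```

Growth factor means can be tested for group differences in the same way, by labeling the means instead of the variances.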
jemila seid posted on Monday, August 25, 2008 - 7:04 pm
I am new to Mplus and doing a simple growth curve modeling - but with four dependent variables simultaneously.
I am just wondering if it is possible to get confidence intervals and/or p-values for the estimated correlations of the latent variables. If so, I would appreciate if you could tell me in which output command I can find these values.
You would need to use MODEL CONSTRAINT to define these correlations. Then you would be able to get confidence intervals and p-values. See the user's guide for further information about MODEL CONSTRAINT.
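A sketch of that approach (the labels cov12, v1, v2 and the factor names are arbitrary): label the covariance and the variances in MODEL, then define the correlation as a NEW parameter in MODEL CONSTRAINT so that Mplus reports its standard error, p-value, and confidence interval:

```
! Hypothetical sketch: correlation between two growth factors
! expressed as a NEW parameter.
MODEL:
  s1 WITH s2 (cov12);   ! label the covariance
  s1 (v1);              ! label the variance of s1
  s2 (v2);              ! label the variance of s2
MODEL CONSTRAINT:
  NEW(corr12);
  corr12 = cov12 / SQRT(v1*v2);
OUTPUT:
  CINTERVAL;
```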
jemila seid posted on Saturday, October 04, 2008 - 4:58 am
Thanks a lot, Linda - I appreciate it. Is it possible to get residual correlations, that is, correlations between latent variables left unexplained by the model? How do I get them in the output?
I am having a little bit of trouble calculating the variances for my intercept and slope in my latent growth curve. When examining the standardized values for the variances, I get an estimate of 1.00, 999 for est/se, and 999 for the p-value. My outcome is physical activity level (a continuous variable). I know that 70% of my sample is sedentary at some point during the seven waves of data, and the data are nowhere near normally distributed. Is a negative variance the problem I am encountering? If so, would this be caused by non-normal data or some type of floor effect? How can I fix it?
All variances are standardized to the value of one. Given the preponderance of zeroes in your variable, you might consider two-part growth modeling as shown in Example 6.16. See also Two-Part Growth Modeling under Papers on the website.
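Following the logic of Example 6.16, two-part growth modeling splits each semicontinuous outcome into a binary part (any activity vs. none) and a continuous part (amount of activity, given any). A hypothetical sketch for seven waves (all variable names are made up for illustration):

```
! Hypothetical sketch of two-part growth modeling
! (cf. Mplus User's Guide Example 6.16).
DATA TWOPART:
  NAMES = y1-y7;          ! original semicontinuous outcomes
  BINARY = u1-u7;         ! created 0/1 indicators (any activity)
  CONTINUOUS = yc1-yc7;   ! created continuous parts
VARIABLE:
  CATEGORICAL = u1-u7;
ANALYSIS:
  ESTIMATOR = MLR;
  ALGORITHM = INTEGRATION;
MODEL:
  iu su | u1@0 u2@1 u3@2 u4@3 u5@4 u6@5 u7@6;
  ic sc | yc1@0 yc2@1 yc3@2 yc4@3 yc5@4 yc6@5 yc7@6;
```

Because each part has its own growth factors, a parallel process version (change in one variable predicting change in another) is still possible, with two such pairs of processes.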
Thank you for the suggestion on two-part growth modelling. My research looks at how change in one variable influences change in another variable (parallel process model). Would I still be able to see how change influences change with a two-part growth model?
Just a short question regarding slope variances and individual differences in change:
1.) Should I use a one-tailed p-value when evaluating the slope variance? The question is, after all, whether the variance is greater than 0.
2.) The strategy proposed by Hertzog et al. (2008) (comparing a "random-intercept, fixed-slope model" with a "random-intercept, random-slope model") often yields different conclusions: the random slope is more often rejected.
What would you suggest? What is the best way to analyse individual differences, especially in connection with the second posting at the top (despite non-significant slope variances, covariates have significant effects)?
Would it be adequate not to "overrate" the slope variance, in case one is especially interested in the effects of covariates?
I don't think you need to emphasize how precisely the slope variance is estimated in order to let the slope be either predicted by a covariate or predicting another slope. You are more interested in how precisely those relationships are estimated.
I'm running some unconditional two-part LGMs and would like to add covariates to my models. I have two questions in this regard:
1) The variances of my growth factors (linear and quadratic terms) are not significant. Can I still add (and interpret in a meaningful way) covariates?
2) The continuous part of the model fits the data better when I introduce a cubic term. Yet I get a warning: "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES." I solved the problem by fixing the variances of the quadratic and cubic terms to zero. However, again, can I still add and interpret covariates?
Thanks a lot, Linda! I would like to ask you one more question: some of the covariates I add to my model are significantly related to the growth factors. Yet the growth factors that were significant in the unconditional model now turn out to be non-significant. Is this possible? Can I interpret my results without problems? Thanks!!!
I think you are saying the variances of the growth factors were significant in the unconditional model and that the residual variances of the growth factors are not significant in the conditional model. These are not the same parameters. This is to be expected because you are explaining the variance by the covariates. The residual variance is what is still not explained.
Sorry, I was not clear enough. Actually, I meant the means of the growth factors, not the variances. In the unconditional model I obtain means for all my growth factors, and they are all significant. When I add predictors (even simply gender), I no longer obtain estimated means for the growth factors but rather estimated intercepts, which are all non-significant. I thought that the estimated means and intercepts for the growth factors were the same parameters, but I assume I am confusing something! Thus, is it possible that I obtain significant estimated means for growth factors in the unconditional model and non-significant estimated intercepts for growth factors in the conditional model? Can I interpret the effects of my covariates based on the means of the growth factors I obtained in the unconditional model? Thanks a lot for your help!!!
I still have a main issue with my two-part models. In the unconditional model I fixed at zero the variances for the slope and quadratic terms of the continuous part of the model, as well as all the covariances except the one between the two intercepts. This model worked well. Yet, when I introduce predictors, the covariance between the intercepts is higher than 1. I tried centering the intercept at a different time point, but it did not help. Also, both intercept variances are significant. Do you have any suggestion about how to solve this? Can I fix it at zero? Thanks a lot!
I am writing up a paper explaining a situation in which the slope variance in an unconditional model was not significant; however, I have significant time-invariant covariates predicting slope variation. From your previous posts I understand that there is more power to detect slope variability when covariates are added. I was wondering if you could point me to a paper or a chapter that discusses this issue in more detail.
Reiko Hirai posted on Saturday, February 02, 2013 - 2:58 pm
Dear Dr. Muthen,
I am new to the Mplus discussion board. I am looking for a reference to justify my analyses. Two of my outcome variables do not have significant slope variances. I have significant intercept variances for both outcomes. From what I have read, Nagin recommends not running trajectory analyses if there is no slope variance. I heard that you take a different approach - that it is still useful to run trajectory analyses if there is significant intercept variance. I was unable to find the citation. Could you kindly point me to the appropriate article or book? I appreciate your help very much.
When you say trajectory analysis, do you mean using latent trajectory classes?
And when you say significant slope variance, are you referring to a regular growth model with random effects, that is, a 1-class model?
Reiko Hirai posted on Saturday, February 02, 2013 - 4:34 pm
Dear Dr. Muthen,
I am sorry that I was not clear. Yes, I believe it is latent trajectory class analysis. I ran a single-class model without constraining the variances to zero in order to test the significance of the slope and intercept variances. The output indicated significant variance in the intercept but non-significant variance in the slope. I hope I have provided enough information. Thank you very much for your prompt response.
I think you can find trajectory classes even if only the intercept has significant variance in a single-class random effect growth model. For instance, groups of individuals may have distinctly different starting points but develop approximately at the same rate. I know of no specific citation for this, but I don't think it needs it.
Reiko Hirai posted on Tuesday, February 05, 2013 - 8:12 pm
Dear Dr. Muthen,
Thank you very much for your answer. Hearing it from you made me feel good about all the analyses I have done.
Dr. Muthen: I have a question about setting a slope factor variance to 0 in a piecewise growth model. The observed dependent variables are five time points of suicidal ideation. There are 2 linear slope factors: S1 (from time 1 to time 2), and S2 (time 2 through time 5). In order to get the model to run properly we set the variance of S1 to 0 (otherwise the standard errors could not be computed). Setting S1@0 is fine, because the computed variance is very small and non-significant. In our model we examined the regressions of the intercept and both slope factors on our covariates. Mplus provides answers for the regressions of S1 on the family predictors, a couple of which are significant. My question is: What do those findings for S1 mean? That is, how can covariates predict S1 when there is no variance in S1? Are the answers interpretable? If not, should we omit regressions of S1 on predictors in the model, in other words, only look at regressions of I and S2 on the predictors? Omitting S1 regressions changes the answers for regressions involving S2, so it is not an inconsequential decision. Thank you!
Without covariates the s1 variance is not identified, so any estimates you see are not trustworthy (as you say, the standard error problem signals the non-identification). With covariates, fixing s1@0 implies that the s1 residual variance is zero. The regression of s1 on the covariates is still fine. For instance, a gender covariate says that the s1 mean is different for males and females.
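A sketch of the setup being described (the variable and covariate names are hypothetical): the piecewise model fixes the s1 residual variance at zero but still regresses s1 on the covariates, so the covariates shift the s1 mean rather than explain residual variation:

```
! Hypothetical piecewise growth model: s1 covers time 1-2,
! s2 covers time 2-5; s1 residual variance fixed at zero.
MODEL:
  i s1 | y1@0 y2@1 y3@1 y4@1 y5@1;
  i s2 | y1@0 y2@0 y3@1 y4@2 y5@3;
  s1@0;                 ! zero residual variance for s1
  i s1 s2 ON x1 x2;     ! covariates still predict the s1 mean
```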
My colleagues and I are working on a paper in which we have conducted an LGM and found a non-significant slope variance (p = .12). Based on the previous posts, we decided to still conduct a conditional LGM, and we found significant effects of predictors on the slope. Yet a reviewer is now questioning our analyses, stating that given the non-significant slope variance we should never have moved on to examine a conditional model. So, my question is: do you know any paper or book to which we could refer to support our decision? May we cite this post as a "personal communication"? Thank you so much for your help!
I would not go by the reviewer's rule. It is often found that adding covariates can tease out variance in a slope, presumably due to increased power. I can't put my finger on a reference right now - others? Also, the usual z test (Wald test for one parameter) is known to have low power; see, e.g., Berkhof-Snijders (2001) in JEBS.
I'm testing a simple growth model with 4 time points. The linear (a) and quadratic (b) models show poor fit, so I tested a "free growth model" (c) with two of the 4 time scores free (estimated). This model has a good fit (and visually, the growth seems to be non-linear and non-quadratic, so this makes sense).
However, for each of the above models I get the "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE..." warning. In models a and c, the warning says "PROBLEM INVOLVING VARIABLE I" and in model b, the problematic variable is S.
I found in the outputs that the variance of the problematic latent variable (I or S) is negative, causing the warning (correlations between latent variables are clearly below 1).
But I don't know why this happens and what to do with it. The data looks OK, kurtosis and skew are within acceptable limits for all variables etc.
It sounds like the negative residual variances are not small and insignificant. In this case, they should not be fixed at zero. Instead you need to find a better model for your data.
Katy Roche posted on Tuesday, January 07, 2014 - 10:07 am
I am having trouble finding the slope variances in my conditional model (and I did request TECH4). Although I can see them in my unconditional model output, I just do not see them in the conditional model output. Can you let me know if there is a default suppressing these, or how I can call them out?
I have computed a growth model with four time points, and the results indicate significant intercept variance. I went to examine values at +1 SD above and -1 SD below the intercept mean, and the value at +1 SD was above my scale. My scale is 1-5, and the value at +1 SD = 5.09. I have double-checked my values and they are in the correct range (i.e., 1-5). Do you know what could be happening here? Also, the SRMR is indicating poor model fit (i.e., .14), but the chi-square and the other fit indices are not.