Variances of growth factors
Mplus Discussion > Growth Modeling of Longitudinal Data
 Anonymous posted on Wednesday, October 30, 2002 - 12:25 pm
One important advantage of LGM is that the model allows one to examine inter-individual differences in intra-individual growth in longitudinal studies. And the inter-individual differences are captured by the variances of the growth factors.

To my understanding, if the variance of a slope factor is not statistically significant, I would say that there are no significant inter-individual differences with regard to changes in the outcome measure under study.

In my recent data analysis, I found that the variance of a slope factor, without covariates, was not statistically significant. This indicates that the changes over time in the outcome measure were not significantly different across individuals. However, once I added covariates to the LGM, I found that two covariates (ethnicity and age) had significant effects on the slope factor, and the R-square was 0.42 (i.e., about 42% of the variation in the slope factor was explained by the covariates). How should I interpret these results?
Your help will be appreciated!
 Linda K. Muthen posted on Wednesday, October 30, 2002 - 4:06 pm
If the slope growth factor mean is significant, this means that it is significantly different from zero, which means that there is development over time on average. If the slope growth factor variance is significant, this means that not all individuals grow at the same rate, but that there is significant variability in their growth rates. If the slope growth factor variance is not significant, this means that we cannot reject the hypothesis that all individuals have the same growth rate. Even if the slope growth factor variance is not statistically significant without covariates, inclusion of covariates often shows that they have significant influence on the slope, so that the slope does vary (as a function of the covariates). These seemingly conflicting results may be due to higher power to detect slope variability when covariates are included.
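[Editor's note] The significance statements above rest on the ratio Mplus reports as Est./S.E., referred to a standard normal distribution. A minimal sketch in Python; the estimate and standard error below are made-up numbers, not from any actual output:

```python
import math

def wald_z_test(estimate, se):
    """Two-sided Wald z-test, as in the Mplus Est./S.E. column."""
    z = estimate / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Hypothetical slope-variance estimate and standard error:
z, p = wald_z_test(0.25, 0.15)
print(round(z, 2), round(p, 3))  # z is about 1.67: not significant at the .05 level
```

As raised later in the thread, a one-sided test is sometimes argued for a variance, and this Wald test is known to have low power when the true variance is near the zero boundary.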
 Anonymous posted on Tuesday, August 10, 2004 - 7:18 am
I have a similar situation in a simultaneous process model. I have a non-significant mean slope with non-significant variance. Yet, this slope factor is significantly correlated with the slope factor of the second growth process. There are no regression parameters in the model, just the intercept and slope parameters for the two processes. How should I interpret this? Thank you,

 bmuthen posted on Friday, August 13, 2004 - 4:57 pm
I think it could be possible that you cannot reject a zero variance but can reject a zero covariance. It may just be a matter of power. On the other hand, the non-zero covariance between the growth factors may mask a model misspecification where the outcomes of the 2 processes need to correlate - and the correlation between the slopes helps them do that - but the correct way to represent the outcome correlation may be between contemporaneous residuals.
 Jungmeen Kim posted on Thursday, February 21, 2008 - 2:28 pm
Dear Linda,

We ran parallel process growth models (between attention and anger) using standardized scores (since we had different reporters and slightly different questions for the variables, we had to standardize the scores to make composites), thus we know that there are going to be no significant mean changes over time. Then, we found that the intercept of the attention variable was significantly predictive of the slope of the anger variable. The slope of anger had a significant variance. How can we interpret this significant and negative regression path? Since the mean of the anger slope is not significant, does this mean that a higher intercept of attention is predictive of smaller (since it is negative) "variances" of anger?

Thank you for your attention and guidance in advance!
 Bengt O. Muthen posted on Thursday, February 21, 2008 - 6:04 pm
No, a negative influence on a slope implies that as the predictor increases the slope decreases; it has nothing to do with the slope variance. Because your anger slope mean is not significantly different from zero, this would mean that the slope becomes negative as the predictor value increases.

But you should not do growth modeling on standardized scores - see the dangers described in the Seltzer article "The Metric Matters".
 Sylvana Robbers posted on Thursday, July 24, 2008 - 2:03 am
Dear Dr. Muthen,

I am doing a twin-singleton comparison (with grouping option) on latent growth curves of externalizing problem behavior. I want to investigate if the growth factor variances of the twins are different from the growth factor variances of the singletons.
I tested this by using equality constraints, like: i (1).
Then, I did a chi-square diff test by comparing the chi-sq of this model with the chi-sq of the unconstrained model.

My question is: is this a correct approach to test for variance differences in growth factors? I believe I should do an F-test instead of a chi-square diff test, but how can I do this in Mplus?

A related question: is it okay to test for group differences in growth factor MEANS using chi-sq diff testing, or do I also need another test for this?

Thanks in advance for your time.

 Sylvana Robbers posted on Thursday, July 24, 2008 - 7:26 am
In addition, the estimator used in my analyses is ML.

 Linda K. Muthen posted on Thursday, July 24, 2008 - 9:13 am
Using a chi-square difference test should be fine for both variances and means. The variances are not on the border of the admissible parameter space.
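[Editor's note] The mechanics of this difference test can be sketched in Python. The fit statistics below are made up for illustration, and `chi2_sf` is a small helper (not a library or Mplus function) implementing the chi-square survival function for integer degrees of freedom:

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) for a chi-square with integer df,
    built up from the df=1 and df=2 closed forms via a recurrence."""
    if df % 2 == 1:
        q, k = math.erfc(math.sqrt(x / 2)), 1
    else:
        q, k = math.exp(-x / 2), 2
    while k < df:
        q += (x / 2) ** (k / 2) * math.exp(-x / 2) / math.gamma(k / 2 + 1)
        k += 2
    return q

# Hypothetical fit statistics: growth factor variances held equal
# across groups (constrained) vs. freely estimated (unconstrained).
chi2_constrained, df_constrained = 34.1, 20
chi2_unconstrained, df_unconstrained = 28.7, 17

diff = chi2_constrained - chi2_unconstrained   # 5.4
df_diff = df_constrained - df_unconstrained    # 3
p = chi2_sf(diff, df_diff)
print(round(p, 3))  # p > .05: equality of the variances is not rejected here
```

With these made-up numbers the constraints do not significantly worsen fit, so group equality of the growth factor variances would not be rejected.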
 Sylvana Robbers posted on Thursday, July 24, 2008 - 2:33 pm
Thank you very much for your swift reply.

I am wondering, could I also use loglikelihood difference testing instead? When should chi-square difference testing be applied and not loglikelihood difference testing, or vice versa?


 Linda K. Muthen posted on Thursday, July 24, 2008 - 5:13 pm
You can use either one. They will yield the same results when both are available.
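[Editor's note] Under ML the two tests give literally the same number, because the model chi-square is itself twice a loglikelihood difference against the unrestricted model. A sketch with hypothetical loglikelihoods:

```python
# Hypothetical ML loglikelihoods (all values made up):
ll_saturated = -1850.2       # unrestricted (saturated) model
ll_unconstrained = -1862.9   # fitted model without equality constraints
ll_constrained = -1868.4     # fitted model with equality constraints added

# Model chi-square values as computed under ML:
chi2_unconstrained = 2 * (ll_saturated - ll_unconstrained)   # 25.4
chi2_constrained = 2 * (ll_saturated - ll_constrained)       # 36.4

# The two difference tests yield the same statistic:
diff_from_chi2 = chi2_constrained - chi2_unconstrained       # 11.0
diff_from_ll = 2 * (ll_unconstrained - ll_constrained)       # 11.0
assert abs(diff_from_chi2 - diff_from_ll) < 1e-9
```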
 Sylvana Robbers posted on Friday, July 25, 2008 - 12:10 am
Thanks a lot!
 jemila seid posted on Monday, August 25, 2008 - 7:04 pm
I am new to Mplus and doing a simple growth curve modeling - but with four dependent variables simultaneously.

I am just wondering if it is possible to get confidence intervals and/or p-values for the estimated correlations of the latent variables. If so, I would appreciate if you could tell me in which output command I can find these values.

Thanks a lot
 Linda K. Muthen posted on Tuesday, August 26, 2008 - 9:25 am
You would need to use MODEL CONSTRAINT to define these correlations. Then you would be able to get confidence intervals and p-values. See the user's guide for further information about MODEL CONSTRAINT.
 jemila seid posted on Saturday, October 04, 2008 - 4:58 am
Thanks a lot, Linda - I appreciate it. Is it possible to get residual correlations, that is, correlations between latent variables left unexplained by the model? How do I get them in the output?

Best regards
 Linda K. Muthen posted on Saturday, October 04, 2008 - 11:56 am
If you ask for RESIDUAL in the OUTPUT command, you can get residual covariances.
 jemila seid posted on Monday, December 01, 2008 - 11:40 am
Thanks again, Linda. I appreciate it.
 Wayne deRuiter posted on Monday, June 14, 2010 - 1:38 pm
I am having a little bit of trouble calculating the variances for my intercept and slope in my latent growth curve. When examining the standardized values for the variances, I get an estimate of 1.00, 999 for the est/se, and 999 for the p-value. My outcome is physical activity level (a continuous variable). I know that 70% of my sample is sedentary at some point during the seven waves of data, and the data is nowhere near normally distributed. Is a negative variance the problem that I am encountering? If so, would this be caused by nonnormal data or some type of floor effect? How can I fix it?

 Linda K. Muthen posted on Tuesday, June 15, 2010 - 9:20 am
All variances are standardized to the value of one. Given the preponderance of zeroes in your variable, you might consider two-part growth modeling as shown in Example 6.16. See also Two-Part Growth Modeling under Papers on the website.
 Wayne deRuiter posted on Tuesday, June 15, 2010 - 10:40 am
Thank you for the suggestion on two-part growth modelling. My research looks at how change in one variable influences change in another variable (parallel process model). Would I still be able to see how change influences change with a two-part growth model?
 Linda K. Muthen posted on Tuesday, June 15, 2010 - 11:37 am
Yes, you can use two two-part models, or one two-part and one regular.
 Christoph Weber posted on Wednesday, June 16, 2010 - 3:42 am
Dear Dr. Muthen,

just a short question regarding slope variances and individual differences in change.

1.) Should I use a one-tailed p-value for the evaluation of the slope variance? Because the question is, whether or not the variance is greater than 0.

2.) The strategy proposed by Hertzog et al. (2008) (comparing a "random-intercept, fixed-slope" model with a "random-intercept, random-slope" model) often yields different conclusions. The random slope is more often rejected.

What would you suggest? What is the best way to analyze individual differences, especially in connection with the second posting at the top (despite non-significant slope variances, covariates have significant effects)?

Would it be adequate not to "overrate" the slope variance, in case that one is especially interested in effects of covariates?

best regards
Christoph Weber
 Bengt O. Muthen posted on Wednesday, June 16, 2010 - 4:12 pm
I don't think you need to emphasize how precisely the slope variance is estimated in order to let the slope be either predicted by a covariate or predicting another slope. You are more interested in how precisely those relationships are estimated.
 Wayne deRuiter posted on Tuesday, July 06, 2010 - 7:52 pm
Hello Dr. Muthen

In a previous posting you mentioned that all variances are standardized to a value of one. Why is this?

 Bengt O. Muthen posted on Wednesday, July 07, 2010 - 10:05 am
Standardization implies that variances are turned into ones. You don't need to consider the standardized solution if you don't want to and then the variances would not be one.
 matteo giletta posted on Tuesday, May 22, 2012 - 5:28 am
Dear Linda & Bengt,

I'm running some unconditional two-part LGM and would like to add covariates to my models. I would have two questions in this regard:

1) the variances of my growth factors (linear and quadratic terms) are not significant. Can I still add (and interpret in a meaningful way) covariates?

2)the continuous part of the model fits the data better when I introduce a cubic term. Yet, I get a warning "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A
I've solved the problem by fixing the variances of the quadratic and cubic terms to zero. However, again, can I still add and interpret covariates?

Many thanks!!!

 Linda K. Muthen posted on Tuesday, May 22, 2012 - 8:43 am
1. Yes, you will have more power when you add covariates.

2. Yes for the same reason stated above.
 matteo giletta posted on Tuesday, May 22, 2012 - 9:31 am
Thanks a lot Linda!
I would like to ask you one more question:
some of the covariates I add in my model are significantly related to the growth factors. Yet, the growth factors that in the unconditional model were significant now turn out to be non-significant.
Is this possible? can I interpret my results without problem?
 Linda K. Muthen posted on Tuesday, May 22, 2012 - 9:38 am
I think you are saying the variances of the growth factors were significant in the unconditional model and that the residual variances of the growth factors are not significant in the conditional model. These are not the same parameters. This is to be expected because you are explaining the variance by the covariates. The residual variance is what is still not explained.
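[Editor's note] Linda's distinction can be put in numbers. A back-of-the-envelope sketch with hypothetical variances (the 42% deliberately echoes the thread's opening post; note that Mplus computes R-square within the conditional model, so this shortcut need not match it exactly):

```python
# Hypothetical slope-factor variances from the two models:
var_unconditional = 0.50    # slope variance, model without covariates
resvar_conditional = 0.29   # slope residual variance, model with covariates

# Rough proportion of slope variance accounted for by the covariates:
r_square = (var_unconditional - resvar_conditional) / var_unconditional
print(round(r_square, 2))  # 0.42: about 42% of the slope variance explained
```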
 matteo giletta posted on Tuesday, May 22, 2012 - 1:04 pm
Sorry, I was not clear enough. Actually I meant the means of the growth factors, not the variances. In the unconditional model I obtain means for all my growth factors and they are all significant. When I add predictors (even simply gender) I no longer obtain estimated means for the growth factors, but estimated intercepts, which are all non-significant. I thought that the estimated means and intercepts for the growth factors were the same parameters, but I assume I am confusing something!
Thus, is it possible that I obtain significant estimated means for growth factors in the unconditional model and non-significant estimated intercepts for growth factors in the conditional model?
Can I interpret the effects of my covariates based on the means of growth factors I obtained in the unconditional model?
Thanks a lot for your help!!!
 Linda K. Muthen posted on Tuesday, May 22, 2012 - 2:38 pm
The same situation holds. In an unconditional model, a mean is estimated. In a conditional model, an intercept is estimated.
 matteo giletta posted on Tuesday, May 29, 2012 - 7:56 am
Dear Linda,
thanks you so much for your reply!

I still have a main issue with my two-part models. In the unconditional model I fixed at zero the variances of the slope and quadratic terms of the continuous part of the model, as well as all the covariances except the one between the two intercepts. This model worked well. Yet, when I introduce predictors, the covariance between the intercepts is higher than 1. I tried centering the intercept at a different time point, but it did not help. Also, both variances of the intercepts are significant. Do you have any suggestion about how to solve this? Can I fix it at zero?
Thanks a lot!
 Yuliya Kotelnikova posted on Tuesday, May 29, 2012 - 5:19 pm
Dr. Muthen,

I am writing up a paper explaining a situation in which the slope variance in an unconditional model was not significant, however, I have significant time-invariant covariates predicting slope variation. From your previous posts I understand that there is more power to detect slope variability, when covariates are added. I was wondering, if you could point out a paper or a chapter that discusses the above mentioned issue in more detail.

Best regards,

 Linda K. Muthen posted on Wednesday, May 30, 2012 - 10:28 am
This may be discussed in the Raudenbush and Bryk book. You might find an answer by posting on multilevel net.
 Linda K. Muthen posted on Wednesday, May 30, 2012 - 10:30 am

Please send the conditional and unconditional outputs and your license number to
 Reiko Hirai posted on Saturday, February 02, 2013 - 2:58 pm
Dear Dr. Muthen,

I am new to the Mplus discussion board. I am looking for a reference to justify my analyses. Two of my outcome variables do not have significant slope variances. I have significant intercept variances for both outcomes. From what I have read, Nagin recommends not running trajectory analyses if there is no slope variance. I heard that you take a different approach - that it is still useful to run trajectory analyses if there is significant intercept variance. I was unable to find the citation. Could you kindly point me to the appropriate article or book? I appreciate your help very much.
 Bengt O. Muthen posted on Saturday, February 02, 2013 - 4:04 pm
When you say trajectory analysis, do you mean using latent trajectory classes?

And when you say significant slope variance, are you referring to a regular growth model with random effects, so a 1-class model?
 Reiko Hirai posted on Saturday, February 02, 2013 - 4:34 pm
Dear Dr. Muthen,

I am sorry that I was not clear. Yes, I believe it is latent trajectory class analysis. I ran a single-class model without constraining the variances to zero in order to test the significance of the slope and intercept variances. The output indicated significant variance in the intercept but non-significant variance in the slope. I hope I have provided enough information. Thank you very much for your prompt response.

 Bengt O. Muthen posted on Sunday, February 03, 2013 - 11:32 am
I think you can find trajectory classes even if only the intercept has significant variance in a single-class random effect growth model. For instance, groups of individuals may have distinctly different starting points but develop approximately at the same rate. I know of no specific citation for this, but I don't think it needs it.
 Reiko Hirai posted on Tuesday, February 05, 2013 - 8:12 pm
Dear Dr. Muthen,

Thank you very much for your answer. Hearing it from you made me feel good about all the analyses I have done.

 Barry Wagner posted on Tuesday, March 19, 2013 - 2:07 pm
Dr. Muthen:
I have a question about setting a slope factor variance to 0 in a piecewise growth model. The observed dependent variables are five time points of suicidal ideation. There are 2 linear slope factors: S1 (from time 1 to time 2), and S2 (time 2 through time 5). In order to get the model to run properly we set the variance of S1 to 0 (otherwise the standard errors could not be computed). Setting S1@0 is fine, because the computed variance is very small and non-significant. In our model we examined the regressions of the intercept and both slope factors on our covariates. Mplus provides answers for the regressions of S1 on the family predictors, a couple of which are significant. My question is: What do those findings for S1 mean? That is, how can covariates predict S1 when there is no variance in S1? Are the answers interpretable? If not, should we omit regressions of S1 on predictors in the model, in other words, only look at regressions of I and S2 on the predictors? Omitting S1 regressions changes the answers for regressions involving S2, so it is not an inconsequential decision. Thank you!
 Bengt O. Muthen posted on Tuesday, March 19, 2013 - 2:27 pm
Without covariates the s1 variance is not identified, so any estimates you see are not trustworthy (as you say, the standard errors signal the non-identification). With covariates, fixing s1@0 implies that the s1 residual variance is zero. The s1 regression on covariates is still fine. For instance, a gender covariate says that the s1 mean is different for males and females.
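[Editor's note] In other words, with the residual variance fixed at zero, s1 becomes a deterministic function of the covariates. A sketch with hypothetical coefficients (gamma0, gamma1, and the 0/1 gender coding are made up for illustration):

```python
# With s1's residual variance fixed at zero, the model implies
#   s1_i = gamma0 + gamma1 * gender_i   (no individual residual term),
# so covariates shift the s1 mean even though s1 has no residual variance.
gamma0, gamma1 = 0.10, -0.30   # hypothetical intercept and gender effect

s1_female = gamma0 + gamma1 * 0   # implied slope when gender = 0
s1_male = gamma0 + gamma1 * 1     # implied slope when gender = 1
print(round(s1_female, 2), round(s1_male, 2))
```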
 matteo giletta posted on Wednesday, August 21, 2013 - 9:55 am
Dear Drs. Muthen,

My colleagues and I are working on a paper in which we have conducted an LGM and found a non-significant slope variance (p = .12). Based on the previous posts, we decided to still conduct a conditional LGM, and we found significant effects of predictors on the slope. Yet, a reviewer is now questioning our analyses, stating that given the non-significant slope variance we should never have moved on to examining a conditional model. So, my question is: do you know any paper or book to which we could refer to support our decision? May we cite this post as a "personal communication"?
Thank you so much for your help!

 Bengt O. Muthen posted on Wednesday, August 21, 2013 - 10:14 am
I would not go by the reviewer's rule. It is often found that adding covariates can tease out variance in a slope, presumably due to increased power. I can't put my finger on a reference right now - others? Also, the usual z test (Wald test for one parameter) is known to have low power; see, e.g., Berkhof-Snijders (2001) in JEBS.
 matteo giletta posted on Wednesday, August 21, 2013 - 1:10 pm
Thank you very much for your answer!
 Sointu Leikas posted on Monday, September 09, 2013 - 5:00 am
Dear Drs Muthen,

I'm testing a simple growth model with 4 time points. The linear (a) and quadratic (b) models show poor fit, so I tested a "free growth model" (c) with two of the 4 time scores free (estimated). This model has a good fit (and visually, the growth seems to be non-linear and non-quadratic, so this makes sense).

However, for each of the above models I get the "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE..." warning. In models a and c, the warning says "PROBLEM INVOLVING VARIABLE I" and in model b, the problematic variable is S.

I found in the outputs that the variance of the problematic latent variable (I or S) is negative, causing the warning (correlations between latent variables are clearly below 1).

But I don't know why this happens and what to do with it. The data looks OK, kurtosis and skew are within acceptable limits for all variables etc.

My input for the free growth model:

Model: i s | LS1@0 LS2@2.5 LS3* LS4*;

I'm grateful for all suggestions!
 Sointu Leikas posted on Monday, September 09, 2013 - 5:02 am
An addition: constraining the variance of I or S to zero causes a very poor fit.
 Linda K. Muthen posted on Monday, September 09, 2013 - 9:44 am
It sounds like the negative residual variances are not small and insignificant. In this case, they should not be fixed at zero. Instead you need to find a better model for your data.
 Katy Roche posted on Tuesday, January 07, 2014 - 10:07 am
I am having trouble finding the slope variances in my conditional model (and I did request TECH4). Thus, although I can see them in my unconditional model output, I just do not see them in the conditional model output. Can you let me know if there is a default suppressing these, or how I can call them out?
 Linda K. Muthen posted on Tuesday, January 07, 2014 - 10:46 am
They will be residual variances in the conditional model.
 Danyel A.Vargas posted on Saturday, March 08, 2014 - 4:34 pm

I have computed a growth model with four time points, and results indicate significant intercept variance. I examined the values at +1 SD and -1 SD around the intercept mean, and the value at +1 SD was above my scale. My scale is 1-5, and the value at +1 SD = 5.09. I have double-checked my values and they are in the correct range (i.e., 1-5). Do you know what could be happening here? Also, the SRMR is indicating poor fit (i.e., .14), but the other fit indices are not.

Can you please help me?

Thanks so much.

 Bengt O. Muthen posted on Monday, March 10, 2014 - 5:26 pm
You would have to send the output and license number to Support.
 Jason Bond posted on Tuesday, March 18, 2014 - 12:19 pm

I’m trying to estimate the effect of a grouping variable G in a 3-time point design using the random intercept model:
y(i,t) = a(i) + b(i)I(T=1) + c(i)I(T=2) + e(i,t)
a(i) = a1 + a2*G + u(i)
b(i)= b1 + b2*G
c(i)= c1 + c2*G
where I(T=x) is an indicator variable for data from wave x. When I ran the syntax below, it indicated that PSI is not positive definite. Any suggestions regarding the problem? Thanks much,



i BY n28_hdd@1 fn28_hdd@1 gn28_hdd@1;
s1 BY n28_hdd@0 fn28_hdd@1 gn28_hdd@0;
s2 BY n28_hdd@0 fn28_hdd@0 gn28_hdd@1;
[n28_hdd@0 fn28_hdd@0 gn28_hdd@0 i s1 s2];
n28_hdd fn28_hdd gn28_hdd (1);
s1@0 s2@0;

i on group;
s1 on group;
s2 on group;
 Bengt O. Muthen posted on Tuesday, March 18, 2014 - 2:13 pm
Are you sure that the 3 growth factors have fixed zero covariances? If that's not the problem, send output and license number to Support.
 Jason Bond posted on Tuesday, March 18, 2014 - 4:24 pm
That was indeed the problem...thanks.

 RuoShui posted on Sunday, March 23, 2014 - 7:45 pm
Dear Drs. Muthen,

I read that when doing LGCM, we need to compute a pseudo R-square to understand the variance of I and S explained by the covariates. I am wondering, does Mplus provide pseudo R-square statistics, or does the R-square provided in the output serve the same function?

Thank you very much!
 Linda K. Muthen posted on Monday, March 24, 2014 - 8:22 am
I believe pseudo R-square is for logistic regression. The regression of one growth factor on another is a linear regression. Growth factors are continuous variables. Pseudo R-square would not apply in this situation.
 RuoShui posted on Monday, March 24, 2014 - 3:49 pm
I see. Thank you very much Dr. Muthen.
On a related note, the variance of the slope growth factor is .003 in the unconditional LGCM model. When I included covariates, R square statistics indicate 4% of the variance in S was explained. But the residual variance of the slope growth factor in the conditional model is .004. Is this even possible?
Thank you a lot!
 Linda K. Muthen posted on Monday, March 24, 2014 - 4:24 pm
Please send the output and your license number to so I can take a look at it. Are you looking at the standardized residual variance?
 holly Andrewes posted on Wednesday, February 11, 2015 - 2:06 pm
Dear Drs. Muthen,

I have developed a quadratic LGCM with varying times of observation.

I notice that in the unconditional growth model I have created, the variance of the quadratic growth factor is 0, yet it still appears to be significant. Can you tell me why this is, and whether there is a way of rectifying it?

Thank you very much for your help!
 Linda K. Muthen posted on Wednesday, February 11, 2015 - 2:27 pm
Please send the output and your license number to
 Sabe posted on Friday, April 10, 2015 - 2:48 am
Dear Dr. Muthen,

In my recent analysis, I found that according to the unstandardized model results the mean of the slope factor was significant. However, according to the standardized model results the mean was not significant. How can I interpret these results?
Thanks for your help!
 Bengt O. Muthen posted on Friday, April 10, 2015 - 8:11 am
If you send your input, output, and data to support we can tell.
 Anna Swan posted on Wednesday, March 16, 2016 - 3:25 pm
Dear Dr. Muthen,

I am using latent growth curve modeling to examine the impact of treatment response (0,1) on continuous outcomes, assessed at multiple time-points.

In the unconditional growth curve model, the variance of the slope is nonsignificant; however, in the conditional growth model, one of my predictors significantly predicts slope.

Do you know of articles that argue for/against (or provide illustrations of) examining predictors of slope when the slope variance in the unconditional model is insignificant?

Thank you very much for your help!

 Linda K. Muthen posted on Wednesday, March 16, 2016 - 4:41 pm
See the Raudenbush and Bryk book.
 Martijn Van Heel posted on Friday, August 19, 2016 - 3:11 am
Dear Drs Muthen,

I'm running an LCGA and I want to know whether the slope and quadratic terms are significant in the different classes. In my output I don't have any p-values in the standardized results. Is it okay to look at the unstandardized p-values to draw conclusions about the significance of the growth factors?

Many thanks in advance.
 Bengt O. Muthen posted on Friday, August 19, 2016 - 12:05 pm
I think so - they are approximately the same.
 Abbas Firoozabadi posted on Tuesday, October 25, 2016 - 6:22 am
I conducted a bivariate latent growth curve model over 6 weekly time points (y1-y6 and z1-z6). Univariate analyses showed that the mean and variance of the slope for variable “y” are significant; however, the mean and variance of the slope for variable “z” are not. I am wondering, can I still keep the slope of variable “z” as the indicator of individuals’ changes over time in the final conditional bivariate growth curve model? Or is a significant slope mean (or variance) a strict prerequisite for conducting a conditional or bivariate growth curve model? I need an answer to this question because my bivariate model was found to fit the data moderately well, and it shows a significant correlation between the two slopes.
i1 s1| y1@0 y2@1 y3@2 y4@3 y5@4 y6@5;
i2 s2| z1@0 z2@1 z3@2 z4@3 z5@4 z6@5;
 Bengt O. Muthen posted on Tuesday, October 25, 2016 - 10:10 am
If the slope growth factor for z has a significant regression on some predictors, I would keep the slope growth factor.