Message/Author 

Anonymous posted on Wednesday, October 30, 2002  12:25 pm



One important advantage of LGM is that the model allows one to examine interindividual differences in intraindividual growth in longitudinal studies. These interindividual differences are captured by the variances of the growth factors. To my understanding, if the variance of a slope factor is not statistically significant, I would say that there are no significant interindividual differences in changes in the outcome measure under study. In my recent data analysis, I found that the variance of a slope factor, without covariates, was not statistically significant. This indicates that the changes over time in the outcome measure were not significantly different across individuals. However, once I added covariates to the LGM, I found that two covariates (ethnicity and age) had significant effects on the slope factor and the R-square was 0.42 (i.e., about 42% of the variation in the slope factor was explained by the covariates). How should I interpret these results? Your help will be appreciated! 


If the slope growth factor mean is significant, it is significantly different from zero, which means that there is development over time on average. If the slope growth factor variance is significant, not all individuals grow at the same rate; there is significant variability in their growth rates. If the slope growth factor variance is not significant, there is no evidence that individuals differ in their growth rates. Even if the slope growth factor variance is not statistically significant without covariates, including covariates often shows that they have a significant influence on the slope, so that the slope does vary (as a function of the covariates). These seemingly conflicting results may be due to higher power to detect slope variability when covariates are included. 
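To make these terms concrete, here is a minimal Python simulation sketch (all values hypothetical) of the kind of data a linear LGM describes: each person has their own intercept and slope, and a nonzero slope standard deviation (sd_s) is exactly the "interindividual differences in intraindividual growth" being discussed.

```python
import random

random.seed(0)

def simulate_growth(n_people, times, mean_i, mean_s, sd_i, sd_s, sd_e):
    """Simulate linear-growth data: y_it = i_p + s_p * t + e_it.
    The person-specific intercept i_p and slope s_p vary across people;
    sd_s > 0 is the interindividual difference in growth rates that the
    slope factor variance captures."""
    data = []
    for _ in range(n_people):
        i_p = random.gauss(mean_i, sd_i)  # person's starting level
        s_p = random.gauss(mean_s, sd_s)  # person's rate of change
        data.append([i_p + s_p * t + random.gauss(0, sd_e) for t in times])
    return data

# Hypothetical population: mean intercept 2.0, mean slope 0.5,
# modest slope variability (sd_s = 0.2), four equally spaced waves.
data = simulate_growth(500, [0, 1, 2, 3], mean_i=2.0, mean_s=0.5,
                       sd_i=1.0, sd_s=0.2, sd_e=0.5)
```

With sd_s set to 0 every person would grow at exactly the same rate, which is the null hypothesis the slope variance test addresses.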

Anonymous posted on Tuesday, August 10, 2004  7:18 am



I have a similar situation in a simultaneous process model. I have a nonsignificant mean slope with nonsignificant variance. Yet, this slope factor is significantly correlated with the slope factor of the second growth process. There are no regression parameters in the model, just the intercept and slope parameters for the two processes. How should I interpret this? Thank you, Patrick 

bmuthen posted on Friday, August 13, 2004  4:57 pm



I think it is possible that you cannot reject a zero variance but can reject a zero covariance. It may just be a matter of power. On the other hand, the nonzero covariance between the growth factors may mask a model misspecification where the outcomes of the 2 processes need to correlate (the correlation between the slopes helps them do that), but the correct way to represent the outcome correlation may be between contemporaneous residuals. 

Jungmeen Kim posted on Thursday, February 21, 2008  2:28 pm



Dear Linda, We ran parallel process growth models (between attention and anger) using standardized scores (since we had different reporters and slightly different questions for the variables we had to standardize the scores to make composites), thus we know that there is going to be no significant mean changes over time. Then, we found that the intercept of the attention variable was significantly predictive of the slope of the anger variable. The slope of the anger had a significant variance. How can we interpret this significant and negative regression path? Since the mean of anger slope is not significant, does this mean that the higher intercept of attention is predictive of smaller (since it is negative) "variances" of anger? Thank you for your attention and guidance in advance! 


No, a negative influence on a slope implies that as the predictor increases the slope decreases; it has nothing to do with the slope variance. Because your anger slope mean is not significantly different from zero, this means the slope becomes negative as the predictor value increases. But you should not do growth modeling on standardized scores; see the dangers described in the Seltzer article "The Metric Matters". 


Dear Dr. Muthen, I am doing a twin-singleton comparison (with the grouping option) on latent growth curves of externalizing problem behavior. I want to investigate whether the growth factor variances of the twins are different from those of the singletons. I tested this by using equality constraints, like: i (1). Then, I did a chi-square difference test by comparing the chi-square of this model with the chi-square of the unconstrained model. My question is: is this a correct approach to test for variance differences in growth factors? I believe I should do an F-test instead of a chi-square difference test, but how can I do this in Mplus? A related question: is it okay to test for group differences in growth factor MEANS using chi-square difference testing, or do I also need another test for this? Thanks in advance for your time. Sylvana 


In addition, the estimator used in my analyses is ML. Thanks. 


Using a chi-square difference test should be fine for both variances and means. The variances are not on the border of the admissible parameter space. 


Thank you very much for your swift reply. I am wondering, could I also use loglikelihood difference testing instead? When should chi-square difference testing be applied rather than loglikelihood difference testing, or vice versa? Thanks. Sylvana 


You can use either one. They will yield the same results when both are available. 
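The two tests agree because, for nested models estimated with ML, the chi-square difference equals twice the loglikelihood difference. A small sketch of the arithmetic (the loglikelihood values are hypothetical):

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_unrestricted, df_diff):
    """Loglikelihood (likelihood-ratio) difference test for nested ML models.
    2 * (LL_unrestricted - LL_restricted) is the same quantity as the
    chi-square difference between the two models' fit statistics."""
    stat = 2.0 * (ll_unrestricted - ll_restricted)
    p = chi2.sf(stat, df_diff)  # upper-tail chi-square probability
    return stat, p

# Hypothetical loglikelihoods: a model constraining two growth factor
# variances equal across groups vs. the unconstrained model (2 df freed).
stat, p = likelihood_ratio_test(-2412.3, -2408.1, df_diff=2)
# stat = 8.4, p ≈ 0.015: the equality constraint would be rejected.
```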


Thanks a lot! Sylvana 

jemila seid posted on Monday, August 25, 2008  7:04 pm



I am new to Mplus and doing simple growth curve modeling, but with four dependent variables simultaneously. I am just wondering if it is possible to get confidence intervals and/or p-values for the estimated correlations of the latent variables. If so, I would appreciate it if you could tell me in which output command I can find these values. Thanks a lot. Jemila 


You would need to use MODEL CONSTRAINT to define these correlations. Then you would be able to get confidence intervals and p-values. See the user's guide for further information about MODEL CONSTRAINT. 
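What MODEL CONSTRAINT does for a correlation is define a new parameter as a function of the estimated covariance and the two variances; the arithmetic is simply cov/sqrt(v1*v2). A Python sketch with hypothetical estimates:

```python
import math

def correlation(cov, var1, var2):
    """Correlation implied by a covariance and two variances,
    i.e. the quantity a MODEL CONSTRAINT 'NEW' parameter would define."""
    return cov / math.sqrt(var1 * var2)

# Hypothetical growth factor estimates: covariance 0.30,
# variances 1.00 and 0.25.
r = correlation(0.30, 1.00, 0.25)  # 0.30 / 0.50 = 0.60
```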

jemila seid posted on Saturday, October 04, 2008  4:58 am



Thanks a lot, Linda; I appreciate it. Is it possible to get residual correlations, that is, correlations between latent variables left unexplained by the model? How do I get them in the output? Best regards, Jemila 


If you ask for RESIDUAL in the OUTPUT command, you can get residual covariances. 

jemila seid posted on Monday, December 01, 2008  11:40 am



Thanks again, Linda. I appreciate it. 


I am having a little bit of trouble calculating the variances for my intercept and slope in my latent growth curve. When examining the standardized values for the variances, I get an estimate of 1.00, 999 for the est./SE, and 999 for the p-value. My outcome is physical activity level (a continuous variable). I know that 70% of my sample is sedentary at some point during the seven waves of data, and the data are nowhere near normally distributed. Is a negative variance the problem that I am encountering? If so, would this be caused by nonnormal data or some type of floor effect? How can I fix it? Thanks 


All variances are standardized to the value of one. Given the preponderance of zeros in your variable, you might consider two-part growth modeling as shown in Example 6.16. See also Two-Part Growth Modeling under Papers on the website. 
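The idea behind two-part modeling is to split the semicontinuous outcome at each wave into a binary "any activity" part and a continuous amount part that is defined only when the outcome is nonzero (Mplus's DATA TWOPART command performs a transformation of this kind, often with a log of the continuous part). A Python sketch of the split, with hypothetical scores:

```python
def two_part_split(y, missing=None):
    """Split a semicontinuous outcome into the two parts used in
    two-part growth modeling:
      u: binary indicator of any nonzero value,
      m: the continuous value where nonzero, missing otherwise."""
    u = [1 if v > 0 else 0 for v in y]        # binary part
    m = [v if v > 0 else missing for v in y]  # continuous part
    return u, m

# Hypothetical physical-activity scores across four waves for one person,
# with zeros at the sedentary waves:
u, m = two_part_split([0.0, 2.5, 0.0, 3.1])
# u == [0, 1, 0, 1]; m == [None, 2.5, None, 3.1]
```

Growth processes are then modeled for u and m jointly, which is why this handles a floor of zeros better than a single normal-outcome growth model.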


Thank you for the suggestion on two-part growth modelling. My research looks at how change in one variable influences change in another variable (a parallel process model). Would I still be able to see how change influences change with a two-part growth model? 


Yes, you can use two two-part models, or one two-part and one regular. 


Dear Dr. Muthen, just a short question regarding slope variances and individual differences in change. 1) Should I use a one-tailed p-value for the evaluation of the slope variance, because the question is whether or not the variance is greater than 0? 2) The strategy proposed by Hertzog et al. (2008) (comparing a "random-intercept, fixed-slope model" with a "random-intercept, random-slope model") often yields other conclusions; the random slope is more often rejected. What would you suggest? What is the best way to analyse individual differences, especially in connection with the second posting on top (despite nonsignificant slope variances, covariates have significant effects)? Would it be adequate not to "overrate" the slope variance, in case one is especially interested in effects of covariates? Best regards, Christoph Weber 


I don't think you need to emphasize how precisely the slope variance is estimated in order to let the slope be either predicted by a covariate or predicting another slope. You are more interested in how precisely those relationships are estimated. 


Hello Dr. Muthen In a previous posting you mentioned that all variances are standardized to a value of one. Why is this? Thanks Wayne 


Standardization implies that variances are turned into ones. You don't need to consider the standardized solution if you don't want to, and then the variances would not be one. 


Dear Linda & Bengt, I'm running some unconditional two-part LGMs and would like to add covariates to my models. I have two questions in this regard: 1) The variances of my growth factors (linear and quadratic terms) are not significant. Can I still add (and interpret in a meaningful way) covariates? 2) The continuous part of the model fits the data better when I introduce a cubic term. Yet, I get a warning: "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES." I've solved the problem by fixing the variances of the quadratic and cubic terms to zero. However, again, can I still add and interpret covariates? Many thanks!!! Matteo 


1. Yes, you will have more power when you add covariates. 2. Yes for the same reason stated above. 


Thanks a lot Linda! I would like to ask you one more question: some of the covariates I add in my model are significantly related to the growth factors. Yet, the growth factors that in the unconditional model were significant now turn out to be nonsignificant. Is this possible? can I interpret my results without problem? Thanks!!! 


I think you are saying the variances of the growth factors were significant in the unconditional model and that the residual variances of the growth factors are not significant in the conditional model. These are not the same parameters. This is to be expected because you are explaining the variance by the covariates. The residual variance is what is still not explained. 


Sorry, I was not clear enough. Actually I meant the means of the growth factors, not the variances. In the unconditional model I obtain means for all my growth factors and they are all significant. When I add predictors (even simply gender) I do not obtain estimated means for the growth factors anymore, but estimated intercepts, which are all nonsignificant. I thought that the estimated means and intercepts for the growth factors were the same parameters, but I assume I am confusing the two! Thus, is it possible that I obtain significant estimated means for growth factors in the unconditional model and nonsignificant estimated intercepts for growth factors in the conditional model? Can I interpret the effects of my covariates based on the means of the growth factors I obtained in the unconditional model? Thanks a lot for your help!!! 


The same situation holds. In an unconditional model, a mean is estimated. In a conditional model, an intercept is estimated. 
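The relation between the two parameters is simple: the model-implied factor mean in a conditional model equals the intercept plus the regression weights times the covariate means, so a nonsignificant intercept does not contradict a significant unconditional mean. A numeric sketch with hypothetical values:

```python
def implied_factor_mean(intercept, betas, covariate_means):
    """Model-implied growth factor mean in a conditional model:
    intercept + sum over covariates of (regression slope * covariate mean)."""
    return intercept + sum(b * m for b, m in zip(betas, covariate_means))

# Hypothetical: slope-factor intercept 0.20, effect of a 0/1 gender
# covariate 0.50, sample gender mean 0.40.
mu = implied_factor_mean(0.20, [0.50], [0.40])  # 0.20 + 0.50*0.40 = 0.40
```

The intercept is the expected factor value when all covariates are zero, which is generally a different quantity (and a different test) from the overall mean.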


Dear Linda, thank you so much for your reply! I still have a main issue with my two-part models. In the unconditional model I fixed at zero the variances for the slope and quadratic terms of the continuous part of the model, as well as all the covariances except the one between the two intercepts. This model worked well. Yet, when I introduce predictors the covariance between the intercepts is higher than 1. I tried to center the intercept at a different time point, but that did not help. Also, both the variances of the intercepts are significant. Do you have any suggestion about how to solve this? Can I fix it at zero? Thanks a lot! 


Dr. Muthen, I am writing up a paper explaining a situation in which the slope variance in an unconditional model was not significant; however, I have significant time-invariant covariates predicting slope variation. From your previous posts I understand that there is more power to detect slope variability when covariates are added. I was wondering if you could point out a paper or a chapter that discusses the above-mentioned issue in more detail. Best regards, Yuliya 


This may be discussed in the Raudenbush and Bryk book. You might find an answer by posting on multilevel net. 


Matteo: Please send the conditional and unconditional outputs and your license number to support@statmodel.com. 

Reiko Hirai posted on Saturday, February 02, 2013  2:58 pm



Dear Dr. Muthen, I am new in Mplus discussion board. I am looking for a reference to justify my analyses. Two of my outcome variables do not have significant slope variances. I have significant intercept variances for both outcomes. From what I read, Nagin recommends not to run trajectory analyses if there is no slope variance. I heard that you take a different approach  it is still useful to run trajectory analyses if there is significant intercept variance. I was unable to find the citation. Could you kindly point me to the appropriate article or book? I appreciate your help very much. 


When you say trajectory analysis, do you mean using latent trajectory classes? And when you say significant slope variance, are you referring to a regular growth model with random effects, that is, a 1-class model? 

Reiko Hirai posted on Saturday, February 02, 2013  4:34 pm



Dear Dr. Muthen, I am sorry that I was not clear. Yes, I believe it is latent trajectory class analysis. I ran a single-class model without constraining the variances to zero in order to test the significance of the slope and intercept variances. The output indicated significant variance in the intercept but nonsignificant variance in the slope. I hope I have provided enough information. Thank you very much for your prompt response. Reiko 


I think you can find trajectory classes even if only the intercept has significant variance in a single-class random effect growth model. For instance, groups of individuals may have distinctly different starting points but develop at approximately the same rate. I know of no specific citation for this, but I don't think it needs one. 

Reiko Hirai posted on Tuesday, February 05, 2013  8:12 pm



Dear Dr. Muthen, Thank you very much for your answer. Hearing it from you made me feel good about all the analyses I have done. Reiko 


Dr. Muthen: I have a question about setting a slope factor variance to 0 in a piecewise growth model. The observed dependent variables are five time points of suicidal ideation. There are 2 linear slope factors: S1 (from time 1 to time 2), and S2 (time 2 through time 5). In order to get the model to run properly we set the variance of S1 to 0 (otherwise the standard errors could not be computed). Setting S1@0 is fine, because the computed variance is very small and nonsignificant. In our model we examined the regressions of the intercept and both slope factors on our covariates. Mplus provides answers for the regressions of S1 on the family predictors, a couple of which are significant. My question is: What do those findings for S1 mean? That is, how can covariates predict S1 when there is no variance in S1? Are the answers interpretable? If not, should we omit regressions of S1 on predictors in the model, in other words, only look at regressions of I and S2 on the predictors? Omitting S1 regressions changes the answers for regressions involving S2, so it is not an inconsequential decision. Thank you! 


Without covariates the s1 variance is not identified, so any estimates you see are not trustworthy (as you say, the standard errors signal the non-identification). With covariates, fixing s1@0 implies that the s1 residual variance is zero. The regression of s1 on covariates is still fine. For instance, a gender covariate says that the s1 mean is different for males and females. 
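Concretely, with the residual variance fixed at zero the slope is a deterministic function of the covariates, so its regression coefficients describe differences in the slope mean across covariate values. A sketch with hypothetical estimates:

```python
def slope_value(intercept, beta, covariate):
    """With the slope residual variance fixed at zero, the slope is a
    deterministic function of the covariate: s1 = intercept + beta * x."""
    return intercept + beta * covariate

# Hypothetical: s1 intercept 0.10, gender effect -0.30 (0 = male, 1 = female).
s1_male = slope_value(0.10, -0.30, 0)    # 0.10
s1_female = slope_value(0.10, -0.30, 1)  # 0.10 - 0.30 = -0.20
```

So the significant covariate effects answer "does the early slope differ across groups?", even though within each covariate pattern the slope does not vary.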


Dear Drs. Muthen, My colleagues and I are working on a paper in which we have conducted an LGM and found a nonsignificant slope variance (p = .12). Based on the previous posts we decided to still conduct a conditional LGM, and we found significant effects of predictors on the slope. Yet, a reviewer is now questioning our analyses, stating that given the nonsignificant slope variance we should never have moved on to examine a conditional model. So, my question is: do you know any paper or book to which we could refer to support our decision? May we cite this post as a "personal communication"? Thank you so much for your help! Matteo 


I would not go by the reviewer's rule. It is often found that adding covariates can tease out variance in a slope, presumably due to increased power. I can't put my finger on a reference right now (others?). Also, the usual z test (a Wald test for one parameter) is known to have low power; see, e.g., Berkhof & Snijders (2001) in JEBS. 


Thank you very much for your answer! 


Dear Drs Muthen, I'm testing a simple growth model with 4 time points. The linear (a) and quadratic (b) models show poor fit, so I tested a "free growth model" (c) with two of the 4 time scores free (estimated). This model has a good fit (and visually, the growth seems to be neither linear nor quadratic, so this makes sense). However, for each of the above models I get the "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE..." warning. In models a and c, the warning says "PROBLEM INVOLVING VARIABLE I" and in model b, the problematic variable is S. I found in the outputs that the variance of the problematic latent variable (I or S) is negative, causing the warning (correlations between latent variables are clearly below 1). But I don't know why this happens and what to do about it. The data look OK; kurtosis and skew are within acceptable limits for all variables, etc. My input for the free growth model: MODEL: i s | LS1@0 LS2@2.5 LS3* LS4*; I'm grateful for all suggestions! 


An addition: constraining the variance of I or S to zero causes a very poor fit. 


It sounds like the negative residual variances are not small and insignificant. In this case, they should not be fixed at zero. Instead you need to find a better model for your data. 

Katy Roche posted on Tuesday, January 07, 2014  10:07 am



I am having trouble finding the slope variances in my conditional model (and I did request TECH4). Although I can see them in my unconditional model output, I just do not see them in the conditional model output. Can you let me know if there is a default suppressing these, or how I can request them? 


They will be residual variances in the conditional model. 


Hello, I have computed a growth model with four time points, and results indicate significant intercept variance. I went to examine this at +1 SD and -1 SD around the intercept, and the value for +1 SD was above my scale. My scale is 1-5 and the value at +1 SD = 5.09. I have double-checked my values and they are in the correct range (i.e., 1-5). Do you know what could be happening here? Also, the SRMR is indicating poor fit (i.e., .14) but the chi-square and SRMR are not. Can you please help me? Thanks so much. Danyel 


You would have to send the output and license number to Support. 

Jason Bond posted on Tuesday, March 18, 2014  12:19 pm



Bengt/Linda, I'm trying to estimate the effect of a grouping variable G in a 3-time-point design using the random intercept model:

y(i,t) = a(i) + b(i)*I(T=1) + c(i)*I(T=2) + e(i,t)
a(i) = a1 + a2*G + u(i)
b(i) = b1 + b2*G
c(i) = c1 + c2*G

where I(T=x) is an indicator variable for data from wave x. When I ran the syntax below, it indicated that PSI is not positive definite. Any suggestions regarding the problem? Thanks much, Jason

ANALYSIS:
TYPE = RANDOM;
ESTIMATOR = ML;
MODEL:
i BY n28_hdd@1 fn28_hdd@1 gn28_hdd@1;
s1 BY n28_hdd@0 fn28_hdd@1 gn28_hdd@0;
s2 BY n28_hdd@0 fn28_hdd@0 gn28_hdd@1;
[n28_hdd@0 fn28_hdd@0 gn28_hdd@0 i s1 s2];
n28_hdd fn28_hdd gn28_hdd (1);
s1@0 s2@0;
i ON group;
s1 ON group;
s2 ON group;


Are you sure that the 3 growth factors have fixed zero covariances? If that's not the problem, send output and license number to Support. 

Jason Bond posted on Tuesday, March 18, 2014  4:24 pm



That was indeed the problem...thanks. Jason 

RuoShui posted on Sunday, March 23, 2014  7:45 pm



Dear Drs. Muthen, I read that when doing LGCM, we need to compute a pseudo R-squared to understand the variance of I and S explained by the covariates. I am wondering whether Mplus provides pseudo R-squared statistics, or whether the R-squared provided in the output serves the same function. Thank you very much! 


I believe pseudo R-square is for logistic regression. The regression of one growth factor on another is a linear regression, since growth factors are continuous variables. Pseudo R-square would not apply in this situation. 
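The R-square Mplus reports for a growth factor regressed on covariates is the ordinary linear-regression R-square: one minus the ratio of residual variance to total variance. A sketch with hypothetical variances:

```python
def r_square(total_variance, residual_variance):
    """Proportion of a growth factor's variance explained by its predictors:
    R^2 = 1 - residual variance / total variance."""
    return 1.0 - residual_variance / total_variance

# Hypothetical: unconditional slope variance 0.010, residual variance
# 0.006 after adding covariates.
r2 = r_square(0.010, 0.006)  # 0.40, i.e. 40% explained
```

This also shows why a conditional residual variance larger than the unconditional variance (as in the next post) cannot coexist with a positive R-square and would be worth sending to support.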

RuoShui posted on Monday, March 24, 2014  3:49 pm



I see. Thank you very much, Dr. Muthen. On a related note, the variance of the slope growth factor is .003 in the unconditional LGCM. When I included covariates, the R-square statistic indicated that 4% of the variance in S was explained. But the residual variance of the slope growth factor in the conditional model is .004. Is this even possible? Thanks a lot! 


Please send the output and your license number to support@statmodel.com so I can take a look at it. Are you looking at the standardized residual variance? 
