Anonymous posted on Wednesday, October 30, 2002 - 12:25 pm
One important advantage of LGM is that the model allows one to examine inter-individual differences in intra-individual growth in longitudinal studies. These inter-individual differences are captured by the variances of the growth factors.
To my understanding, if the variance of a slope factor is not statistically significant, I would say that there are no significant inter-individual differences in change on the outcome measure under study.
In my recent data analysis, I found that the variance of a slope factor, without covariates, was not statistically significant. This indicates that the changes over time in the outcome measure did not differ significantly across individuals. However, once I added covariates to the LGM, I found that two covariates (ethnicity and age) had significant effects on the slope factor and the R-square was 0.42 (i.e., about 42% of the variation in the slope factor was explained by the covariates). How should I interpret these results? Your help will be appreciated!
If the slope growth factor mean is significant, this means that it is significantly different from zero which means that there is development over time on average. If the slope growth factor variance is significant, this means that not all individuals grow at the same rate, but that there is significant variability in their growth rates. If the slope growth factor variance is not significant, this means that all individuals have the same growth rate. Even if the slope growth factor variance is not statistically significant without covariates, inclusion of covariates often shows that they have significant influence on the slope so that the slope does vary (as a function of the covariates). These seemingly conflicting results may be due to higher power to detect slope variability when covariates are included.
Anonymous posted on Tuesday, August 10, 2004 - 7:18 am
I have a similar situation in a simultaneous process model. I have a non-significant mean slope with non-significant variance. Yet, this slope factor is significantly correlated with the slope factor of the second growth process. There are no regression parameters in the model, just the intercept and slope parameters for the two processes. How should I interpret this? Thank you,
bmuthen posted on Friday, August 13, 2004 - 4:57 pm
I think it could be possible that you cannot reject a zero variance but can still reject a zero covariance. It may just be a matter of power. On the other hand, the non-zero covariance between the growth factors may mask a model misspecification where the outcomes of the 2 processes need to correlate - and the correlation between the slopes helps them do that - but the correct way to represent the outcome correlation may be between contemporaneous residuals.
Jungmeen Kim posted on Thursday, February 21, 2008 - 2:28 pm
We ran parallel process growth models (between attention and anger) using standardized scores (since we had different reporters and slightly different questions for the variables, we had to standardize the scores to make composites), so we know that there are going to be no significant mean changes over time. Then, we found that the intercept of the attention variable was significantly predictive of the slope of the anger variable. The slope of anger had a significant variance. How can we interpret this significant, negative regression path? Since the mean of the anger slope is not significant, does this mean that a higher intercept of attention is predictive of smaller (since it is negative) "variances" of anger?
Thank you for your attention and guidance in advance!
No, a negative influence on a slope implies that as the predictor increases the slope decreases; it has nothing to do with the slope variance. Because your anger slope mean is not significantly different from zero, this would mean that the expected slope becomes negative as the predictor value increases.
But you should not do growth modeling on standardized scores - see the dangers described in the Seltzer article "The Metric Matters".
I am doing a twin-singleton comparison (with grouping option) on latent growth curves of externalizing problem behavior. I want to investigate if the growth factor variances of the twins are different from the growth factor variances of the singletons. I tested this by using equality constraints, like: i (1). Then, I did a chi-square diff test by comparing the chi-sq of this model with the chi-sq of the unconstrained model.
My question is: is this a correct approach to test for variance differences in growth factors? I believe I should do an F-test instead of a chi-square diff test, but how can I do this in Mplus?
A related question: is it okay to test for group differences in growth factor MEANS using chi-sq diff testing, or do I also need another test for this?
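For reference, the chi-square difference computation described here can be sketched as follows; with ML estimation the difference between the constrained and unconstrained chi-square values is itself chi-square distributed, with df equal to the number of equality constraints imposed (the fit values below are hypothetical):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x) for even df, using the
    closed form exp(-x/2) * sum over k < df/2 of (x/2)^k / k!."""
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= (x / 2.0) / k
        total += term
    return math.exp(-x / 2.0) * total

def chisq_diff_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Chi-square difference test for nested models estimated with ML:
    the worsening in chi-square is referred to a chi-square distribution
    with df equal to the number of constraints imposed."""
    diff = chisq_constrained - chisq_free
    df_diff = df_constrained - df_free
    return diff, df_diff, chi2_sf_even_df(diff, df_diff)

# Hypothetical values: holding the i and s variances equal across twins
# and singletons adds 2 df and worsens chi-square by 7.1.
diff, df_diff, p = chisq_diff_test(58.3, 40, 51.2, 38)
```

No F-test is needed here; note, though, that with the MLR estimator the difference must first be scaled (the Satorra-Bentler procedure), and that variances sit on the boundary of the parameter space, which complicates the reference distribution (see the discussion of boundary tests later in this thread). Testing group differences in growth factor means works the same way.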
jemila seid posted on Monday, August 25, 2008 - 7:04 pm
I am new to Mplus and am doing simple growth curve modeling - but with four dependent variables simultaneously.
I am just wondering if it is possible to get confidence intervals and/or p-values for the estimated correlations of the latent variables. If so, I would appreciate if you could tell me in which output command I can find these values.
You would need to use MODEL CONSTRAINT to define these correlations. Then you would be able to get confidence intervals and p-values. See the user's guide for further information about MODEL CONSTRAINT.
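In effect, MODEL CONSTRAINT with the NEW option defines the correlation as a function of the estimated covariance and variances, and Mplus then obtains its standard error (and thus confidence intervals and p-values) by the delta method. The transformation itself, with hypothetical estimates:

```python
import math

def latent_correlation(cov, var1, var2):
    """Correlation between two latent variables computed from their
    estimated covariance and variances - the quantity a statement like
    NEW(r); r = c12 / SQRT(v1 * v2); would define in MODEL CONSTRAINT,
    where c12, v1, and v2 are labels attached to the model parameters."""
    return cov / math.sqrt(var1 * var2)

# Hypothetical estimates for two growth factors:
r = latent_correlation(cov=0.12, var1=0.25, var2=0.36)  # 0.12 / 0.3 = 0.4
```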
jemila seid posted on Saturday, October 04, 2008 - 4:58 am
Thanks a lot, Linda - I appreciate it. Is it possible to get residual correlations, that is, the correlation between latent variables left unexplained by the model? How do I get this in the output?
I am having a little bit of trouble calculating the variances for my intercept and slope in my latent growth curve. When examining the standardized values for the variances, I get an estimate of 1.00, 999 for the est/se, and 999 for the p-value. My outcome is physical activity level (a continuous variable). I know that 70% of my sample is sedentary at some point during the seven waves of data, and the data are nowhere near normally distributed. Is a negative variance the problem that I am encountering? If so, would this be caused by nonnormal data or some type of floor effect? How can I fix it?
In the standardized solution, all variances are standardized to the value of one, which is why no test is given (999). Given the preponderance of zeros in your variable, you might consider two-part growth modeling as shown in Example 6.16. See also Two-Part Growth Modeling under Papers on the website.
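As a sketch of what two-part modeling does with such data (in Mplus the DATA TWOPART command performs the split automatically): each observed outcome is separated into a binary part (any activity versus none) and a continuous part that is treated as missing when the outcome is zero, and growth models are then fit to both parts simultaneously. The log transformation of the continuous part below is an assumption of this sketch, though it is the transformation typically used:

```python
import math

def two_part_split(y):
    """Split a semicontinuous outcome (floor of zeros) into the binary
    and continuous parts used in two-part growth modeling.

    Returns (u, cont): u indicates any activity at all; cont is the
    logged amount, and is None (i.e., missing) when there was none.
    """
    u = 0 if y == 0 else 1
    cont = math.log(y) if y > 0 else None
    return u, cont

# One person's activity level across four waves:
pairs = [two_part_split(y) for y in [0.0, 0.0, 2.0, 5.0]]
```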
Thank you for the suggestion on two-part growth modelling. My research looks at how change in one variable influences change in another variable (parallel process model). Would I still be able to see how change influences change with a two-part growth model?
Just a short question regarding slope variances and individual differences in change:
1.) Should I use a one-tailed p-value for the evaluation of the slope variance, since the question is whether or not the variance is greater than 0?
2.) The strategy proposed by Hertzog et al. (2008) (comparing a "random-intercept, fixed-slope model" with a "random-intercept, random-slope model") often yields different conclusions. The random slope is more often rejected.
What would you suggest? What is the best way to analyze individual differences, especially in connection with the second posting on top (besides non-significant slope variances, covariates have significant effects)?
Would it be adequate not to "overrate" the slope variance, in case one is especially interested in the effects of covariates?
I don't think you need to emphasize how precisely the slope variance is estimated in order to let the slope be either predicted by a covariate or predicting another slope. You are more interested in how precisely those relationships are estimated.
I'm running some unconditional two-part LGMs and would like to add covariates to my models. I have two questions in this regard:
1) The variances of my growth factors (linear and quadratic terms) are not significant. Can I still add (and interpret in a meaningful way) covariates?
2) The continuous part of the model fits the data better when I introduce a cubic term. Yet, I get a warning "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES." I've solved the problem by fixing the variances of the quadratic and cubic terms to zero. However, again, can I still add and interpret covariates?
Thanks a lot, Linda! I would like to ask you one more question: some of the covariates I add to my model are significantly related to the growth factors. Yet, the growth factors that were significant in the unconditional model now turn out to be non-significant. Is this possible? Can I interpret my results without problems? Thanks!!!
I think you are saying the variances of the growth factors were significant in the unconditional model and that the residual variances of the growth factors are not significant in the conditional model. These are not the same parameters. This is to be expected because you are explaining the variance by the covariates. The residual variance is what is still not explained.
Sorry, I was not clear enough. Actually I meant the means of the growth factors, not the variances. In the unconditional model I obtain means for all my growth factors and they are all significant. When I add predictors (even simply gender) I no longer obtain estimated means for the growth factors, but rather estimated intercepts, which are all non-significant. I thought that the estimated means and intercepts for the growth factors were the same parameters, but I assume I am confusing something! Thus, is it possible that I obtain significant estimated means for growth factors in the unconditional model and non-significant estimated intercepts for growth factors in the conditional model? Can I interpret the effects of my covariates based on the means of growth factors I obtained in the unconditional model? Thanks a lot for your help!!!
I still have a main issue with my two-part models. In the unconditional model I fixed at zero the variances for the slope and quadratic terms of the continuous part of the model, as well as all the covariances except the one between the two intercepts. This model worked well. Yet, when I introduce predictors, the covariance between the intercepts is higher than 1. I tried centering the intercept at a different time point, but that did not help. Also, both intercept variances are significant. Do you have any suggestion about how to solve this? Can I fix the covariance at zero? Thanks a lot!
I am writing up a paper explaining a situation in which the slope variance in an unconditional model was not significant, however, I have significant time-invariant covariates predicting slope variation. From your previous posts I understand that there is more power to detect slope variability, when covariates are added. I was wondering, if you could point out a paper or a chapter that discusses the above mentioned issue in more detail.
Reiko Hirai posted on Saturday, February 02, 2013 - 2:58 pm
Dear Dr. Muthen,
I am new to the Mplus discussion board. I am looking for a reference to justify my analyses. Two of my outcome variables do not have significant slope variances. I have significant intercept variances for both outcomes. From what I have read, Nagin recommends not running trajectory analyses if there is no slope variance. I heard that you take a different approach - that it is still useful to run trajectory analyses if there is significant intercept variance. I was unable to find the citation. Could you kindly point me to the appropriate article or book? I appreciate your help very much.
When you say trajectory analysis, do you mean using latent trajectory classes?
And when you say significant slope variance, are you referring to a regular growth model with random effects, that is, a 1-class model?
Reiko Hirai posted on Saturday, February 02, 2013 - 4:34 pm
Dear Dr. Muthen,
I am sorry that I was not clear. Yes, I believe it is latent trajectory class analysis. I ran a single-class model without constraining the variances to zero in order to test the significance of the slope and intercept variances. The output indicated significant variance in the intercept but non-significant variance in the slope. I hope I have provided enough information. Thank you very much for your prompt response.
I think you can find trajectory classes even if only the intercept has significant variance in a single-class random effect growth model. For instance, groups of individuals may have distinctly different starting points but develop at approximately the same rate. I know of no specific citation for this, but I don't think one is needed.
Reiko Hirai posted on Tuesday, February 05, 2013 - 8:12 pm
Dear Dr. Muthen,
Thank you very much for your answer. Hearing it from you made me feel good about all the analyses I have done.
Dr. Muthen: I have a question about setting a slope factor variance to 0 in a piecewise growth model. The observed dependent variables are five time points of suicidal ideation. There are 2 linear slope factors: S1 (from time 1 to time 2), and S2 (time 2 through time 5). In order to get the model to run properly we set the variance of S1 to 0 (otherwise the standard errors could not be computed). Setting S1@0 is fine, because the computed variance is very small and non-significant. In our model we examined the regressions of the intercept and both slope factors on our covariates. Mplus provides answers for the regressions of S1 on the family predictors, a couple of which are significant. My question is: What do those findings for S1 mean? That is, how can covariates predict S1 when there is no variance in S1? Are the answers interpretable? If not, should we omit regressions of S1 on predictors in the model, in other words, only look at regressions of I and S2 on the predictors? Omitting S1 regressions changes the answers for regressions involving S2, so it is not an inconsequential decision. Thank you!
Without covariates the s1 variance is not identified, so any estimates you see are not trustworthy (as you say, the standard errors signal the non-identification). With covariates, fixing s1@0 implies that the s1 residual variance is zero. The s1 regression on covariates is still fine. For instance, a gender covariate says that the s1 mean is different for males and females.
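To illustrate the last point: with s1@0 the slope is a deterministic function of the covariates, so everyone with the same covariate values gets the same s1, but the covariates can still shift it. A toy sketch with hypothetical intercept and gender-effect estimates:

```python
def s1_value(gender, b0=0.10, b1=-0.05):
    """Model-implied s1 when its residual variance is fixed at zero:
    s1 = b0 + b1 * gender, with no individual deviation around that
    value. b0 and b1 are hypothetical regression estimates."""
    return b0 + b1 * gender

s1_males = s1_value(gender=0)    # 0.10
s1_females = s1_value(gender=1)  # 0.05
```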
My colleagues and I are working on a paper in which we have conducted an LGM and found a non-significant slope variance (p = .12). Based on the previous posts we decided to still conduct a conditional LGM, and we found significant effects of predictors on the slope. Yet, a reviewer is now questioning our analyses, stating that given the non-significant slope variance we should never have moved on to examine a conditional model. So, my question is: do you know any paper or book to which we could refer to support our decision? May we cite this post as a "personal communication"? Thank you so much for your help!
I would not go by the reviewer's rule. It is often found that adding covariates can tease out variance in a slope, presumably due to increased power. I can't put my finger on a reference right now - others? Also, the usual z test (Wald test for one parameter) is known to have low power; see, e.g., Berkhof-Snijders (2001) in JEBS.
I'm testing a simple growth model with 4 time points. The linear (a) and quadratic (b) models show poor fit, so I tested a "free growth model" (c) with two of the 4 time scores free (estimated). This model has a good fit (and visually, the growth seems to be non-linear and non-quadratic, so this makes sense).
However, for each of the above models I get the "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE..." warning. In models a and c, the warning says "PROBLEM INVOLVING VARIABLE I" and in model b, the problematic variable is S.
I found in the outputs that the variance of the problematic latent variable (I or S) is negative, causing the warning (correlations between latent variables are clearly below 1).
But I don't know why this happens or what to do about it. The data look OK; kurtosis and skew are within acceptable limits for all variables, etc.
It sounds like the negative residual variances are not small and insignificant. In this case, they should not be fixed at zero. Instead you need to find a better model for your data.
Katy Roche posted on Tuesday, January 07, 2014 - 10:07 am
I am having trouble finding the slope variances in my conditional model (and I did request TECH4). Although I can see them in my unconditional model output, I just do not see them in the conditional model output. Can you let me know if there is a default suppressing these, or how I can call them out?
I have computed a growth model with four time points and the results indicate significant intercept variance. I went to examine this at +1 SD and -1 SD around the intercept mean, and the value at +1 SD was above my scale. My scale is 1-5 and the value at +1 SD = 5.09. I have double-checked my values and they are in the correct range (i.e., 1-5). Do you know what could be happening here? Also, the SRMR is indicating poor fit (i.e., .14) but the other fit statistics are not.
You would have to send the output and license number to Support.
Jason Bond posted on Tuesday, March 18, 2014 - 12:19 pm
I'm trying to estimate the effect of a grouping variable G in a 3-time-point design using the random intercept model:

y(i,t) = a(i) + b(i)*I(T=1) + c(i)*I(T=2) + e(i,t)
a(i) = a1 + a2*G + u(i)
b(i) = b1 + b2*G
c(i) = c1 + c2*G

where I(T=x) is an indicator variable for data from wave x. When I ran the below syntax, it indicated that PSI is not positive definite. Any suggestions regarding the problem? Thanks much,
Are you sure that the 3 growth factors have fixed zero covariances? If that's not the problem, send output and license number to Support.
Jason Bond posted on Tuesday, March 18, 2014 - 4:24 pm
That was indeed the problem...thanks.
RuoShui posted on Sunday, March 23, 2014 - 7:45 pm
Dear Drs. Muthen,
I read that when doing LGCM, we need to compute a pseudo R-square to understand the variance of I and S explained by the covariates. I am wondering: does Mplus provide pseudo R-square statistics, or does the R-square provided in the output serve the same function?
I believe pseudo R-square is for logistic regression. The regression of one growth factor on another is a linear regression, and growth factors are continuous variables, so pseudo R-square would not apply in this situation.
RuoShui posted on Monday, March 24, 2014 - 3:49 pm
I see. Thank you very much Dr. Muthen. On a related note, the variance of the slope growth factor is .003 in the unconditional LGCM model. When I included covariates, R square statistics indicate 4% of the variance in S was explained. But the residual variance of the slope growth factor in the conditional model is .004. Is this even possible? Thank you a lot!
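One thing to keep in mind when comparing these numbers: the R-square reported for a growth factor is computed against that factor's model-implied total variance in the conditional model, not against the unconditional estimate, and the two models need not imply the same total variance. Taking the quoted numbers at face value, the conditional model's implied slope variance would be:

```python
def implied_total_variance(residual_var, r_square):
    """Back out a growth factor's model-implied total variance in the
    conditional model from its residual variance and reported R-square,
    using R-square = 1 - residual / total."""
    return residual_var / (1.0 - r_square)

# Residual variance .004 with R-square .04 implies a total of about
# .00417 - larger than the .003 estimated in the unconditional model,
# which can happen because the two models are estimated separately.
total = implied_total_variance(residual_var=0.004, r_square=0.04)
```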
I have developed a quadratic LGCM with varying times of observation.
I notice that in the unconditional growth model I have created, the variance of the quadratic growth factor is 0, yet it still appears to be significant. Can you tell me why this is, and whether there is a way of rectifying it?
In my recent analysis, I found that according to the unstandardized model results the mean of the slope factor was significant. However, according to the standardized model results the mean was not significant. How can I interpret these results? Thanks for your help!
I'm running an LCGA and I want to know whether the slope and quadratic term are significant in the different classes. In my output I don't have any p-values in the standardized results. Is it ok to look at the unstandardized p-values to conclude on significance for growth factors?
I conducted a bivariate latent growth curve model over 6 weekly time points (y1-y6 and z1-z6). Univariate analyses showed that the mean and variance of the slope for variable "y" are significant; however, the mean and variance of the slope for variable "z" are not. I am wondering, can I still keep the slope of variable "z" as the indicator of individuals' changes over time in the final conditional bivariate growth curve model? Or is a significant slope mean (or variance) a strict prerequisite for fitting a conditional or bivariate growth curve model? I need an answer to this question because my bivariate model was found to fit the data moderately well and it shows a significant correlation between the two slopes.
i1 s1 | y1@0 y2@1 y3@2 y4@3 y5@4 y6@5;
i2 s2 | z1@0 z2@1 z3@2 z4@3 z5@4 z6@5;
We usually don't, even though on the surface it would make some sense. There are two issues. First, if you are using latent growth factor variances (rather than MLM), the variance parameter can be estimated as negative. Second, you are really dealing with a very tricky issue: it is well known that the LRT and asymptotic standard errors do not perform correctly for parameters on the border of the admissible space (so halving the p-value would make sense only if we actually had the correct p-value, which we don't). Our rule of thumb is that if the t-test value is >5 you can be sure that the variance is significant; if it is >3, somewhat sure. If it is less than 1.5 or 2 it is probably not significant, and it is marginal even if it is. For values between 2 and 3 (or close to those values) you will need a real computation to determine this. The best way is to simulate the LRT: generate data with Var=0 and estimate with both "Var=0" and "Var estimated" to see what the LRT distribution is.
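As an asymptotic shortcut to that simulation: for a single variance tested on the boundary, the LRT null distribution is often approximated by a 50:50 mixture of a point mass at zero and chi-square(1), which amounts to halving the naive chi-square(1) p-value. The simulation remains the safer check, since the mixture weights need not be exactly 50:50 in finite samples or with correlated random effects. A sketch with a hypothetical LRT value:

```python
import math

def chi2_sf_1df(x):
    """Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

def boundary_lrt_pvalue(lrt):
    """Approximate p-value for testing one variance against zero.

    Under the 50:50 chi-bar-square approximation, half the null mass
    sits at LRT = 0 and the remainder follows chi-square with 1 df.
    """
    if lrt <= 0:
        return 1.0
    return 0.5 * chi2_sf_1df(lrt)

p = boundary_lrt_pvalue(3.0)  # about 0.042, vs about 0.083 from a naive chi-square(1)
```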
There are many articles on this topic, but most of them are advanced. Here are two that I would recommend:
Anonymous posted on Tuesday, October 23, 2018 - 10:24 am
Hi, I am a new user of Mplus and can't work out where exactly my question would fit.
I have a dataset in which all baseline scores are zero: everyone starts off at zero and is then followed up and changes over time. For the growth model, and subsequently the GMM models, Mplus doesn't like the fact that there is no variance at the first time point, but this is the true state of affairs and not artificial, so I would expect it to estimate the intercept as zero but still give me a slope estimate, and for the GMM I would expect the classes to be determined by the slopes. I get the following error: *** ERROR One or more variables have a variance of zero. Check your data and format statement.
I have tried the “Variance=no check;” option, which gives me an estimate of the mean that is wrong, because I know it should be zero (it estimates it as -0.006). Should I trust the slope estimate from this model? Or is there a workaround for this scenario?
If you used the variance no check option and don't get convergence, send your full output to Support along with your license number so we can see what's going on.
rgm smeets posted on Tuesday, December 04, 2018 - 1:52 am
I ran a 4-class model and found that one class has a non-significant mean slope and a non-significant quadratic slope growth factor, and a second class has a non-significant slope growth factor. I can see where this comes from, as these classes have quite flat lines. Is there now a reason to change anything in my model? Or does this just give me information about what the growth in each class looks like?
I'm trying to predict an outcome variable based on the slope of the latent growth analysis. I ran a linear LGM analysis and my model seems to fit the data well. I had a non-significant mean slope (p = 0.1), however, the variance of the slope was significant, indicating there are inter-individual differences in growth. When I predicted my outcome variable based on the intercept and the slope, the results were significant, indicating that intercept and the slope significantly predict my outcome variable. My question is: if there’s a non-significant mean slope, but significant slope variance, can I still use slope as a predictor of my outcome variable?
When I ran a growth curve model, the variance of my slope was negative, so I constrained it to zero. But when I plot the individual data, the graph visually appears to show variation in slope. If my graph visually appears to show variation, I am confused as to why I need to constrain my slope variance to zero just because it was estimated as negative.
Perhaps you use a growth model that doesn't fit well. Such as a linear model when the development is more complex.
Also, I assume that when you say you constrain the slope to zero, you mean that you fix the slope variance to zero, not the slope mean.
TJ posted on Saturday, September 07, 2019 - 5:03 pm
Yes, I constrained the slope variance to zero (s@0) because my slope variance was negative. My individual data graph visually looks linear over the 3 time points. When I checked the modification indices, the output shows 999. Given that, are there ways to check whether my model is a good fit and to modify my model?