
Anonymous posted on Tuesday, January 16, 2001 - 3:47 pm



In a time-invariant conditional LTM, how do you interpret the significant effects of the predictors on the slope (in this case, an overall downward trend in health over time)? Example: Education: estimate .074 (se .056). Energy: estimate .231 (se .054). For a 1-unit increase in education, you see... For a 1-unit increase in energy, you see... Thanks in advance. 


Can you describe your model more fully? Does LTM stand for latent transition or latent trait modeling? Do you have dichotomous dependent variables? 

Anonymous posted on Tuesday, January 23, 2001 - 9:31 am



The model has four time points with a categorical dependent health variable (measured on a 5-point scale). The model is time-invariant, with baseline predictors (e.g., education) predicting the intercept and slope of the model. The education and energy variables are categorical, with higher numbers representing higher levels of education and energy to participate in daily activities. I used the acronym LTM to mean latent trajectory modeling. 


With categorical repeated measures, the interpretation of the effects of time-invariant covariates on the slope growth factor can be expressed in several different ways. First, the unstandardized coefficient can be interpreted as in regular regression, in terms of the change in the slope for a unit change in a covariate (holding other covariates constant). This may not carry much meaning because the scale of the slope is arbitrary. Second, one can consider the standardized coefficient, in which case the change in the slope is expressed in slope standard deviations. This still doesn't mean much given that the outcome is categorical. Third, one can express the ultimate effect of the change in the covariate on the outcome variable probabilities. This may give a more "down to earth" interpretation. For instance, you can compute the outcome probabilities for some chosen values of your covariates. You do this by first computing the mean values of the slope given the chosen covariate values and then computing the outcome variable probabilities for these mean values. 

Anonymous posted on Thursday, June 28, 2001 - 2:00 pm



How do you calculate the mean value of the slope given the chosen covariate value and then compute the outcome variable probabilities? Can this be done directly in Mplus, or must it be hand-calculated? Thanks for your assistance. 

bmuthen posted on Sunday, July 01, 2001 - 12:10 pm



I assume that you have a categorical outcome and that by slope you mean the slope growth factor. The mean value of the slope is obtained in TECH4 and is s_m = a + g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x. The probability has to be computed by hand; for instance, with unit scale factors delta (see the User's Guide), you have for a binary y scored 0/1, P(y=1 | x) = F(-tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). 
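For concreteness, this computation can be sketched in a few lines of Python. All parameter values below are hypothetical, chosen only for illustration; note that Mplus parameterizes the binary probit as P(y=1) = F(-tau + eta), so the threshold enters with a negative sign.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal distribution function F."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def outcome_prob(a, g, x, tau, time_score):
    """P(y=1 | x) at one time point, using the slope mean s_m = a + g*x.

    Mplus probit convention: P(y=1) = F(-tau + eta).
    """
    s_m = a + g * x  # mean of the slope growth factor given covariate value x
    return norm_cdf(-tau + s_m * time_score)

# Hypothetical values: intercept and coefficient of the slope factor,
# threshold held equal across time, linear time scores 0..3.
a, g, tau = 0.10, 0.23, 0.721
for x in (0, 1):  # two covariate values
    probs = [outcome_prob(a, g, x, tau, t) for t in range(4)]
    print(x, [round(p, 3) for p in probs])
```

At time score 0 the slope drops out, so the probability is F(-tau) regardless of x; the covariate's effect shows up only at later time points.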


I'm looking at social mobility across 4 different time points, and according to the BIC value the best-fitting unrestricted latent class growth analysis is a 7-class model. Seven different classes is not substantively useful, and I notice from your June 2000 paper (Muthen & Muthen, 'Integrating Person-Centered and Variable-Centered Analyses: Growth Mixture Modeling with Latent Variables,' Alcoholism: Clinical and Experimental Research) that you outline other criteria on page 887 for assessing how many latent classes to use. Please could you explain how high the average posterior probabilities should be - for my 7-class model I've got cross-classification figures as low as 0.683, 0.610 and 0.662. I would prefer to use a 4-class model, which has a higher BIC (26224 compared to 26165) but better cross-classification values, i.e., between 0.764 and 0.91. Am I justified in using a 4-class model? 

bmuthen posted on Sunday, November 14, 2004 - 12:35 pm



The posterior probabilities tell you how useful the model is, but not how many classes fit the data best. You can consider other fit statistics. For example, several simulation studies indicate that Mplus' sample-size adjusted BIC is better than BIC. Also, the Lo-Mendell-Rubin test in Mplus' Tech11 can be used. Ultimately, the usefulness of the model is a key consideration besides statistical fit indices, e.g., predictive performance. 


I want to perform hypothesis testing on the individual parameters in my model. I know that I can use estimate/standard error, but should I use a t or z distribution? 


The estimate divided by the standard error shown in the Mplus output follows an approximate z distribution. 
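As a quick sketch (standard-library Python), the two-sided p-value for an estimate/SE ratio under the approximate z distribution is:

```python
from math import erf, sqrt

def z_pvalue(estimate, se):
    """Two-sided p-value for z = estimate/se, using the standard normal CDF."""
    z = abs(estimate / se)
    # P(|Z| > z) = 2 * (1 - Phi(z)), with Phi computed from the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Example with the energy estimate from the first post (.231, SE .054)
print(z_pvalue(0.231, 0.054))  # well below .05
```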


I am relatively new to growth modeling and have managed to confuse myself. I have a simple question regarding interpretation of coefficients. I understand that including a time-invariant covariate in a model influences the latent slope and intercept, so that estimates listed under the "intercept" section of the output for the slope and intercept account for the influence of the covariate on the latent factors. In my case, the slope mean in the TECH4 output is negative, but I have a positive coefficient estimate for my slope in the intercept section of the output (slope estimate = .113). I'm currently exploring why that may have occurred. But in the meantime, the covariate has a negative association with the slope (-.05). Would I interpret this so that higher scores on the covariate at time 1 are associated with more slowly increasing slopes (based on the slope coefficient)? Thank you in advance. 


If s is the slope growth factor, mean(s) = a + b*mean(x). When x is zero, the mean of s is equal to the intercept of s, that is, a. In your case, a is positive and b is negative, so the mean of x must be a positive value large enough that the product b*mean(x) is negative and larger in magnitude than a, resulting in a negative mean of s. The interpretation is that as x increases, the slope becomes a larger negative value. 
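Plugging in the numbers from the question (slope intercept a = .113, covariate coefficient b = -.05), with a hypothetical covariate mean, makes the sign logic concrete:

```python
a = 0.113    # intercept of the slope growth factor (from the post above)
b = -0.05    # coefficient for the slope regressed on x (from the post above)
x_bar = 3.0  # hypothetical covariate mean, large enough to flip the sign

mean_s = a + b * x_bar  # mean(s) = a + b*mean(x)
print(mean_s)  # approximately -0.037: negative even though a is positive
```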

anonymous posted on Saturday, February 14, 2009 - 8:50 am



Hi, First, I'd like to thank you for making this forum available; it is such a great help! I am attempting to revise a paper and have some questions related to interpreting the correlation between intercept and growth factors. The LGM focuses on symptoms from time 1 to time 7. 1. Given a positive intercept mean (0.82), a negative linear slope mean (-0.16), and a positive quadratic slope mean (0.02), how do you interpret a negative correlation between the slope and intercept? Is it that the higher individuals are on symptoms at time 1, the slower the rate of decline in symptoms? 2. If the variance of the quadratic factor is fixed to 0, is it necessary to include it in your interpretation or to include a correlation between the intercept or linear slope and the quadratic factor? 3. Given a positive intercept mean (0.97), a negative linear slope mean (-0.13), and a positive quadratic slope mean (0.02), how do you interpret: a) a positive (but nonsignificant) correlation between the intercept and quadratic slope, b) a negative correlation between the linear and quadratic slope? Thanks very much in advance! 


1. If you center at time 1, then the higher an individual is at time 1, the lower his/her slope - that is, the steeper the decline. I am referring to the correlation between the intercept and the linear slope (but see also the caveat in 3 below). 2. With Var(q)=0 you don't have covariances between q and other growth factors. You still have the mean of q to explain. 3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat, a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end. b) when the initial decline is steeper, the ending upturn is higher. 

anonymous posted on Tuesday, February 17, 2009 - 3:20 pm



Hello, I conducted a conditional two-group LGM and I'm having some trouble wrapping my head around interpreting the effect of two predictors on the slope functions. In the first group, which consists of only an intercept (intercept = .315) and linear factor (intercept = .023), how do you suggest I interpret the following: 1. a positive path coefficient (0.032) for the regression of the slope on predictor 1; 2. a negative path coefficient of -0.034 for the regression of the slope on predictor 2. In the second group, which consists of an intercept (intercept = 0.844), linear factor (intercept = 0.015), and a quadratic factor (intercept = 0.004), how do you suggest I interpret the following: 1. a negative path coefficient of -0.062 for the regression of the linear slope on predictor 2; 2. a positive path coefficient of 0.012 for the regression of the quadratic slope on predictor 2. Thanks for your assistance! 


Use the rules for interpreting coefficients in linear regression. As in that case, these path coefficients are partial regression coefficients, giving the effect on the DV of a 1-unit change in the predictor while holding the other predictors constant. 

anonymous posted on Thursday, February 19, 2009 - 11:52 am



Thanks for your help. However, I still am not clear on whether the predictor is predicting a faster or slower rate of change. 


Since you mention predictor (singular form) I assume you refer to the question you have for the second group, regarding the quadratic model. Is that right? 

anonymous posted on Friday, February 20, 2009 - 5:43 am



Yes, that is correct. I think (but please correct me if I'm wrong!) that for the group with the negative linear trend (the first group), predictor 1 (with a positive coefficient) predicts a slower decline and predictor 2 (with a negative coefficient) predicts a faster decline. However, I am completely confused as to how to interpret the effect of the predictor in the quadratic group. Thanks again in advance! 


See the following post from Sunday, February 15: 3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat, a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end. b) when the initial decline is steeper, the ending upturn is higher. 

anonymous posted on Friday, February 20, 2009 - 9:05 am



Thanks - is this also true for the effect of covariates on the linear and quadratic slope? 


Yes. 


Hello, I am really working on this to get it right, but I am now confused. I have several covariates predicting latent growth in body image measured at several time points between ages 13 and 30. (I am here using the 'model results'; is STDYX preferable?) To use the covariate close parent-adolescent relationship as an example: for boys there is a positive estimate at the initial level at age 13 (0.14) (understandable!), a negative significant estimate for the slope (-0.25), and a positive estimate for q (0.14). Do I interpret the s and q as meaning that the parent-adolescent relationship influences body image growth to a lesser degree during adolescence, only to be of more importance again in early adulthood (q)? The body image curve for boys increases between the ages of 13 and 18, then levels off and decreases somewhat at ages 21 and 23 (and increases again up to age 30). 


Hi again, We are treating close adolescent relationship and peer relationship as 'time-invariant covariates' from time 1. I have BMI as a time-varying covariate at six points in time. When we include BMI in the model, the effects of the time-invariant covariates on slope and quadratic growth disappear for girls, while there is almost no difference for boys. Is there a way that we can reveal this effect in one model (one step)? Now we run it twice. Thanks in advance (for both my posts). 


Regarding your first post, it is difficult to separately interpret effects on linear and quadratic slopes. This is why "orthogonal polynomials" are sometimes used. The effect on the intercept is straightforward, however, and one approach to this issue is shown in Muthén, B. & Muthén, L. (2000). The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300, which is on our web site under Papers. Regarding the choice of standardization, see the UG. 


When you say run it twice, I think you don't mean for boys and girls but with and without BMI. If so, it seems difficult to capture the changing gender role in one model. The growth is different with BMI as a tvc. I wonder if having BMI as a parallel growth process instead of as a tvc would be useful. 


Hi, I'm analysing a latent growth curve model with four time points and time-invariant and time-varying covariates. I'm interested in the total effect of TVCs on the growth factors, especially on the mean (or intercept) of the slope factor, i.e., does the mean growth rate change significantly after TVCs are specified? In the model without the TVCs the intercept of the slope factor is .35, and in the model with the TVCs it is .45, indicating that the growth rate would be higher if the effects of TVCs were removed from the equation. How can I assess the significance of this change? Is it okay to constrain the intercept of the slope factor in the model with TVCs to the value it had in the model without TVCs and then analyse the chi-square change in model fit? Or is there a better/correct way to do this? Thanks in advance. 


No, that doesn't sound correct. As a first step you want to think about how to make the question well defined. What is the intercept/mean of the slope growth factor when the model includes the TVCs - does it mean the same thing as when TVCs are not included? When included, does your model let the TVCs influence the slope growth factor? If not, doesn't the slope refer to the development of the Ys at zero values of the TVCs? Which raises the question: are the TVCs centered (sample means subtracted)? 


My model looks like this:

MODEL:
! Nonlinear growth curve;
ylevel yslope | y1@0 y2@0.6 y3@1.6 y4@1.6;
! TICs;
ylevel on tica ticb;
yslope on tica ticb;
! TVCs/concurrent effects;
y1 on tvc1;
y2 on tvc2;
y3 on tvc3;
y4 on tvc4;
! TVCs/lagged effects;
y2 on tvc1;
y3 on tvc1 tvc2;
y4 on tvc1 tvc2 tvc3;

Y is a personality variable and the TVCs are the number of certain types of events. Regressions of Ys on TVCs show small but significant negative effects. If I understand your point right, to me the essential meaning of the Ys, and hence (I think) of the growth parameters, is the same whether TVCs are specified or not. I can do the centering of TVCs, but the interpretation of the growth parameters at zero number of events, as the TVCs now stand, is also well motivated. TVCs do not directly influence the growth factors - actually, regressing the slope factor on the TVCs would in a way be the easiest solution to my problem, but I don't think that it is allowed here, or is it? Thanks again. 


Say that the TVC means decline linearly over time. Then the direct negative effects onto the Ys will help pull down the Y means instead of the slope mean being the only source affecting the Y means. This affects the interpretation of the slope mean changing across the two models. You can regress the slope on TVCs. For instance, TVC1 happens before the slope affects the change from time 1 to time 2. TVC1 might also be correlated with the intercept. These points illustrate the complexity of models with TVCs. 


Thanks for your help. I regressed the slope (and intercept) factors on TVC1 (and on TVC2 in a three-time-point model) and there were no significant effects on the slope. These models seem to me less than perfect, however, as TVCs 3 and 4 can't (I think) be used in them. If there is no way to assess the joint effect of all TVCs on the growth factors, I guess I need to consider some models other than LGC. Any suggestions? 


You might want to take a look at how intercept changes can be modeled by TVCs - see slides 157-159 of the Topic 3 handout of 05/17/2010. You can also formulate a growth model for the TVC process and do parallel growth modeling where the TVC growth factors influence the growth factors for the Y process. 


I am modeling gender and race centrality as predictors of change in cross-race contact. In terms of the analysis interpretation, we are unsure of how to interpret the output from Mplus (gender is 0=female and 1=male). The columns are estimate, S.E., est./S.E., and p-value:

INT ON
  GENDER      0.040   0.073   0.540   0.589
  CENTRALITY  0.101   0.060   1.679   0.093

SLOPE ON
  GENDER      0.111   0.061   1.802   0.071
  CENTRALITY  0.102   0.055   1.866   0.062 


You interpret these slopes just like you would in a regular linear regression with a continuous dependent variable - that is, as if INT and SLOPE were observed variables. 


Hello, I have a question regarding the interpretation of BMI as a time-varying covariate. It is a latent growth curve with i, s, and q; the outcome is body image at 6 ages from 13 to 30, with background variables as well. The time-varying covariate BMI has a significant negative estimate for males at age 13 (-.18) and 30 (-.27), but a significant positive one at age 21 (.04, p<0.01). Females have a significant positive estimate at age 21 and a negative one at age 30. I have checked this several times now; it seems correct. So BMI has an additional effect on body image at these ages: at ages 13 and 30, boys' relatively high BMI led to further decline in body satisfaction, while at age 21 the opposite occurred? Or am I interpreting the estimates with time-varying covariates wrong here? How is it best to express it? Many thanks! 


See the following book, which has a section on the interpretation of time-varying covariates: Bollen, K.A. & Curran, P.J. (2006). Latent Curve Models: A Structural Equation Modeling Perspective. Wiley. 


Thanks for the quick reply! My worry is A) that the results (see above) might be incorrect. Based on previous research I just cannot see how a positive prediction at age 21 (BMI (TVC)/body satisfaction) can be correct, particularly not for girls. Also, the correlations are negative, around -.30. B) The fit measures for the model are not that good: CFI .93, RMSEA 0.04, SRMR 0.08. I have looked at modification indices: when I include a path Q ON bmi30, the fit is much better: CFI .97, RMSEA 0.03 and SRMR 0.05. Chi-square is much lower too (but still significant; the sample is 1082). It makes sense to me that BMI30 predicts Q - males (.20), females (.39) - as the body image curve levels off in adulthood, but can I do that? Then BMI at age 21 is no longer positive and not significant; all effects go through Q. 


A) This I cannot comment on without more information than can be handled on Mplus Discussion. B) If the model doesn't fit, any interpretation of the results is invalid. 


Ok, we are struggling with this. Can I post more information here or can we do it another way? 

Carolin posted on Monday, August 22, 2011 - 2:38 am



Hello, I'm analyzing a quadratic GMM with four time points and covariates. One covariate has a significant influence on the linear slope factor, but a nonsignificant influence on the quadratic factor. How can I interpret this? Does this mean that the covariate only affects the change between T1 and T2 and after this there is no influence? Thanks a lot. 


Not quite. Telling apart effects on the linear and quadratic growth factors is difficult because those two factors interact. The covariate that influences the linear factor significantly continues to have an influence after T2, because the linear slope continues to have an influence beyond T2. But beyond that, it is hard to parse out the influences via the two growth factors. 


Dear Linda and Bengt, I am analyzing the following longitudinal growth model on a continuous variable (perceived alcohol availability), including multiple group analyses on age:

GROUPING is age (13=13 14=14 15=15);
MODEL:
iPalav sPAlav | M1PAlAv@0 M2PAlAv@1 M3PAlAv@2;
iPAlav on group1;
sPAlav on group1;

I get the following warning in the output:

THE MODEL ESTIMATION TERMINATED NORMALLY
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP 14 IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE SPALAV.

However, I do not know what to check in the technical output 4 and what I can conclude from this. I hope you can help me further. 


Just to give some more information in relation to my previous question: in the TECH4 output for the 14-year-olds, Mplus doesn't give the correlations between sPalav and the other latent variables (stated as 999) or with itself. Furthermore, in the estimated covariance matrix for the latent variables I can see a negative variance for sPalav (-.107). Is there anything we can do to solve this problem? Looking forward to your reply. 


It sounds like spalav has a negative variance. This makes the model inadmissible. You would need to change the model. 

Gareth posted on Monday, April 09, 2012 - 4:55 am



I have two questions about this formula for calculating outcome variable probabilities (binary y scored 0/1) at different levels of a covariate over time, in categorical growth models: "P(y=1 | x) = F(-tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). The mean value of the slope is obtained in TECH4 and is s_m = a + g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x." 1. The mean value of the slope obtained in TECH4 is different from the intercept of the slope factor. If the intercept of the slope factor is used in the formula, why is the mean value of the slope obtained in TECH4 relevant? 2. Is this formula the same for logit and probit coefficients? If different, how should it be modified? 


1. The mean and intercept are different parameters. The mean of y is y_bar = a + b*x_bar; the intercept is a = y_bar - b*x_bar. 2. See Chapter 14 of the user's guide. There is a section on probit and another on logit. 
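As a rough numeric illustration of the probit/logit distinction discussed in that chapter, the two links map the same linear predictor eta to different probabilities (a logit coefficient is roughly 1.6 to 1.8 times the size of the corresponding probit coefficient):

```python
from math import erf, exp, sqrt

def probit_prob(eta):
    """Probit link: P = Phi(eta), the standard normal CDF."""
    return 0.5 * (1.0 + erf(eta / sqrt(2.0)))

def logit_prob(eta):
    """Logit link: P = 1 / (1 + exp(-eta))."""
    return 1.0 / (1.0 + exp(-eta))

for eta in (-1.0, 0.0, 1.0):
    print(eta, round(probit_prob(eta), 3), round(logit_prob(eta), 3))
```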

Meg posted on Monday, June 11, 2012 - 6:48 am



Hello, I have a question regarding the interpretation of my growth model. I am looking at depression (outcome) across four time points and assessing the influence of time-varying and time-invariant predictors on the slope and intercept of the depression curve. The UGM (unconditional growth model) suggests that depression decreases from mid-adolescence through young adulthood (I used a freed factor loading approach). I am a little confused as to how to interpret the regression coefficients because most of the examples have positive slopes. Here are my questions: 1. If the estimate for gender (boys) predicting the slope of depression is positive, does this mean that boys have a slower rate of decline in depression over time? 2. The time-varying covariate also has a declining slope. The regression estimate for the effect of the intercept of the TVC on the slope of depression is positive (.085). Does this mean that those people with higher levels on the TVC have a slower rate of decline in depression? Thanks. 


1. Yes. 2. Yes. 

Gareth posted on Thursday, November 01, 2012 - 7:02 am



Suppose I have a parallel process growth model with categorical outcomes. The intercepts and slopes are regressed on covariates, and the intercepts and slopes are correlated. For each covariate, I have calculated probabilities using the formulae in the discussion above: (1) s_m = a + g*x for covariate value x; (2) F(tau + s_m*x_t). How can this formula be modified to estimate the probability, at the first time point, that one outcome is already present when the other outcome is already present at baseline? The two intercepts are correlated, so I want to illustrate that someone already having outcome 1 is more likely to already have outcome 2 at baseline. The correlation between the intercepts is captured by a covariance rather than by a regression coefficient. 


You say P(y=1 | x) = F(tau + s_m*x_t), but that is P(y=1 | x, s=s_m). If s is a random effect, to get P(y=1 | x) you have to integrate over s. Likewise, P(y1=1, y2=1 | x) requires bivariate integration over s1, s2. 
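A simple way to carry out that bivariate integration is Monte Carlo: draw the two growth factors from their estimated bivariate normal distribution and average the product of the conditional probit probabilities (this assumes the two outcomes are independent given the growth factors). All parameter values in the sketch below are hypothetical; in practice you would plug in the TECH4 means, variances, and covariance.

```python
import random
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def joint_prob(mu1, mu2, sd1, sd2, rho, tau1, tau2, t, n=100_000, seed=1):
    """Monte Carlo estimate of P(y1=1, y2=1 | x) at time score t,
    integrating over bivariate normal factors (s1, s2) with correlation rho."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        s1 = mu1 + sd1 * z1
        s2 = mu2 + sd2 * (rho * z1 + sqrt(1.0 - rho * rho) * z2)
        # Conditional probit probabilities, multiplied because the outcomes
        # are assumed independent given the factors
        total += norm_cdf(-tau1 + s1 * t) * norm_cdf(-tau2 + s2 * t)
    return total / n

# Hypothetical parameters for two parallel processes
print(joint_prob(mu1=0.2, mu2=-0.1, sd1=0.5, sd2=0.4, rho=0.6,
                 tau1=0.7, tau2=0.3, t=2))
```

With both factor variances set to zero, the estimate reduces exactly to the product of the two marginal probit probabilities, which is a useful sanity check.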


Hello, I am estimating longitudinal models with binary and ordinal observed outcomes using the multilevel features (TWOLEVEL RANDOM) in Mplus (as opposed to the SEM/LGCM approach). I know that with SEM/LGCM, the regressions of growth factors on covariates are linear regressions (as the intercept and slope are continuous latent variables with arbitrary metrics) and the factor loadings for the intercept and slope are fixed logit or probit coefficients. One can, however, get predicted probabilities for the observed categorical indicators of growth by combining the appropriate parameter estimates. But in the case of MLM, where the intercept and slope are estimated by directly regressing the categorical outcome on an observed ordinal time variable using stacked (long) data (rather than creating a measurement model for the growth factors) and a logit or probit link, can the estimated growth parameters themselves be more directly interpreted on the logit (or probit) scale... following which one could simply convert these to odds ratios and thus predicted probabilities? Is this correct or am I missing something? Thanks, Cam 


I think this is the right way to look at it. This is like UG ex 9.16 but with categorical outcomes. Here x1 and x2 influence s and y on the between (subject) level, which in turn influence the outcome at each time point as the figure implies. So x1 and x2 ultimately have a logit/probit influence on the outcomes. 


I am using a simple linear growth curve to predict a distal outcome. In explaining the contribution of the slope as a predictor, in addition to interpreting the estimates provided, is it practical to convey this information by using R-square? Specifically, would it be feasible to run the model with only the intercept or slope being used as a predictor (e.g., y ON i), then the same model but with both i and s as predictors (y ON i s), and report the differences in R-square between these two models? 


That doesn't seem unreasonable, as long as i and s are not too highly correlated. 


I am estimating a linear latent growth curve model across three time points using ordinal categorical data. Specifically, the response scale of the outcome variable is a 4-point Likert scale (none, 1-2 days, 3-5 days, 6-7 days). As such, I have used WLSMV to estimate the model. In the output, the mean of the intercept is fixed at 0 and the mean of the slope is estimated. The output tells me that the mean of the slope is -0.3 (p<.05), so the outcome variable decreases by 0.3 points between each time point. Reviewers of my work continue to ask me what this means in terms of how much change occurred. Because the outcome variable is ordinal categorical, I am finding it difficult to answer this. Would this best be answered in terms of calculating an effect size for the slope, and if so, how would I do this? Thank you in advance for your help. 


I think looking at a plot of probabilities would be helpful. See the SERIES option of the PLOT command. 


Thank you for your response, Linda. Just to follow up: is it possible to calculate an effect size for the slope, in order to report that the decrease represented a small, medium, or large change? 


It is possible, but does not convey how that impacts the probabilities of the observed variables. 

Ivana Igic posted on Wednesday, August 14, 2013 - 3:17 am



Dear Drs. Muthen, I'm running a 3-step GMM (Mplus Web Notes: No. 15). In the first step, after I tested different models, I got a 5-class curvilinear solution as the most suitable. In the third step I predicted the distal outcome at T5, while controlling for the T1 value of the distal outcome. 1. Is the intercept value of the distal outcome within a class the mean value of the distal outcome per class? 2. The values of the distal outcome are 1-6; what is wrong if I get negative values for the intercept, or values higher than 6, per class? 3. I also analyzed the same data in SPSS using ANCOVA and I got very different values for the estimated mean value of the distal outcome per class. 4. I used the Wald test for the distal outcome means comparison as suggested, but this doesn't work. Did I do something wrong?

%c#1% …. [t5_y] (m1);
%c#2% …. [t5_y] (m2);
%c#3% …. [t5_y] (m3);
%c#4% …. [t5_y] (m4);
%c#5% …. [t5_y] (m5);
MODEL TEST:
m1=m2; m1=m3; m1=m4;
m2=m3; m2=m4; m3=m4;

Thank you very much for your help. 


1-3. The intercepts, not the means, are being estimated. 4. Remove m2=m3; m2=m4; m3=m4; - the other tests imply those tests. 

Ivana Igic posted on Wednesday, August 14, 2013 - 10:49 am



Thank you very much for answering me! 1. I want to compare the value of the distal outcome across different classes; how should I then interpret the intercept values? I want to be able to say that people within one class feel better/worse (my distal outcome) compared to people in other classes, and to test the significance of these differences using the Wald test. 2. The model test is still not working. Thank you very much for your help and have a nice day. 


1. The intercept is the mean controlled for the covariate. 2. Please send the output and your license number to support@statmodel.com. 


My question relates to conditional linear latent growth curve models. I have an unconditional model which shows that the endogenous variable declines over time. If I regress the slope factor on an exogenous variable, its effect on the slope factor is negative. Example:

Unconditional model (unstandardized means):
I   3.902   0.028   141.290   0.000
S  -0.037   0.010    -3.506   0.000

Conditional model:
S ON AGSLB  -0.194   0.039   -4.941   0.000

As you see, there is a negative effect of AGSLB on the slope factor. Does this mean that if AGSLB increases by one, the curve of the endogenous variable will move (by 0.194 units) towards zero? And in general, does a positive effect on the slope factor mean that if the exogenous variable increases, the curve of the endogenous variable will move more in the direction it has in the unconditional model, and a negative effect that it will move more towards zero, regardless of the curve's shape in the unconditional model? 


A covariate that has a negative effect on a slope is interpreted as follows. As the covariate value increases, the slope value decreases. It doesn't matter if the slope mean is negative or positive. If the mean is negative, increasing covariate value beyond its mean makes it even more negative. 


I am running a linear growth curve model. The mean slope (-0.003) is not significant and negative, but the variance of the slope (0.002) is significant. I am testing how the slope predicts an outcome variable, and I find a significant and positive unstandardized regression coefficient (0.245) for the slope predicting that outcome. How do I interpret this finding? I know a positive regression coefficient means that, with an increasing slope, the outcome variable increases. But since my slope is slightly negative to begin with, does that mean the more negative my slope, the more increased the outcome? Does this logic apply even when my slope is not significant to begin with (it may be sort of random that it is slightly negative)? 


There is variability around your slope mean. It can increase by going from, for example, −1 to −.5. This increase is associated with an increase in the outcome. 


Thank you Linda. Just a follow-up question to make sure I understand you right: when you talk about a change in a negative slope from −1 to −.5, you would refer to that as an increase in the slope? Because I thought this would be called a decrease, as the absolute value of the slope is decreasing. Could you once again explain what we call an increase or decrease in the case of a negative slope? I believe this difference is quite important for how we interpret the regression coefficient from the slope to the outcome. Thank you, Martina 


We aren't talking about absolute values. We are talking about the actual values. 


Hello, I have five longitudinal binary outcomes. I’m fitting linear growth curve models and I want to check my understanding of the output by computing the model-predicted probability of the outcome at the first time point. Fitting with probit/WLS, I’m using the equation from chapter 14 of the manual (v7, p. 492): Pr(u=1 | x_t) = F(−tau + slope*x_t), where F is the normal distribution function, tau is the threshold parameter (the same for all time points), and x_t are the time scores (the slope loadings). So for x_t = 0 the probability of u is given just by F(−tau). Fitting the model, I obtain tau = 0.721, which gives Pr = 0.235, which fits the observed proportion pretty well (0.225). My difficulties start when I fit the model by logit/MLR. When I fit the same model I get a value of 3.67 for the thresholds. To compute a probability I use equation (1) (p. 493), with tau = a: Pr(u=1 | x_t) = 1/(1+exp(tau − slope*x_t)), so again with x_t = 0 it should be 1/(1+exp(tau)). But this gives me Pr = 0.023 for the fitted threshold (3.67). When I use the PLOT3 option to graph the predicted probabilities they come out fine (i.e., around Pr = 0.235), so I presume I’m doing something wrong. Any help would be appreciated. 


First, do you have random effect, that is, growth factors with variances? If so, the computations have to take that into account by numerical integration. 


Thanks for getting back to me. Yes, the model has latent growth intercept and slope factors. The model statement is: i s | u1@0 u2@2 u3@4 u4@6 u5@8; I'm interested in the predicted probability for u1. I thought that, for time 1 (u1), the contribution of the latent growth factors would be zero. This assumption appeared to hold (i.e., the equation above produced the correct answer) when estimating the model by probit/WLSMV but not when using logit/MLR. 


A quick update to my query. Thanks for your mention of the latent variable variances: when I fit the logit/MLR model with these constrained to zero, the threshold on its own gives the predicted probability I was expecting (logit 1.043, Pr = 0.26). But I'm still curious as to why this should matter for predicting the probability using MLR but not for WLSMV. 


The WLSMV model uses a probit link together with normality for the growth factors, and this results in a normal y* which gives an explicit form for the probability in terms of the normal distribution function. The ML model uses a logit link together with normality for the growth factors, which does not give an explicit form but requires numerical integration. 
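To make the distinction concrete, here is a sketch in Python. The thresholds are taken from the thread, but the growth-factor variance `psi` is a hypothetical value chosen only for illustration. Under probit with a normal growth factor, y* stays normal, so the marginal probability is just the normal CDF evaluated at a rescaled threshold; under logit there is no closed form, and the logistic response probability has to be integrated numerically over the normal density of the growth factor.

```python
import math
from scipy.stats import norm
from scipy.integrate import quad

tau_probit = 0.721   # probit threshold from the thread
tau_logit = 1.043    # logit threshold from the thread
psi = 0.5            # hypothetical growth-factor (intercept) variance

# Probit + normal growth factor: y* is still normal, so the marginal
# probability has a closed form with a variance-rescaled threshold.
p_probit = norm.cdf(-tau_probit / math.sqrt(1 + psi))

# Logit + normal growth factor: no closed form; integrate the logistic
# response probability over the normal density of the growth factor.
def integrand(eta):
    return norm.pdf(eta, scale=math.sqrt(psi)) / (1 + math.exp(tau_logit - eta))

p_logit, _ = quad(integrand, -10, 10)
```

Note that with `psi` set to zero the logit marginal probability collapses to 1/(1+exp(tau_logit)), which is why fixing the variances to zero made the threshold alone reproduce the expected probability under MLR.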


Aha. Thanks for this. One final question in this case: what is the interpretation of the item threshold parameter in such a logit model? I have always assumed it was the logit of the probability of an item response for a case located at zero on the latent intercept and slope parameters, but it seems that this isn't the case. 


When you condition on zero latent variable values, the threshold interpretation is as you say (although with logit that reverses the sign of the threshold). It is when you don't condition that you need the numerical integration. 

Kelly Murphy posted on Wednesday, August 05, 2015  11:15 am



Hello, I estimated a parallel process model/dual domain model (two latent growth curve models at once), and am having trouble interpreting the covariance between the two slopes. How do I interpret a negative covariance between a negative slope and a positive slope? Thank you! 


When the positive slope increases, the negative slope decreases, that is, it becomes more negative. 


Thank you so much for taking the time to respond to my question, I sincerely appreciate it. So would a positive covariance between a negative slope and a positive slope mean that when the positive slope increases, the negative slope decreases less? 


Yes. 


Dear Linda and Bengt Muthén, I have a cohort-sequential LGM with distal outcomes and I have some trouble interpreting the results. The linear slope is positive (.255) and the quadratic slope is negative (−.191). I understood from the forum that they should always be interpreted together. I plotted the growth curve and there is an increase first and then a decrease. 1) The linear and quadratic slopes are negatively related. Can I interpret this relation? It seems obvious that they are related, as a linear increase is followed by a quadratic decrease. 2) The linear slope positively and the quadratic slope negatively predicted 3 of my outcomes. Does that mean that a developmental course with a steeper increase and a steeper decrease, so a higher overall curve, predicts my outcomes? 3) Would you say the intercept is the initial level? Or a constant determined by all data waves? Thank you! 


Consider centering the time scores to reduce the s, q correlation. Some answers are in the paper on our website: Muthén, B. & Muthén, L. (2000). The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300. 
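The effect of centering can be checked directly from the time scores themselves. A minimal sketch in Python, assuming four equally spaced waves (with symmetric, equally spaced time scores, centering makes the linear and quadratic scores exactly orthogonal, though it also moves the point at which the intercept and linear slope are defined):

```python
import numpy as np

t = np.array([0., 1., 2., 3.])   # original linear time scores
tc = t - t.mean()                # centered time scores: -1.5, -0.5, 0.5, 1.5

# Correlation between the linear and quadratic time scores
r_raw = np.corrcoef(t, t ** 2)[0, 1]     # near 1: s and q nearly collinear
r_cen = np.corrcoef(tc, tc ** 2)[0, 1]   # 0 for symmetric centered scores
```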

Adam Milam posted on Wednesday, October 26, 2016  9:40 am



I am conducting a parallel process GMM with continuous indicators and am new at this. Both processes have 4 time points. I am having difficulty interpreting the results. I ran separate GMMs to identify the appropriate number of classes (process 1: 4 classes; process 2: 3 classes). When I run the parallel process model, the class structure seems to change (i.e., different slopes and intercepts); how do I interpret the different classes now with the 12 class patterns? Should the classes hold up, and should there be consistency of slope and intercept across the different class patterns (should pattern 1 1 and pattern 1 2 have similar slope and intercept for process 1)? Also, how do I find the conditional probabilities of class membership given membership in another class in the Mplus output? 


It all depends on how you set up the model. I would use 2 latent class variables, one for each process, and then use the dot command (see the UG), e.g., %c1#1.c2#1%, to impose exactly the equality constraints you want. 

Daniel Lee posted on Wednesday, December 07, 2016  12:07 pm



Hi Dr. Muthen, I ran a multigroup conditional LGM, and I am having some trouble wrapping my head around the fixed effects. In particular, the mean of the intercept factor (≈4.80 on a Likert-type scale of 0-5) for one of the groups was extremely high! However, when I looked at the mean of the dependent variable at the first time point for that group, the mean was a lot lower. I know that the mean of the intercept is an adjusted mean at the first time point for a particular group, but the intercept of this particular group seems way too high (especially since most of the covariates are not significant). I was wondering if my understanding of the mean of the intercept is correct (an adjusted mean for a group at the first time point), and if there are other factors in a multigroup LGM that might inflate the intercept estimate (especially if most covariates are not significant). 


Please send the output to Support along with your license number so we know exactly what you are looking at. 


Hi Dr. Muthen, Could you please help clarify the following interpretation: If the mean slope of the outcome is significantly negative, and when that is regressed on a covariate, the regression coefficient is also negative, does that mean the covariate is predicting a more or less steep slope? We know that a positive regression for a positive slope means the steepness of the slope increases, but we are unsure how to interpret it for negative slopes. 


It is simple: if x increases by a certain amount, the negative coefficient says that a negative value is added to the slope. This means that the slope decreases. So a big x value gives a larger negative slope, that is, a steeper downward slope. 
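As a numeric sketch of that arithmetic (all values hypothetical), writing the conditional slope as slope = alpha + beta·x:

```python
alpha = -0.30   # intercept of the slope growth factor (hypothetical)
beta = -0.10    # negative covariate effect on the slope (hypothetical)

# As x increases, a negative value is added to an already negative
# slope, giving a steeper downward trajectory.
slopes = {x: alpha + beta * x for x in (0, 1, 2)}
```

Here the expected slope moves from −0.30 at x = 0 to −0.50 at x = 2: a more negative value, i.e., a steeper decline.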

Uzay Dural posted on Monday, June 12, 2017  12:30 am



Hello Drs. Muthen, I conducted a conditional multiple-group (experimental versus control groups) MLGM. The interaction between gender (men = 1) and a time-invariant covariate significantly predicts the negative slope in the experimental group (gender × covariate → slope = −.237). Does this mean: as the covariate increases, the slope decreases for men (compared to women) in the experimental group? It is probably not, and I am confused. Due to limited sample size I could not conduct a 4-group MLGM (female experimental, female control, male experimental, male control). Instead, should I focus on the experimental group and conduct a multiple-group MLGM with gender groups (covariate → slope)? Or which post-hoc analysis should I use? Thank you very much in advance! 


Q1: Yes Q2: With limited sample size you may want to represent the 4 different groups by 3 dummy variables. 

Uzay Dural posted on Tuesday, June 13, 2017  1:02 am



Thank you very much Dr. Muthen! A follow up: is it possible to get standard errors of model estimated means to plot the interaction effects? 


Yes, you can even get plots with confidence intervals for interactions; see the Table 1.8 runs on the web page for our book examples: http://www.statmodel.com/mplusbook/chapter1.shtml 

Daniel Lee posted on Wednesday, September 27, 2017  6:06 am



Hello, I ran a growth model for variable Z and the intercept and slope terms were significant. However, upon including a predictor (X1), the slope term for Z was no longer significant, but X1 significantly predicted the slope of Z (e.g., .50). Can I interpret this result even though the slope term for variable Z is not significant in the model? Thank you! 


Perhaps you are looking at the intercept instead of the mean for the slope when you add the covariate. 

Daniel Lee posted on Thursday, September 28, 2017  5:01 am



Hi, in response to your previous post: the intercept was still significant after including the covariate (i.e., X1), but the slope was no longer significant. But X1 significantly predicted the nonsignificant slope term. I'm wondering if that means that the trajectory is flat when the covariate is at 0, but an increase in the covariate increases the rate of change in the growth factor (the effect size of X1 on s was .5). I would love your input. 


I was talking about the intercept for the slope growth factor regressed on the covariate. 


Dear Dr. Muthen, I’m working on a LGM in which the i and s of a variable influence the i and s of in-role and extra-role performance. I also included some predictors. The fit was not good. Modification indices suggested including residual covariances between both outcomes (same time) and within one outcome (different times). If I include the ones that you see below, I get good fit. But does it make sense? How can I justify including a residual covariance between INPE2 and EXPE2, and not between INPE1 and EXPE1, or INPE3 and EXPE3?
MODEL:
ix sx | flow1@0 flow2@1 flow3@2 flow4@3;
iy sy | inpe1@0 inpe2@1 inpe3@2 inpe4@3;
iye sye | expe1@0 expe2@1 expe3@2 expe4@3;
iy ON ix X1; sy ON sx;
iye ON ix X1; sye ON sx;
ix ON X1 X2 X3;
sy@0; sye@0;
iy WITH iye@0; !they were nonsignificant so I fixed them to 0
INPE2 WITH EXPE2; EXPE4 WITH INPE4; !in-role and extra-role at the same time
EXPE3 WITH EXPE1; INPE3 WITH INPE1; !same variable at different times
Thank you, 


This question is suitable for SEMNET. 


What does this mean? Should I post it on SEMNET? I'm really interested in knowing your opinion about whether or not to include the residual covariances suggested by the modification indices. Thank you, 


A priori, I would include residual correlations among all the outcomes at the same time point, for all time points. This is because there are presumably many left-out time-specific covariates that influence all outcomes at a certain time point. The outcomes may also have residual correlations across time due to left-out covariates that influence all time points. 

AT Jothees posted on Monday, April 16, 2018  5:07 am



Dear Linda, I am running a higher-order latent growth curve analysis with multiple indicators measured at five occasions. I have managed to save the slope and intercept scores for presentation of the trajectories in figures. When I plot the slope (e.g., cognitive function) against age, I see that the y values (slope) range from −3 to 3, and with increasing age (x value) there is a significant decline in the slope. I have a question: a) when the slope trajectory line crosses 0 on the y-axis, does this mean that there is no difference in slope for the reference group on the x-axis? 


No; y = 0 means that you have a flat trajectory at that x value. 

mplususer00 posted on Tuesday, August 07, 2018  3:53 pm



Is it possible to have two groups that do not have a significantly different initial status, but one group changes significantly faster (the slope is greater for one group, and a Wald test on the slope shows significance) than the other, yet they do not end up much different from one another (this over 4 time points)? I have this result and I'm not sure what to make of it. 


I assume you are talking about a linear growth model with fixed time scores. If so, yes I can imagine this happening with a large sample making the slope mean significant but not resulting in an outcome difference that is substantively important. 


I have a dataset for which I want to test factorial measurement invariance over 8 time points (1 factor of math achievement). At each time point there are 60-80 dichotomous items, of which a few are anchor items. Is it computationally feasible to do this in Mplus? Just asking before I proceed. 


It shouldn't be a problem. See http://statmodel.com/MeasurementInvariance.shtml All three methods should be feasible: alignment, BSEM, random loadings. 


Hello. I’d like to check whether or not my interpretation of a dual process growth model in which the growth factors of process A predict the growth factors of process B is correct. Both processes follow a quadratic curve with an initial decrease followed by an increase. Essentially, I would like to establish if negative relationships between growth factors correspond with steeper increases and/or decreases, and if positive relationships correspond with slower/gentler increases and/or decreases. For example: When the intercept of process A predicts the intercept of process B, the relationship is positive – does this mean that high initial levels of process A predict high initial levels of process B? When the intercept of process A predicts the linear slope of process B and the relationship is positive, does this mean that high initial levels of process A predict a slower decline in process B? When there is a positive relationship between the linear slopes of both processes, does this mean that a slow decrease on process A predicts a slow decrease on process B? Thank you. 


When you say "predict", you need either time ordering between the 2 processes (A happening before B) or strong theory to back that up. Q1-Q4: Yes. 

LS posted on Wednesday, July 24, 2019  1:43 am



Dear Drs. Muthen, I have run a conditional 3-class GMM-CI model. When inspecting the logistic coefficients versus the odds ratios for the three different predictors, the related p values differ. For example: C#3 ON CLE_SA7 0.955 0.343 2.789 0.005 CLE_PV7 0.801 0.314 2.556 0.011 ODDS RATIO C#3 ON CLE_SA7 2.600 0.891 1.796 0.072 CLE_PV7 2.229 0.699 1.758 0.079 Why does this happen? Which one should I use in a report? I was actually using odds ratios. Yours sincerely, LS 


Q1: See our FAQs on odds ratios. Q2: Use CIs for ORs. 
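The discrepancy comes from how the Wald test is formed on each scale: the standard error of the OR is obtained from the coefficient's standard error by the delta method, and the z-test on the OR scale compares the OR to 1 rather than the coefficient to 0; the two tests are not equivalent in finite samples. A sketch in Python reproducing the first row of the output above (to rounding):

```python
import math

beta, se = 0.955, 0.343          # logistic coefficient and SE from the output

z_beta = beta / se               # Wald test of beta = 0 (log-odds scale), ~2.78

odds_ratio = math.exp(beta)      # ~2.600
se_or = odds_ratio * se          # delta-method SE of the OR, ~0.891
z_or = (odds_ratio - 1) / se_or  # Wald test of OR = 1 (OR scale), ~1.79
```

Because the sampling distribution of the OR is skewed, the symmetric Wald test on the OR scale is the less accurate of the two, which is one reason confidence intervals for ORs are preferred for reporting.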
