Data interpretation
Message/Author
 Anonymous posted on Tuesday, January 16, 2001 - 3:47 pm
In a time-invariant conditional LTM, how do you interpret the significant effects of the predictors on the slope (in this case, an overall downward trend in health over time)?

Example:
Education: estimate .074 (se .056)
Energy: estimate -.231 (se .054)

For a 1 unit increase in education, you see...
For a 1 unit increase in energy, you see...
 Bengt O. Muthen posted on Wednesday, January 17, 2001 - 9:22 am
Can you describe your model more fully? Does LTM stand for latent transition or latent trait modeling? Do you have dichotomous dependent variables?
 Anonymous posted on Tuesday, January 23, 2001 - 9:31 am
The model has four time-points with a categorical dependent health variable (measured on a 5-point scale). The model is time-invariant with baseline predictors (e.g. education) predicting the intercept and slope of the model. The education and energy variables are categorical, with higher numbers representing higher levels of education and energy to participate in daily activities.

I used the acronym LTM to mean latent trajectory modeling.
 Bengt O. Muthen posted on Tuesday, January 23, 2001 - 11:47 am
With categorical repeated measures, the interpretation of the effects of time-invariant covariates on the slope growth factor can be expressed in several different ways. First, the coefficient (unstandardized) can be simply interpreted as in regular regression in terms of change in the slope for a unit change in a covariate (holding other covariates constant). This may not carry much meaning because the scale of the slope is arbitrary. Second, one can consider the standardized coefficient, in which case the change in the slope is expressed in slope standard deviations. This still doesn't mean much given that the outcome is categorical. Third, one can express the ultimate effect of the change in the covariate on the outcome variable probabilities. This may give a more "down to earth" interpretation. For instance, you can compute the outcome probabilities for some chosen values of your covariates. You do this by first computing the mean values of the slope given the chosen covariate values and then computing the outcome variable probabilities for these mean values.
 Anonymous posted on Thursday, June 28, 2001 - 2:00 pm
How do you calculate the mean value of the slope given the chosen covariate value and then compute the outcome variable probabilities? Can this be done directly in Mplus, or must it be hand-calculated? Thanks for your assistance.
 bmuthen posted on Sunday, July 01, 2001 - 12:10 pm
I assume that you have a categorical outcome and that by slope you mean the slope growth factor. The mean value of the slope is obtained in TECH4 and is s_m = a+g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x. The probability has to be computed by hand, for instance with unit scale factors delta (see User's Guide), you have for a binary y scored 0/1,

P (y=1 |x) = F(-tau + s_m*x_t),

where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings).
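This hand calculation can be sketched in a few lines. The sketch below is only an illustration under a probit link with unit scale factors; the values of tau, a, g, x and the time scores are invented, not estimates from this thread.

```python
import math

def normal_cdf(z):
    """Standard normal distribution function F."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_y1(tau, a, g, x, time_score):
    """P(y=1 | x) at one time score, plugging in the slope mean s_m = a + g*x."""
    s_m = a + g * x                      # mean of the slope factor given covariate x
    return normal_cdf(-tau + s_m * time_score)

# Hypothetical values: threshold 0.5, slope intercept 0.2, slope-on-x
# coefficient -0.231, covariate value 1, linear time scores 0..3.
probs = [p_y1(0.5, 0.2, -0.231, 1.0, t) for t in (0, 1, 2, 3)]
```

With a negative slope mean, the probabilities decline across the time scores, which is the "down to earth" interpretation described above.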
 Louise Sullivan posted on Wednesday, November 10, 2004 - 11:17 am
I'm looking at social mobility across 4 different time points and according to the BIC value the best fitting unrestricted latent class growth analysis is a 7 class model. Seven different classes is not substantively useful, and I notice from your June 2000 paper (Muthen and Muthen, 'Integrating Person-Centered and Variable-Centered Analyses: Growth Mixture Modeling with Latent Variables', Alcoholism: Clinical and Experimental Research) that you outline other criteria on page 887 for assessing how many latent classes to use. Could you please explain how high the average posterior probabilities should be? For my 7 class model I've got cross-classification figures as low as 0.683, 0.610 and 0.662. I would prefer to use a 4 class model, which has a higher BIC (26224 compared to 26165) but better cross-classification values, i.e. between 0.764 and 0.91. Am I justified in using a 4 class model?
 bmuthen posted on Sunday, November 14, 2004 - 12:35 pm
The posterior probabilities tell you how useful the model is, but not how many classes fit the data best. You can consider other fit statistics. For example, several simulation studies indicate that Mplus' sample-size adjusted BIC is better than BIC. Also, the Lo-Mendell-Rubin test in Mplus' Tech11 can be used. Ultimately, the usefulness of the model is a key consideration besides statistical fit indices, e.g. predictive performance.
 Marion Sheenan posted on Saturday, August 13, 2005 - 3:38 pm
I want to perform hypothesis testing on the individual parameters in my model. I know that I can use the estimate divided by its standard error, but should I use a t or z distribution?
 Linda K. Muthen posted on Sunday, August 14, 2005 - 11:14 am
The estimate divided by the standard error shown in the Mplus output follows an approximate z distribution.
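For example, the z statistic and its two-sided p-value can be computed directly; a small sketch using the energy estimate and standard error quoted in the first post of this thread:

```python
import math

def two_sided_p(est, se):
    """Two-sided p-value for est/se under the approximate z distribution."""
    z = est / se
    # 1 - Phi(|z|) via the error function, doubled for a two-sided test
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Energy estimate -.231 with SE .054 (from the first post above):
p = two_sided_p(-0.231, 0.054)   # z is about -4.28, so p < .001
```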
 Gwen Marchand posted on Saturday, February 02, 2008 - 4:02 pm
I am relatively new to growth modeling and have managed to confuse myself. I have a simple question regarding interpretation of coefficients.

I understand that including a time-invariant covariate into a model influences the latent slope and intercept, so that estimates listed under the "intercept" section of the output for the slope and intercept account for the influence of the covariate on the latent factors. In my case, the slope mean in the tech4 output is negative, but I have a positive coefficient estimate for my slope in the intercept section of the output (slope estimate = .113). I'm currently exploring why that may have occurred. But in the meantime, the covariate has a negative association with the slope. (-.05).

Would I interpret this to mean that higher scores on the covariate at time 1 are associated with more slowly increasing slopes (based on the slope coefficient)?

Thank you in advance.
 Linda K. Muthen posted on Monday, February 04, 2008 - 9:12 am
If s is the slope growth factor,

mean (s) = a + b mean (x)

When x is zero, the mean (s) is equal to the intercept (s), that is, a.

In your case, a is positive and b is negative, so the mean of x must be a positive value large enough that the product b*mean(x) is negative and larger in magnitude than a, resulting in a negative mean of s.

The interpretation is that as x increases, the slope becomes more negative.
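A quick numeric check of this algebra, using the estimates from the question above but a hypothetical covariate mean (the actual mean of x is not reported in the post):

```python
# mean(s) = a + b * mean(x): a positive intercept and a negative coefficient
# give a negative slope mean once mean(x) is large enough.
a = 0.113      # intercept of the slope growth factor (from the post)
b = -0.05      # regression of the slope on the covariate (from the post)
x_mean = 4.0   # hypothetical covariate mean, chosen for illustration only

s_mean = a + b * x_mean   # 0.113 - 0.200 = -0.087, a negative TECH4 slope mean
```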
 anonymous posted on Saturday, February 14, 2009 - 8:50 am
Hi,
First, I'd like to thank you for making this forum available, it is such a great help!
I am attempting to revise a paper and have some questions related to interpreting the correlation between intercept and growth factors. The LGM focuses on symptoms from time 1 to time 7.
1. Given a positive intercept mean (0.82), a negative linear slope mean (-0.16), and a positive quadratic slope mean (0.02), how do you interpret a negative correlation between the slope and intercept? Is it that the higher individuals are on symptoms at time 1, the slower the rate of decline in symptoms?
2. If the variance of the quadratic factor is fixed to 0, is it necessary to include it in your interpretation or to include a correlation between the intercept or linear slope and quadratic factor?
3. Given a positive intercept mean (0.97), a negative linear slope mean (-0.13), and a positive quadratic slope mean (0.02), how do you interpret:
a) a positive (but nonsignificant) correlation between the intercept and quadratic slope
b) a negative correlation between the linear and quadratic slope?
Thanks very much in advance!
 Bengt O. Muthen posted on Sunday, February 15, 2009 - 11:17 am
1. If you center at time 1, then the higher an individual is at time 1, the lower his/her slope - that is, the steeper the decline. I am referring to the correlation between the intercept and the linear slope (but see also the caveat in 3 below).

2. With Var(q)=0 you don't have covariances between q and other growth factors. You still have the mean of q to explain.

3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat,

a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end.

b) when the initial decline is steeper, the ending upturn is higher.
 anonymous posted on Tuesday, February 17, 2009 - 3:20 pm
Hello,
I conducted a conditional two-group LGM and I'm having some trouble wrapping my head around interpreting the effect of two predictors on the slope functions.
In the first group, which consists of only an intercept (intcpt=.315) and linear factor (intercept = .023), how do you suggest I interpret the following:
1. a positive path coefficient (0.032) for the regression of the slope on predictor 1.
2. a negative path coefficient of -0.034 for the regression of the slope on predictor 2.
In the second group, which consists of an intercept (int= 0.844), linear factor (int=-0.015), and a quadratic factor (int=-0.004), how do you suggest I interpret the following:
1. a negative path coefficient of -0.062 for the regression of the linear slope on predictor 2.
2. a positive path coefficient of 0.012 for the regression of the quadratic slope on predictor 2.
Thanks for your assistance!
 Bengt O. Muthen posted on Thursday, February 19, 2009 - 9:21 am
Use the rules for interpreting coefficients in linear regression. As in that case, these path coefficients are partial regression coefficients, so giving the effect on the DV as the predictor changes 1 unit while holding the other predictors constant.
 anonymous posted on Thursday, February 19, 2009 - 11:52 am
Thanks for your help. However, I still am not clear on whether the predictor is predicting a faster or slower rate of change.
 Bengt O. Muthen posted on Friday, February 20, 2009 - 5:01 am
Since you mention predictor (singular form) I assume you refer to the question you have for the second group, regarding the quadratic model. Is that right?
 anonymous posted on Friday, February 20, 2009 - 5:43 am
Yes, that is correct. I think (but please correct me if I'm wrong!) that for the group with the negative linear trend (the first group), predictor 1 (with a positive coefficient) predicts a slower decline and predictor 2 (with a negative coefficient) predicts a faster decline.
However, I am completely confused as to how to interpret the effect of the predictor in the group with the quadratic factor.
Thanks again in advance!
 Linda K. Muthen posted on Friday, February 20, 2009 - 6:24 am
See the following post from Sunday, February 15:

3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat,

a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end.

b) when the initial decline is steeper, the ending upturn is higher.
 anonymous posted on Friday, February 20, 2009 - 9:05 am
Thanks - is this also true for the effect of covariates on linear and quadratic slope?
 Linda K. Muthen posted on Saturday, February 21, 2009 - 9:35 am
Yes.
 Ingrid Holsen posted on Thursday, February 17, 2011 - 7:45 am
Hello,
I am really working on this to get it right, but I am now confused. I have several covariates predicting latent growth in body image measured at several time points between ages 13 and 30.
(I am using the 'model results' here; is STDYX preferable?)
To use the covariate close parent-adolescent relationship as an example: for boys there is a positive estimate on the initial level at age 13 (0.14) (understandable!), a significant negative estimate for the slope (-0.25), and a positive estimate for q (0.14). Do I interpret the s and q as meaning that the parent-adolescent relationship influences body image growth to a lesser degree during adolescence, only to be of more importance again in early adulthood (q)? The body image curve for boys increases between ages 13 and 18, then levels off and decreases some at ages 21 and 23, with an increase again up to age 30.
 Ingrid Holsen posted on Thursday, February 17, 2011 - 12:58 pm
Hi again,
We are treating close adolescent relationship and peer relationship as 'time-invariant covariates' from time 1. I have BMI as a time-varying covariate at six points in time. When we include BMI in the model, the effects of the time-invariant covariates on slope and quadratic growth disappear for girls, while there is almost no difference for boys. Is there a way that we can reveal this effect in one model (one step)? Now we run it twice.
Thanks in advance (for both of my posts)
 Bengt O. Muthen posted on Thursday, February 17, 2011 - 3:52 pm
Regarding your first post, it is difficult to separately interpret effects on linear and quadratic slopes. This is why "orthogonal polynomials" are sometimes used. The effect on the intercept is straightforward, however, and one approach to this issue is shown in

Muthén, B. & Muthén, L. (2000). The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300.

which is on our web site under Papers.

Regarding the choice of standardization, see the UG.
 Bengt O. Muthen posted on Thursday, February 17, 2011 - 3:56 pm
When you say run it twice, I think you don't mean for boys and girls but with and without BMI. If so, it seems difficult to capture the changing gender role in one model. The growth is different with BMI as a tvc. I wonder if having BMI as a parallel growth process instead of as a tvc would be useful.
 Olli Kiviruusu posted on Wednesday, February 23, 2011 - 7:27 am
Hi,
I'm analysing a latent growth curve model with four timepoints and time-invariant and time-varying covariates. I'm interested in the total effect of TVCs on the growth factors, especially on the mean (or intercept) of the slope factor i.e. does the mean growth rate change significantly after TVCs are specified. In the model without the TVCs the intercept of the slope factor is .35 and in the model with the TVCs .45 indicating that growth rate would be higher if the effects of TVCs were removed from the equation. How can I assess the significance of this change? Is it okay to constrain the intercept of the slope factor in the model with TVCs to the value it had in the model without TVCs and then analyse the chi-square change in model fit? Or is there a better/correct way to do this?
 Bengt O. Muthen posted on Wednesday, February 23, 2011 - 5:07 pm
No, that doesn't sound correct. As a first step you want to think about how to make the question well defined. What is the intercept/mean of the slope growth factor when the model includes the TVCs - does it mean the same thing as when TVCs are not included? When included, does your model let the TVCs influence the slope growth factor? If not, doesn't the slope refer to the development of the Ys at zero values of the TVCs? Which raises the question, are the TVCs centered (sample means subtracted)?
 Olli Kiviruusu posted on Thursday, February 24, 2011 - 6:44 am
My model looks like this:
MODEL:
! Non-linear growth curve;
ylevel yslope | y1@0 y2@0.6 y3@1.6 y4@1.6;
! TICs;
ylevel on tica ticb;
yslope on tica ticb;
! TVCs/concurrent effects;
y1 on tvc1;
y2 on tvc2;
y3 on tvc3;
y4 on tvc4;
! TVCs/lagged effects;
y2 on tvc1;
y3 on tvc1 tvc2;
y4 on tvc1 tvc2 tvc3;

Y is a personality variable and the TVCs are counts of certain types of events. Regressions of the Ys on the TVCs show small but significant negative effects. If I understand your point right, the essential meaning of the Ys, and hence (I think) of their growth parameters, is the same whether TVCs are specified or not. I can center the TVCs, but the interpretation of the growth parameters at zero events, as the TVCs now stand, is also well motivated. The TVCs do not directly influence the growth factors - actually, regressing the slope factor on the TVCs would in a way be the easiest solution to my problem, but I don't think that is allowed here, or is it?
Thanks again.
 Bengt O. Muthen posted on Thursday, February 24, 2011 - 10:37 am
Say that the TVC means decline linearly over time. Then the direct negative effects onto the Ys will help pull down the Y means instead of the slope mean being the only source affecting the Y means. This affects the interpretation of the slope mean changing across the two models.

You can regress the slope on TVCs. For instance, TVC1 happens before the slope affects the change from time 1 to time 2. TVC1 might also be correlated with the intercept.

These points illustrate the complexity of models with TVCs.
 Olli Kiviruusu posted on Friday, February 25, 2011 - 9:12 am
Thanks for your help.
I regressed the slope (and intercept) factors on TVC1 (and on TVC2 in a three timepoint model) and there were no significant effects on the slope. These models seem to me less than perfect however as TVCs 3 and 4 can't (I think) be used in them.
If there is no way to assess the joint effect of all TVCs on the growth factors I guess I need to consider some other models than LGC. Any suggestions?
 Bengt O. Muthen posted on Friday, February 25, 2011 - 1:28 pm
You might want to take a look at how intercept changes can be modeled by TVCs - see slides 157-159 of the Topic 3 handout of 05/17/2010.

You can also formulate a growth model for the TVC process and do parallel growth modeling where the TVC growth factors influence the growth factors for the Y process.
 Elizabeth Adams posted on Tuesday, March 08, 2011 - 3:15 pm
I am modeling gender and race centrality as predictors of change in cross race contact.

In terms of the analysis interpretation, we are unsure of how to interpret the output from Mplus (gender is 0=female and 1=male).

(columns: Estimate, S.E., Est./S.E., Two-Tailed P-Value)

INT ON
GENDER -0.040 0.073 -0.540 0.589
CENTRALITY 0.101 0.060 1.679 0.093

SLOPE ON
GENDER -0.111 0.061 -1.802 0.071
CENTRALITY 0.102 0.055 1.866 0.062
 Bengt O. Muthen posted on Tuesday, March 08, 2011 - 6:42 pm
You interpret these slopes just like you would in a regular linear regression with a continuous dependent variable - that is, if INT was observed and if SLOPE was observed.
 Ingrid Holsen posted on Saturday, May 07, 2011 - 2:53 am
Hello,
I have a question regarding the interpretation of BMI as a time-varying covariate. It is a latent growth curve with i, s and q; the outcome is body image at 6 ages from 13 to 30, with background variables as well. The time-varying covariate BMI has a significant negative estimate for males at age 13 (-.18) and age 30 (-.27), but a significant positive one at age 21 (.04, p<0.01). Females have a significant positive estimate at age 21 and a negative one at age 30. I have checked this several times now; it seems correct.
So BMI has an additional effect on body image at these ages; at ages 13 and 30, boys' relatively high BMI led to further decline in body satisfaction, while at age 21 the opposite occurred?
Or am I interpreting the estimates with time-varying covariates wrong here? How is it best to express it?
Many thanks!
 Linda K. Muthen posted on Saturday, May 07, 2011 - 8:12 am
See the following book, which has a section on the interpretation of time-varying covariates:

Bollen, K.A. & Curran, P.J. (2006). Latent Curve Models: A Structural Equation Modeling Perspective. Wiley.
 Ingrid Holsen posted on Sunday, May 08, 2011 - 2:42 am
Thanks for quick reply! My worry is
A) that the results (see above) might be incorrect. Based on previous research I just cannot see how a positive prediction at age 21 (BMI (tvc) on body satisfaction) can be correct, particularly not for girls. Also, the correlations are negative, around -.30.

B) The fit measures for the model are not that good: CFI .93, RMSEA 0.04, SRMR 0.08. I have looked at modification indices: when I include a path Q ON bmi30, the fit is much better: CFI .97, RMSEA 0.03 and SRMR 0.05. The chi-square is much lower too (but still significant; the sample is 1082). It makes sense to me that BMI30 predicts Q - males (-.20), females (-.39) - since the body image curve levels off in adulthood, but can I do that? Then BMI at age 21 is no longer positive, and not significant; all effects go through Q.
 Linda K. Muthen posted on Sunday, May 08, 2011 - 6:20 am
A) This I cannot comment on without more information than can be handled on Mplus Discussion.

B) If the model doesn't fit, any interpretation of the results is invalid.
 Ingrid Holsen posted on Sunday, May 08, 2011 - 6:34 am
Ok, we are struggling with this. Can I post more information here or can we do it another way?
 Carolin posted on Monday, August 22, 2011 - 2:38 am
Hello,

I'm analyzing a quadratic GMM with four timepoints and covariates. One covariate has a significant influence on the linear slope factor, but insignificant influence on the quadratic factor. How can I interpret this? Does this mean that the covariate only affects the change between T1 and T2 and after this there is no influence?

Thanks a lot
 Bengt O. Muthen posted on Monday, August 22, 2011 - 10:07 am
Not quite. Telling apart the effects on the linear and quadratic growth factors is difficult because those two factors interact. The covariate that influences the linear factor significantly continues to have an influence after T2, because the linear slope continues to have an influence beyond T2. But beyond that it is hard to parse out the influences via the two growth factors.
 Karen Offermans posted on Friday, September 09, 2011 - 1:22 am
Dear Linda and Bengt,

I am analyzing the following longitudinal growth model on a continuous variable (perceived alcohol availability), including multiple group analyses on age:

GROUPING is age (13=13 14=14 15=15);

MODEL:
iPalav sPAlav | M1PAlAv@0 M2PAlAv@1 M3PAlAv@2;
iPAlav on group1;
sPAlav on group1;

I get the following warning in the output:
THE MODEL ESTIMATION TERMINATED NORMALLY

WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP 14
IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/
RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL
TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE
THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION.
PROBLEM INVOLVING VARIABLE SPALAV.

However, I do not know what to check in the TECH4 output or what I can conclude from this.
I hope you can help me further.
 Karen Offermans posted on Friday, September 09, 2011 - 1:55 am
Just to give some more information in relation to my previous question;
In the TECH4 output for the 14-year-olds, Mplus doesn't give the correlations between sPalav and the other latent variables (they are printed as 999), or with itself. Furthermore, in the estimated covariance matrix for the latent variables I can see a negative variance for spalav of -.107.
Is there anything we can do to solve this problem?

 Linda K. Muthen posted on Friday, September 09, 2011 - 7:01 am
It sounds like spalav has a negative variance. This makes the model inadmissible. You would need to change the model.
 Gareth posted on Monday, April 09, 2012 - 4:55 am
I have two questions about this formula for calculating outcome variable probabilities (binary y scored 0/1) at different levels of a covariate over time, in categorical growth models:

"P (y=1 |x) = F(-tau + s_m*x_t),

where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings).

The mean value of the slope is obtained in TECH4 and is s_m = a+g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x"

1. The mean value of the slope obtained in TECH4 is different from the intercept of the slope factor. If the intercept of the slope factor is used in the formula, why is the mean value of the slope obtained in TECH4 relevant?
2. Is this formula the same for logit and probit coefficients? If different, how should it be modified?
 Linda K. Muthen posted on Monday, April 09, 2012 - 9:12 am
1. The mean and intercept are different parameters. The mean of y, y_bar, is

y_bar = a + b*x_bar;

The intercept is

a = y_bar - b*x_bar.

2. See Chapter 14 of the user's guide. There is a section on probit and another on logit.
 Meg posted on Monday, June 11, 2012 - 6:48 am
Hello,

I have a question regarding the interpretation of my growth model. I am looking at depression (outcome) across four time points and assessing the influence of time-variant and invariant predictors on the slope and intercept of the depression curve. The UGM suggests that depression decreases from mid adolescence through young adulthood (I used a freed-factor loading approach). I am a little confused as to how to interpret the regression coefficients because most of the examples have positive slopes. Here are my questions:

1. if the estimate for gender (boys) predicting the slope of depression is positive, does this mean that boys have a slower rate of decline in depression over time?

2. The time-varying covariate also has a declining slope. The regression estimate for the effect of the intercept of the TVC on the slope of depression is positive (.085). Does this mean that those people with higher levels on the TVC have a slower rate of decline in depression?

Thanks
 Linda K. Muthen posted on Monday, June 11, 2012 - 4:56 pm
1. Yes.
2. Yes.
 Gareth posted on Thursday, November 01, 2012 - 7:02 am
Suppose I have a parallel process growth model with categorical outcomes. The intercepts and slopes are regressed on covariates, and the intercepts and slopes are correlated.

For each covariate, I have calculated probabilities using the formulae in the discussion above: (1) s_m = a+g*x for covariate value x (2) F(-tau + s_m*x_t).

How can this formula be modified to estimate the probability, at the first time point, that one outcome is already present when the other outcome is already present at baseline? The two intercepts are correlated, so I want to illustrate that someone already having outcome 1 is more likely to already have outcome 2 at baseline. The correlation between the intercepts is captured by a covariance rather than by a regression coefficient.
 Bengt O. Muthen posted on Thursday, November 01, 2012 - 8:59 pm
You say

P (y=1 |x) = F(-tau + s_m*x_t)

but that is P(y=1 | x, s=s_m). If s is a random effect, to get P(y=1 | x) you have to integrate over s.

Likewise,

P(y1=1, y2=1 | x) requires bivariate integration over s1, s2.
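The univariate integration over a normal random slope can be approximated numerically. Below is a sketch under a probit link with unit residual scale; all parameter values used in the example are invented:

```python
import math

def pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def cdf(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_marginal(tau, s_m, s_sd, time_score, n=2001, width=8.0):
    """Marginal P(y=1 | x): integrate the conditional probit probability
    over a normal random slope s ~ N(s_m, s_sd^2) by the trapezoid rule."""
    lo = s_m - width * s_sd
    h = 2.0 * width * s_sd / (n - 1)
    total = 0.0
    for i in range(n):
        s = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0      # trapezoid end weights
        total += w * cdf(-tau + s * time_score) * pdf((s - s_m) / s_sd) / s_sd
    return total * h
```

For a probit link the result can be verified against the closed form F((-tau + s_m*x_t) / sqrt(1 + x_t^2 * Var(s))); as Var(s) goes to zero, it collapses to the conditional formula quoted above.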
 Cameron McIntosh posted on Wednesday, December 05, 2012 - 6:40 pm
Hello,

I am estimating longitudinal models with binary and ordinal observed outcomes using the multilevel features (TWOLEVEL RANDOM) in Mplus (as opposed to the SEM/LGCM approach). I know that with SEM/LGCM, the regressions of growth factors on covariates are linear regressions (as the intercept and slope are continuous latent variables with arbitrary metrics) and the factor loadings for the intercept and slope are fixed logit or probit coefficients. One can, however, get predicted probabilities for the observed categorical indicators of growth by combining the appropriate parameter estimates.

But in the case of MLM, where the intercept and slope are estimated by directly regressing the categorical outcome on an observed ordinal time variable using stacked (long) data (rather than creating a measurement model for the growth factors) and a logit or probit link, can the estimated growth parameters themselves be more directly interpreted on the logit (or probit) scale... following which one could simply convert these to odds ratios and thus predicted probabilities? Is this correct or am I missing something?

Thanks,

Cam
 Bengt O. Muthen posted on Wednesday, December 05, 2012 - 8:29 pm
I think this is the right way to look at it. This is like UG ex 9.16 but with categorical outcomes. Here x1 and x2 influence s and y on the Between (subject) level, which in turn influence the outcome at each time point, as the figure implies. So x1 and x2 ultimately have a logit/probit influence on the outcomes.
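A minimal sketch of the conversion described in the question, from logit estimates to odds ratios and predicted probabilities; the intercept and time slope below are invented values, not from any Mplus run:

```python
import math

def logit_to_prob(eta):
    """Inverse logit: probability from a linear predictor on the logit scale."""
    return 1.0 / (1.0 + math.exp(-eta))

b0, b_time = -1.2, 0.35                  # hypothetical logit intercept and time slope
odds_ratio_per_wave = math.exp(b_time)   # multiplicative change in odds per wave
probs = [logit_to_prob(b0 + b_time * t) for t in range(4)]
```

With a positive time slope, the predicted probability rises across waves while the odds ratio per wave stays constant.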
 Dustin Pardini posted on Thursday, May 16, 2013 - 9:41 am
I am using a simple linear growth curve to predict a distal outcome. In explaining the contribution of the slope as a predictor, in addition to interpreting the estimates provided, is it practical to convey this information using R-square? Specifically, would it be feasible to run the model with only the intercept or slope as a predictor (e.g., y ON i), then the same model with both i and s as predictors (y ON i s), and report the difference in R-square between these two models?
 Bengt O. Muthen posted on Thursday, May 16, 2013 - 11:28 am
That doesn't seem unreasonable, as long as i and s are not too highly correlated.
 B Chenoworth posted on Wednesday, June 12, 2013 - 2:11 am
I am estimating a linear latent growth curve model across three time points using ordinal-categorical data. Specifically, the response scale of the outcome variable is a 4-point Likert scale (none, 1-2 days, 3-5 days, 6-7 days). As such, I have used WLSMV to estimate the model.

In the output, the mean of the intercept is fixed at 0 and the mean of the slope is estimated. The output tells me that the mean of the slope is -0.3 (p<.05). So the outcome decreases by 0.3 units on the underlying probit scale between adjacent time points.

Reviewers of my work continue to ask me what this means in terms of how much change occurred. Because the outcome variable is ordinal-categorical, I am finding it difficult to answer this.

Would this best be answered in terms of calculating an effect size for the slope, and if so, how would I do this?

Thank you in advance for your help.
 Linda K. Muthen posted on Wednesday, June 12, 2013 - 1:13 pm
I think looking at a plot of probabilities would be helpful. See the SERIES option of the PLOT command.
 B Chenoworth posted on Wednesday, June 12, 2013 - 3:15 pm
Thank you for your response Linda.

Just to follow up, is it possible to calculate an effect size for the slope, in order to report that the decrease represented a small, medium, or large change?
 Bengt O. Muthen posted on Wednesday, June 12, 2013 - 3:39 pm
It is possible, but does not convey how that impacts the probabilities of the observed variables.
 Ivana Igic posted on Wednesday, August 14, 2013 - 3:17 am
Dear Drs. Muthen,
I'm running a 3-step GMM (Mplus Web Notes: No. 15). In the first step, after testing different models, I found a 5-class curvilinear solution to be the most suitable. In the third step I predicted the distal outcome at T5, controlling for the T1 value of the distal outcome.
1. Is the intercept value of the distal outcome within a class the mean value of the distal outcome for that class?
2. The values of the distal outcome are 1-6; what is wrong if I get negative values for the intercept, or values higher than 6, in some classes?
3. I also analyzed the same data in SPSS using ANCOVA and got very different values for the estimated mean of the distal outcome per class.
4. I used the Wald test for comparing the distal outcome means as suggested, but this doesn't work. Did I do something wrong?
%c#1%
….
[t5_y ] (m1);
%c#2%
…..
[t5_y ] (m2);
%c#3%
…..
[t5_y ] (m3);
%c#4%
….
[t5_y ] (m4);
%c#5%
….
[t5_y ] (m5);
Model test:
m1=m2;
m1=m3;
m1=m4;
m2=m3:
m2=m4;
m3=m4;
Thank you very much for your help.
 Linda K. Muthen posted on Wednesday, August 14, 2013 - 9:17 am
1-3. Intercepts, not means, are being estimated.

4. Remove

m2=m3:
m2=m4;
m3=m4;

The other tests imply those tests.
 Ivana Igic posted on Wednesday, August 14, 2013 - 10:49 am
Thank you very much for answering me!

1. I want to compare the value of the distal outcome across the different classes; how should I then interpret the intercept values?
I want to be able to say that people within one class feel better/worse (my distal outcome) compared to people in other classes, and to test the significance of these differences using the Wald test.

2. The model test is still not working.

Thank you very much for your help and have a nice day.
 Linda K. Muthen posted on Thursday, August 15, 2013 - 8:45 am
1. The intercept is the mean controlled for the covariate.

2. Please send the output and your license number to support@statmodel.com.
 Maike Theimann posted on Wednesday, August 28, 2013 - 12:37 am
My question relates to conditional linear latent growth curve models.
I have an unconditional model which shows that the endogenous variable declines over time. If I regress the slope factor on an exogenous variable, its effect on the slope factor is negative.
Example:
Unconditional model:
Unstandardized Means
I 3.902 0.028 141.290 0.000
S -0.037 0.010 -3.506 0.000
Conditional Model:
S ON AGSLB -0.194 0.039 -4.941 0.000

As you can see, there is a negative effect of AGSLB on the slope factor. Does this mean that if AGSLB increases by one, the curve of the endogenous variable will move 0.194 units towards zero?
And in general, does a positive effect on the slope factor mean that if the exogenous variable increases, the curve of the endogenous variable will move further in the direction it has in the unconditional model, and a negative effect that it will move more towards zero, regardless of the curve's shape in the unconditional model?
 Bengt O. Muthen posted on Wednesday, August 28, 2013 - 6:20 pm
A covariate that has a negative effect on a slope is interpreted as follows. As the covariate value increases, the slope value decreases. It doesn't matter if the slope mean is negative or positive. If the mean is negative, increasing covariate value beyond its mean makes it even more negative.
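A small numeric sketch, plugging in the unconditional slope mean (-0.037) and the coefficient (-0.194) from the post above (treating the unconditional mean as the slope-equation intercept is a simplification that holds only if the covariate is centered at zero):

```python
# Simplified sketch: -0.037 is the unconditional slope mean and -0.194
# the S ON AGSLB coefficient from the post above; using the mean as the
# intercept assumes the covariate is centered at zero.
slope_intercept = -0.037
beta = -0.194

def expected_slope(x):
    # Expected slope growth factor at covariate value x.
    return slope_intercept + beta * x

# As the covariate increases, the expected slope becomes more negative.
print(round(expected_slope(0.0), 3))  # -0.037
print(round(expected_slope(1.0), 3))  # -0.231
```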
 Martina Narayanan posted on Tuesday, August 12, 2014 - 3:54 am
I am running a linear growth curve model. The mean slope (-0.003) is negative but not significant, while the variance of the slope (0.002) is significant. I am testing how the slope predicts an outcome variable, and I find a significant, positive unstandardized regression coefficient (0.245) for the slope predicting that outcome.

How do I interpret this finding?

I know a positive regression coefficient means that as the slope increases, the outcome variable increases. But since my slope is slightly negative to begin with, does that mean the more negative my slope, the more the outcome increases? Does this logic apply even when my slope is not significant to begin with (it may be somewhat random that it is slightly negative)?
 Linda K. Muthen posted on Tuesday, August 12, 2014 - 10:34 am
There is variability around your slope mean. A slope can increase by going from, for example, -1 to -.5. This increase is associated with an increase in the outcome.
 Martina Narayanan posted on Tuesday, August 12, 2014 - 1:08 pm
Thank you Linda. Just a follow up question to make sure I understand you right:

When you talk about a change in a negative slope from -1 to -.5 you would refer to that as an increase in the slope? Because I thought this would be called a decrease as the absolute value of the slope is decreasing. Could you once again explain what we call an increase or decrease in the case of a negative slope? I believe, this difference is quite important for how we interpret the regression coefficient from the slope to the outcome.

Thank you, Martina
 Linda K. Muthen posted on Tuesday, August 12, 2014 - 2:58 pm
We aren't talking about absolute values. We are talking about the actual values.
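To make this concrete (the 0.245 coefficient comes from the earlier post; the intercept b0 is an arbitrary assumption):

```python
# b0 is a made-up intercept; 0.245 is the reported coefficient for the
# outcome regressed on the slope growth factor.
b0 = 2.0
b_slope = 0.245

def expected_outcome(slope):
    return b0 + b_slope * slope

# Going from a slope of -1 to -0.5 is an increase in the actual value
# of the slope, so the expected outcome increases as well.
low = expected_outcome(-1.0)   # 1.755
high = expected_outcome(-0.5)  # 1.8775
print(high > low)  # True
```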
 Nick Shryane posted on Thursday, June 25, 2015 - 9:11 am
Hello,
I have five longitudinal binary outcomes. I’m fitting linear growth curve models and I want to check my understanding of the output by computing the model predicted probability of the outcome at the first time point.

Fitting using probit/WLS, I’m using the equation from chapter 14 of the manual (v7, p.492):

Pr(u=1|x_t) = F(-tau + slope*x_t),

where F is the normal distribution function, tau is the threshold parameter (same for all time points), and x_t are the time scores (the slope loadings).

So, for x_t = 0 the probability of u is given just by -tau. Fitting the model, I obtain tau = 0.721, which gives Pr = 0.235; this fits the observed proportion (0.225) pretty well.

My difficulties start when I fit the model by logit/MLR. When I fit the same model I get a value of 3.67 for the thresholds. To compute a probability I use equation (1) (p.493), with tau = -a,

Pr(u=1|x_t) = 1/(1+exp(tau - slope*x_t)),

so again with x_t = 0 it should be 1/(1+exp(tau)). But this gives me Pr = 0.023 for the fitted threshold (3.67).

When I use plot3 to graph the predicted probabilities they come out fine (i.e. around Pr = 0.235), so I presume I’m doing something wrong.
Any help would be appreciated.
 Bengt O. Muthen posted on Thursday, June 25, 2015 - 6:33 pm
First, do you have random effects, that is, growth factors with variances? If so, the computations have to take that into account by numerical integration.
 Nick Shryane posted on Friday, June 26, 2015 - 5:39 am
Thanks for getting back to me. Yes, the model has latent growth intercept and slope factors. The model statement is:

i s | u1@0 u2@2 u3@4 u4@6 u5@8 ;

I'm interested in the predicted probability for u1.

I thought that, for time 1 (u1), the contribution of the latent growth factors would be zero.

This assumption appeared to hold (i.e. the equation above produced the correct answer) when estimating the model by probit/WLSMV but not when using logit/MLR.
 Nick Shryane posted on Friday, June 26, 2015 - 6:23 am
A quick update to my query.

Thanks for your mention of the latent variable variances: when I fit the logit/MLR model with these constrained to zero, the threshold on its own gives the predicted probability I was expecting (logit 1.043, Pr = 0.26).

But I'm still curious as to why this should matter for predicting the probability using MLR but not for WLSMV.
 Bengt O. Muthen posted on Saturday, June 27, 2015 - 4:41 pm
The WLSMV model uses a probit link together with normality for the growth factors, and this results in a normal y* which gives an explicit form for the probability in terms of the normal distribution function.

The ML model uses a logit link together with normality for the growth factors, which does not give an explicit form but requires numerical integration.
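A stdlib-only sketch of the contrast (all parameter values are assumptions chosen for illustration, except the logit threshold 3.67 from the exchange above; the probit line assumes a Theta-style setup with unit residual variance):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probit + normal growth factors: y* stays normal, so the marginal
# probability has a closed form (assuming residual variance 1 and a
# growth-factor variance psi; both values here are illustrative).
tau_probit, psi = 0.721, 1.5
p_probit = norm_cdf(-tau_probit / math.sqrt(1.0 + psi))

# Logit + normal growth factors: no closed form, so integrate the
# logistic probability over the normal growth-factor distribution.
def p_logit_marginal(tau, var, n=20001, width=10.0):
    sd = math.sqrt(var)
    step = 2.0 * width * sd / (n - 1)
    total = 0.0
    for i in range(n):
        eta = -width * sd + i * step
        dens = math.exp(-0.5 * (eta / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
        total += dens / (1.0 + math.exp(tau - eta)) * step
    return total

tau_logit = 3.67
p_cond = 1.0 / (1.0 + math.exp(tau_logit))     # growth factors fixed at 0
p_marg = p_logit_marginal(tau_logit, var=4.0)  # assumed growth-factor variance

# Fixing the growth factors at zero gives a much smaller probability than
# integrating over their distribution, which is why the threshold alone
# did not reproduce the plotted probabilities under logit/MLR.
print(round(p_cond, 3))
print(p_marg > p_cond)  # True
```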
 Nick Shryane posted on Monday, June 29, 2015 - 7:36 am
Aha. Thanks for this. One final question in this case - what is the interpretation of the item threshold parameter in such a logit model?
I have always assumed it was the logit of the probability of an item response for a case located at zero on the latent intercept and slope parameters, but it seems that this isn't the case.
 Bengt O. Muthen posted on Monday, June 29, 2015 - 11:22 am
When you condition on zero latent variable values, the threshold interpretation is as you say (although with logit that reverses the sign of the threshold). It is when you don't condition that you need the numerical integration.
 Kelly Murphy posted on Wednesday, August 05, 2015 - 11:15 am
Hello,

I estimated a parallel process model/dual domain model (two latent growth curve models at once), and am having trouble interpreting the covariance between the two slopes. How do I interpret a negative covariance between a negative slope and a positive slope?

Thank you!
 Bengt O. Muthen posted on Thursday, August 06, 2015 - 7:25 am
When the positive slope increases, the negative slope decreases, that is, it becomes more negative.
 Kelly Murphy posted on Thursday, August 06, 2015 - 2:06 pm
Thank you so much for taking the time to respond to my question, I sincerely appreciate it.

So would a positive covariance between a negative slope and a positive slope mean that when the positive slope increases, the negative slope decreases less?
 Bengt O. Muthen posted on Friday, August 07, 2015 - 5:44 pm
Yes.
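A toy illustration of the sign logic (the slope pairs are made up):

```python
# Made-up per-person (slope1, slope2) pairs: slope1 has a positive mean,
# slope2 a negative mean, and the two covary positively.
pairs = [(0.5, -0.2), (0.3, -0.4), (0.1, -0.6)]
m1 = sum(s1 for s1, _ in pairs) / len(pairs)
m2 = sum(s2 for _, s2 in pairs) / len(pairs)
cov = sum((s1 - m1) * (s2 - m2) for s1, s2 in pairs) / len(pairs)

# Positive covariance: people with a larger positive slope tend to have
# a less negative (shallower) second slope, i.e. it "decreases less".
print(cov > 0)  # True
```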
 Marike Deutz posted on Thursday, March 03, 2016 - 2:22 am
Dear Linda and Bengt Muthén,

I have a cohort-sequential LGM with distal outcomes and I have some trouble interpreting the results. The linear slope is positive (.255) and the quadratic slope is negative (-.191). I understood from the forum to always interpret them together. I plotted the growth curve and there is an increase first and then a decrease.

1) The linear and quadratic slopes are negatively related. Can I interpret this relation? It seems obvious that they are related, as the linear increase is followed by a quadratic decrease.
2) The linear slope positively, and the quadratic slope negatively, predicted 3 of my outcomes. Does that mean that a developmental course with a steeper increase and a steeper decrease, so a higher overall curve, predicts my outcomes?
3) Would you say the intercept is the initial level, or a constant determined by all the data waves?

Thank you!
 Bengt O. Muthen posted on Thursday, March 03, 2016 - 6:55 pm
Consider centering the time scores to reduce the s, q correlation.

Some answers are in the paper on our website:

Muthén, B. & Muthén, L. (2000). The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300.
 Adam Milam posted on Wednesday, October 26, 2016 - 9:40 am
I am conducting a parallel process GMM with continuous indicators... new at this. Both processes have 4 time points. I am having difficulty interpreting the results. I ran separate GMMs to identify the appropriate number of classes (process 1: 4 classes; process 2: 3 classes). When I run the parallel process model, the class structure seems to change (i.e., different slopes and intercepts); how do I interpret the different classes now with the 12 class patterns? Should the classes hold up, and should there be consistency of slope and intercept across the different class patterns (should pattern 1 1 and pattern 1 2 have similar slope and intercept for process 1)? Also, how do I find the conditional probabilities of class membership given membership in another class in the Mplus output?
 Bengt O. Muthen posted on Wednesday, October 26, 2016 - 2:46 pm
It all depends on how you set up the model. I would use 2 latent class variables, one for each process, and then use the dot notation (see the UG), for example:

%c1#1.c2#1%

to impose exactly the equality constraints you want.
 Daniel Lee posted on Wednesday, December 07, 2016 - 12:07 pm
Hi Dr. Muthen,

I ran a multigroup conditional LGM, and I am having some trouble wrapping my head around the fixed effects. In particular, the mean of the intercept factor (approximately 4.80 on a 0-5 Likert-type scale) for one of the groups was extremely high! However, when I looked at the mean of the dependent variable at the first time point for that group, the mean was a lot lower. I know that the mean of the intercept is an adjusted mean at the first time point for a particular group, but the intercept for this group seems way too high (especially since most of the covariates are not significant).

I was wondering if my understanding of the mean of the intercept is correct (an adjusted mean for a group at the first time point), and if there are other factors in a multigroup LGM that might inflate the intercept estimate (especially if most covariates are not significant).
 Bengt O. Muthen posted on Wednesday, December 07, 2016 - 2:45 pm
Please send the output to Support along with license number - so we know exactly what you are looking at.
 Emma C Burns  posted on Friday, June 02, 2017 - 12:09 am
Hi Dr. Muthen,

If the mean slope of the outcome is significantly negative, and when that slope is regressed on a covariate the regression coefficient is also negative, does that mean the covariate is predicting a more or less steep slope?

We know that a positive regression coefficient for a positive slope means the steepness of the slope increases, but we are unsure how to interpret it for negative slopes.
 Bengt O. Muthen posted on Friday, June 02, 2017 - 2:06 pm
It is simple:

If x increases by a certain amount, the negative coefficient says that a negative value is added to the slope. This means that the slope decreases. So a big x value gives a larger negative slope, that is, a steeper downward slope.
 Uzay Dural posted on Monday, June 12, 2017 - 12:30 am
Hello Drs. Muthen,

I conducted a conditional multiple-group (experimental versus control group) MLGM. The interaction between gender (men = 1) and a time-invariant covariate significantly predicts the negative slope in the experimental group (gender x covariate --> slope = -.237).

Does this mean: as the covariate increases the slope decreases for men (compared to women) in the experimental group?

It is probably not, and I am confused. Due to limited sample size I could not conduct a 4-group MLGM (female experimental, female control, male experimental, male control). Instead, should I focus on the experimental group and conduct a multiple-group MLGM with gender groups (covariate --> slope)? Or which post-hoc analysis should I consult?

Thank you very much in advance!
 Bengt O. Muthen posted on Monday, June 12, 2017 - 6:06 pm
Q1: Yes

Q2: With limited sample size you may want to represent the 4 different groups by 3 dummy variables.
 Uzay Dural posted on Tuesday, June 13, 2017 - 1:02 am
Thank you very much Dr. Muthen!

A follow up: is it possible to get standard errors of model estimated means to plot the interaction effects?
 Bengt O. Muthen posted on Tuesday, June 13, 2017 - 6:29 pm
Yes, you can even get plots with confidence intervals for interactions - see the Table 1.8 runs on the web page for our book examples:

http://www.statmodel.com/mplusbook/chapter1.shtml
 Daniel Lee posted on Wednesday, September 27, 2017 - 6:06 am
Hello, I ran a growth model for variable Z and the intercept and slope terms were significant. However, upon including a predictor (X1), the slope term for Z was no longer significant, but X1 significantly predicted the slope of Z (e.g., .50). Can I interpret this result even though the slope term for variable Z is not significant in the model?

Thank you!
 Bengt O. Muthen posted on Wednesday, September 27, 2017 - 3:31 pm
Perhaps you are looking at the intercept instead of the mean for the slope when you add the covariate.
 Daniel Lee posted on Thursday, September 28, 2017 - 5:01 am
Hi, in response to your previous post, the intercept was still significant after including the covariate (i.e., X1), but the slope was no longer significant. Yet X1 significantly predicted the non-significant slope term.

I'm wondering if that means that the trajectory is flat when the covariate is at 0, but an increase in the covariate increases the rate of change in the growth factor (the effect size of X1 on s was .5). I would love your input.
 Bengt O. Muthen posted on Thursday, September 28, 2017 - 10:32 am
I was talking about the intercept for the slope growth factor regressed on the covariate.
 Aida Soriano posted on Monday, October 02, 2017 - 1:41 am
Dear Dr. Muthen,

I’m working on an LGM in which the i and s of one variable influence the i and s of in-role and extra-role performance. I also included some predictors. The fit was not good. Modification indices suggested including residual covariances between the two outcomes (at the same time point) and between one outcome at different time points. If I include the ones you see below, I get a good fit. But does this make sense? How can I justify including a residual covariance between INPE2-EXPE2, but not between INPE1-EXPE1 or INPE3-EXPE3?

MODEL:
ix sx| flow1@0 flow2@1 flow3@2 flow4@3;
iy sy| inpe1@0 inpe2@1 inpe3@2 inpe4@3;
iye sye| expe1@0 expe2@1 expe3@2 expe4@3;
iy ON ix X1;
sy ON sx;
iye ON ix X1;
sye ON sx;
ix ON X1 X2 X3;

sy@0;
sye@0;
iy WITH iye@0; !they were non-significant so I fixed it to 0

INPE2 WITH EXPE2;
EXPE4 WITH INPE4; !in-role and extra-role in the same time
EXPE3 WITH EXPE1;
INPE3 WITH INPE1; !same variable in diff times

Thank you,
 Bengt O. Muthen posted on Monday, October 02, 2017 - 5:16 pm
This question is suitable for SEMNET.
 Aida Soriano posted on Monday, October 02, 2017 - 11:01 pm
What does this mean? Should I post it on SEMNET?

I'm really interested in knowing your opinion about whether or not to covary the residuals suggested by the modification indices.

Thank you,
 Bengt O. Muthen posted on Tuesday, October 03, 2017 - 11:45 am
A priori, I would include residual correlations among all outcomes at the same time point, for all time points. This is because there are presumably many left-out time-specific covariates that influence all outcomes at a given time point.

The outcomes may also have residual correlations across time due to left-out covariates that influence all time points.