
Anonymous posted on Tuesday, January 16, 2001  3:47 pm



In a time-invariant conditional LTM, how do you interpret the significant effects of the predictors on the slope (in this case, an overall downward trend in health over time)? Example: Education: estimate .074 (se .056). Energy: estimate .231 (se .054). For a 1-unit increase in education, you see... For a 1-unit increase in energy, you see... Thanks in advance. 


Can you describe your model more fully? Does LTM stand for latent transition or latent trait modeling? Do you have dichotomous dependent variables? 

Anonymous posted on Tuesday, January 23, 2001  9:31 am



The model has four time points with a categorical dependent health variable (measured on a 5-point scale). The model is time-invariant with baseline predictors (e.g., education) predicting the intercept and slope of the model. The education and energy variables are categorical, with higher numbers representing higher levels of education and energy to participate in daily activities. I used the acronym LTM to mean latent trajectory modeling. 


With categorical repeated measures, the interpretation of the effects of time-invariant covariates on the slope growth factor can be expressed in several different ways. First, the (unstandardized) coefficient can simply be interpreted as in regular regression, in terms of the change in the slope for a unit change in a covariate (holding other covariates constant). This may not carry much meaning because the scale of the slope is arbitrary. Second, one can consider the standardized coefficient, in which case the change in the slope is expressed in slope standard deviations. This still doesn't mean much given that the outcome is categorical. Third, one can express the ultimate effect of the change in the covariate on the outcome variable probabilities. This may give a more "down to earth" interpretation. For instance, you can compute the outcome probabilities for some chosen values of your covariates. You do this by first computing the mean values of the slope given the chosen covariate values and then computing the outcome variable probabilities for these mean values. 

Anonymous posted on Thursday, June 28, 2001  2:00 pm



How do you calculate the mean value of the slope given the chosen covariate value and then compute the outcome variable probabilities? Can this be done directly in Mplus or must it be hand-calculated? Thanks for your assistance. 

bmuthen posted on Sunday, July 01, 2001  12:10 pm



I assume that you have a categorical outcome and that by slope you mean the slope growth factor. The mean value of the slope is obtained in TECH4 and is s_m = a + g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x. The probability has to be computed by hand. For instance, with unit scale factors delta (see the User's Guide), you have for a binary y scored 0/1, P(y=1 | x) = F(tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). 
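As a rough sketch of this hand calculation, using the formula exactly as stated in the post (the parameter values below are hypothetical, chosen only for illustration, and the intercept growth factor is omitted for simplicity):

```python
import math

def normal_cdf(z):
    """Standard normal distribution function F, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def outcome_probability(tau, a, g, x, time_score):
    """P(y=1 | x) at one time point, mirroring the post's probit formula:
    first the slope mean s_m = a + g*x, then F(tau + s_m * time_score).
    All parameter values passed in are hypothetical."""
    s_m = a + g * x                      # mean of the slope growth factor at covariate value x
    return normal_cdf(tau + s_m * time_score)

# Hypothetical numbers: threshold 0.2, slope intercept -0.1,
# covariate effect 0.3, covariate value 1, time score 2.
p = outcome_probability(0.2, -0.1, 0.3, 1.0, 2.0)
```

Evaluating the function at a few covariate values shows how the probability shifts over time for different covariate levels.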


I'm looking at social mobility across 4 different time points, and according to the BIC value the best-fitting unrestricted latent class growth analysis is a 7-class model. Seven different classes is not substantively useful, and I notice from your June 2000 paper (Muthen and Muthen, 'Integrating Person-Centered and Variable-Centered Analyses: Growth Mixture Modeling with Latent Variables,' in Alcoholism: Clinical and Experimental Research) that you outline other criteria on page 887 for assessing how many latent classes to use. Could you please explain how high the average posterior probabilities should be? For my 7-class model I've got cross-classification figures as low as 0.683, 0.610 and 0.662. I would prefer to use a 4-class model, which has a higher BIC (26224 compared to 26165) but better cross-classification values, i.e., between 0.764 and 0.91. Am I justified in using a 4-class model? 

bmuthen posted on Sunday, November 14, 2004  12:35 pm



The posterior probabilities tell you how useful the model is, but not how many classes fit the data best. You can consider other fit statistics. For example, several simulation studies indicate that Mplus' sample-size adjusted BIC is better than BIC. Also, the Lo-Mendell-Rubin test in Mplus' TECH11 can be used. Ultimately, the usefulness of the model is a key consideration besides statistical fit indices, e.g., predictive performance. 


I want to perform hypothesis testing on the individual parameters in my model. I know that I can use est/error but should I use a T or Z distribution? 


The estimate divided by the standard error shown in the Mplus output follows an approximate z distribution. 


I am relatively new to growth modeling and have managed to confuse myself. I have a simple question regarding interpretation of coefficients. I understand that including a time-invariant covariate in a model influences the latent slope and intercept, so that estimates listed under the "Intercepts" section of the output for the slope and intercept account for the influence of the covariate on the latent factors. In my case, the slope mean in the TECH4 output is negative, but I have a positive coefficient estimate for my slope in the intercepts section of the output (slope estimate = .113). I'm currently exploring why that may have occurred. But in the meantime, the covariate has a negative association with the slope (-.05). Would I interpret this such that higher scores on the covariate at time 1 are associated with more slowly increasing slopes (based on the slope coefficient)? Thank you in advance. 


If s is the slope growth factor, mean(s) = a + b*mean(x). When x is zero, mean(s) is equal to the intercept of s, that is, a. In your case, a is positive and b is negative, so the mean of x must be a positive value large enough to make the product b*mean(x) negative and larger in magnitude than a, resulting in a negative mean of s. The interpretation is that as x increases, the slope becomes a larger negative value. 
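A small numeric sketch of this point (the values below are hypothetical, chosen only to match the signs described in the question):

```python
# Hypothetical values: a positive intercept for the slope factor (a),
# a negative covariate effect (b), and a positive covariate mean.
a = 0.113         # intercept of the slope growth factor
b = -0.05         # regression coefficient of the slope on x
mean_x = 3.0      # hypothetical mean of the covariate x

# Model-implied mean of the slope factor: mean(s) = a + b * mean(x)
mean_s = a + b * mean_x   # 0.113 - 0.150 = -0.037, i.e., negative
```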

anonymous posted on Saturday, February 14, 2009  8:50 am



Hi! First, I'd like to thank you for making this forum available; it is such a great help. I am attempting to revise a paper and have some questions related to interpreting the correlation between intercept and growth factors. The LGM focuses on symptoms from time 1 to time 7. 1. Given a positive intercept mean (0.82), a negative linear slope mean (-0.16), and a positive quadratic slope mean (0.02), how do you interpret a negative correlation between the slope and intercept? Is it that the higher individuals are on symptoms at time 1, the slower the rate of decline in symptoms? 2. If the variance of the quadratic factor is fixed to 0, is it necessary to include it in your interpretation or to include a correlation between the intercept or linear slope and the quadratic factor? 3. Given a positive intercept mean (0.97), a negative linear slope mean (-0.13), and a positive quadratic slope mean (0.02), how do you interpret: a) a positive (but nonsignificant) correlation between the intercept and quadratic slope; b) a negative correlation between the linear and quadratic slope? Thanks very much in advance! 


1. If you center at time 1, then the higher an individual is at time 1, the lower his/her slope, that is, the steeper the decline. I am referring to the correlation between the intercept and the linear slope (but see also the caveat in 3 below). 2. With Var(q) = 0 you don't have covariances between q and other growth factors. You still have the mean of q to explain. 3. With a quadratic growth model, the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1, the linear slope has the biggest influence at the beginning of the growth and the quadratic at the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat: a) if it were significant, this would probably mean that a person with a high intercept also has a high upturn towards the end; b) when the initial decline is steeper, the ending upturn is higher. 

anonymous posted on Tuesday, February 17, 2009  3:20 pm



Hello, I conducted a conditional two-group LGM and I'm having some trouble wrapping my head around interpreting the effect of two predictors on the slope functions. In the first group, which consists of only an intercept (intcpt = .315) and a linear factor (intercept = -.023), how do you suggest I interpret the following: 1. a positive path coefficient (0.032) for the regression of the slope on predictor 1; 2. a negative path coefficient of -0.034 for the regression of the slope on predictor 2. In the second group, which consists of an intercept (int = 0.844), a linear factor (int = 0.015), and a quadratic factor (int = 0.004), how do you suggest I interpret the following: 1. a negative path coefficient of -0.062 for the regression of the linear slope on predictor 2; 2. a positive path coefficient of 0.012 for the regression of the quadratic slope on predictor 2. Thanks for your assistance! 


Use the rules for interpreting coefficients in linear regression. As in that case, these path coefficients are partial regression coefficients, giving the effect on the DV as the predictor changes one unit while holding the other predictors constant. 

anonymous posted on Thursday, February 19, 2009  11:52 am



Thanks for your help. However, I still am not clear on whether the predictor is predicting a faster or slower rate of change. 


Since you mention predictor (singular form) I assume you refer to the question you have for the second group, regarding the quadratic model. Is that right? 

anonymous posted on Friday, February 20, 2009  5:43 am



Yes, that is correct. I think (but please correct me if I'm wrong!) that for the group with the negative linear trend (the first group), predictor 1 (with a positive coefficient) predicts a slower decline and predictor 2 (with a negative coefficient) predicts a faster decline. However, I am completely confused as to how to interpret the effect of the predictor in the quadratic group. Thanks again in advance! 


See the following post from Sunday, February 15: "3. With a quadratic growth model, the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1, the linear slope has the biggest influence at the beginning of the growth and the quadratic at the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat: a) if it were significant, this would probably mean that a person with a high intercept also has a high upturn towards the end; b) when the initial decline is steeper, the ending upturn is higher." 

anonymous posted on Friday, February 20, 2009  9:05 am



Thanks  is this also true for the effect of covariates on linear and quadratic slope? 


Yes. 


Hello, I am really working on this to get it right, but I am now confused. I have several covariates predicting latent growth in body image measured at several time points between ages 13 and 30. (I am here using the 'Model Results'; is STDYX preferable?) To use the covariate close parent-adolescent relationship as an example: for boys there is a positive estimate at the initial level at age 13 (0.14) (understandable!), a significant negative estimate for the slope (-0.25), and a positive estimate for q (0.14). Do I interpret the s and q as meaning that the parent-adolescent relationship influences body image growth to a lesser degree during adolescence, only to be of more importance again in early adulthood (q)? The body image curve for boys increases between the ages of 13 and 18, then levels off and decreases some at ages 21 and 23 (then increases again up to age 30). 


Hi again. We are treating close adolescent relationship and peer relationship as 'time-invariant covariates' from time 1. I have BMI as a time-varying covariate at six points in time. When we include BMI in the model, the effects of the time-invariant covariates on slope and quadratic growth disappear for girls, while there is almost no difference for boys. Is there a way that we can reveal this effect in one model (one step)? Now we run it twice. Thanks in advance (for both my posts). 


Regarding your first post, it is difficult to separately interpret effects on linear and quadratic slopes. This is why "orthogonal polynomials" are sometimes used. The effect on the intercept is straightforward, however, and one approach to this issue is shown in: Muthén, B. & Muthén, L. (2000). The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300, which is on our web site under Papers. Regarding the choice of standardization, see the UG. 


When you say run it twice, I think you don't mean for boys and girls but with and without BMI. If so, it seems difficult to capture this gender difference in one model. The growth is different with BMI as a TVC. I wonder if having BMI as a parallel growth process instead of as a TVC would be useful. 


Hi, I'm analysing a latent growth curve model with four time points and time-invariant and time-varying covariates. I'm interested in the total effect of the TVCs on the growth factors, especially on the mean (or intercept) of the slope factor, i.e., does the mean growth rate change significantly after TVCs are specified? In the model without the TVCs the intercept of the slope factor is .35, and in the model with the TVCs it is .45, indicating that the growth rate would be higher if the effects of the TVCs were removed from the equation. How can I assess the significance of this change? Is it okay to constrain the intercept of the slope factor in the model with TVCs to the value it had in the model without TVCs and then analyse the chi-square change in model fit? Or is there a better/correct way to do this? Thanks in advance. 


No, that doesn't sound correct. As a first step you want to think about how to make the question well defined. What is the intercept/mean of the slope growth factor when the model includes the TVCs  does it mean the same thing as when TVCs are not included? When included, does your model let the TVCs influence the slope growth factor? If not, doesn't the slope refer to the development of the Ys at zero values of the TVCs? Which raises the question, are the TVCs centered (sample means subtracted)? 


My model looks like this:

MODEL:
! Nonlinear growth curve;
ylevel yslope | y1@0 y2@0.6 y3@1.6 y4@1.6;
! TICs;
ylevel on tica ticb;
yslope on tica ticb;
! TVCs / concurrent effects;
y1 on tvc1;
y2 on tvc2;
y3 on tvc3;
y4 on tvc4;
! TVCs / lagged effects;
y2 on tvc1;
y3 on tvc1 tvc2;
y4 on tvc1 tvc2 tvc3;

Y is a personality variable and the TVCs are the number of certain types of events. Regressions of the Ys on the TVCs show small but significant negative effects. If I understand your point right, to me the essential meaning of the Ys and hence (I think) of the growth parameters is the same whether the TVCs are specified or not. I can center the TVCs, but the interpretation of the growth parameters at zero events, as the TVCs now stand, is also well motivated. The TVCs do not directly influence the growth factors; actually, regressing the slope factor on the TVCs would in a way be the easiest solution to my problem, but I don't think that is allowed here, or is it? Thanks again. 


Say that the TVC means decline linearly over time. Then the direct negative effects onto the Ys will help pull down the Y means instead of the slope mean being the only source affecting the Y means. This affects the interpretation of the slope mean changing across the two models. You can regress the slope on TVCs. For instance, TVC1 happens before the slope affects the change from time 1 to time 2. TVC1 might also be correlated with the intercept. These points illustrate the complexity of models with TVCs. 


Thanks for your help. I regressed the slope (and intercept) factors on TVC1 (and on TVC2 in a three timepoint model) and there were no significant effects on the slope. These models seem to me less than perfect however as TVCs 3 and 4 can't (I think) be used in them. If there is no way to assess the joint effect of all TVCs on the growth factors I guess I need to consider some other models than LGC. Any suggestions? 


You might want to take a look at how intercept changes can be modeled by TVCs; see slides 157-159 of the Topic 3 handout of 05/17/2010. You can also formulate a growth model for the TVC process and do parallel growth modeling where the TVC growth factors influence the growth factors for the Y process. 


I am modeling gender and race centrality as predictors of change in cross-race contact. In terms of the analysis interpretation, we are unsure of how to interpret the output from Mplus (gender is 0 = female and 1 = male):

INT ON
  GENDER      0.040  0.073  0.540  0.589
  CENTRALITY  0.101  0.060  1.679  0.093
SLOPE ON
  GENDER      0.111  0.061  1.802  0.071
  CENTRALITY  0.102  0.055  1.866  0.062 


You interpret these slopes just like you would in a regular linear regression with a continuous dependent variable  that is, if INT was observed and if SLOPE was observed. 


Hello, I have a question regarding the interpretation of BMI as a time-varying covariate. It is a latent growth curve with i, s, and q; the outcome is body image at 6 ages from 13 to 30, with background variables as well. The time-varying covariate BMI has a significant negative estimate for males at age 13 (-.18) and age 30 (-.27), but a significant positive one at age 21 (.04, p < 0.01). Females have a significant positive estimate at age 21 and a negative one at age 30. I have checked this several times now; it seems correct. So BMI has an additional effect on body image at these ages: at ages 13 and 30, boys' relatively high BMI led to further decline in body satisfaction, while at age 21 the opposite occurred? Or am I interpreting the estimates with time-varying covariates wrong here? How is it best to express it? Many thanks! 


See the following book, which has a section on the interpretation of time-varying covariates: Bollen, K.A. & Curran, P.J. (2006). Latent Curve Models: A Structural Equation Modeling Perspective. Wiley. 


Thanks for the quick reply! My worry is: A) that the results (see above) might be incorrect. Based on previous research I just cannot see how a positive prediction at age 21 (BMI (TVC) on body satisfaction) can be correct, particularly not for girls. Also, the correlations are negative, around -.30. B) The fit measures for the model are not that good: CFI .93, RMSEA 0.04, SRMR 0.08. I have looked at the modification indices: when I include a path Q on bmi30, the fit is much better: CFI .97, RMSEA 0.03, and SRMR 0.05. The chi-square is much lower too (but still significant; the sample is 1082). It makes sense to me that BMI30 predicts Q, males (.20), females (.39) (the body image curve levels off in adulthood), but can I do that? Then BMI at age 21 is no longer positive, and not significant; all effects go through Q. 


A) This I cannot comment on without more information than can be handled on Mplus Discussion. B) If the model doesn't fit, any interpretation of the results is invalid. 


Ok, we are struggling with this. Can I post more information here or can we do it another way? 

Carolin posted on Monday, August 22, 2011  2:38 am



Hello, I'm analyzing a quadratic GMM with four time points and covariates. One covariate has a significant influence on the linear slope factor, but a nonsignificant influence on the quadratic factor. How can I interpret this? Does this mean that the covariate only affects the change between T1 and T2, and that after this there is no influence? Thanks a lot. 


Not quite. Telling apart effects on the linear and quadratic growth factors is difficult because those two factors interact. The covariate that influences the linear factor significantly continues to have an influence after T2, because the linear slope continues to have an influence beyond T2. But beyond that it is hard to parse out the influences via the two growth factors. 


Dear Linda and Bengt, I am analyzing the following longitudinal growth model on a continuous variable (perceived alcohol availability), including multiple-group analyses on age:

GROUPING is age (13=13 14=14 15=15);
MODEL:
iPAlav sPAlav | M1PAlAv@0 M2PAlAv@1 M3PAlAv@2;
iPAlav on group1;
sPAlav on group1;

I get the following warning in the output:

THE MODEL ESTIMATION TERMINATED NORMALLY
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP 14 IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE SPALAV.

However, I do not know what to check in the TECH4 output or what I can conclude from this. I hope you can help me further. 


Just to give some more information in relation to my previous question: in the TECH4 output for the 14-year-olds, Mplus doesn't give the correlations between sPAlav and the other latent variables (stated as 999), or with itself. Furthermore, in the estimated covariance matrix for the latent variables I can see a negative covariance of sPAlav with itself (i.e., its variance) of -.107. Is there anything we can do to solve this problem? Looking forward to your reply. 


It sounds like spalav has a negative variance. This makes the model inadmissible. You would need to change the model. 

Gareth posted on Monday, April 09, 2012  4:55 am



I have two questions about this formula for calculating outcome variable probabilities (binary y scored 0/1) at different levels of a covariate over time, in categorical growth models: "P(y=1 | x) = F(tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). The mean value of the slope is obtained in TECH4 and is s_m = a + g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x." 1. The mean value of the slope obtained in TECH4 is different from the intercept of the slope factor. If the intercept of the slope factor is used in the formula, why is the mean value of the slope obtained in TECH4 relevant? 2. Is this formula the same for logit and probit coefficients? If different, how should it be modified? 


1. The mean and intercept are different parameters. The mean of y, y_bar, is y_bar = a + b*x_bar; the intercept is a = y_bar - b*x_bar. 2. See Chapter 14 of the User's Guide. There is a section on probit and another on logit. 
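A brief numeric sketch of the mean-versus-intercept distinction for a slope factor regressed on a covariate (all numbers below are hypothetical):

```python
# Hypothetical values for a slope factor s regressed on a covariate x.
b = -0.194        # regression coefficient of s on x
x_bar = 2.5       # sample mean of x
a = 0.45          # intercept of s

# Model-implied mean of s (the kind of value TECH4 reports):
s_bar = a + b * x_bar        # 0.45 - 0.485 = -0.035

# Recovering the intercept from the mean: a = mean - b * x_bar
a_check = s_bar - b * x_bar
```

The two quantities coincide only when the covariate mean is zero, which is why centering the covariate makes the intercept directly interpretable as the mean.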

Meg posted on Monday, June 11, 2012  6:48 am



Hello, I have a question regarding the interpretation of my growth model. I am looking at depression (the outcome) across four time points and assessing the influence of time-varying and time-invariant predictors on the slope and intercept of the depression curve. The UGM suggests that depression decreases from mid-adolescence through young adulthood (I used a freed-factor-loading approach). I am a little confused as to how to interpret the regression coefficients because most of the examples have positive slopes. Here are my questions: 1. If the estimate for gender (boys) predicting the slope of depression is positive, does this mean that boys have a slower rate of decline in depression over time? 2. The time-varying covariate also has a declining slope. The regression estimate for the effect of the intercept of the TVC on the slope of depression is positive (.085). Does this mean that those people with higher levels on the TVC have a slower rate of decline in depression? Thanks. 


1. Yes. 2. Yes. 

Gareth posted on Thursday, November 01, 2012  7:02 am



Suppose I have a parallel process growth model with categorical outcomes. The intercepts and slopes are regressed on covariates, and the intercepts and slopes are correlated. For each covariate, I have calculated probabilities using the formulae in the discussion above: (1) s_m = a + g*x for covariate value x; (2) F(tau + s_m*x_t). How can this formula be modified to estimate the probability at the first time point that one outcome is already present, given that the other outcome is already present at baseline? The two intercepts are correlated, so I want to illustrate that someone already having outcome 1 is more likely to already have outcome 2 at baseline. The correlation between the intercepts is captured by a covariance rather than by a regression coefficient. 


You say P(y=1 | x) = F(tau + s_m*x_t), but that is P(y=1 | x, s = s_m). If s is a random effect, to get P(y=1 | x) you have to integrate over s. Likewise, P(y1=1, y2=1 | x) requires bivariate integration over s1, s2. 
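A rough sketch of the univariate case, averaging the conditional probit probability over a normally distributed random slope with a simple midpoint rule (the function names and parameter values are hypothetical, not taken from Mplus output):

```python
import math

def normal_pdf(s, mu, sigma):
    """Density of N(mu, sigma^2) at s."""
    z = (s - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(z):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def marginal_prob(tau, mu_s, sigma_s, time_score, n=2000, width=8.0):
    """P(y=1 | x): integrate F(tau + s * time_score) over
    s ~ N(mu_s, sigma_s^2), using a midpoint rule over mu_s +/- width*sigma_s."""
    lo = mu_s - width * sigma_s
    h = (2.0 * width * sigma_s) / n
    total = 0.0
    for i in range(n):
        s = lo + (i + 0.5) * h
        total += normal_cdf(tau + s * time_score) * normal_pdf(s, mu_s, sigma_s)
    return total * h
```

The bivariate case P(y1=1, y2=1 | x) would extend this to a double sum over the joint normal density of s1 and s2, including their covariance.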


Hello, I am estimating longitudinal models with binary and ordinal observed outcomes using the multilevel features (TWOLEVEL RANDOM) in Mplus (as opposed to the SEM/LGCM approach). I know that with SEM/LGCM, the regressions of growth factors on covariates are linear regressions (as the intercept and slope are continuous latent variables with arbitrary metrics) and the factor loadings for the intercept and slope are fixed logit or probit coefficients. One can, however, get predicted probabilities for the observed categorical indicators of growth by combining the appropriate parameter estimates. But in the case of MLM, where the intercept and slope are estimated by directly regressing the categorical outcome on an observed ordinal time variable using stacked (long) data (rather than creating a measurement model for the growth factors) and a logit or probit link, can the estimated growth parameters themselves be more directly interpreted on the logit (or probit) scale... following which one could simply convert these to odds ratios and thus predicted probabilities? Is this correct or am I missing something? Thanks, Cam 


I think this is the right way to look at it. This is like UG ex 9.16 but with categorical outcomes. Here x1 and x2 influence s and y on the Between (subject) level, which in turn influence the outcome at each time point as the figure implies. So x1 and x2 ultimately have a logit/probit influence on the outcomes. 


I am using a simple linear growth curve to predict a distal outcome. In explaining the contribution of the slope as a predictor, in addition to interpreting the estimates provided, is it practical to convey this information by using R-square? Specifically, would it be feasible to run the model with only the intercept or slope being used as a predictor (e.g., y on i), then the same model but with both i and s as predictors (y on i s), and report the difference in R-square between these two models? 


That doesn't seem unreasonable, as long as i and s are not too highly correlated. 


I am estimating a linear latent growth curve model across three time points using ordinal categorical data. Specifically, the response scale of the outcome variable is a 4-point Likert scale (none, 1-2 days, 3-5 days, 6-7 days). As such, I have used WLSMV to estimate the model. In the output, the mean of the intercept is fixed at 0 and the mean of the slope is estimated. The output tells me that the mean of the slope is -0.3 (p < .05), so the outcome variable decreases by 0.3 points between each time point. Reviewers of my work continue to ask me what this means in terms of how much change occurred. Because the outcome variable is ordinal categorical, I am finding it difficult to answer this. Would this best be answered in terms of calculating an effect size for the slope, and if so, how would I do this? Thank you in advance for your help. 


I think looking at a plot of probabilities would be helpful. See the SERIES option of the PLOT command. 
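For reference, a minimal sketch of what that might look like in the PLOT command (the variable and factor names are hypothetical; check the User's Guide for the exact SERIES syntax):

```
PLOT:
  TYPE = PLOT3;
  SERIES = y1-y3 (s);   ! y1-y3 are the repeated measures; s is the slope factor
```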


Thank you for your response, Linda. Just to follow up: is it possible to calculate an effect size for the slope, in order to report that the decrease represented a small, medium, or large change? 


It is possible, but does not convey how that impacts the probabilities of the observed variables. 

Ivana Igic posted on Wednesday, August 14, 2013  3:17 am



Dear Drs. Muthen, I'm running a 3-step GMM (Mplus Web Notes: No. 15). In the first step, after I tested different models, I got a 5-class curvilinear solution as the most suitable. In the third step, I predicted the distal outcome at T5, while controlling for the T1 value of the distal outcome. 1. Is the intercept value of the distal outcome within a class the mean value of the distal outcome per class? 2. The values of the distal outcome are 1-6; what is wrong if I get negative values for the intercept, or values higher than 6, per class? 3. I also analyzed the same data in SPSS using ANCOVA and I got very different values for the estimated mean value of the distal outcome per class. 4. I used the Wald test for the distal outcome means comparison as suggested, but this doesn't work. Did I do something wrong?

%c#1% .... [t5_y] (m1);
%c#2% .... [t5_y] (m2);
%c#3% .... [t5_y] (m3);
%c#4% .... [t5_y] (m4);
%c#5% .... [t5_y] (m5);

Model test:
m1=m2; m1=m3; m1=m4;
m2=m3; m2=m4; m3=m4;

Thank you very much for your help. 


1-3. The intercepts, not the means, are being estimated. 4. Remove the statements m2=m3, m2=m4, and m3=m4. The other tests imply those tests. 

Ivana Igic posted on Wednesday, August 14, 2013  10:49 am



Thank you very much for answering me! 1. I want to compare the value of the distal outcome across the different classes; how should I then interpret the intercept values? I want to be able to say that people within one class feel better/worse (my distal outcome) compared to people in other classes, and to test the significance of these differences using the Wald test. 2. The model test is still not working. Thank you very much for your help and have a nice day. 


1. The intercept is the mean controlled for the covariate. 2. Please send the output and your license number to support@statmodel.com. 


My question relates to conditional linear latent growth curve models. I have an unconditional model which shows that the endogenous variable declines over time. If I regress the slope factor on an exogenous variable, its effect on the slope factor is negative. Example:

Unconditional model (unstandardized means):
I   3.902  0.028  141.290  0.000
S  -0.037  0.010   -3.506  0.000

Conditional model:
S ON AGSLB  -0.194  0.039  -4.941  0.000

As you see, there is a negative effect of AGSLB on the slope factor. Does this mean that if AGSLB increases by one, the curve of the endogenous variable will move (0.194 units) towards zero? And in general, does a positive effect on the slope factor mean that if the exogenous variable increases, the curve of the endogenous variable will move further in the direction it has in the unconditional model, and does a negative effect mean that it will move more towards zero, regardless of the curve's shape in the unconditional model? 


A covariate that has a negative effect on a slope is interpreted as follows. As the covariate value increases, the slope value decreases. It doesn't matter if the slope mean is negative or positive. If the mean is negative, increasing covariate value beyond its mean makes it even more negative. 


I am running a linear growth curve model. The mean slope (-0.003) is negative and not significant, but the variance of the slope (0.002) is significant. I am testing how the slope predicts an outcome variable, and I find a significant, positive unstandardized regression coefficient (0.245) for the slope predicting that outcome. How do I interpret this finding? I know a positive regression coefficient means that, with an increasing slope, the outcome variable increases. But since my slope is slightly negative to begin with, does that mean the more negative my slope, the more increased the outcome? Does this logic apply even when my slope is not significant to begin with (it may be sort of random that it is slightly negative)? 


There is variability around your slope mean. It can increase by going from, for example, -1 to -.5. This increase is associated with an increase in the outcome. 


Thank you, Linda. Just a follow-up question to make sure I understand you right: when you talk about a change in a negative slope from -1 to -.5, you would refer to that as an increase in the slope? Because I thought this would be called a decrease, as the absolute value of the slope is decreasing. Could you once again explain what we call an increase or decrease in the case of a negative slope? I believe this difference is quite important for how we interpret the regression coefficient from the slope to the outcome. Thank you, Martina 


We aren't talking absolute values. We are talking the real values. 
