Message/Author 

Anonymous posted on Monday, July 16, 2001  10:30 am



I have a parallel process model: y variables = 8, x variables = 11, latent variables = 4. I have run TECH4 in order to obtain the observed means for the slope factors. The output provides model-estimated means for 24 model elements, but I'm having difficulty interpreting which mean goes with which variable in the model. Example: 1 (.12), 2 (.96), ... 24 (1.57). Can you provide some assistance? Thank you. 


TECH4 gives means, covariances, and correlations for the latent variables in the model. In TECH4 the variable names are given to identify which variables the means are for. I would need to see the output if this is not the case for your analysis. You can send it to support@statmodel.com. 

Anonymous posted on Monday, November 19, 2001  7:17 am



Dear Linda & Bengt! I have been running GMM models with three classes. I have been using the residual output for the estimated means. Recently, I compared the estimated means with the ones I computed by hand. The values are very close, but the ones from the output are always higher. Is this due to a rounding error? Here is an example for two time points in one class: ac=3.499, bc=0.270, qc=-0.060, bc(t8)=6.5, bc(t7)=5.5, qc(t8)=42.25, qc(t7)=30.25. 3.499 + (6.5*0.270) + (42.25*(-0.06)) = 2.719; estimated mean from the residual output = 2.732. 3.499 + (5.5*0.270) + (30.25*(-0.06)) = 3.169; estimated mean from the residual output = 3.178. Thank you for your help. Best, Hanno 


I would see this as rounding error. The calculations in Mplus use double precision, whereas in your hand calculations you use only two or three digits in some places. 
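A quick way to see how close the hand calculation comes is to redo it programmatically (a sketch; the three-decimal values are the rounded estimates quoted in the post, not the full-precision values Mplus uses internally, which is why the output values 2.732 and 3.178 differ slightly):

```python
# Estimated mean of a quadratic growth process at time score t:
#   mean = intercept + slope * t + quad * t**2
def estimated_mean(intercept, slope, quad, t):
    return intercept + slope * t + quad * t ** 2

# Hand calculation with the rounded estimates from the post (qc is negative)
print(estimated_mean(3.499, 0.270, -0.060, 6.5))  # 2.719 (output reports 2.732)
print(estimated_mean(3.499, 0.270, -0.060, 5.5))  # 3.169 (output reports 3.178)
```

The small discrepancies come entirely from feeding rounded parameter estimates into the formula.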

Anonymous posted on Monday, June 30, 2003  8:10 am



Dear Linda and Bengt: I am trying to set up a first-order autoregressive model in Mplus. I have three time points. The following is the model I am trying to specify:

MODEL: i by dep1@1 dep2@1 dep3@1;
s by dep1@0 dep2@1 dep3@2;
[i s dep1@0 dep2@0 dep3@0];
i with s;
(here is the level-1 autoregressive:)
e1 by dep1@1; dep1@0;
e2 by dep2@1; dep2@0;
e3 by dep3@1; dep3@0;
i s with e1-e3@0;
e2 on e1 (1); e3 on e2 (1);
e1-e3 (2);

As you can see, I am equating the variances of the latent factors in the autoregressive model. Is this correct? Is it appropriate to equate the variances? Using HLM and an autoregressive covariance structure gives me very different results. What is wrong with my model specification in Mplus? Please advise. Thank you, Mar Dowling 


I see one problem in your MODEL command. You may have only one equality label on a line. So change

e2 on e1 (1); e3 on e2 (1);

to

e2 on e1 (1);
e3 on e2 (1);

Also, the statement e1-e3 (2); does not hold the variances equal. In the model, e2 and e3 have residual variances estimated, so this line should be deleted. If after these changes you still cannot replicate the model in HLM, send your Mplus output and the HLM output along with your data, and I will check to see what the differences are in the two models. 

Anonymous posted on Sunday, May 16, 2004  12:27 pm



Dear Bengt or Linda, This is the first time I have tried to run a parallel process model. I am modeling the relationship between the longitudinal trajectories of 2 variables in the same sample. I got an error message I do not understand, and the output I received does not make immediate sense to me. Can you help? Input was:

VARIABLE: NAMES ARE y21 y27 y43 y52 y61 g21 g27 g43 g52 g61;
MISSING = .;
ANALYSIS: TYPE = MISSING H1;
MODEL: i1 BY y21-y61@1;
s1 BY y21@0 y27@6 y43@22 y52@31 y61@40;
i2 BY g21-g61@1;
s2 BY g21@0 g27@6 g43@22 g52@31 g61@40;
s1 ON i2;
s2 ON i1;
[y21-y61@0 i1-s2];
OUTPUT: TECH4;

Error was: THE MODEL ESTIMATION TERMINATED NORMALLY. THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 18.

Output is:
Means: I1 6.731, I2 15.365
Intercepts: Y21 0.000, Y27 0.000, Y43 0.000, Y52 0.000, Y61 0.000, G21 5.205, G27 5.404, G43 2.113, G52 0.683, G61 1.438, S1 0.242, S2 0.212
Variances: I1 12.213, I2 5.587
Residual Variances: Y21 9.644, Y27 7.898, Y43 8.167, Y52 6.967, Y61 5.969, G21 5.294, G27 3.782, G43 2.930, G52 4.177, G61 2.814, S1 0.003, S2 0.002

Not sure how to tell from this whether I have a parallel process or not. 


I would have to see the full output to comment. Include TECH1 in the OUTPUT command so that I can see which parameter it is referring to, and send the output to support@statmodel.com. 

Anonymous posted on Thursday, December 09, 2004  12:07 pm



I am currently running a cross-lagged panel analysis with count data. Multiple time points are modeled using a first-order autoregressive structure. I am concerned about deviations from the dispersion that would ordinarily be assumed for Poisson-distributed data. Convexity plots do demonstrate some deviations from Poissonness. It appears that using the zero-inflation capability of Mplus is not possible because these variables cannot occur on the right of an "ON" statement. How robust would parameter and fit estimates be, in Mplus, if there is some violation of the distributional assumption? Does the integration algorithm (forgive me because I am out of my depth here) provide any protection against assumption violations? Thanks for your help. Chuck Green 

bmuthen posted on Thursday, December 09, 2004  5:16 pm



Ex 7.25 in the Version 3 User's Guide shows how to do zero-inflated Poisson modeling where the inflation and non-inflation parts are explicit. This approach could in principle be used for cross-lagged panel analysis, although it is perhaps cumbersome. Note also that with Poisson outcomes as predictors, the counts are treated as continuous variables (so prediction is not from any latent log-rate propensity construct). 
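The two explicit parts of a zero-inflated Poisson can be illustrated with its probability mass function (a generic sketch; the parameter values below are made up for illustration and are not from any Mplus output):

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) under a zero-inflated Poisson: with probability pi the
    observation comes from the structural-zero (inflation) class,
    otherwise from an ordinary Poisson(lam) count process."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# A zero is much more likely than a plain Poisson(2.0) would imply:
print(zip_pmf(0, lam=2.0, pi=0.3))  # 0.3 + 0.7 * exp(-2) ~ 0.395
```

The excess-zero term is exactly what the separate inflation part of the model captures.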

Anonymous posted on Monday, April 25, 2005  10:22 am



I am trying to learn growth modeling. I got my model into Mplus but I don't understand much of the output. Can you please recommend some first steps for me to take to interpret it? There's CFI/TLI, RMSEA, Information Criteria, SRMR... How do I go about learning which of these are important and what they mean? Thanks! 

Anonymous posted on Monday, April 25, 2005  12:38 pm



Related to the above: in a growth model analysis, if the p-value for the 'chi-square test of model fit' is not significant (e.g., 0.4910), that means that I would not reject the null hypothesis. So, does that mean that my model doesn't do any better than analysing individuals as if they were separate independent cases? In other words, knowing the individuals' previous scores on a given variable doesn't help at all? 


I believe all of these fit measures are described in the dissertation by Yu which is available under Mplus Papers on our website. This dissertation also suggests appropriate cutoffs for these fit measures. 


The null hypothesis is your growth model. So if you cannot reject the growth model, this implies that it fits the data well. 

Anonymous posted on Wednesday, April 27, 2005  8:13 pm



Thank you. Okay, so my hypothesized model is always the null hypothesis? This is the same in CFA, right? For example, if I am testing to confirm my hypothesis that there are, say, 3 latent factors, then (1) my null is that there are 3 latent factors, right? (2) If I get a p-value > 0.05, then I would not reject the hypothesis that there are 3 latent factors? (3) How would you state the alternative hypothesis? 


1. Yes. 2. Yes. 3. In CFA for continuous factor indicators, it is the model of free variances and covariances. 

Anonymous posted on Tuesday, June 14, 2005  8:16 am



Dear Linda and Bengt: I am running a linear LGC model to test the effectiveness of the treatment (1) versus control (0) group. The outcome is a binary variable. According to the manual, I select the ML estimator to get a logistic model. If the group has a significant loading on the slope, say 0.5, is it correct to say the treatment group is exp(0.5) times more likely to develop the outcome with value 1 than the control group during the whole observation time? It seems the OR is time-point related; different time points have different ORs, as seen from the formula. 

bmuthen posted on Wednesday, June 15, 2005  7:47 am



Because the slope is a continuous dependent variable, this is a regular linear regression, not a logit regression. 

Anonymous posted on Wednesday, June 15, 2005  8:58 am



Thank you very much for your response. Since the slope is a latent continuous variable, the coefficient estimate for the slope is a regular regression coefficient. But I am still not clear about the level-1 formula of the growth curve with a binary outcome. Is it logit(p) = b0 + b1*time + e? Or Pr(y=1) = F(b0 + b1*time + e)? Or something else? Thank you very much! 

BMuthen posted on Sunday, June 19, 2005  5:26 am



With ML you have a logit function, so the formula is logit(p) = threshold + b1*time. 
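The logit can be converted to a probability at each time point with the inverse-logit function (a sketch; the threshold and slope values here are hypothetical, chosen only to show the shape of the conversion):

```python
import math

def prob_from_logit(logit):
    # inverse logit: p = 1 / (1 + exp(-logit))
    return 1.0 / (1.0 + math.exp(-logit))

threshold, b1 = -1.0, 0.5   # hypothetical values
for t in range(4):
    # probability of the outcome at time score t
    print(t, round(prob_from_logit(threshold + b1 * t), 3))
```

With a positive b1, the probability of the outcome rises over time but not by the same amount between each pair of occasions, which is why odds ratios rather than probability differences stay constant per unit of time.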


Anonymous posted on Saturday, August 20, 2005  10:24 am



I ran a growth model with the following syntax (taking out some of the code lines to simplify my question):

ANALYSIS: TYPE = missing;
MODEL: IY SY | Y1@0 Y2@1 Y3@2 Y4@3 Y5@4;
IY SY ON X1 X2 X3;
Y1 Y2 Y3 Y4 Y5 (1);
OUTPUT: TECH1 TECH4 standardized;

I get output that I don't quite understand. I want the covariance between the random intercept and slope, plus its standard error. It's not clear to me where I find the covariance. Is it in the Model Results, or in the "estimated covariance matrix for the latent variables"? I would have thought these two bits of output would match.

MODEL RESULTS
SY WITH IY: 0.021 0.555 0.037 0.002 0.002
Residual Variances:
IY 80.236 4.940 16.241 0.879 0.879
SY 0.703 0.108 6.510 0.735 0.735

ESTIMATED COVARIANCE MATRIX FOR THE LATENT VARIABLES
IY 91.243
SY 1.038 0.956 


You have covariates in your model so the model is estimating a residual covariance not a covariance. 

Anonymous posted on Saturday, August 20, 2005  11:32 am



Linda, I'm not sure how to interpret your response. Am I to assume that the covariances given in the 'model results' section are based on residualized values, and the estimates given in the 'estimated covariance matrix for the latent variables' are not based on residuals? Is that why they don't match? Thanks. 


In your growth model, because you regress the growth factors on a set of covariates, the parameters that are estimated in the model for the growth factors are intercepts, residual variances, and residual covariances. These are not the same values as given in TECH4. If you did not regress your growth factors on a set of covariates, then the parameters estimated in the model for the growth factors would be means, variances, and covariances. And these would match the values given in TECH4. 
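The relationship between the two sets of numbers can be sketched for a single growth factor regressed on one covariate (all values here are hypothetical; TECH4 reports the total variance, while the MODEL RESULTS section reports only the residual part):

```python
# Total (TECH4) variance of a growth factor regressed on one covariate x:
#   Var(IY) = b**2 * Var(x) + residual variance
b, var_x = 2.0, 1.5          # hypothetical regression coefficient and covariate variance
residual_var = 80.0          # what the MODEL RESULTS section would show
tech4_var = b ** 2 * var_x + residual_var
print(tech4_var)  # 86.0 -- what TECH4 would show
```

The explained part (b**2 * Var(x)) is exactly the gap between the two numbers, which is why they only match when there are no covariates.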

Anonymous posted on Saturday, August 20, 2005  4:11 pm



Excellent. Thank you. 

Anonymous posted on Tuesday, August 23, 2005  10:52 pm



Hi Bengt or Linda! I have been trying to run a growth model for two parallel processes over three time points. In the output, I get a warning saying that the residual covariance matrix is not positive definite. The problem involves one of the three trend growth factors. I don't know whether this means that I cannot continue with this model at all, or whether there is anything I can try to do to fix the problem. By the way, is there any chance you'll give a course on Mplus in Europe in the near future? Greetings, and thanks in advance for your help. 

bmuthen posted on Wednesday, August 24, 2005  6:51 am



Perhaps you have a non-positive variance for one of your trend factors. You can simply fix the variance to zero. There are no firm plans for an Mplus course in Europe right now. Although we have in mind doing a course there, this will be a short 2-day event rather than our 5-day courses in the US. 


I've tried searching the discussion boards on this, but haven't found this issue elsewhere: I have a simple model for alcohol use growth over 3 waves of data, with 4 time-invariant covariates included. I found that the unconditional growth model fit the data reasonably; however, the conditional model showed TERRIBLE fit. Based on modification indices, I included a covariance in the model (y1drink with y2drink). The fit is now practically perfect (red flag #1), but the slope/intercept covariance estimate, the y1drink/y2drink covariance, and the level-1 residuals all have standard errors that appear to be unestimated (they show up in the output as asterisks *******). The remainder of the model estimates appear okay. There are no error messages in the output. 1) What do the asterisks mean? 2) Can I legitimately interpret the output despite this (I'm not actually interested in the covariances or errors for the substantive purpose of the paper)? Thank you. My input is pasted below:

model: icept slope | y1drink@0 y2drink@1 y3drink@2;
[icept slope];
y1drink - y3drink;
y1drink with y2drink;
icept slope on ygender;
icept slope on remote;
icept slope on singmom;
icept slope on fstrain; 


Asterisks are printed when the number is too large to fit in the space provided. I would have to see your output to say more. Please send it and your license number to support@statmodel.com. 


I'm running an LCGA with dichotomous DVs (diagnostic data) and wanted to check on the interpretation of some of the plot data. When looking at the plot of the estimated probabilities: let's say that for one class at position 1 on the x-axis, the probability value on the y-axis is at .5. Does this mean that at x = 1, 50% of the individuals would have the diagnosis? Or does it mean that there is a .5 chance that people in that class have a diagnosis? Or do those mean the same thing? 


It means that there is a .5 chance that people in that class have a diagnosis. 


Dear Linda & Bengt, I am fitting a linear GMM with intercept I and slope S and requested class probabilities (CPROBABILITIES) and factor scores (FSCORES) to be saved. The output contains the list of variables actually used in the analysis: I, S, and C_I and C_S, plus the class probabilities and class membership. Can you help me with the interpretation of C_I and C_S, please? All the best, G 


These are the factor scores based on most likely class membership. 


Hi, I'm running an unconditional growth curve of moral disengagement. I have parameterized the slope as 0 1 2 3 (the fit indices are good: chi-square not significant, RMSEA less than .05, etc.). The mean of the slope is negative and significant. I have a problem with the interpretation of the negative correlation between slope and intercept. Is it correct to say that "the higher the level of moral disengagement at T1, the less the change of moral disengagement from T1 to T2"? Or do I have to say "higher initial values were associated with steeper decreases from T1 to T2"? I have the same problem of interpretation when I'm running a conditional model with a predictor on the slope and intercept. The regression coefficient on the slope is significant and negative. Is it correct to say "the higher the level of aggregation with deviant peers, the less the change of moral disengagement"? Or do I have to say "a higher level of aggregation with deviant peers predicts a greater decrease of moral disengagement from T1 to T2"? Thanks 


A negative covariance between the intercept and slope growth factors is interpreted as: as the intercept growth factor increases, the slope growth factor decreases. The conditional interpretation follows the same pattern. 

ywang posted on Wednesday, November 04, 2009  8:15 pm



Hello, Drs. Muthen: I searched "Mplus Discussion" for the interpretation of estimates in latent growth modeling on categorical variables (dummy variables), but still cannot understand how to interpret the coefficients of covariates on the intercept or slope. For example, if the slope of overweight on gender (male versus female) is 0.5, it seems that it can be interpreted as: the increase in the logit of being overweight per unit of time is 0.5 higher for males than for females. Is it correct? But it is still difficult to understand. Can you help me interpret it in a more understandable way? Or can you recommend some papers that interpreted the results in detail instead of only listing whether the coefficients were significant or not? Thanks a lot!!! 


Your growth model for, say, a binary outcome is

(1) logit_it = i_i + s_i*x_t + e_it

where

(2) s_i = alpha + beta*w_i + e_i

Here x_t is time and w is your gender dummy. Inserting (2) in (1), this means that you have beta*w_i*x_t, which means that your interpretation is correct. To get a deeper understanding of this, you can compute the s mean for the 2 genders and see how the 2 means make for a different change in the probability of the binary outcome when time changes one unit. You do this by converting the logit into a probability - see Chapter 13 of the UG. 
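The suggested computation might look like this (a sketch; alpha, beta, and the intercept-factor mean are hypothetical values, not estimates from the post):

```python
import math

inv_logit = lambda z: 1 / (1 + math.exp(-z))

alpha, beta = -0.2, 0.5      # hypothetical parameters of s_i = alpha + beta*w_i
intercept_mean = -1.0        # hypothetical mean of the intercept factor i

change = {}
for w in (0, 1):             # gender dummy: 0 = female, 1 = male
    s_mean = alpha + beta * w                 # slope mean for this gender
    p0 = inv_logit(intercept_mean)            # P(y=1) at time 0
    p1 = inv_logit(intercept_mean + s_mean)   # P(y=1) one time unit later
    change[w] = p1 - p0
    print(f"w={w}: s mean = {s_mean:+.2f}, change in probability = {change[w]:+.3f}")
```

With these values one gender's probability declines over a time unit while the other's increases, which makes the meaning of the 0.5 logit difference concrete.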

ywang posted on Thursday, November 05, 2009  8:22 pm



Dear Dr. Muthen: Thank you so much for your swift response. It is very helpful. I am new to LGM and have a follow-up question. As I understand it, the slope coefficient is the coefficient of the interaction term between time and a covariate (for example, gender) in latent growth modeling. In a traditional regression model, if you have an interaction term between gender and time, you have to include both gender and time in the same model. Is it the same for Mplus latent growth modeling? If I include the regression of the slope on gender, do I have to include the regression of the intercept on gender in the same model? Thank you very much in advance for your help. 


It is most natural to regress all growth factors on the covariates. 

AKH posted on Tuesday, November 16, 2010  11:27 am



I fit a growth model with a time-varying covariate (TVC). The TVC was person-mean centered so that the coefficient for the person-centered TVC was a pure within-person effect. Now I'm confused about how to interpret the within-person effect of the person-centered TVC (depression) on the outcome (abuse). Part of the model is copied here:

MODEL: %WITHIN%
maleslope | abuse ON time;
femaleslope | abuse ON time;
abuse ON dep;
abuse ON dep;

I think a significant within-person effect for person-centered, time-varying depression predicting abuse would mean that on occasions when depression is above the typical person's average, it is associated with a change in abuse. Right? And would a non-significant effect mean that depression is not related to change in abusive behaviors? Thanks! 


A significant coefficient for the regression of abuse on dep says that the two variables have a significant relationship: when dep changes one unit, abuse changes the number of units shown in the regression coefficient. A non-significant coefficient says that the two variables are not significantly related. 

AKH posted on Tuesday, November 16, 2010  4:17 pm



Thanks, but I think the way I wrote the model code example may have caused some confusion. I understand how to interpret typical regression coefficients, but am asking about an analysis of a growth model using a person-period dataset with a person-centered TVC. I think it might be interpreted as follows (when low abuse and low depression scores are better): A significant, positive within-person effect for person-centered, time-varying depression predicting abuse would mean that on occasions when depression is below the typical person's average, it is associated with a decrease in abuse. Whereas a significant negative effect would indicate that individuals with above-average levels of depression report a decrease in abuse. And a non-significant effect would indicate that a person's time-varying depression is not related to her/his abuse trajectory. Does that sound correct? 


I would not refer to a "typical person's average". I would keep it simple and say an increase in dep is related to an increase in abuse and a decrease in dep is related to a decrease in abuse. 

Anne Chan posted on Saturday, April 02, 2011  6:41 pm



Hello! I ran an LGC model on the amount of time students spend on leisure activity across 5 years and also regressed students' life satisfaction at Year 5 on the intercept and slope. The slope of leisure activity is negative. The regression coefficient of life satisfaction on this slope is negative and significant. May I ask how I should interpret this result? Does it mean the more the students increase the rate of lowering their leisure time (i.e., a steeper negative slope), the lower their life satisfaction? Thanks! 


When you say "the slope of leisure activity is negative", I think you mean that the mean of the slope growth factor is negative. Irrespective of this, the negative effect of the slope growth factor on Year 5 life satisfaction implies that as the slope growth factor value increases, life satisfaction goes down. Note that saying "as the slope growth factor value increases" could still mean that it is negative, just less so. Note that regressing on both an intercept and a slope growth factor may run into collinearity if they are highly correlated. 

Anne Chan posted on Monday, April 04, 2011  9:38 pm



Thanks! Sorry, I still cannot get it. To put it into an example: do you mean that when the slope changes from "-0.15" to "-0.25" (the ABSOLUTE value of the slope growth factor increases), life satisfaction goes down, or do you mean that when the slope changes from "-0.15" to "-0.05" (the value of the slope growth factor becomes less negative), life satisfaction goes down? Many thanks! 


By the slope growth factor value increasing, I mean for example going from -0.15 to -0.05. So, stated differently, the model says that a person with a higher slope growth factor value (say -0.05 instead of -0.15) has a lower Year 5 life satisfaction. 

Anne Chan posted on Wednesday, April 06, 2011  10:34 am



Thanks a lot! May I ask another question? The model did not fit when I regressed life satisfaction on both the intercept and the slope growth factor of leisure activity. I guess I ran into the problem of collinearity. Do you have any suggestion for how to deal with this problem? Thanks a lot! 


Collinearity does not cause model misfit. If you have good fit without the distal outcome Year 5 life satisfaction and poor fit when including the distal, this says that the correlations between the repeated measures and the distal are not well fitted by the model so you should investigate why. An easier way to model the distal is to center the intercept growth factor at the last time point and let only this growth factor predict the distal. That also may make the detection of model misfit easier. 
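The recentering idea can be illustrated numerically (a sketch with made-up growth factor values): with time scores 0, 1, 2, 3, 4 the intercept is status at the first occasion; with time scores -4, -3, -2, -1, 0 it is status at the last occasion, while the trajectory itself is unchanged.

```python
i_first, s = 2.0, 0.5                       # hypothetical growth factor values
# centered at the first time point: value at occasion t
traj = [i_first + s * t for t in range(5)]

# centered at the last time point: same trajectory, intercept now = last occasion
i_last = i_first + s * 4
traj2 = [i_last + s * t for t in range(-4, 1)]

print(traj)   # [2.0, 2.5, 3.0, 3.5, 4.0]
print(traj2)  # identical trajectory
```

Because the recentered intercept is the model-implied status at the last time point, letting only that factor predict the distal outcome sidesteps the intercept/slope collinearity.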

Anne Chan posted on Wednesday, April 06, 2011  12:41 pm



Thanks a lot for your reply. May I ask how to center the intercept growth factor at the last time point when modeling the distal outcome? What is the syntax for doing this? Thanks! 


Study Topic 3 and Topic 4 on our website (video and handouts). 

Carolin posted on Tuesday, June 14, 2011  5:32 am



Dear Mrs and Mr Muthen, I have a question concerning the H1 and H0 loglikelihood values in the output of a GMM: Is the H0 model the model I specified in the MODEL command? If yes, what does the unrestricted H1 model mean?

MAXIMUM LOGLIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS -5468.222

MODEL FIT INFORMATION
Number of Free Parameters 15
Loglikelihood
H0 Value -5275.604
H0 Scaling Correction Factor 1.569 for MLR

I have one further question: How can I compare a model without covariates with a model with covariates? Thanks a lot for your help! Carolin 


The H0 model is the model specified in the MODEL command. I don't find any information about the H1 model in the GMM output. This message usually comes about with TYPE=BASIC. You can do difference testing of a model with and without covariates as long as the set of dependent variables is the same and you don't mention the means, variances, or covariances of the covariates in the MODEL command. 

Carolin posted on Thursday, June 16, 2011  12:52 am



Thanks a lot so far! I have the following information in the output of a GMM (TYPE = MIXTURE) before the random starts results are listed: MAXIMUM LOGLIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS -5468.222. Can you explain to me which model is referred to here? Concerning the difference testing: how can I do this (is there a special test? Could you tell me the syntax?). Thanks, Carolin 


Please send the full output and your license number to support@statmodel.com. 

Anne Chan posted on Friday, June 24, 2011  7:59 pm



Hello! In my LGC model, I regress the slope of students' test scores (SCORES) on the number of hours (HOURS, a continuous variable) and gender. In the unstandardized results, the result of SLOPE ON HOURS is non-significant, but in the standardized results (STDYX), SLOPE ON HOURS is significant. May I ask: (1) Why do the significance levels of the unstandardized and standardized results differ? (2) Should I report the STDYX result (report SLOPE ON HOURS as significant)? Thanks! 


They differ because the sampling distribution of the raw and standardized coefficients are not the same. You should report the significance of the raw for the raw and the standardized for the standardized. 


Hi Linda. Above, in 2006, you wrote that the C_I and C_S (for example) output from a GMM "are the factor scores based on most likely class membership." Just to be OCD: what is the difference between the output I, S, & Q factor scores and the C_I, C_S, & C_Q factor scores? I'm guessing that the former are estimated for each observation without regard to the latent classes they would be assigned to, and so the best estimates to use would be the C_I, etc.? Best, Bruce 


Hi. I'm hoping to get a response to Bruce Cooper's question above. I am interested in estimating individual means in a GMM conditional on latent class. I'm not sure which factor scores to use. Thank you. 


Bruce was on the right track. The I, S, & Q factor scores are scores mixed over the latent classes, whereas the C_I, C_S, & C_Q factor scores are the scores for the most likely class of the individual. I would use the latter. 

HeeJin Jun posted on Monday, November 28, 2011  3:18 pm



Hi, I am running a piecewise growth model. What is the default covariance structure for the growth model? My code is below. Thanks for your help.

MODEL: i s1 | bmim8@8 bmim7@7 bmim6@6 bmim5@5 bmim4@4 bmim3@3 bmim2@2 bmim1@1 bmi00@0 bmip1@0 bmip2@0 bmip3@0 bmip4@0 bmip5@0 bmip6@0 bmip7@0 bmip8@0;
i s2 | bmim8@0 bmim7@0 bmim6@0 bmim5@0 bmim4@0 bmim3@0 bmim2@0 bmim1@0 bmi00@0 bmip1@1 bmip2@2 bmip3@3 bmip4@4 bmip5@5 bmip6@6 bmip7@7 bmip8@8;
i s1 s2 ON ptsd1308s ptsd4p08s; 


I think all the growth factors have free residual correlations as the default. But ask for Tech1 and check what you get there and in the regular output. 


My question is related to Anne Chan's question (2011-06-24). I have regressed future sickness absenteeism (binary) on the intercept and the linear slope of two burnout dimensions, exhaustion and disengagement (one LGM per burnout dimension). In the unstandardized results the effect of the slope on future sickness absenteeism is non-significant (p = .143), but in the standardized results it is significant (p = .014). 1) I read that the reason why these two results differ is that the sampling distributions are different, but should there really be such a big difference in p-value, and which of the two effects is "true"? In the response to Anne's question about which effect to report, you said that she should report the significance of the raw for the raw and the standardized for the standardized. However, this means that one can arrive at completely different conclusions depending on which effect is reported. 2) Is there really no "best practice" for how to report these effects, or any literature that discusses this issue? 


A related question to my previous post is about the effect of time in LGMs. In my case I have two variables and I want to report which of the variables that changes most over time. I therefore want to report the standardized effects. How do I accurately present the change in the outcome variable in SDs for a time score increase of one (unstandardized) unit, e.g., how many SDs do the outcome variable increase in one year? 


You can use ESTIMATOR=BAYES to explore this by looking at the credibility intervals which are not symmetric and looking at the distributions of the parameters. If the model is linear, the mean of the slope growth factor tells you about the mean of the outcomes. At each time point, you can divide the mean of the slope growth factor by the standard deviation of the outcome at each time point. 


Thank you for your reply. Because I am not familiar with Bayesian estimation, I am uncertain when it comes to interpreting the results and specifying the model. Is it standard to include priors when specifying the model? Is it OK to estimate a model without priors? I am interested in the effects of the intercept and slope of burnout on future sickness absenteeism (SA). I have estimated a model without specifying any priors, and the magnitudes of the effects are very different when I used BAYES (I on SA = 0.329; S on SA = 1.405) compared to when I used MLR (I on SA = 1.051; S on SA = 3.796). Is there any reason for these differences (besides different methods of estimation), and which of the effects is more correct? In regard to my question about presenting change in SDs, I also tried a somewhat different approach compared to the one you suggested. I used the mean and the SD at T1 to standardize the outcome variable at T2 and T3 (i.e., expressing change in SDs); this allowed me to get an average change in SDs for a unit change in time over the period. Does this approach seem OK to you? 


I think you said your DV is binary, in which case Bayes uses a probit link and the default ML link is logit, so that can explain the difference. If you choose LINK=PROBIT in your ML run, the results should be very close. Yes, you can use Bayes with noninformative priors - this is the Mplus default. And it is the situation where ML and Bayes give asymptotically the same results. Your change presentation seems reasonable if those means and SDs are model-estimated quantities. 
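The scale difference between the two links can be illustrated directly (a sketch; the rough rule that a logit coefficient is about 1.7 times the corresponding probit coefficient is an approximation, not an exact conversion):

```python
import math

def logistic(z):
    # inverse of the logit link
    return 1 / (1 + math.exp(-z))

def probit(z):
    # inverse of the probit link: standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A probit index of z gives roughly the same probability as a logit index of 1.7*z:
for z in (-1.0, 0.0, 1.0):
    print(z, round(probit(z), 3), round(logistic(1.7 * z), 3))
```

Because the two link functions live on different scales, raw coefficients from a Bayes (probit) run and an ML (logit) run are not directly comparable even when the fitted probabilities are nearly identical.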


Hi Linda and Bengt, I have run a latent growth model with a binary outcome using the default settings. I have then used dummy variables to predict the intercept and slope. What is the interpretation of these parameters? For example, variable A predicts the slope with a parameter of 0.9. How do I interpret this? Can I simply add it to the threshold of the slope to understand change over time for variable A (in comparison to the reference group)? Or is it an odds ratio? Many thanks, and sorry if this question is simplistic! 


Growth factors are continuous latent variables. When they are regressed on a set of covariates, linear regression coefficients are estimated. They are interpreted as usual. 

WenHsu Lin posted on Tuesday, April 08, 2014  6:49 pm



Hi, I am running a parallel LGM (fi fs, pi ps). The estimated means for each intercept (fi, pi) and slope (fs, ps) are clear (model 1). However, if I regress fs on pi and ps on fi (model 2), there are no estimated means for ps and fs. How do I get the means? I compared the two models and found that the means for fi and pi are the same, so does that mean the means of ps and fs will remain the same? 

Jon Heron posted on Wednesday, April 09, 2014  8:24 am



When any variable becomes dependent (latent or manifest), you get an intercept and a residual variance rather than a mean and variance. If you are still desperate to see latent variable means and variances, then I think TECH4 will give you them. 

WenHsu Lin posted on Thursday, April 10, 2014  6:37 pm



Thank you, Jon. Please let me ask a related question. I regressed ps (peer support slope) on fi (family support intercept) and gender. I got an intercept for ps (.59), an estimated path coefficient for fi of -.024, and for gender of .54. So the equation is: y(ps) = .59 - .024(fi) + .54(gender). So my questions are: (1) What fi value shall I use in the regression to get y? (2) Is the y value related to the estimated mean for ps? In the regular LGM, the mean for ps is negative. So does that mean being male would increase the rate of decline? 


Dear Drs. Muthén: I have a similar question to the post above. My model involves two parallel LGMs (ai, as; bi, bs), one outcome variable (y), and two exogenous variables (x1, x2). My questions are: (1) I regressed as on ai, x1, and x2, and got an estimated intercept (3.5) and path coefficients for each of ai, x1, x2. So I can write the equation: as = 3.5 + 0.8(ai) + 3(x1) + 4.32(x2). I put the estimated mean for ai into the equation to get the predicted as outcome; is this right? (2) If the estimated mean for as is negative, growth is negative. So does that mean a greater negative value from the above equation indicates a greater decline? Thank you so much. 


(1) You can get the mean of as from TECH4. (2) Yes, a larger negative slope value implies a steeper decline. 
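Putting the pieces together numerically (the intercept and regression coefficients are from the post; the means of ai, x1, and x2 are hypothetical stand-ins for the TECH4 values):

```python
# Implied mean of the slope from the regression equation in the post:
#   as = 3.5 + 0.8 * ai + 3 * x1 + 4.32 * x2
ai_mean, x1_mean, x2_mean = 1.0, -0.5, -0.8   # hypothetical TECH4/sample means
as_mean = 3.5 + 0.8 * ai_mean + 3 * x1_mean + 4.32 * x2_mean
print(as_mean)   # negative here, i.e. an average decline
```

Plugging the means of the predictors into the equation reproduces the mean of the slope factor, and a more negative result corresponds to a steeper average decline.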


Hi Linda and Bengt, The estimated mean of my slope (social support) is negative and is positive for the intercept. I then used these two latent variables to predict a distal outcome variable (delinquency). The estimated effect of the intercept on the distal outcome is negative, but it is positive for the slope. With regard to the intercept, the result means that the higher the initial level of social support, the lower the involvement in delinquency at a later time. This is consistent with theory. However, for the slope, the result shows that as social support gets higher, the individual engages in more delinquency. But this is somewhat counterintuitive. So I am wondering if the true meaning of this result should be interpreted as follows: as the social support slope gets lower, which means a "steeper decline," individuals have a higher level of delinquency? Thank you. 


It can be difficult to predict from two growth factors like this. I wrote up some experiences I had in a paper on our website: Muthén, B., Khoo, S.T., Francis, D. & Kim Boscardin, C. (2003). Analysis of reading skills development from Kindergarten through first grade: An application of growth mixture modeling to sequential processes. In S.R. Reise & N. Duan (Eds.), Multilevel Modeling: Methodological Advances, Issues, and Applications (pp. 71-89). Mahwah, NJ: Lawrence Erlbaum Associates. See especially Section 5.1. One problem can also be a high correlation between i and s. 


Thanks a lot. 
