

I fitted a latent curve model with parallel processes for self-esteem and depression among children. The model also included covariates of four subtypes of child maltreatment that predict both growth functions of self-esteem and depression. I found that the intercept of self-esteem is positively related to the slope of depression. The intercept of the depression slope is significant and negative, indicating that depression decreased over time for the entire group. So, does the positive value of the regression path from the self-esteem intercept to the depression slope mean that children with higher initial levels of self-esteem showed "greater" decreases in depression over time? Or, because the depression slope is negative (decreasing), does a positive influence indicate "slower" decreases in depression? 

bmuthen posted on Friday, November 04, 2005 - 1:55 pm



A positive influence means a slower decrease in depression. Note that the intercept of the depression slope is not its mean; you get the mean in TECH4. 


Dear Bengt, Your answer was (critically) helpful! Thanks so much for your comment regarding the intercept vs. the mean of the slope factor. A related question is about testing equality of the means of the intercept and slope factors across two groups (boys vs. girls). I put equality constraints, giving the identical number in the second 'MODEL: male' statement as in the first MODEL statement, like this: [CDI_L] (3); [CDI_S] (4); [SEI_L] (5); [SEI_S] (6); In the output, however, the intercepts for the first group are all zeros, whereas in the second group the values of the intercepts of those factors are nonzero. In the TECH4 output, the estimated means are not equal between the groups. I know that by default in Mplus the intercepts of the factors are fixed at zero for the first group and are freed to be estimated in the other groups. Am I still testing the equality (invariance) of the means of the intercept and slope factors by assigning the same value to the corresponding parameters in the Mplus program? I'm confused because the output shows different values of the parameters across groups. Thanks! Jungmeen 

bmuthen posted on Saturday, November 05, 2005 - 3:15 pm



This is hard to answer without seeing your run. Chapter 16 (see the | symbol) lists the default settings for growth models using the old BY language and the new | growth language. If you are not doing this already, I would recommend using the new language. As you see in Chapter 16, whether intercepts are fixed at zero for a group depends on the setting, such as having categorical outcomes or not. So I can't answer you specifically. Note also that in your case, with a slope growth factor regressed on an intercept growth factor, testing invariance of growth factor means across groups is not accomplished by holding the growth factor intercept parameters equal across groups, since the growth factor mean is not just a function of the intercept parameter, as I mentioned. If this answer is not sufficient, please send your input, output, data, and license number to support@statmodel.com. 
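To make that last point concrete: when CDI_S is regressed on CDI_L, the model-implied mean of CDI_S is the slope intercept plus the regression coefficient times the mean of CDI_L. A simplified sketch of equating the model-implied means across groups (ignoring the maltreatment covariates, which would add further terms; the labels a1-c2 are hypothetical) might look like:

```
! Simplified sketch, no covariates; labels a1-c2 are hypothetical.
MODEL:
  [CDI_L] (a1);           ! mean of CDI_L, group 1
  [CDI_S] (b1);           ! intercept of CDI_S, group 1
  CDI_S ON CDI_L (c1);
MODEL male:
  [CDI_L] (a2);
  [CDI_S] (b2);
  CDI_S ON CDI_L (c2);
MODEL CONSTRAINT:
  ! equate the model-implied slope means: E(CDI_S) = b + c*E(CDI_L)
  0 = (b1 + c1*a1) - (b2 + c2*a2);
```

Holding only b1 = b2 would not equate the means unless the CDI_L means and the regression coefficients also happened to be equal across groups.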


Hello: I have a question about the interpretation of parallel process growth models using individually varying time scores. When regressing a slope factor (construct A) on a slope factor (construct B), is it best to use the unconditional mean of both slopes to interpret the resulting effects? So, if process A is declining in the unconditional model, and process B is declining in the unconditional model, and I then find that regressing slope B on slope A results in a positive regression estimate (and a positive slope intercept for construct B), I would interpret that steeper declines in A are associated with steeper declines in B (thus the remaining intercept of B is now positive). Is this correct? thanks ML 


If your two processes are declining and you regress b on a, it means that for a one-unit decrease in a, b decreases by the amount of the regression coefficient. 


Ok, thanks much. But how about when process A is rising and process B is declining? In that case, would a positive regression coefficient be interpreted as meaning that a rising A process is associated with less of a decline in B? Thanks Michelle 


Yes. 


I would like to run a parallel process growth model for BMI and symptoms of depression (where I regress the slope of BMI on the intercept of depression and vice versa). In the unconditional latent growth models for BMI and depression, there is a significant quadratic term. How do I account for nonlinearity in the parallel process growth model? Is it necessary to do so? Below is the model command I have so far:
MODEL:
ibmi sbmi qbmi | bmi05@0 bmi10@1 bmi15@2 bmi20@3;
idep sdep qdep | depsum05@0 depsum10@1 depsum15@2 depsum20@3;
ibmi ON ex3_age female black nocoll05 unmard05 exsum05;
sbmi ON ex3_age female black nocoll05 unmard05 exsum05;
qbmi ON ex3_age female black nocoll05 unmard05 exsum05;
idep ON ex3_age female black nocoll05 unmard05 exsum05;
sdep ON ex3_age female black nocoll05 unmard05 exsum05;
qdep ON ex3_age female black nocoll05 unmard05 exsum05;
***sbmi ON idep;
***sdep ON ibmi;
idep WITH ibmi;
idep WITH sdep;
ibmi WITH sbmi;
sdep WITH sbmi;
Should I change the asterisked lines to the following: sbmi qbmi ON idep; sdep qdep ON ibmi; 


You should fit each growth process separately as a first step. They do not need to have the same shape. Then you can fit a model with both processes. I would not add covariates until after that. You do not want to have ON and WITH statements for the same variables. 
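Following that advice, the build-up for the model above might be sketched as follows (reusing the poster's variable names; each step is a separate run, and the shapes found in steps 1a and 1b need not match):

```
! Step 1a: BMI process alone (quadratic)
MODEL: ibmi sbmi qbmi | bmi05@0 bmi10@1 bmi15@2 bmi20@3;

! Step 1b: depression process alone (shape may differ)
MODEL: idep sdep qdep | depsum05@0 depsum10@1 depsum15@2 depsum20@3;

! Step 2: both processes together, growth factors covarying
MODEL: ibmi sbmi qbmi | bmi05@0 bmi10@1 bmi15@2 bmi20@3;
       idep sdep qdep | depsum05@0 depsum10@1 depsum15@2 depsum20@3;

! Step 3: add the covariates and the cross-process ON statements,
! dropping any WITH statement for a pair that now appears in ON.
```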

dan berry posted on Wednesday, January 07, 2009 - 7:57 pm



Dear Drs. Muthen, In a parallel process model with covariates and a distal outcome, I'd like to leave the intercept/mean of the outcome variable (WJG5) in the scale of the scaling indicator. But when I fit the model (v4.2), TECH4 gives a mean of 8; the indicator scales are in the 100s. When I use [wjg5*] it says the model may not be identified. I'm sure I'm missing something basic, but not sure what. Thanks! dan
ANALYSIS: TYPE = missing meanstructure; ITERATIONS = 5000; ESTIMATOR = mlr;
MODEL:
wjg5 BY wjbmwcg5 wjbrwcg5;
att54 slatt | lattg54@0 lgmattk@1 lgmatt1@2 lgmatt3@4 lattg4@5;
ktcon sltcon | cnfl_tkf@0 cnfl_t1s@1 cnfl_t2s@2 cnfl_tg3@3 cnfl_tg4@4;
[wjg5*];
ktcon ON att54;
att54 WITH totagr54;
slatt ON ktcon;
sltcon ON slatt;
slatt ON totagr54;
wjg5 ON ktcon;
wjg5 ON att54;
wjg5 ON slatt;
sltcon WITH att54@0;
sltcon WITH totagr54@0;
OUTPUT: SAMPSTAT MODINDICES TECH4 TECH1; 


You need to add [wjbmwcg5@0] to the MODEL command for identification purposes. 

dan berry posted on Thursday, January 08, 2009 - 6:25 pm



Thank you for your quick response. But could it be something else? When I add [wjbmwcg5@0] along with [wjg5*], the model never converges (even when I set it for huge numbers of iterations). If I cannot get WJG5 in the metric of the scaling indicator, I'm unclear about what the TECH4 mean of 8 for WJG5 represents. Is it that the default mean is zero, and that 8 is the conditional mean based on the average values of all the predictors on which WJG5 is regressed? 


You will need to send your input, data, output, and license number to support@statmodel.com for further help. 

Tim Stump posted on Tuesday, November 03, 2009 - 6:09 pm



I have two piecewise parallel processes characterized by the following statements:
i1 s1 | ueia0@0 ueia2@2 ueia4@2 ueia8@2 ueia12@2;
i1 s2 | ueia0@0 ueia2@0 ueia4@2 ueia8@6 ueia12@10;
i2 s3 | seia0@0 seia2@2 seia4@2 seia8@2 seia12@2;
i2 s4 | seia0@0 seia2@0 seia4@2 seia8@6 seia12@10;
I'd like to determine if the two slopes of one process (s1 and s2) are the same as the slopes from the other process (s3 and s4), i.e., s1-s3=0 and s2-s4=0. What syntax would I add to carry out these tests? 


You can do this using difference testing of nested models, where one model allows the parameters to be free and the other constrains them to be equal, as described in Chapter 13 of the user's guide; or you can use MODEL TEST. See the user's guide for more information. 

Tim Stump posted on Wednesday, November 04, 2009 - 11:10 am



Linda, thanks for your reply to my post on November 3. I have a follow-up question. How do I use the parameter labels (e.g., p1, p2, etc., from Chapter 16 of the manual) and the MODEL TEST statement when specifying the growth model with the "|" symbol? 


When a bar symbol is involved, it is the means and variances of the random effects that are the parameters in the model. So you would label those parameters. If I am not understanding the question, you need to send the full output and your license number to support@statmodel.com so I can see the exact situation. 
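Putting those two answers together for the piecewise example above, a sketch might look like this (the labels p1-p4 are arbitrary names chosen here):

```
MODEL:
  i1 s1 | ueia0@0 ueia2@2 ueia4@2 ueia8@2 ueia12@2;
  i1 s2 | ueia0@0 ueia2@0 ueia4@2 ueia8@6 ueia12@10;
  i2 s3 | seia0@0 seia2@2 seia4@2 seia8@2 seia12@2;
  i2 s4 | seia0@0 seia2@0 seia4@2 seia8@6 seia12@10;
  [s1] (p1);   ! label the slope means created by the | statements
  [s2] (p2);
  [s3] (p3);
  [s4] (p4);
MODEL TEST:
  ! joint Wald test of s1-s3=0 and s2-s4=0
  0 = p1 - p3;
  0 = p2 - p4;
```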


Hi, how can I test the assumptions of a parallel process model? Thanks Wayne 


Are you asking how to test the fit of a parallel process model? If so, fit is assessed as for any model. 


Sorry, I was thinking about a statistic for multivariate normality of the outcomes, random effects, and residuals. Also, I would like to test for independence between the random effects and residuals, as well as independence of the residuals. Please correct me if I am wrong, but are these not the assumptions for growth curve models? 


We don't have any specific tests for normality of the outcomes. The MLR estimator is robust to non-normality of the outcomes. I am not aware of tests of multivariate normality of random effects and residuals. Regarding independence, perhaps you are referring to Hausman-type tests of uncorrelatedness between residuals and exogenous variables. I am not aware of such tests when the exogenous variable is a random effect. Uncorrelatedness of residuals can be explored using WITH statements when those parameters are identified. 


Dear Linda, I have a related question about your post on Tuesday, August 19, 2008 - 3:13 pm. It is still not clear to me how to add, or whether there is a need to add, a quadratic term to a parallel process LGM for a mediation test. Without the quadratic term my model is:
ib sb | bmi0@0 bmi1@1 bmi2@1.5 bmi3@2.5;
is ss | ssb0@0 ssb1@1 ssb2@1.5 ssb3@2.5;
sb ON is ss;
ss ON ib;
ib sb ON group;
is ss ON group;
When I add a quadratic term to one of these individual growth models (for example, to the BMI model), should I regress the quadratic growth factor on the linear growth factor of ssb to find a mediating effect? Like this:
ib sb qb | bmi0@0 bmi1@1 bmi2@1.5 bmi3@2.5;
is ss | ssb0@0 ssb1@1 ssb2@1.5 ssb3@2.5;
qb ON is ss;
ss ON ib;
ib sb qb ON group;
is ss ON group;
Thanks. 


You could do this. 


Thank you for your answer. I have a follow-up question on the same topic. I have quadratic growth both for the mediator and for the outcome variable, and I am having difficulty calculating the mediated effect in this situation. Since I have quadratic growth in both variables, should I still use the classic method, as in:
The action theory test (a-path): Group -> Slope 1
The conceptual theory test (b-path): Slope 1 -> Slope 2
OR should I include the quadratic slope in the calculation of the mediated effect? For instance:
a-path: Group -> Slope 1
b-path: Slope 1 -> Quadratic 2
OR something like a combination of the linear and quadratic slope factors:
a-path: Group -> Slope 1 + Group -> Slope 2
b-path: Slope 1 -> Slope 2 + Quadratic 1 -> Quadratic 2
OR something else? I'd really appreciate any help. Thanks beforehand. 


Dear Linda and Bengt Muthen, I would appreciate it a lot if you could give your opinion and guidance on the question I asked in my previous message on April 7. Thank you in advance. Regards 


I don't think there is any consensus on how to approach mediation in a parallel growth setting. And with a quadratic function you have the added difficulty of not being able to separate the effects of linear and quadratic growth. One possibility is to focus on mediation at a specific time point and consider only mediation via the intercept growth factors defined at that time point, that is, setting the time score to zero at that time point for both processes. 
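As an illustration of that suggestion with the time scores used earlier in this thread (bmi0@0 bmi1@1 bmi2@1.5 bmi3@2.5), centering both processes at the third occasion just subtracts 1.5 from every time score, so that the intercept growth factors are defined at that occasion:

```
! Sketch: intercept growth factors ib and is now represent
! status at the third time point for both processes
ib sb | bmi0@-1.5 bmi1@-0.5 bmi2@0 bmi3@1;
is ss | ssb0@-1.5 ssb1@-0.5 ssb2@0 ssb3@1;
```

Mediation at that occasion would then be examined via the intercept factors rather than the slope factors.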


Dear Linda and Bengt Muthen, I have a question about doing a parallel process growth model. I see in Topic 3 (slide 181) that the slope of one of the growth factors is restricted to the first two time points, with the last two time points free. However, the manual shows both processes estimated similarly (for example, i1 s1 | y11@0 y12@1 y13@2 y14@3). Does this difference have anything to do with the inclusion of covariates? I am running a model where I've run the growth models separately, then together, then added the covariates, and finally included an interaction term between i2 and one of the covariates (following slide 165). I would like to make sure my model is identified properly. Thanks very much, as always! Luna 


No, this has nothing to do with the inclusion of covariates. These are two different growth models. The first has two free time scores. The second is a linear growth model. 


Thanks, Linda, for your quick response. I was wondering about the example models for parallel process latent growth models in Topic 3 and in the guide, since one shows covariates and the other does not. I would like to test whether i1 predicts s2 and i2 predicts s1, controlling for possible covariates. Here is the model statement I have:
i1 s1 | x11@0 x12@1 x13@2;
i2 s2 | x21@0 x22@1 x23@2;
i1-s2 ON gender ses;
s2 ON i1;
s1 ON i2;
Does that seem a correct merging of Topic 3 and the guide? Is it correct not to have the intercepts correlate, or is that dependent on theory? 


I would think the intercept growth factors would typically be correlated but your theory would be the last word. 

xiaoyu bi posted on Thursday, December 26, 2013 - 3:46 pm



Hi, Linda, I fitted a parallel process LGM for arguments and somatic symptoms. Both slopes are negative, indicating that both arguments and somatic symptoms decrease over time. The intercept of arguments is significantly and negatively related to the slope of somatic symptoms. Does that mean people with a higher initial level of arguments have a greater decrease in somatic symptoms over time? Thank you! 


Yes. 


Hi Drs. Muthen, I modeled joint book reading and receptive vocabulary using the parallel process model. The intercepts of the slopes of joint book reading and receptive vocabulary are negative. The slopes of joint book reading and receptive vocabulary are significantly and negatively related. How should I interpret the significant association between the slopes? 


Please send the output and your license number to support@statmodel.com. 


Hi Dr. Muthen, I am sorry that I don't have the license number, as I am using the university license. But I do have another question. Are there ways to test the moderating effect on the association between the two slopes? For instance, child gender moderates the association between the two slopes. Thanks. 


You can do that in a multiple-group analysis based on child gender. Then you can test the equality of that association. 
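A sketch of that multiple-group setup might look like the following (the gender coding, group labels, and variable names here are hypothetical, since the poster's actual names are not shown):

```
VARIABLE: GROUPING = gender (1 = boy 2 = girl);
MODEL:
  ibook sbook | book1@0 book2@1 book3@2;   ! hypothetical names
  ivocab svocab | voc1@0 voc2@1 voc3@2;
  sbook WITH svocab (c1);
MODEL girl:
  sbook WITH svocab (c2);
MODEL TEST:
  0 = c1 - c2;    ! Wald test of equal slope associations
```

A significant MODEL TEST result would indicate that the slope association differs by gender, i.e., moderation.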

anonymous Z posted on Thursday, February 26, 2015 - 6:21 pm



I fitted a latent curve model with parallel processes for mothers’ and children’s self-esteem. I have a treatment and a control condition. In the treatment condition, the slopes of mothers’ and children’s self-esteem were significantly correlated; in contrast, in the control condition, they were not. What does this result mean? Usually Mplus can do multiple-group comparison of the means of the intercept and slope. Does it make sense to do a comparison on the variance? 


Yes, any parameter can be affected by treatment, so this could be an interesting finding, as long as you can interpret what it means substantively. 

anonymous Z posted on Friday, February 27, 2015 - 5:04 pm



Dr. Muthen, Thank you very much for your response. My follow-up question is whether Mplus can do a multiple-group comparison on a variance. I know it can do multiple-group comparisons on the means of the intercept and slope. Does a comparison on the variance make sense? Another question: in the parallel process model, is "S1 WITH S2" a residual covariance or the covariance between S1 and S2? Thanks, 


Q1. Yes. Q2. That depends on whether S1 and S2 are regressed on something (i.e., whether they are on the left-hand side of ON). If yes, then it is a residual covariance; if no, then it is a covariance. It is not a correlation unless you consider the StdYX output. 

anonymous Z posted on Friday, February 27, 2015 - 5:59 pm



How do I do a multiple-group comparison on a variance, then? I have no idea how to constrain the variances to be equal. What should the syntax look like? Thank you very much. 


Variance equality is specified like other equalities, using, say, y (1); in each group. This assumes that y is not regressed on anything, because if it is, then y (1); refers to the residual variance. 
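For example, to hold the variances of the two slope factors equal across the two groups, a sketch (using the S1/S2 naming from earlier in this exchange, and a hypothetical group label) might be:

```
MODEL:
  S1 (1);       ! same numeric label in both groups
  S2 (2);       ! equates each variance across groups
MODEL girl:
  S1 (1);
  S2 (2);
```

Fitting this constrained model and comparing it to the unconstrained model with a chi-square difference test (or using MODEL TEST with alphanumeric labels instead) tests the equality of the variances.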
