
Daniel posted on Monday, March 22, 2004  9:11 am



I ran a two-group LGM with time-invariant covariates. The model did not fit the data well. I ran the MODINDICES procedure, and freeing the slope and intercept factor means in the comparison group was suggested. This improved the model fit substantially. However, some of the effects that were significant before are no longer significant. My question is, which analysis do I use when reporting my findings: the analysis with the slope and intercept means of the comparison group set equal to zero, or the analysis where I freed them for better fit? 

bmuthen posted on Tuesday, March 23, 2004  7:13 am



If you do a 2-group growth model, your baseline model should be one where the growth factor means (or intercepts) are allowed to be different across groups. If that fits well, that would be the one to report. 
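A minimal sketch of such a baseline in Mplus syntax (the variable names y1-y4, grouping variable g, and group labels are placeholders, not from the thread):

```text
VARIABLE:   NAMES = y1-y4 g;
            GROUPING = g (1 = tx 2 = ctrl);
MODEL:      i s | y1@0 y2@1 y3@2 y4@3;
MODEL ctrl: [i s];   ! mentioning the growth factor means in a group-specific
                     ! MODEL command relaxes any default equality, letting
                     ! them be estimated freely in this group
```

If this baseline fits, group differences in individual parameters can then be tested by adding equality constraints one at a time.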


I have a dataset where we've measured physical aggression at 3 time points. I know ideally we should have 4, but these are the limitations of my data. We want to estimate the impact of treatment by gender on aggression. In addition, I have 2 continuous observed covariates and 1 latent variable for language ability (2 indicators). I've been trying to combine the syntax for the two-part model presented at the workshop with the syntax from your 2002 paper on intervention effects but have not met with any success. Is there an example on the web that can help me with this? Thanks so much! 


Just to make sure, I assume you are thinking about growth mixture modeling with latent trajectory classes with the added twist of two-part modeling. I don't think we have an example of that combination, but although complex it should not be problematic. We are polishing a paper on the steps one wants to take to do two-part factor mixture modeling, and what you have in mind should go through similar steps. I can send this rough draft to you if you want. If you continue to have problems with this setup, send your input, output, data, and license number to support@statmodel.com. 


If you don't mind sending the draft that would be great as I'm trying to get this paper done by the middle of April. Thanks so much! Kim 


Send me your email address. 

anonymous posted on Sunday, July 20, 2008  4:46 pm



Hello, I'm attempting to fit a multiple-group LGM with 2 groups. In one group, the data exhibit a significant linear slope with significant linear variance. In the other, the data exhibit a significant quadratic slope with significant quadratic variance. I'm wondering how I specify this in the multiple-group framework. I attempted to do so by constraining the quadratic intercept and variance to 0 in the first group in order to "omit" that factor... that seems to work. However, when testing remaining differences across groups I'm having some trouble. When I constrain the models to be equal (but leave the quadratic intercept and variance free to be estimated in the second group), the quadratic intercept and variance are no longer significant in the second group and the model is NPD (non-positive definite). Any suggestions? 


It sounds like you are holding the linear part of the quadratic model in one group equal to the linear model in the other group. But the linear part in the quadratic group has a different meaning than the linear slope in the linear group. I think only the intercept growth factors are comparable in this case. 
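For reference, the "omit the quadratic factor in one group" setup described in the question might look like this in Mplus (group labels lin/quad and variable names are hypothetical):

```text
MODEL:     i s q | y1@0 y2@1 y3@2 y4@3;
MODEL lin: [q@0];            ! quadratic mean fixed at zero in the linear group
           q@0;              ! quadratic variance fixed at zero
           q WITH i@0 s@0;   ! no covariances with the suppressed factor
```

Only the intercept-factor parameters would then be candidates for cross-group equality constraints.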

anonymous posted on Monday, July 21, 2008  7:21 am



Hello, Halfway there, thank you! I tried your suggestion and allowed the linear intercept and variance to be free across groups. In the quadratic group, the quadratic variance is now significant; however, the intercept is still not significant. Should I also allow all or some of the covariances among growth factors to be free across groups? Also, is there a reference I can use to justify this approach? Thanks! 


All growth factor parameters should be different across groups (including covariances), except for the mean and variance of the intercept factor (and its regressions on covariates), which can be compared across groups, i.e., held equal or non-equal in order to test invariance. 

anonymous posted on Wednesday, July 23, 2008  7:29 am



Thank you, this is very helpful. So, given your recommendation, I'm assuming that we also should not compare regressions of the linear factor on covariates, yes? May I cite this as personal communication? 


That's right. Although we appreciate the credit-giving gesture, we prefer no personal-communication references, since we may not always be aware of the full picture (and don't have time to sink into that); we prefer that the author instead present the argument for doing something in a certain way. 


Hello, I applied two-group LGM to test the intervention effect on the outcomes (as explained in the article by Muthén, 1997). My questions are: 1) I want to check a possible moderating effect of gender. I created an interaction term, but I think I shouldn't test it in the two-group LGM. That's why I checked it beforehand with a conditional LGM (without multiple grouping). Then, if there was a significant moderating effect of gender, I made two datasets for the gender subgroups and repeated the two-group LGM to explore intervention effects in these subgroups. Is that a good way? 2) When gender is not a moderator, I want to add it as a covariate to my models as 'i s ON gender'. Should I equate it across groups as 'i s ON gender (10)' or should I release it? The model fits better when I release it, but the theory of multigroup analysis is to equate all the effects across groups except the intervention effect. In this respect, do you advise equating the covariate effects across groups or not? Thanks beforehand. Regards. 


1) A separate 2-group (Tx/Ctrl) analysis for each gender is useful. But you may also want to test if there are gender differences, which means that you have a 4-group analysis. 2) Equating or not across Tx/Ctrl groups depends on your setting. If you have randomization and "i" is the pre-intervention growth factor, then you should have equality. And if "s" refers to post-intervention growth, you should have equality only if you don't believe there is gender moderation. 
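The 4-group (gender x Tx/Ctrl) analysis mentioned in point 1 could be sketched as follows, assuming the data contain a four-category variable coded for the gender-by-condition combinations (all names here are placeholders):

```text
VARIABLE:  NAMES = y1-y4 grp;
           GROUPING = grp (1 = male_tx   2 = male_ctrl
                           3 = female_tx 4 = female_ctrl);
MODEL:     i s | y1@0 y2@1 y3@2 y4@3;
```

Gender differences in the intervention effect can then be tested by comparing, e.g., the male_tx and female_tx growth parameters using equality constraints or MODEL TEST.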

Vanessa posted on Wednesday, September 21, 2011  10:13 pm



Hello, I have a similar situation to the above (intervention analyses following the multigroup Muthén 1997 approach). Is testing whether, e.g., gender had an impact on the Tx slope (in the Tx group), by regressing the Tx slope on gender, a valid way of examining this issue? How does it differ from testing the impact of gender using a separate 2-group (Tx/Ctrl) analysis for each gender? Finally, how does testing a separate 2-group (Tx/Ctrl) analysis for each gender differ from your suggested 4-group analysis? I actually have two potential moderators, which may also feasibly interact (gender and a dichotomous genetic factor); what would be the best way of examining the impact of these two factors in the Muthén 1997 framework? Many thanks in advance, Vanessa 


The Muthén-Curran (1997) Psych Methods paper argues that their approach is more powerful in detecting intervention effects than regular 2-group analysis. You can do a 4-group run using the MC approach. I would use DEFINE to create interaction variables to capture the moderator effects. 
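A hedged sketch of the DEFINE approach mentioned here (gender, mod, and gxm are hypothetical variable names; note that a variable created in DEFINE must be added at the end of the USEVARIABLES list):

```text
VARIABLE: NAMES = y1-y4 gender mod;
          USEVARIABLES = y1-y4 gender mod gxm;
DEFINE:   gxm = gender*mod;        ! interaction of the two moderators
MODEL:    i s | y1@0 y2@1 y3@2 y4@3;
          i s ON gender mod gxm;   ! moderator main effects and interaction
```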

Vanessa posted on Thursday, September 22, 2011  5:16 pm



Thank you. Just to clarify my last question and your answer: using the MC 1997 approach, do you mean the best way of examining the impact of two dichotomous moderator variables on an intervention would be to create interaction terms between the potential moderators, and then regress the Tx slope (in the Tx group only) on the moderator variables and their interaction variable? If one only had one dichotomous moderator of interest, would the best way be to do a 4-group run, or simply to regress the Tx slope on the moderator variable? Thanks again 


I guess the question is how the moderators affect model parameters, including those in the control group, and whether the group samples are large enough if you take a multiple-group approach. If you bring in moderators as interaction covariates, you need to have the same covariates for both the control and treatment groups. I would think at least gender plays a role also for controls. 

Vanessa posted on Sunday, January 15, 2012  9:39 pm



Hello. This question follows on from the last two posts here and centres around how covariates should be treated in multigroup intervention analyses (following the MC 1997 approach). The last post seems to suggest that if, e.g., we consider gender to influence the Tx slope in the Tx group, then gender should be a covariate in the control group also... Because there's no Tx slope in the control group, are you suggesting that if we bring in a covariate influencing the Tx slope, then this covariate should also be allowed to influence the normative slope parameters in both the Tx and control groups? I have several covariates that the Tx slope in the Tx group is regressed on; allowing the normative slope parameters to be regressed on the plausible covariates (and held equal in both Tx and control groups) seems to improve the overall fit in my model, and one of the paths is significant. I am trying to keep the models simple and focus only on treatment effects: Is there a reason why covariate effects that are not directly impacting the Tx slope should still be included in the model (whether sig. or nonsig.)? E.g., i ON gender is sig.; when it is not included in both groups, Tx slope ON i is barely sig.; when i ON gender is included in both groups, Tx slope ON i is very sig. Is the meaning of Tx slope ON i changed by regressing i on gender?? Thanks for your advice 


It is hard to make general recommendations, but I would tend to want a well-specified model for the normative growth. So, answering your questions: Q1. Yes. Q2. Less clear-cut, but including covariates that are significant for i and s may increase the power with which to detect tx effects. Q3. You should probably have 'txslope ON i gender;', in which case txslope on i is a partial regression coefficient, holding gender constant. 

Vanessa posted on Monday, January 16, 2012  2:25 pm



Thanks for your response. The Q1 and Q2 responses seem to indicate that the significant covariate effects on the normative parameters should be included. Should covariate effects on normative parameters be included if, when left free to vary between groups, they are significant only in one group (the Tx group) (randomisation is employed), yet are still significant when constrained to be equal? Re: Q3, sorry, I didn't make it clear, but the model already included txslope on gender as well as on i (and other covariates, sig. or not). Txslope on gender is not significant whether i on gender is included or not, whereas including (or not) i on gender changes the sig. level of txslope on i. So if all sig. covariate effects on normative parameters are included, and thus i on gender is included, how does one then interpret a significant txslope on i? It seems that i is no longer the initial status, but rather initial status as related to gender; or is it that in Mplus language it is now the residual of i that txslope is being regressed on? [with txslope on i and i on gender] Thanks in advance 

Vanessa posted on Monday, January 16, 2012  2:34 pm



Regarding your response to Q1 and Q2, would you still recommend this if it is a randomised (at baseline) trial, when in theory, everything should be equal between groups? 


Re the question in your last message: including covariates is worthwhile in a randomized study as well, because it increases the power with which you can detect tx effects. Re the last question in your first message: i is still the initial status; it is just that we let its mean differ across genders. Re the first question in your first message: I think normative parameters should be held equal across groups due to randomization. I think we need to end this thread here, so that we don't slip into a consulting situation, which goes beyond the purposes of Mplus Discussion and Support. 

pauline posted on Thursday, June 14, 2012  7:18 pm



I am doing a MG LGM with 3 time-invariant covariates. Can one test invariance of regression coefficients for s and q separately? E.g., s on covariate1 (1); then s on covariate2 (1); then s on covariate3 (1); then q on covariate1 (1); etc. These result in a non-sig. chi2 diff. But when I build a model with all regression coefficients held equal, the chi-square diff is now significant. But I can use different combinations to achieve some invariance, e.g., s on covariate1 (1); s on covariate2 (2); q on covariate1 (3); q on covariate2 (4); or s on covariate2 (1); s on covariate3 (2); q on covariate2 (3); q on covariate3 (4); or s on covariate1 (1); s on covariate3 (2); q on covariate1 (3); q on covariate3 (4); and each results in a non-sig. chi2 diff and lowered AIC/BIC. How do I choose the most appropriate model??? There is no theory to suggest one over the other. All 3 combinations lead to possible and slightly different interpretations. Would it make sense to break it down further if it results in more invariance?? E.g., s on covariate1 (1); s on covariate3 (2); q on covariate1 (3); q on covariate2 (4); q on covariate3 (5); Thank you. 


I don't think it is a good idea to test whether s on x has the same slope as q on x, because s and q are different things and have different scales. Also, you should not test equality of the slopes of s on x1 and s on x2 unless x1 and x2 are measured on the same scale. 

pauline posted on Friday, June 15, 2012  12:50 pm



Thank you for your response. I think I may have been confused. I am not trying to test equalities of s on x and q on x, nor equalities of s on x1 and s on x2. The first part was meant to provide an example of a separate MG LGM run for each coefficient held equal across groups (not to each other), e.g., running one MG LGM with s on x1... and then running another MG LGM, this time with only s on x2 held equal, etc. This gives me invariance for each equality statement when analyzed in its own separate MG LGM. Although each equality statement works well when tested separately, it does not work when they are tested together in combination, as in s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6); the coefficients are held equal across groups, but not to each other. I hope this clarifies what I'm trying to do. Thank you very much for your help. 


It is still not clear to me. Please briefly describe what you want to happen (not what doesn't work). 

pauline posted on Saturday, June 16, 2012  10:21 pm



Hi Bengt, I want to test for equality of path coefficients for the covariates in a MG LGM. I want to know whether the influence of the covariates on development is the same for both groups. I have 3 covariates, and the growth form is quadratic. I tried holding all the path coefficients equal across groups but that does not work. Many thanks. 


This needs to be done in two steps, followed by a chi-square difference test. In the first step, the coefficients are free across groups: s on covariate1; s on covariate2; s on covariate3; q on covariate1; q on covariate2; q on covariate3; In the second step, they are constrained to be equal across groups: s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6); 

pauline posted on Tuesday, June 19, 2012  6:47 pm



Thanks Linda. I tried as you suggested: s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6); but I get a sig. chi2 difference, indicating inequality. Should I then proceed to constrain parameters individually in a stepwise fashion to determine whether any of the coefficients can be constrained to be equal? 


You can look at modification indices to see where the differences are largest and test those using either difference testing or MODEL TEST. 
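The MODEL TEST alternative mentioned here can be sketched as follows: label the slope in each group and request a Wald test of equality (the group label g2 and the covariate name are placeholders):

```text
MODEL:      s ON covariate1 (p1);
MODEL g2:   s ON covariate1 (p2);
MODEL TEST: p1 = p2;   ! Wald chi-square test of equal slopes across groups
```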

pauline posted on Thursday, June 21, 2012  1:40 am



Hi Linda, Yes, this makes sense. Thank you for your patience. I have 2 more questions. 1. My unconditional MG LGM shows invariance in factor means, variances, covariances, and residual variances. So I am now testing the conditional model. In testing the equality of regression coefficients for covariates, should the factor intercepts, residual variances, and residual covariances also be held equal across groups (if the unconditional model is found to be invariant across groups)? 2. How would one interpret a sig. chi2 difference when covariates are added to a model which was otherwise invariant when unconditional? 


Factor means, variances, and covariances are not measurement parameters. They are structural parameters. You might want to watch the Topic 1 course video that covers these topics. Only measurement parameters should be held equal across groups. 

Carlijn C posted on Monday, August 25, 2014  10:22 am



Hello, I've read this topic, but unfortunately it is still not clear to me. When testing for differences between two groups, I held all parameters equal except the one that I want to test. For example, the intercept mean and variance and the slope variance were held equal; only the slope mean was freely estimated. (I used the Wald test to test if the slope means were significantly different.) Should the covariance between the slope and intercept also be held equal? Another question: I want to control for a covariate. When I test for differences between two groups (same situation as above), should the regression on the covariate be held equal between the two groups (so i on covariate (1), s on covariate (2), in both groups)? 

Carlijn C posted on Wednesday, August 27, 2014  3:15 am



I'm sorry to repost, but I've read this in this topic: 'Equating or not across Tx/Ctrl groups depends on your setting. If you have randomization and "i" is the pre-intervention growth factor, then you should have equality. And if "s" refers to post-intervention growth, you should have equality only if you don't believe there is gender moderation.' In my case (the situation in the above post), there is randomization and I don't expect moderation effects (but you never know, I think), so I guess I should hold the regression on the covariate equal between the two groups. However, I'm still not sure about the covariance between the slope and intercept (my first question). I would be very grateful if you can help me. 


You can go either way on that. In fact, with randomized studies where i is defined as pre-intervention status, I have regressed s on i, and that regression can be different in the treatment vs. control group because subjects at different i levels may benefit differently from the treatment (viewing s as influenced by treatment). For a related approach, you may want to take a look at the paper on our website: Muthén, B. & Curran, P. (1997). General longitudinal modeling of individual differences in experimental designs: A latent variable framework for analysis and power estimation. Psychological Methods, 2, 371-402. 

Carlijn C posted on Wednesday, August 27, 2014  12:30 pm



Thank you for your answer. I've read this paper and, if I understand it right, you suggest this syntax: Model: s on i; s with i@0; Model experimental: s on i (1); Model control: s on i (2); I was thinking, I'm not sure that there are no moderation effects. Is it allowed to hold the regression of i on the covariate equal in both groups (because of the randomization), but freely estimate the regression of s on the covariate? So: Model: i s on gender; Model experimental: i on gender (1); s on gender (2); Model control: i on gender (1); s on gender (3); Or does this not make sense? 


That's certainly one reasonable model; several could be explored. 
