I ran a two-group LGM with time-invariant covariates. The model did not fit the data well. I ran the MODINDICES procedure, and freeing the slope and intercept factor means in the comparison group was suggested. This improved the model fit substantially. However, some of the effects that were significant before are no longer significant. My question is: which analysis do I use when reporting my findings, the analysis with the slope and intercept means of the comparison group set equal to zero, or the analysis where I freed them for better fit?
bmuthen posted on Tuesday, March 23, 2004 - 7:13 am
If you do a 2-group growth model, your baseline model should be one where the growth factor means (or intercepts) are allowed to be different across groups. If that fits well, that would be the one to report.
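Such a baseline model might be sketched as follows (a minimal example with hypothetical names: four repeated measures y1-y4 and a grouping variable g; the group-specific MODEL command frees the growth factor means in the second group, which is harmless if they are already free by default):

```
VARIABLE:  NAMES = g y1-y4;
           GROUPING = g (1 = ctrl  2 = tx);
MODEL:     i s | y1@0 y2@1 y3@2 y4@3;
MODEL tx:  [i s];     ! growth factor means free in the second group
```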
I have a dataset where we've measured physical aggression at 3 time points. I know ideally we should have 4 but these are the limitations of my data. We want to estimate the impact of treatment by gender on aggression. In addition, I have 2 continuous observed co-variates and 1 latent variable for language ability (2 indicators). I've been trying to combine the syntax for the two part model presented at the workshop along with the syntax from your 2002 paper on intervention effects but have not met with any success. Is there an example on the web that can help me with this?
Just to make sure, I assume you are thinking about growth mixture modeling with latent trajectory classes with the added twist of two-part modeling.
I don't think we have an example of that combination, but although complex it should not be problematic. We are polishing a paper on the steps one wants to take to do two-part factor mixture modeling and what you have in mind should go through similar steps. I can send this rough draft to you if you want.
If you continue to have problems with this setup, send your input, output, data, and license number to email@example.com.
anonymous posted on Sunday, July 20, 2008 - 4:46 pm
Hello, I'm attempting to fit a multiple-group LGM with 2 groups. In one group, the data exhibits a significant linear slope with significant linear variance. In the other, the data exhibits a significant quadratic slope with significant quadratic variance. I'm wondering how I specify this in the multiple-group framework. I attempted to do so by constraining the quadratic intercept and variance to 0 in the first group in order to "omit" that factor...that seems to work. However, when testing remaining differences across groups I'm having some trouble. When I constrain the models to be equal (but leave the quadratic intercept and variance free to be estimated in the second group), the quadratic intercept and variance are no longer significant in the second group and the model is non-positive definite (NPD). Any suggestions?
It sounds like you are holding the linear part of the quadratic model in one group equal to the linear model in the other group. But the linear growth factor has a different meaning in the quadratic group than in the linear group. I think only the intercept growth factors are comparable in this case.
anonymous posted on Monday, July 21, 2008 - 7:21 am
Hello, Half-way there - thank you! I tried your suggestion and allowed the linear intercept and variance to be free across groups. In the quadratic group, the quadratic variance is now significant, however the intercept is still not significant. Should I also allow all or some of the covariances among growth factors to be free across groups? Also, is there a reference I can use to justify this approach? Thanks!
All growth factor parameters should be different across groups (including covariances) - except for the mean and variance of the intercept factor (and its regressions on covariates) which can be compared across groups, i.e. held equal or non-equal in order to test invariance.
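A sketch of this kind of specification (hypothetical names: measures y1-y4, group labels glin/gquad; labels in the overall MODEL hold a parameter equal across groups, and the quadratic factor is "removed" in the linear group by fixing its mean, variance, and covariances at zero, as the poster described):

```
MODEL:      i s q | y1@0 y2@1 y3@2 y4@3;
            [i] (1);        ! intercept mean held equal across groups
            i (2);          ! intercept variance held equal across groups
MODEL glin:                 ! linear-only group: remove the quadratic factor
            q@0;            ! quadratic variance fixed at 0
            [q@0];          ! quadratic mean fixed at 0
            q WITH i@0 s@0; ! no covariances with the quadratic factor
```

All other growth factor parameters are left at their group-specific defaults, i.e. free across groups.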
anonymous posted on Wednesday, July 23, 2008 - 7:29 am
Thank you - this is very helpful. So, given your recommendation, I'm assuming that we also should not compare regressions of the linear factor on covariates - yes? May I cite this as personal communication?
Although we appreciate the credit-giving gesture, we prefer no personal-communication references, since we may not always be aware of the full picture (and don't have time to sink into that); we prefer that the author instead present the argument for doing something in a certain way.
Hello, I applied two-group LGM to test the intervention effect on the outcomes (as explained in the article by Muthen, 1997). My questions are: 1) I want to check a possible moderating effect of gender. I created an interaction term, but I think I shouldn't test it in the two-group LGM. That's why I checked it beforehand with a conditional LGM (without multiple groups). Then, if there was a significant moderating effect of gender, I made two datasets for the gender subgroups and repeated the two-group LGM to explore intervention effects in these subgroups. Is that a good way? 2) When gender is not a moderator, I want to add it as a covariate to my models as 'i s ON gender'. Should I equate it across groups as 'i s ON gender (10)' or should I release it? The model fits better when I release it, but the theory of multigroup analysis is to equate all effects across groups except the intervention effect. In this respect, do you advise equating the covariate effects across groups or not?
1) A separate 2-group (Tx-Ctrl) analysis for each gender is useful. But you may also want to test if there are gender differences which means that you have a 4-group analysis.
2) Equating or not across Tx-Ctrl groups depends on your setting. If you have randomization and "i" is the pre-intervention growth factor then you should have equality. And if "s" refers to post-intervention growth you should have equality only if you don't believe there is gender moderation.
Vanessa posted on Wednesday, September 21, 2011 - 10:13 pm
I have a similar situation as the above (intervention analyses following multi-group Muthen 1997 approach).
Is testing whether, e.g., gender had an impact on the Tx slope (in the Tx group), by regressing the Tx slope on gender, a valid way of examining this issue?
How does it differ from testing the impact of gender using a separate 2-group (Tx-Ctrl) analysis for each gender?
Finally, how does testing a separate 2-group (Tx-Ctrl) analysis for each gender differ from your suggested 4-group analysis?
I actually have two potential moderators, which may also feasibly interact (gender and dichotomous genetic factor); what would be the best way of examining the impact of these two factors in the Muthen 1997 framework?
The Muthen-Curran (1997) Psych Methods paper argues that their approach is more powerful in detecting intervention effects than regular 2-group analysis.
You can do a 4-group run using the M-C approach.
I would use Define to create interaction variables to capture the moderator effects.
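A minimal DEFINE sketch for the two dichotomous moderators mentioned above (hypothetical variable names gender, gene, and repeated measures y1-y4; note that variables created in DEFINE must be listed at the end of USEVARIABLES):

```
VARIABLE: NAMES = gender gene y1-y4;
          USEVARIABLES = y1-y4 gender gene genxgene;
DEFINE:   genxgene = gender*gene;   ! product of the two dichotomous moderators
MODEL:    i s | y1@0 y2@1 y3@2 y4@3;
          i s ON gender gene genxgene;
```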
Vanessa posted on Thursday, September 22, 2011 - 5:16 pm
Just to clarify my last question and your answer: using the M-C 1997 approach, do you mean the best way of examining the impact of two dichotomous moderator variables on an intervention, would be to create interaction terms between potential moderators, and then regress the Tx slope (in the Tx group only) on the moderator variables and their interaction variable?
If one only had one dichotomous moderator of interest, would the best way be to do a 4-group run, or simply regress the Tx slope on the moderator variable?
I guess the question is how the moderators affect model parameters, including those in the control group - and if the group samples are large enough if you take a multiple-group approach. If you bring in moderators as interaction covariates, you need to have the same covariates for both the controls and treatment group. I would think at least gender plays a role also for controls.
Vanessa posted on Sunday, January 15, 2012 - 9:39 pm
Hello. This question follows on from the last two posts here and centres around how covariates should be treated in multi-group intervention analyses (following the M-C 1997 approach);
The last post seems to suggest that if, e.g., we consider gender to influence the Tx slope in the Tx group, then gender should be a covariate in the control group also...
Because there's no Tx slope in the control group, are you suggesting that if we bring in a covariate influencing the Tx slope, then this covariate should also be allowed to influence the normative slope parameters in both the Tx and Control groups?
I have several covariates that the Tx slope in the Tx group is regressed on; allowing the normative slope parameters to be regressed on the plausible covariates (and held equal in both Tx and control groups) seems to improve the overall fit of my model, and one of the paths is significant. I am trying to keep the models simple and focus only on treatment effects: is there a reason why covariate effects that are not directly impacting the Tx slope should still be included in the model (whether sig. or non-sig.)?
E.g., I ON gender is sig.; when it is not included in both groups, Tx slope ON I is barely sig.; when I ON gender is included in both groups, Tx slope ON I is v. sig.
Is the meaning of Tx slope ON I changed by regressing I on gender??
It is hard to make general recommendations, but I would tend to want a well-specified model for the normative growth, so answering your questions:
Q2. Less clearcut, but including covariates that are significant for i and s may increase the power with which to detect tx effects.
Q3. You should probably have
txslope on i gender;
in which case txslope on i is a partial regression coefficient, holding gender constant.
Vanessa posted on Monday, January 16, 2012 - 2:25 pm
Thanks for your response.
Q1 and 2 responses seem to indicate that the significant covariate effects on the normative parameters should be included;
Should covariate effects on normative parameters be included if, when left free to vary between groups, they are significant only in one group (the Tx group; randomisation is employed), yet remain significant when constrained to be equal?
Re: Q3, sorry, I didn't make clear but the model already included txslope on gender as well as i (and other covariates, sig or not).
Txslope on gender is not significant whether i on gender is included or not. Whereas including (or not) i on gender, changes the sig level of txslope on i.
So if all sig. covariate effects on normative parameters are included, and thus i on gender is included, how does one then interpret a significant txslope on i? It seems that i is no longer the initial status but rather something related only to gender - or is it that in Mplus language it is now the residual of i that txslope is being regressed on? [with txslope on i and i on gender]
Thanks in advance
Vanessa posted on Monday, January 16, 2012 - 2:34 pm
Regarding your response to Q1 and Q2, would you still recommend this if it is a randomised (at baseline) trial, when in theory, everything should be equal between groups?
Re the q. in your last message, including covariates is worthwhile in a randomized study as well because it increases the power with which you can detect tx effects.
Re the last q. in your first message, i is still the initial status - it is just that we let its mean differ across genders.
Re the first q. in your first message, I think normative parameters should be held equal across groups due to randomization.
I think we need to end this thread here, so that we don't slip into a consulting situation which goes beyond the purposes of Mplus Discussion and Support.
pauline posted on Thursday, June 14, 2012 - 7:18 pm
I am doing a MG LGM with 3 time invariant covariates. Can one test invariance of regression coefficients for s and q separately?
E.g., s on covariate1 (1); then s on covariate2 (1); then s on covariate3 (1); then q on covariate1 (1); etc.
These result in a nonsig chi2 diff. But when I build a model with all regression coefficients held equal, the chi square diff is now significant. But I can use different combinations to achieve some invariance eg.,
s on covariate1 (1); s on covariate2 (2); q on covariate1 (3); q on covariate2 (4);
s on covariate2 (1); s on covariate3 (2); q on covariate2 (3); q on covariate3 (4);
s on covariate1 (1); s on covariate3 (2); q on covariate1 (3); q on covariate3 (4);
and each results in a nonsig chi2 diff and lowered AIC/BIC. How do I choose the most appropriate model? There is no theory to suggest one over the other. All 3 combinations lead to plausible and slightly different interpretations. Would it make sense to break it down further if it results in more invariance?
e.g., s on covariate1 (1); s on covariate3 (2); q on covariate1 (3); q on covariate2 (4); q on covariate3 (5);
I don't think it is a good idea to test whether s on x has the same slope as q on x, because s and q are different things and have different scales. Also, you should not test equality of the slopes of s on x1 and s on x2 unless x1 and x2 are measured on the same scale.
pauline posted on Friday, June 15, 2012 - 12:50 pm
Thank you for your response. I think I may have been confused. I am not trying to test equalities of s on x and q on x, nor equalities of s on x1 and s on x2. The first part was meant to provide an example of separate MG LGMs, one run for each coefficient held equal across groups (not to each other) - e.g., running one MG LGM with only s on x1 held equal, then running another MG LGM with only s on x2 held equal, etc. This gives me invariance for each equality statement when analyzed in its own separate MG LGM. Although each equality statement works well when tested separately, it does not work when they are tested together in combination, as in
s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6);
the coefficients are held equal across groups, but not to each other.
I hope this clarifies what I'm trying to do. Thank you very much for your help.
It is still not clear to me. Please briefly describe what you want to happen (not what doesn't work).
pauline posted on Saturday, June 16, 2012 - 10:21 pm
I want to test for equality of path coefficients for the covariates in a MG LGM. I want to know whether the influence of the covariates on development is the same for both groups. I have 3 covariates, and the growth form is quadratic. I tried holding all the path coefficients equal across groups but that does not work. Many thanks.
This needs to be done in two steps followed by a chi-square difference test.
In the first step the coefficients are free across groups:
s on covariate1; s on covariate2 ; s on covariate3 ; q on covariate1 ; q on covariate2 ; q on covariate3 ;
In the second step they are constrained to be equal across groups:
s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6);
pauline posted on Tuesday, June 19, 2012 - 6:47 pm
Thanks Linda. I tried as you have suggested: s on covariate1 (1); s on covariate2 (2); s on covariate3 (3); q on covariate1 (4); q on covariate2 (5); q on covariate3 (6);
but I get a sig chi2 difference indicating inequality. Should I then proceed to constrain parameters individually in a stepwise fashion to determine if any of the coefficients can be constrained to be equal?
You can look at modification indices to see where the differences are largest and test those using either difference testing or MODEL TEST.
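For a single coefficient, the MODEL TEST route might look like this (hypothetical group labels g1/g2 and covariate x1; the coefficient is left free in each group but labeled, and MODEL TEST gives a Wald test of the equality):

```
MODEL g1:    s ON x1 (p1);
MODEL g2:    s ON x1 (p2);
MODEL TEST:  0 = p1 - p2;    ! Wald test of equality of the two slopes
```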
pauline posted on Thursday, June 21, 2012 - 1:40 am
Hi Linda Yes this makes sense.
Thank you for your patience. I have 2 more questions.
1. My unconditional MG LGM shows invariance in factor means, variances, covariances and residual variances. So I am now testing the conditional model. In testing the equality of regression coefficients for covariates, should the factor intercepts, residual variances, and residual covariances also be held equal across groups (if testing of the unconditional model is found to be invariant across groups)?
2. How would one interpret a significant chi2 difference when covariates are added to a model which was otherwise invariant when unconditional?
Factor means, variances, and covariances are not measurement parameters. They are structural parameters. You might want to watch the Topic 1 course video that covers these topics. Only measurement parameters should be held equal across groups.
Carlijn C posted on Monday, August 25, 2014 - 10:22 am
I've read this topic, but unfortunately it is still not clear to me. When testing for differences between two groups, I held all parameters equal, except the one that I want to test. For example, intercept mean and variance and slope variance were held equal, only the slope mean was freely estimated. (I used the Wald test to test if the slope means were significantly different.) Should the covariance between the slope and intercept also be held equal?
Another question: I want to control for a covariate. When I test for differences between two groups (same situation as above), should the regression on the covariate held equal between the two groups (so i on covariate (1), s on covariate (2), in both groups)?
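For reference, a minimal sketch of the kind of setup described in the question above (hypothetical names: measures y1-y4, covariate x, group labels g1/g2; labels in the overall MODEL hold parameters equal across groups, while the slope means are labeled separately per group and compared with a Wald test):

```
MODEL:      i s | y1@0 y2@1 y3@2 y4@3;
            i ON x (1);          ! covariate effect on i equated across groups
            s ON x (2);          ! covariate effect on s equated across groups
            [i] (3);             ! intercept mean equated
            i (4);               ! intercept variance equated
            s (5);               ! slope variance equated
            i WITH s (6);        ! intercept-slope covariance equated
MODEL g1:   [s] (m1);            ! slope mean free in each group
MODEL g2:   [s] (m2);
MODEL TEST: 0 = m1 - m2;         ! Wald test of the slope-mean difference
```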