Multiple group growth modeling
Message/Author
 Sarah Dauber posted on Monday, December 04, 2006 - 7:28 am
Hello,
I am trying to conduct a multigroup growth model to examine patterns of alcohol use over time in two groups (experimental and control). The output gives an overall chi-square and then the chi-square contributions for each group. How do I interpret the chi-square contributions from each group? Are they supposed to be relatively equal? What does it tell you if one group has a much larger contribution than the other?

Thank you,
Sarah Dauber
 Linda K. Muthen posted on Monday, December 04, 2006 - 9:26 am
The model may fit better for one group than the other. This is not the same as fitting the model for each group separately, however, because there are restrictions across the groups. I would suggest fitting the growth model separately in each group before doing a multiple group analysis to be certain that the same growth model fits well in each group.
 Sarah Dauber posted on Monday, December 04, 2006 - 9:47 am
Thanks for your response. So the model that makes the larger contribution to the chi-square has the poorer fit? Is that correct?

Thanks,
Sarah Dauber
 Linda K. Muthen posted on Monday, December 04, 2006 - 2:10 pm
The larger the chi-square, the worse the fit. But I would run the groups separately to assess model fit for each group.
 Sarah Dauber posted on Tuesday, December 05, 2006 - 11:53 am
Thanks for your help. One more question...is it possible to fit a quadratic curve in one group and a linear one in the other within a multi-group model? If so, how do I specify this in the input?

Thanks.
Sarah Dauber
 Linda K. Muthen posted on Tuesday, December 05, 2006 - 12:06 pm
You could do this but if the two groups don't have the same growth model, it does not make sense to compare growth factor means and variances across groups. It is like measurement invariance. If the factors aren't the same across groups, then comparing factor means is comparing apples to oranges.
 Michael Spaeth posted on Friday, May 08, 2009 - 8:34 am
I want to compare growth factor means between two subgroups via a multiple-group approach. However, my final model is a conditional one, and I wonder whether I should use the conditional or the unconditional model to test growth factor mean differences between subgroups. The equality tests actually differ depending on whether the model is unconditional or conditional. Maybe this is because growth factor means become intercepts in conditional models?
 Linda K. Muthen posted on Saturday, May 09, 2009 - 10:59 am
The equality tests differ because in the unconditional model, it is a test of means and in the conditional model, it is a test of intercepts. If your final model is conditional, it makes sense that you would want to compare intercepts and regression coefficients.
 Michael Spaeth posted on Monday, May 11, 2009 - 2:48 am
Thanks, but I'm not fully sure I got this. Ultimately, I would like to state something like this: "Controlling for covariates, the intercept growth factor mean (i.e., baseline level) and/or the slope growth factor mean (i.e., growth over time) is higher/lower (stronger/weaker) in subgroup A than in subgroup B."
To achieve this, would one compare the intercepts of the growth factors in the conditional model (as you said), or compare the growth factor means derived from MODEL CONSTRAINT (as is often recommended here in the forum)?
 Bengt O. Muthen posted on Monday, May 11, 2009 - 8:05 am
The question is whether, in your group comparison, you are interested in differences across groups in the means of the covariates. I assume not. The growth factor means are partly determined by the covariate means, in that growth factor means are produced by covariate means times covariate slopes plus intercepts. When comparing groups, I think it is more relevant to compare those slopes and intercepts. I would think you are more likely to have group invariance in slopes and intercepts than in covariate means. Considering the intercepts is what I would call having controlled for the covariates, that is, removing the effect of group differences in the covariate means.
 Michael Spaeth posted on Monday, May 11, 2009 - 11:55 am
Sounds very reasonable. Thank you for the clarification. However, I'm still interested in reporting subgroup-specific (low/high) growth factor means (and SEs) from the conditional multiple-group model. Unfortunately, I failed to replicate the subgroup-specific growth factor means reported in TECH4 with the following setup (example for the intercept growth factor "id").

model:
...

model low:
[id] (p1);
sex (p2);
id on sex (p3);
ta (p4);
id on ta (p5);

model high:
[id] (p6);
sex (p7);
id on sex (p8);
ta (p9);
id on ta (p10);

model constraint:
New (c d);
c = p1 + (p2 * p3) + (p4 * p5);
d = p6 + (p7 * p8) + (p9 * p10);

Although MODEL CONSTRAINT is often suggested for getting SEs for the growth factor means in conditional models, I found no syntax for it in this forum. So the syntax above was a guess...
 Linda K. Muthen posted on Tuesday, May 12, 2009 - 9:18 am
If you specify the means in MODEL CONSTRAINT, the standard errors are calculated by the program.

You have the means specified incorrectly. p2, p4, p7, and p9 refer to variances, not means. Also, you should not mention the variances or means of covariates in the MODEL command. You should obtain the means of sex and ta from descriptive statistics.

ymean = intercept + B*xmean

where the value of xmean is taken from descriptive statistics.
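Linda's formula can be checked outside Mplus with a small numeric sketch (all numbers below are hypothetical, not from this thread's data): the model-implied growth factor mean is the intercept plus each covariate's sample mean times its regression slope.

```python
# Hypothetical sketch of the model-implied growth factor mean in a
# conditional model: ymean = intercept + sum(B_k * xmean_k), where the
# covariate means xmean_k come from descriptive statistics, not from
# parameter labels placed in the MODEL command.

def implied_factor_mean(intercept, slopes, cov_means):
    """intercept + sum of slope * covariate mean over all covariates."""
    return intercept + sum(b * m for b, m in zip(slopes, cov_means))

# e.g. the intercept growth factor "id" regressed on sex and ta
# (made-up intercepts, slopes, and covariate means):
c = implied_factor_mean(1.20, [0.50, -0.30], [0.40, 1.10])  # group "low"
d = implied_factor_mean(1.20, [0.50, -0.30], [0.55, 0.90])  # group "high"
```

In the MODEL CONSTRAINT above, c and d would analogously be written as the labeled intercept plus labeled slope times covariate mean, with the covariate means entered as fixed numbers.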
 Michael Spaeth posted on Wednesday, May 13, 2009 - 10:44 am
Thanks a lot, that worked fine. Final question: how should one alter the above syntax to constrain the growth factor means to be equal across the two groups in the conditional model? I thought of setting the three terms of the growth factor equation equal across subgroups.

I tried:

model low:
[id] (p1);
id on ta (p2);
id on sex (p3);

model high:
[id] (p1);
id on ta (p2);
id on sex (p3);

model constraint:
New (c);
c = p1 + (p2 * amean) + (p3 * bmean);

where amean and bmean reflect the covariate means of the whole sample. The estimated growth factor means of the two subgroups (TECH4) differ slightly. However, this could also be a function of the covariates, and my syntax is okay?
 Linda K. Muthen posted on Thursday, May 14, 2009 - 9:42 am
You should define a mean for each group using the mean of a and b for each group. You can then use MODEL TEST to test the equality of the means.
 Michael Spaeth posted on Friday, May 15, 2009 - 4:52 am
Thanks, I overlooked this option in the handbook, sorry.
 Walter Sobchak posted on Friday, May 15, 2009 - 9:03 am
Hi!
I applied a multiple-group approach that tested the equality of covariances between growth factors (of parallel processes) using chi-square tests.
I know that this approach tests unstandardized parameters (i.e., covariances).
What confuses me: comparing the two groups (gender), I found that sometimes large differences in the correlations between growth factors did not become significant when the covariances were tested, while smaller differences in the correlations did become significant. How can this be explained?

Another question: I first analyzed the whole sample. There I found a significant correlation between an intercept and a slope, which did not become significant (but was equal) in either group of the multiple-group analysis. Additionally, the correlation in both groups is lower than in the overall analysis. How should that be communicated? It would be nice if I could state: we found a significant correlation between interceptA and slopeB, and this association was not moderated by gender. However, I've found a paper that reports only the correlations from the multiple-group analysis and no correlations for the whole sample. That would imply: we found no correlation between interceptA and slopeB and no moderation by gender.

All other significant overall correlations are at least significant in one group, so there is no conflict with the overall analysis.
How would you deal with that?
 Linda K. Muthen posted on Saturday, May 16, 2009 - 8:28 am
I'm unclear on how you are testing the differences in correlations.
 Walter Sobchak posted on Sunday, May 17, 2009 - 3:49 am
I'm sorry! To be clear, I did not test differences in correlations (is this possible?). But I wondered why seemingly small differences in correlations became significant and seemingly large differences became non-significant (both in tests of differences in covariances), and how to explain that.

Regarding question 2, I wondered how to report my findings. The covariance between intercept and slope was significant in the overall analysis but not in either group of the multiple-group analysis. Should I report the non-significant interaction against the background of the significant main effect from the overall analysis, or against the background of the non-significant main effect from the multiple-group analysis?
 Bengt O. Muthen posted on Tuesday, May 19, 2009 - 10:58 am
The size of the parameter estimate is not all that determines significance; the standard errors matter too, and they can be quite different.

If you find it important to analyze groups, I would report what happens in each group. I would also test equality across groups in a multiple-group run.
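Bengt's point about standard errors can be illustrated with a hedged numeric sketch (all estimates and SEs below are made up): a Wald z test divides an estimate or difference by its standard error, so a large covariance difference with a wide SE can be non-significant while a smaller difference with a tight SE is significant.

```python
import math

def wald_z(estimate, se):
    """Wald z statistic: estimate divided by its standard error."""
    return estimate / se

def two_sided_p(z):
    """Two-sided p-value under the standard normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Made-up numbers: a big difference with a wide SE vs. a small, precise one.
p_big_diff = two_sided_p(wald_z(0.30, 0.20))    # z = 1.5, not significant
p_small_diff = two_sided_p(wald_z(0.10, 0.04))  # z = 2.5, significant
```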
 EvavdW posted on Monday, February 10, 2014 - 7:31 am
Hi,
I am running a two-level, five-group latent growth model with a within-level grouping variable, in which I would like to examine group differences in the predictive value of two variables for the intercept and slope.

I tried:

USEV ARE Plus1 Plus2 Plus3 Class LionPc MonkeyPc;
CLUSTER = Class;
CLASSES = g(5);
knownclass = g(grade = 4 5 6 7 8);
MISSING ARE ALL (999);
ANALYSIS: TYPE = TWOLEVEL mixture;
ESTIMATOR = ML;!Bayes;
processors = 3;
Model = NOCOV;

MODEL:
%WITHIN%
%overall%
iw sw | Plus1@0 Plus2@1 Plus3@2;
iw ON LionPc MonkeyPc;
sw ON LionPc MonkeyPc;
LionPc WITH MonkeyPc;

%BETWEEN%
%overall%
ib sb | Plus1@0 Plus2@1 Plus3@2;
sb@0;

The model estimation terminated normally.

In the results however, the regression weights (iw ON LionPc MonkeyPc &
sw ON LionPc MonkeyPc) and covariance (LionPc WITH MonkeyPc) are estimated exactly at the same value in all groups. Is this a default setting? If yes, how can I make sure they are freely estimated? Is there a command for this?

Kind regards, Eva
 Linda K. Muthen posted on Monday, February 10, 2014 - 10:48 am
They are held equal as the default. To free them, mention them in the class-specific part of the MODEL command, for example,

MODEL:
%WITHIN%
%OVERALL%
y ON x;
%c#1%
y ON x;
 EvavdW posted on Tuesday, February 18, 2014 - 4:25 am
In my multigroup model, the predictive value of the two variables for the intercept and slope is now estimated for each group separately.

In the next step, I would like to see whether differences between standardized estimates are significant between the two predictors (i.e., within groups, and therefore dependent samples) as well as between groups (independent samples).

Is it possible to do this using the Wald test? And if so, how would I have to program this, for example with y ON x and y ON z?

I tried:

%g#1%
y ON x (c1);
y ON z (c2);

Model test:
c1 = c2;

and I also tried:

%g#1%
y ON x (c1);
%g#2%
y ON x (d1);

Model test:
c1 = d1;

But then I get the following error:
*** ERROR in MODEL CONSTRAINT command
The following parameter label is ambiguous. Check that the corresponding
parameter has not been changed. Parameter label: C1

I hope you can help me out.

Kind regards, Eva
 Linda K. Muthen posted on Tuesday, February 18, 2014 - 6:18 am
 EvavdW posted on Tuesday, February 18, 2014 - 1:45 pm
I ran the input file again so I could send you the output, and somehow it works now. I think the first time I accidentally put a parameter label in the overall model, which caused the error to appear.

Thank you just as much!

Kind regards, Eva
 EvavdW posted on Wednesday, February 26, 2014 - 4:54 am
If I understand correctly, the following command tests differences between UNstandardised estimates?

%g#1%
y ON x (c1);
y ON z (c2);

Model test:
c1 = c2;

Is this right?
And if so, is it also possible to adapt the command to test differences between standardised estimates?

If it is not possible to test differences between standardised estimates, I guess it will be necessary to standardise both x and z using z-scores. However, since I am interested in a multigroup analysis (using grade as the grouping factor), I think I should standardise around the grade mean (not the grand mean). Is it possible to do this in Mplus? Or should I do this in SPSS beforehand?

Kind regards, Eva
 Linda K. Muthen posted on Wednesday, February 26, 2014 - 11:38 am
Yes. But MODEL TEST should be specified:

Model test:
0 = c1 - c2;

You would need to define the standardized estimates in MODEL CONSTRAINT and then test their difference.
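As a hedged sketch (not Mplus code, and with made-up numbers), the standardization that Example 5.20 carries out in MODEL CONSTRAINT amounts to rescaling each raw slope by the predictor's and outcome's standard deviations; the difference between two such standardized slopes is then the quantity being tested.

```python
import math

def standardized_slope(b_raw, var_x, var_y):
    """StdYX-style standardization: b * SD(x) / SD(y)."""
    return b_raw * math.sqrt(var_x) / math.sqrt(var_y)

# Made-up raw slopes and variances for y ON x and y ON z:
b_x = standardized_slope(0.50, 4.0, 25.0)  # 0.50 * 2 / 5
b_z = standardized_slope(0.80, 1.0, 25.0)  # 0.80 * 1 / 5
diff = b_x - b_z  # the quantity a "0 = c1 - c2" style test evaluates
```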
 EvavdW posted on Thursday, February 27, 2014 - 12:54 am
Dear Linda,

How would I define the standardized estimates and test this in MODEL CONSTRAINT? How would this look in my command?

USEV ARE Plus1 Plus2 Plus3 Class y z;
CLUSTER = Class;
CLASSES = g(2);
knownclass = g(grade = 4 5);
MISSING ARE ALL (999);
ANALYSIS: TYPE = COMPLEX MIXTURE;
ESTIMATOR = ML;!Bayes;
processors = 3;
Model = NOCOV;

MODEL:
%overall%
iw sw | Plus1@0 Plus2@1 Plus3@2;
iw ON y;
iw ON z;
sw ON y;
sw ON z;
y WITH z;

%g#1%
iw sw;
iw ON y(c1);
iw ON z(d1);
sw ON y(e1);
sw ON z(f1);
y WITH z(g1);

%g#2%
iw sw;
iw ON y(c2);
iw ON z(d2);
sw ON y (e2);
sw ON z(f2);
y WITH z(g2);

Model Test:
0 = c1 - c2;

Kind regards, Eva
 Linda K. Muthen posted on Thursday, February 27, 2014 - 6:06 am
See Example 5.20 in the user's guide.
 EvavdW posted on Thursday, February 27, 2014 - 8:12 am
I looked at Example 5.20 and I am a bit confused:

Is this a way to tell Mplus to use standardized estimates for y ON x and y ON z in its comparison?

Or is this a way to calculate standardized z-scores for y and z?

Kind regards, Eva
 Linda K. Muthen posted on Thursday, February 27, 2014 - 10:43 am
The example shows how to use MODEL CONSTRAINT to compute the standardized estimates.
 Simone Croft posted on Thursday, April 16, 2015 - 5:25 am
I am trying to run a multigroup analysis to determine gender differences in univariate LGCs over time. The measurement model shows partial strong invariance over time and strong invariance across gender.

I would like to see if there are gender differences in the growth factors. By default, the means are fixed at zero for females and estimated for males, but is it possible (and sensible) to fix them to be equal and see whether model fit deteriorates? Or is there another, better way to do this?
 Linda K. Muthen posted on Thursday, April 16, 2015 - 6:03 am
The models to compare factor means are: means fixed at zero in all groups (or timepoints) versus means fixed at zero in one group (or timepoint) and free in the others.
 Simone Croft posted on Thursday, April 16, 2015 - 6:20 am
Thank you Linda.
 Jinette Comeau posted on Monday, May 29, 2017 - 9:34 am
I estimated a multiple-group growth model to examine differences in children's mental health trajectories across two groups (high income and low income). I have two mental health outcomes: depression and antisocial behavior. I found significant differences in the mean intercepts of both depression and antisocial behavior across the two groups. However, differences in the mean slopes were significant only for antisocial behavior. I argued that income appears to have a more pronounced influence on antisocial behavior than on depression (i.e., differences in the mean slopes for antisocial behavior but not depression), but a reviewer is asking whether I can test this empirically. Is there a way to test for differences in the effects of income on two different outcomes, whether in the multiple-group approach or some alternative approach? Many thanks!
 Bengt O. Muthen posted on Monday, May 29, 2017 - 5:50 pm
You could test for equality of the two slope means (for antisocial behavior and depression) using MODEL TEST. But since those two DVs are in different metrics, you would have to consider standardized means. That means you would express the standardized means in MODEL CONSTRAINT and then test their difference in MODEL TEST (see the V8 UG, page 773).
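As a hedged numeric sketch (all values hypothetical), standardizing each slope mean by its outcome's standard deviation puts the two outcomes on a comparable metric before their difference is tested:

```python
import math

def standardized_mean(mean, variance):
    """Slope mean divided by the outcome's (model-implied) SD."""
    return mean / math.sqrt(variance)

# Made-up slope means and variances for the two outcomes:
antisocial_std = standardized_mean(0.60, 4.0)  # 0.60 / 2
depression_std = standardized_mean(0.45, 9.0)  # 0.45 / 3
difference = antisocial_std - depression_std   # what Model Test would evaluate
```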