Hello, I am trying to fit a multiple-group growth model to examine patterns of alcohol use over time in two groups (experimental and control). The output gives an overall chi-square and the chi-square contribution of each group. How do I interpret the chi-square contributions from each group? Are they supposed to be relatively equal? What does it tell you if one group has a much larger contribution than the other?
The model may fit better for one group than the other. This is not the same as fitting the model for each group separately, however, because there are restrictions across the groups. I would suggest fitting the growth model separately in each group before doing a multiple group analysis to be certain that the same growth model fits well in each group.
You could do this but if the two groups don't have the same growth model, it does not make sense to compare growth factor means and variances across groups. It is like measurement invariance. If the factors aren't the same across groups, then comparing factor means is comparing apples to oranges.
I want to compare growth factor means between two subgroups via the multiple-group approach. However, my final model is a conditional one, and I wonder whether I should use the conditional or the unconditional model to test growth factor mean differences between subgroups. The equality tests actually differ depending on the choice of unconditional vs. conditional. Maybe this is because growth factor means become intercepts in conditional models?
The equality tests differ because in the unconditional model, it is a test of means and in the conditional model, it is a test of intercepts. If your final model is conditional, it makes sense that you would want to compare intercepts and regression coefficients.
Thanks, but I'm not sure I fully follow. In the end, I would like to state something like: "Controlling for covariate xy, the intercept growth factor mean (i.e., the baseline level) and/or the slope growth factor mean (i.e., growth over time) is higher/lower (stronger/weaker) in subgroup A compared to subgroup B." To achieve this, would one compare the intercepts of the growth factors in the conditional model (as you said), or compare the growth factor means derived from MODEL CONSTRAINT (as is often recommended here in the forum)?
The question is if in your group comparison you are interested in the differences across groups in the means of the covariates. I assume not. The growth factor means are partly determined by the covariate means in that growth factor means are produced by covariate means times covariate slopes plus intercepts. When comparing groups I think it is more relevant to compare those slopes and intercepts. I would think you are more likely to have group invariance in slopes and intercepts than in covariate means. Considering the intercepts is what I would call having controlled for covariates, that is, getting rid of the effect of the group differences in the covariate means.
Sounds very reasonable, thank you for the clarification. However, I'm still interested in reporting subgroup-specific (low/high) growth factor means (and SEs) from the conditional multiple-group model. Unfortunately, I failed to replicate the subgroup-specific growth factor means reported in TECH4 with the following setup (example for the intercept growth factor "id").
MODEL low:
  [id] (p1);
  sex (p2);
  id ON sex (p3);
  ta (p4);
  id ON ta (p5);
MODEL high:
  [id] (p6);
  sex (p7);
  id ON sex (p8);
  ta (p9);
  id ON ta (p10);
MODEL CONSTRAINT:
  NEW (c d);
  c = p1 + (p2 * p3) + (p4 * p5);
  d = p6 + (p7 * p8) + (p9 * p10);
Although MODEL CONSTRAINT is often suggested for getting SEs for the growth factor means in conditional models, I found no syntax for it in this forum, so the syntax above was a guess...
If you specify the means in MODEL CONSTRAINT, the standard errors are calculated by the program.
You have the means specified incorrectly. p2, p4, p7, and p9 refer to variances not means. Also, you should not mention variances or means of covariates in the MODEL command. You should obtain the means of sex and ta from descriptive statistics.
ymean = intercept + B*xmean
where the value of xmean is taken from descriptive statistics.
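Putting that correction together, a minimal sketch of the revised setup (the covariate means 0.5 for sex and 2.0 for ta are hypothetical placeholders; substitute the actual means from your descriptive statistics):

```
MODEL low:
  [id] (p1);
  id ON sex (p2);
  id ON ta (p3);
MODEL high:
  [id] (p4);
  id ON sex (p5);
  id ON ta (p6);
MODEL CONSTRAINT:
  NEW (c d);
  ! ymean = intercept + B*xmean; 0.5 and 2.0 are placeholder covariate means
  c = p1 + p2*0.5 + p3*2.0;
  d = p4 + p5*0.5 + p6*2.0;
```

With the means expressed this way as NEW parameters, the program also reports their standard errors.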
Thanks a lot, that worked fine. Final question: how should one alter the above syntax to constrain the growth factor means to be equal across the two groups in the conditional model? I thought of setting the three parameters of the growth factor equation equal across subgroups.
MODEL low:
  [id] (p1);
  id ON ta (p2);
  id ON sex (p3);
MODEL high:
  [id] (p1);
  id ON ta (p2);
  id ON sex (p3);
MODEL CONSTRAINT:
  NEW (c);
  c = p1 + (p2 * amean) + (p3 * bmean);
where amean and bmean are the covariate means of the whole sample. The estimated growth factor means of the two subgroups (TECH4) differ slightly. However, this could also be a function of the covariates, and my syntax is OK!?
Hi! I applied a multiple-group approach that tested equality of covariances between growth factors (of parallel processes) using chi-square tests. I know that this approach tests unstandardized parameters (i.e., covariances). What confuses me: comparing the two groups (gender), I sometimes found big differences in the correlations between growth factors that did not become significant when testing the covariances, and smaller differences in correlations that did become significant. How can this briefly be explained?
Another question: I first analyzed the whole sample. There I found a significant correlation between an intercept and a slope that was not significant (though equal) in either group in the multiple-group analysis. Additionally, the correlation in both groups is lower compared to the overall analysis. How should that be communicated? It would be nice if I could state: "We found a significant correlation between interceptA and slopeB, and this association was not moderated by gender." However, I found a paper that reports only the correlations from the multiple-group analysis and no correlations for the whole sample. That would imply: "We found no correlation between interceptA and slopeB and no moderation by gender."
All other significant overall correlations are significant in at least one group, so there is no conflict with the overall analysis. How would you deal with that?
I'm sorry! Of course I did not test differences in correlations (is this possible?). But I wondered why seemingly small differences in correlations became significant and seemingly big differences became nonsignificant (both in tests of differences in covariances), and how to explain that.
Pertaining to question 2, I wondered how to report my findings. The covariance between intercept and slope was significant in the overall analysis but not in either group of the multiple-group analysis. Should I report the nonsignificant interaction against the background of the significant main effect from the overall analysis, or against the background of the nonsignificant main effect from the multiple-group analysis?
Significance is determined not only by the size of the parameter estimate but also by the standard errors, which can be quite different.
If you find it important to analyze groups, I would report what happens in each group. I would also test equality across groups in a multiple-group run.
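Such an equality test in a multiple-group run could be sketched as follows (assuming a GROUPING = gender setup with growth factors i and s; all names are illustrative):

```
MODEL female:
  i WITH s (p1);
MODEL male:
  i WITH s (p2);
MODEL TEST:
  0 = p1 - p2;   ! Wald test of equal intercept-slope covariance across groups
```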
EvavdW posted on Monday, February 10, 2014 - 7:31 am
Hi, I am running a two-level, five-group latent growth model with a within-level grouping variable, in which I would like to examine group differences in the predictive value of two variables for the intercept and slope.
USEV ARE Plus1 Plus2 Plus3 Class LionPc MonkeyPc;
CLUSTER = Class;
CLASSES = g(5);
KNOWNCLASS = g(grade = 4 5 6 7 8);
MISSING ARE ALL (999);
ANALYSIS:
  TYPE = TWOLEVEL MIXTURE;
  ESTIMATOR = ML; !Bayes;
  PROCESSORS = 3;
  MODEL = NOCOV;
MODEL:
  %WITHIN%
  %OVERALL%
  iw sw | Plus1@0 Plus2@1 Plus3@2;
  iw ON LionPc MonkeyPc;
  sw ON LionPc MonkeyPc;
  LionPc WITH MonkeyPc;
In the results however, the regression weights (iw ON LionPc MonkeyPc & sw ON LionPc MonkeyPc) and covariance (LionPc WITH MonkeyPc) are estimated exactly at the same value in all groups. Is this a default setting? If yes, how can I make sure they are freely estimated? Is there a command for this?
They are held equal as the default. To free them, mention them in the class-specific part of the MODEL command, for example,
MODEL:
  %WITHIN%
  %OVERALL%
  y ON x;
  %c#1%
  y ON x;
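Applied to the model posted above, a sketch of this freeing might look as follows (assuming the known classes are referred to as g#1 through g#5; repeat the class-specific block for each class whose parameters should be free):

```
MODEL:
  %WITHIN%
  %OVERALL%
  iw sw | Plus1@0 Plus2@1 Plus3@2;
  iw ON LionPc MonkeyPc;
  sw ON LionPc MonkeyPc;
  LionPc WITH MonkeyPc;
  %g#1%
  ! mentioning the parameters here frees them from the default equality
  iw ON LionPc MonkeyPc;
  sw ON LionPc MonkeyPc;
  LionPc WITH MonkeyPc;
```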
EvavdW posted on Tuesday, February 18, 2014 - 4:25 am
Thank you so much for your reply. In my multigroup model, the predictive value of the two variables for the intercept and slope is now estimated for each group separately.
In the next step, I would like to see whether differences between standardized estimates are significant between the two predictors (i.e., within groups and therefore dependent samples) as well as between groups (independent samples).
Is it possible to do this using the Wald test? And if so, how would I have to program this, for example with y ON x and y ON z? I tried:
%g#1%
  y ON x (c1);
  y ON z (c2);
MODEL TEST:
  c1 = c2;
and I also tried:
%g#1%
  y ON x (c1);
%g#2%
  y ON x (d1);
MODEL TEST:
  c1 = d1;
But then I get the following error:
*** ERROR in MODEL CONSTRAINT command
The following parameter label is ambiguous. Check that the corresponding parameter has not been changed. Parameter label: C1
EvavdW posted on Tuesday, February 18, 2014 - 1:45 pm
I ran the input file again so I could send you the output and somehow now it does work. I think the first time I accidentally put a parameter label in the overall model, which caused the error to appear.
Thank you just as much!
Kind regards, Eva
EvavdW posted on Wednesday, February 26, 2014 - 4:54 am
If I understand correctly, the following command tests differences between UNstandardised estimates?
%g#1%
  y ON x (c1);
  y ON z (c2);
MODEL TEST:
  c1 = c2;
Is this right? And if so, is it also possible to adapt the command to test differences between standardised estimates?
If it is not possible to test differences between standardised estimates, I guess it will be necessary to standardise both x and z using z-scores. However, since I am interested in multigroup analysis (using grade as the grouping factor), I think I should standardise around the grade mean (not the grand mean). Is it possible to do this in Mplus? Or should I do this in SPSS beforehand?
I am trying to run a multigroup analysis to determine gender differences in univariate LGCs over time. The measurement model shows partial strong invariance over time and strong invariance across gender.
I would like to see whether there are gender differences in the growth factors. By default, the means are fixed at 0 for females and estimated for males, but is it possible (and sensible) to fix them to be equal and see whether the fit indices worsen? Or is there another, better way to do this?
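One way to impose such an equality, sketched under the assumption of a standard multiple-group setup with growth factors i and s (names illustrative): give the means the same labels in both groups so they are held equal, then compare the chi-square of this run against the model with freely estimated means.

```
MODEL female:
  [i] (m1);
  [s] (m2);
MODEL male:
  [i] (m1);   ! same labels constrain the means to be equal across groups
  [s] (m2);
```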
I estimated a multiple group growth model to examine differences in children's mental health trajectories across two groups (high income and low income). I have two mental health outcomes: depression and antisocial behavior. I found significant differences in the mean intercepts of depression and antisocial behavior across the two groups. However, differences in the mean slopes across the two groups were only significant for antisocial behavior. I argued that these income groups appear to have a more pronounced influence on antisocial behavior compared to depression (i.e. differences in the mean slopes for antisocial behavior but not depression), but a reviewer is asking if I can test this empirically. Is there a way to test for differences in the effects of income on two different outcomes, whether in the multiple group approach or some other alternative approach? Many thanks!
You could test for equality of the 2 slope means (for antisocial and depression) using Model Test. But since those 2 DVs are in different metrics, you would have to consider standardized means. That means you would express the standardized means in Model Constraint and then test their difference in Model Test (see the V8 UG page 773).
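A sketch of that suggestion, assuming slope growth factors named sa (antisocial) and sd (depression) with labeled means and variances (names hypothetical); with mean m and variance v, the standardized slope mean is m/SQRT(v):

```
MODEL:
  [sa] (m1);
  sa (v1);
  [sd] (m2);
  sd (v2);
MODEL CONSTRAINT:
  NEW (stda stdd);
  stda = m1/SQRT(v1);   ! standardized slope mean, antisocial
  stdd = m2/SQRT(v2);   ! standardized slope mean, depression
MODEL TEST:
  0 = stda - stdd;      ! Wald test of equal standardized slope means
```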