Hello, I am trying to fit a multigroup growth model to examine patterns of alcohol use over time in two groups (experimental and control). The output gives an overall chi-square and then the chi-square contribution from each group. How do I interpret the chi-square contributions from each group? Are they supposed to be relatively equal? What does it tell you if one group has a much larger contribution than the other?
The model may fit better for one group than the other. This is not the same as fitting the model for each group separately, however, because there are restrictions across the groups. I would suggest fitting the growth model separately in each group before doing a multiple group analysis to be certain that the same growth model fits well in each group.
You could do this but if the two groups don't have the same growth model, it does not make sense to compare growth factor means and variances across groups. It is like measurement invariance. If the factors aren't the same across groups, then comparing factor means is comparing apples to oranges.
I want to compare growth factor means between two subgroups via a multiple group approach. However, my final model is a conditional one, and I wonder whether I should use the conditional or the unconditional model to test growth factor mean differences between subgroups. The equality tests actually differ depending on the choice of unconditional vs. conditional. Maybe this is because growth factor means become intercepts in conditional models?
The equality tests differ because in the unconditional model, it is a test of means and in the conditional model, it is a test of intercepts. If your final model is conditional, it makes sense that you would want to compare intercepts and regression coefficients.
Thanks, but I'm not sure I fully got this. In the end, I would like to state something like: "Controlling for covariate xy, the intercept growth factor mean (i.e., the baseline level) and/or the slope growth factor mean (i.e., growth over time) is higher/lower (stronger/weaker) in subgroup A compared to subgroup B." To achieve this, would one compare the intercepts of the growth factors in the conditional model (as you said) or compare the growth factor means derived from MODEL CONSTRAINT (as is often recommended in this forum)?
The question is whether, in your group comparison, you are interested in the differences across groups in the means of the covariates. I assume not. The growth factor means are partly determined by the covariate means: a growth factor mean is the intercept plus the covariate slopes times the covariate means. When comparing groups, I think it is more relevant to compare those slopes and intercepts. You are also more likely to have group invariance in slopes and intercepts than in covariate means. Comparing the intercepts is what I would call having controlled for the covariates, that is, removing the effect of group differences in the covariate means.
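The point above can be illustrated numerically: two groups can share identical intercepts and covariate slopes (i.e., be invariant in the parameters worth comparing) yet show different implied growth factor means purely because their covariate means differ. A minimal Python sketch, with all numbers invented for illustration:

```python
# Implied growth factor mean in a conditional growth model:
#   mean = intercept + covariate slope * covariate mean
# (hypothetical numbers, not from any real analysis)

def implied_mean(intercept, slope, cov_mean):
    """Model-implied growth factor mean for one group."""
    return intercept + slope * cov_mean

intercept, slope = 2.0, 0.5          # identical in both groups (invariance)
cov_mean_a, cov_mean_b = 1.0, 3.0    # groups differ only in the covariate mean

mean_a = implied_mean(intercept, slope, cov_mean_a)  # 2.5
mean_b = implied_mean(intercept, slope, cov_mean_b)  # 3.5

# Comparing the intercepts (equal here) "controls for" the covariate-mean
# difference; comparing the implied means mixes that difference back in.
print(mean_a, mean_b)
```

So a test on intercepts and a test on implied means can reach different conclusions, which is the distinction between the conditional and unconditional comparisons discussed above.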
Sounds very reasonable, thank you for the clarification. However, I'm still interested in reporting subgroup-specific (low/high) growth factor means (and SEs) from the conditional multiple group model. Unfortunately, I failed to replicate the subgroup-specific growth factor means reported in TECH4 with the following setup (example for the intercept growth factor "id"):
model low: [id] (p1); sex (p2); id on sex (p3); ta (p4); id on ta (p5);
model high: [id] (p6); sex (p7); id on sex (p8); ta (p9); id on ta (p10);
model constraint: New (c d); c = p1 + (p2 * p3) + (p4 * p5); d = p6 + (p7 * p8) + (p9 * p10);
Although MODEL CONSTRAINT is often suggested for getting SEs for the growth factor means in conditional models, I found no syntax for it in this forum, so the above syntax was a guess...
If you specify the means in MODEL CONSTRAINT, the standard errors are calculated by the program.
You have the means specified incorrectly. p2, p4, p7, and p9 refer to variances, not means. Also, you should not mention variances or means of covariates in the MODEL command. You should obtain the means of sex and ta from descriptive statistics:
ymean = intercept + B*xmean
where the value of xmean is taken from descriptive statistics.
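For concreteness, this computation can be sketched in Python with hypothetical estimates (not the thread's data). When xmean is entered as a fixed number, the standard error of the implied mean follows from the delta method applied to the intercept and slope estimates, which is (under that assumption) what MODEL CONSTRAINT computes:

```python
import math

# Hypothetical estimates: intercept of id, slope of id ON x,
# their sampling variances and covariance, and the sample mean
# of x taken from descriptive statistics (treated as a constant).
intercept, b = 1.8, 0.4
var_intercept, var_b, cov_ib = 0.04, 0.01, 0.002
xmean = 2.5

# Model-implied growth factor mean: ymean = intercept + B * xmean
ymean = intercept + b * xmean

# Delta-method SE, with xmean treated as known:
# Var(ymean) = Var(intercept) + xmean^2 * Var(B) + 2 * xmean * Cov(intercept, B)
se = math.sqrt(var_intercept + xmean**2 * var_b + 2 * xmean * cov_ib)

print(round(ymean, 3), round(se, 3))
```

With these invented numbers, ymean = 1.8 + 0.4 * 2.5 = 2.8; the SE reflects the uncertainty in the intercept and slope only, since xmean is plugged in as a constant.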
Thanks a lot, that worked fine. Final question: how should one alter the above syntax to constrain the growth factor means to be equal across the two groups in the conditional model? I thought of setting the three parameters of the growth factor equation equal across subgroups:
model low: [id] (p1); id on ta (p2); id on sex (p3);
model high: [id] (p1); id on ta (p2); id on sex (p3);
model constraint: New (c); c = p1 + (p2 * amean) + (p3 * bmean);
where amean and bmean are the covariate means of the whole sample. The estimated growth factor means of the two subgroups (TECH4) still differ slightly. However, this could also be a function of the covariates, so is my syntax okay?
Hi! I applied a multiple group approach that tested equality of covariances between growth factors (of parallel processes) using chi-square tests. I know that this approach tests unstandardized parameters (covariances). What confuses me: comparing the two groups (gender), there are sometimes big differences in the correlations between growth factors that did not become significant when testing the covariances, and smaller differences in correlations that did become significant. How can this briefly be explained?
Another question: I first analyzed the whole sample. Here I found a significant correlation between an intercept and a slope which did not become significant (but was equal) in either group in the multiple group analysis. Additionally, the correlation in both groups is lower than in the overall analysis. How should that be communicated? It would be nice if I could state: "We found a significant correlation between interceptA and slopeB, and this association was not moderated by gender." However, I've found a paper which only reports the correlations from the multiple group analysis and no correlations for the whole sample. That would imply: "We found no correlation between interceptA and slopeB and no moderation by gender."
All other significant overall correlations are at least significant for one group, so there is no conflict with the overall analysis. How would you deal with that?
I'm sorry! Of course I did not test differences in correlations (is this even possible?). But I wondered why seemingly small differences in correlations became significant and seemingly big differences in correlations became nonsignificant (both in tests of differences in covariances), and how to explain that.
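One arithmetic point that may be relevant here (a sketch with invented numbers, not your data): a correlation rescales the covariance by the two variances, so identical covariances can correspond to very different correlations when the growth factor variances differ across groups, and conversely quite different covariances can yield the same correlation:

```python
import math

def corr(cov, var_a, var_b):
    """Correlation implied by a covariance and the two variances."""
    return cov / math.sqrt(var_a * var_b)

# Hypothetical numbers: the covariance is identical in both groups,
# but the growth factor variances differ, so the correlations diverge.
print(corr(0.30, 1.0, 1.0))   # group 1: r = 0.30
print(corr(0.30, 4.0, 4.0))   # group 2: r = 0.075

# Conversely, covariances can differ fourfold while the correlations
# are identical, if the variances differ in the same direction:
print(corr(1.20, 4.0, 4.0))   # r = 0.30 again
```

This is one reason a chi-square test of equal (unstandardized) covariances need not track apparent differences in the (standardized) correlations.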
Regarding Question 2, I wondered how to report my findings. The covariance between intercept and slope was significant in the overall analysis but not in either group of the multiple group analysis. Should I report the nonsignificant interaction against the background of the significant main effect from the overall analysis or against the background of the nonsignificant main effect from the multiple group analysis?