

Hello, I am trying to conduct a multigroup growth model to examine patterns of alcohol use over time in two groups (experimental and control). The output gives an overall chi-square and then the chi-square contributions for each group. How do I interpret the chi-square contributions from each group? Are they supposed to be relatively equal? What does it tell you if one group has a much larger contribution than the other? Thank you, Sarah Dauber 


The model may fit better for one group than the other. This is not the same as fitting the model for each group separately, however, because there are restrictions across the groups. I would suggest fitting the growth model separately in each group before doing a multiple group analysis to be certain that the same growth model fits well in each group. 
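For reference, here is a minimal multiple-group linear growth input of the kind being discussed; the file name, variable names, and group labels are hypothetical placeholders. Fitting the growth model one group at a time, as suggested above, can be done by replacing the GROUPING option with a USEOBSERVATIONS statement.

  TITLE:    multiple-group linear growth model (sketch);
  DATA:     FILE = alcohol.dat;
  VARIABLE: NAMES = alc1-alc4 group;
            USEVARIABLES = alc1-alc4;
            GROUPING = group (0 = control 1 = experimental);
            ! for a single-group run on the control group, drop GROUPING and use:
            ! USEOBSERVATIONS = group EQ 0;
  MODEL:    i s | alc1@0 alc2@1 alc3@2 alc4@3;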


Thanks for your response. So the model that makes the larger contribution to the chi-square has the poorer fit? Is that correct? Thanks, Sarah Dauber 


The larger the chi-square, the worse the fit. But I would run the groups separately to assess model fit for each group. 

Sarah Dauber posted on Tuesday, December 05, 2006 - 11:53 am



Thanks for your help. One more question...is it possible to fit a quadratic curve in one group and a linear fit in the other within a multigroup model? If so, how do I specify this in the input? Thanks. Sarah Dauber 


You could do this but if the two groups don't have the same growth model, it does not make sense to compare growth factor means and variances across groups. It is like measurement invariance. If the factors aren't the same across groups, then comparing factor means is comparing apples to oranges. 
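If one nevertheless wanted a quadratic curve in one group and a linear curve in the other (keeping the caveat above in mind), one way to set it up is to specify the quadratic growth factor for both groups and then fix its mean, variance, and covariances to zero in the group treated as linear. A sketch with hypothetical variable and group names:

  VARIABLE: GROUPING = group (0 = control 1 = experimental);
  MODEL:    i s q | y1@0 y2@1 y3@2 y4@3;
  MODEL control:
            [q@0];            ! quadratic mean fixed at zero
            q@0;              ! quadratic variance fixed at zero
            q WITH i@0 s@0;   ! quadratic covariances fixed at zero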


I want to compare growth factor means between two subgroups via the multiple group approach. However, my final model is a conditional one, and I wonder whether I should use the conditional or the unconditional model to test growth factor mean differences between subgroups. The equality tests actually differ depending on the choice of unconditional vs. conditional. Maybe this is because growth factor means become intercepts in conditional models? 


The equality tests differ because in the unconditional model, it is a test of means and in the conditional model, it is a test of intercepts. If your final model is conditional, it makes sense that you would want to compare intercepts and regression coefficients. 
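As an illustration, the intercepts and regression coefficients of a conditional multiple group model can be labeled per group and compared with MODEL TEST; variable and group names in this sketch are hypothetical:

  VARIABLE: GROUPING = group (1 = g1 2 = g2);
  MODEL g1:
            [i] (a1);
            i ON x (b1);
  MODEL g2:
            [i] (a2);
            i ON x (b2);
  MODEL TEST:
            0 = a1 - a2;      ! joint Wald test of equal intercepts
            0 = b1 - b2;      ! and equal regression coefficients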


Thanks, but I'm not fully sure I got this. In the end, I would like to state something like this: "Controlling for covariate xy, the intercept growth factor mean (i.e., the baseline level) and/or the slope growth factor mean (i.e., growth over time) is higher/lower (stronger/weaker) in subgroup A compared to subgroup B." To achieve this, would one compare the growth factor intercepts in the conditional model (as you said), or compare the growth factor means derived from MODEL CONSTRAINT (as is often recommended here in the forum)? 


The question is whether, in your group comparison, you are interested in the differences across groups in the means of the covariates. I assume not. The growth factor means are partly determined by the covariate means, in that growth factor means are produced by covariate means times covariate slopes plus intercepts. When comparing groups, I think it is more relevant to compare those slopes and intercepts. I would think you are more likely to have group invariance in slopes and intercepts than in covariate means. Considering the intercepts is what I would call having controlled for the covariates, that is, getting rid of the effect of the group differences in the covariate means. 


Sounds very reasonable. Thank you for the clarification. However, I'm still interested in reporting subgroup-specific (low/high) growth factor means (and SEs) from the conditional multiple group model. Unfortunately, I failed to replicate the subgroup-specific growth factor means reported in TECH4 with the following setting (example for the intercept growth factor "id"):

  model: ...
  model low:
    [id] (p1);
    sex (p2);
    id on sex (p3);
    ta (p4);
    id on ta (p5);
  model high:
    [id] (p6);
    sex (p7);
    id on sex (p8);
    ta (p9);
    id on ta (p10);
  model constraint:
    New (c d);
    c = p1 + (p2 * p3) + (p4 * p5);
    d = p6 + (p7 * p8) + (p9 * p10);

Although model constraint is often suggested for getting SEs for the growth factor means in conditional models, I found no syntax for it in this forum, so the syntax above was a guess... 


If you specify the means in MODEL CONSTRAINT, the standard errors are calculated by the program. You have the means specified incorrectly: p2, p4, p7, and p9 refer to variances, not means. Also, you should not mention the variances or means of the covariates in the MODEL command. You should obtain the means of sex and ta from descriptive statistics: ymean = intercept + B*xmean, where the value of xmean is taken from descriptive statistics. 
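Following this advice, a sketch of a corrected setup: the labels refer only to the growth factor intercepts and the ON slopes, the covariates' means and variances are not mentioned in the MODEL command, and the numeric values (0.48 for sex, 2.1 for ta) are hypothetical placeholders for means taken from descriptive statistics.

  MODEL low:
    [id] (p1);
    id ON sex (p2);
    id ON ta (p3);
  MODEL high:
    [id] (p4);
    id ON sex (p5);
    id ON ta (p6);
  MODEL CONSTRAINT:
    NEW (c d);
    c = p1 + p2*0.48 + p3*2.1;   ! growth factor mean, group "low"
    d = p4 + p5*0.48 + p6*2.1;   ! growth factor mean, group "high"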


Thanks a lot, that worked fine. Final question: how should one alter the above syntax to define equal growth factor means across both groups in the conditional model? I thought of setting the three terms of the growth factor equation equal across subgroups. I tried:

  model low:
    [id] (p1);
    id on ta (p2);
    id on sex (p3);
  model high:
    [id] (p1);
    id on ta (p2);
    id on sex (p3);
  model constraint:
    New (c);
    c = p1 + (p2 * amean) + (p3 * bmean);

where amean and bmean reflect the covariate means of the whole sample. The estimated growth factor means of the two subgroups (TECH4) differ slightly. However, this could also be a function of the covariates, so perhaps my syntax is OK? 


You should define a mean for each group using the mean of a and b for each group. You can then use MODEL TEST to test the equality of the means. 
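Continuing the corrected sketch above (labels p1 through p6 free across groups), group-specific covariate means can be plugged in, and MODEL TEST then gives the Wald test of equal growth factor means. The values 0.45, 2.0 and 0.51, 2.2 are hypothetical placeholders for the group-specific descriptive statistics:

  MODEL CONSTRAINT:
    NEW (c d);
    c = p1 + p2*0.45 + p3*2.0;   ! covariate means from the "low" group
    d = p4 + p5*0.51 + p6*2.2;   ! covariate means from the "high" group
  MODEL TEST:
    0 = c - d;                   ! Wald test that the two growth factor means are equal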


Thanks, I overlooked this option in the handbook, sorry. 


Hi! I applied a multiple group approach which tested equality of covariances between growth factors (of parallel processes) using chi-square tests. I know that this approach tests unstandardized parameters (covariances). What confuses me: comparing the two groups (gender), I found that sometimes big differences in the correlations between growth factors did not become significant when testing the covariances, while smaller differences in the correlations did become significant. How can this be briefly explained?

Another question: I first analyzed the whole sample. Here I found a significant correlation between an intercept and a slope which did not become significant (but was equal) in either group when doing the multiple group analysis. Additionally, the correlation in both groups is lower than in the overall analysis. How should that be communicated? It would be nice if I could postulate: "We found a significant correlation between interceptA and slopeB, and this association was not moderated by gender." However, I've found a paper which only reports the correlations from the multiple group analysis and no correlations for the whole sample. That would imply: "We found no correlation between interceptA and slopeB and no moderation by gender." All other significant overall correlations are at least significant for one group, so there is no conflict with the overall analysis. How would you deal with that? 


I'm unclear on how you are testing the differences in correlations. 


I'm sorry! Of course I did not test differences in correlations (is this possible?). But I wondered why seemingly small differences in correlations became significant and seemingly big differences in correlations became nonsignificant (both in tests of differences in covariances) and how to explain that. Regarding question 2, I wondered how to report my findings. The covariance between intercept and slope was significant in the overall analysis but not significant in either group of the multiple group analysis. Should I report the nonsignificant interaction against the background of the significant main effect from the overall analysis, or against the background of the nonsignificant main effect from the multiple group analysis? 


It is not only the size of the parameter estimate that determines significance but also the standard errors, which can be quite different. If you find it important to analyze groups, I would report what happens in each group. I would also test equality across groups in a multiple-group run. 
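One sketch of such a multiple-group equality test for an unstandardized growth factor covariance, using MODEL TEST with hypothetical variable and group names (shown for a single process; the same labeling works for cross-process covariances such as interceptA WITH slopeB):

  VARIABLE: GROUPING = gender (1 = male 2 = female);
  MODEL:    i s | y1@0 y2@1 y3@2 y4@3;
  MODEL male:
            i WITH s (cov1);
  MODEL female:
            i WITH s (cov2);
  MODEL TEST:
            0 = cov1 - cov2;     ! Wald test of equal covariances across groups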

EvavdW posted on Monday, February 10, 2014 - 7:31 am



Hi, I am running a two-level five-group latent growth model with a within-level grouping variable, in which I would like to examine group differences in the predictive value of two variables for the intercept and slope. I tried:

  USEV ARE Plus1 Plus2 Plus3 Class LionPc MonkeyPc;
  CLUSTER = Class;
  CLASSES = g(5);
  knownclass = g(grade = 4 5 6 7 8);
  MISSING ARE ALL (999);
  ANALYSIS:
    TYPE = TWOLEVEL mixture;
    ESTIMATOR = ML; !Bayes;
    processors = 3;
    Model = NOCOV;
  MODEL:
    %WITHIN%
    %overall%
    iw sw | Plus1@0 Plus2@1 Plus3@2;
    iw ON LionPc MonkeyPc;
    sw ON LionPc MonkeyPc;
    LionPc WITH MonkeyPc;
    %BETWEEN%
    %overall%
    ib sb | Plus1@0 Plus2@1 Plus3@2;
    sb@0;

The model estimation terminated normally. In the results, however, the regression weights (iw ON LionPc MonkeyPc and sw ON LionPc MonkeyPc) and the covariance (LionPc WITH MonkeyPc) are estimated at exactly the same value in all groups. Is this a default setting? If yes, how can I make sure they are freely estimated? Is there a command for this? Kind regards, Eva 


They are held equal as the default. To free them, mention them in the class-specific part of the MODEL command, for example:

  MODEL:
    %WITHIN%
    %OVERALL%
    y ON x;
    %c#1%
    y ON x;
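Applied to the model above, freeing the within-level regressions and the covariate covariance across the known classes might look like the following sketch (only the first two classes are written out; %g#3% through %g#5% would follow the same pattern):

  MODEL:
    %WITHIN%
    %OVERALL%
    iw sw | Plus1@0 Plus2@1 Plus3@2;
    iw ON LionPc MonkeyPc;
    sw ON LionPc MonkeyPc;
    LionPc WITH MonkeyPc;
    %g#1%
    iw ON LionPc MonkeyPc;
    sw ON LionPc MonkeyPc;
    LionPc WITH MonkeyPc;
    %g#2%
    iw ON LionPc MonkeyPc;
    sw ON LionPc MonkeyPc;
    LionPc WITH MonkeyPc;
    ! repeat for %g#3%, %g#4%, and %g#5%
    %BETWEEN%
    %OVERALL%
    ib sb | Plus1@0 Plus2@1 Plus3@2;
    sb@0;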

EvavdW posted on Tuesday, February 18, 2014 - 4:25 am



Thank you so much for your reply. In my multigroup model, the predictive value of two variables for the intercept and slope is now estimated for each group separately. In the next step, I would like to see whether differences between standardized estimates are significant between the two predictors (i.e., within groups and therefore dependent samples) as well as between groups (independent samples). Is it possible to do this using the Wald test? And if so, how would I have to program this, for example with y ON x and y ON z? I tried:

  %g#1%
  y ON x (c1);
  y ON z (c2);
  Model test: c1 = c2;

and I also tried:

  %g#1%
  y ON x (c1);
  %g#2%
  y ON x (d1);
  Model test: c1 = d1;

But then I get the following error:

  *** ERROR in MODEL CONSTRAINT command
  The following parameter label is ambiguous. Check that the
  corresponding parameter has not been changed. Parameter label: C1

I hope you can help me out. Kind regards, Eva 


Please send your output and license number to support@statmodel.com. 

EvavdW posted on Tuesday, February 18, 2014 - 1:45 pm



I ran the input file again so I could send you the output and somehow now it does work. I think the first time I accidentally put a parameter label in the overall model, which caused the error to appear. Thank you just as much! Kind regards, Eva 

EvavdW posted on Wednesday, February 26, 2014 - 4:54 am



If I understand correctly, the following command tests differences between UNstandardised estimates?

  %g#1%
  y ON x (c1);
  y ON z (c2);
  Model test: c1 = c2;

Is this right? And if so, is it also possible to adapt the command to test differences between standardised estimates? If it is not possible to test differences between standardised estimates, I guess it will be necessary to standardise both x and z using z-scores. However, since I am interested in multigroup analysis (using grade as the grouping factor), I think I should standardise around the grade mean (not the grand mean). Is it possible to do this in Mplus? Or should I do this in SPSS beforehand? Kind regards, Eva 


Yes. But MODEL TEST should be specified as: Model test: 0 = c1 - c2; You would need to define the standardized estimates in MODEL CONSTRAINT and test them in MODEL CONSTRAINT. 

EvavdW posted on Thursday, February 27, 2014 - 12:54 am



Dear Linda, Thank you for your reply. I am quite new to this, and I am not sure how to go about this. How would I define the standardized estimates and test them in MODEL CONSTRAINT? How would this look in my command?

  USEV ARE Plus1 Plus2 Plus3 Class y z;
  CLUSTER = Class;
  CLASSES = g(2);
  knownclass = g(grade = 4 5);
  MISSING ARE ALL (999);
  ANALYSIS:
    TYPE = COMPLEX MIXTURE;
    ESTIMATOR = ML; !Bayes;
    processors = 3;
    Model = NOCOV;
  MODEL:
    %overall%
    iw sw | Plus1@0 Plus2@1 Plus3@2;
    iw ON y;
    iw ON z;
    sw ON y;
    sw ON z;
    y WITH z;
    %g#1%
    iw sw;
    iw ON y (c1);
    iw ON z (d1);
    sw ON y (e1);
    sw ON z (f1);
    y WITH z (g1);
    %g#2%
    iw sw;
    iw ON y (c2);
    iw ON z (d2);
    sw ON y (e2);
    sw ON z (f2);
    y WITH z (g2);
  Model Test: 0 = c1 - c2;

Kind regards, Eva 


See Example 5.20 in the user's guide. 

EvavdW posted on Thursday, February 27, 2014 - 8:12 am



I looked at Example 5.20 and I am a bit confused: is this a way to tell Mplus to use standardized estimates for x ON y and x ON z in its comparison? Or is this a way to calculate standardized z-scores for y and z? Kind regards, Eva 


The example shows how to use MODEL CONSTRAINT to compute the standardized estimates. 
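In the spirit of that example, a rough sketch of how this might look for the model above: the standardized slope in each group is built as b*SD(covariate)/SD(outcome), with the model-implied variance of iw assembled from labeled parameters, and MODEL TEST then compares the two groups. All labels are illustrative, and note that labeling the variances and covariance of y and z treats them as model parameters.

  MODEL:
    %OVERALL%
    iw sw | Plus1@0 Plus2@1 Plus3@2;
    iw ON y z;
    y WITH z;
    %g#1%
    iw ON y (b1y);
    iw ON z (b1z);
    y (vy1);
    z (vz1);
    y WITH z (cyz1);
    iw (res1);                   ! residual variance of iw, class 1
    %g#2%
    iw ON y (b2y);
    iw ON z (b2z);
    y (vy2);
    z (vz2);
    y WITH z (cyz2);
    iw (res2);                   ! residual variance of iw, class 2
  MODEL CONSTRAINT:
    NEW (std1 std2);
    ! standardized slope of iw ON y in each class
    std1 = b1y*SQRT(vy1) /
           SQRT(b1y**2*vy1 + b1z**2*vz1 + 2*b1y*b1z*cyz1 + res1);
    std2 = b2y*SQRT(vy2) /
           SQRT(b2y**2*vy2 + b2z**2*vz2 + 2*b2y*b2z*cyz2 + res2);
  MODEL TEST:
    0 = std1 - std2;             ! Wald test of equal standardized slopes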
