I have a question regarding multigroup analysis with 3 groups when performing SEM. When comparing the chi-square and df between the unconstrained and constrained model, is there a way to see if there is a significant difference between each of the groups (i.e., group 1 vs. 2; group 2 vs. 3; group 1 vs. 3)? And if so where can I see it in the output? Or is it just possible to tell that there is an overall difference?
You would need to use just the data for groups 1 and 2, 1 and 3, and 2 and 3. You can do overall difference tests or test individual parameters. You can also use MODEL TEST.
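The arithmetic behind the overall difference test mentioned above can be sketched in a few lines of standard-library Python (this is an illustration of the calculation, not Mplus output; the function names are mine). With the MLR estimator used later in this thread, the ordinary difference in chi-square values is not chi-square distributed, so the Satorra-Bentler scaling correction is also shown.

```python
import math

def chi2_sf(x, df):
    """Upper-tail probability of a chi-square distribution, computed
    from the series expansion of the regularized lower incomplete
    gamma function. Adequate for the moderate values seen in model
    difference testing (standard library only, no scipy)."""
    a = df / 2.0
    x = x / 2.0
    if x <= 0:
        return 1.0
    term = math.exp(a * math.log(x) - x - math.lgamma(a + 1.0))
    total = term
    k = 1
    while term > 1e-16 * total:
        term *= x / (a + k)
        total += term
        k += 1
    return max(0.0, 1.0 - total)

def chi2_diff_test(t0, df0, t1, df1):
    """Ordinary chi-square difference test.
    t0, df0: constrained (nested) model; t1, df1: unconstrained model."""
    diff, ddf = t0 - t1, df0 - df1
    return diff, ddf, chi2_sf(diff, ddf)

def sb_scaled_diff(t0, df0, c0, t1, df1, c1):
    """Satorra-Bentler scaled difference test for MLR chi-squares.
    c0, c1 are the scaling correction factors printed with each model;
    model 0 is the constrained (nested) model."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # difference-test scaling
    trd = (t0 * c0 - t1 * c1) / cd             # scaled difference statistic
    ddf = df0 - df1
    return trd, ddf, chi2_sf(trd, ddf)
```

With equal scaling factors the scaled test reduces to the ordinary one; with unequal factors the two can disagree, which is why the correction matters for MLR.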
ellen posted on Sunday, September 09, 2012 - 11:27 pm
Hi Dr. Muthen,
I am trying to compare parameter estimates across 3 groups. I heard that parameter differences can be tested by MODEL TEST in Mplus, but I am not sure how to write the Mplus language for MODEL TEST. Could you tell me what I should write in the input file for performing MODEL TEST to examine whether parameters are equal across groups? My model is:
GROUPING = race (1=African 2=Asian 3=Hispanic) ;
ANALYSIS: ESTIMATOR = MLR ;
Rm By R1 R2 R3 ; Ot By O1 O2 O3 ; Sg By S1 S2 S3 ; De BY D1 D2 D3 ;
Sg ON Rm ; Sg ON Ot ; Sg ON De ; Rm WITH Ot ; Rm WITH De; Ot WITH De;
MODEL African:
MODEL Asian: [R1 - D3] ;
MODEL Hispanic: [R1 - D3] ;
The Multigroup Chi-square difference test was significant. However, it only tells me there is overall difference; it does not tell me whether certain parameters are invariant while others are non-invariant. Rather than doing overall difference tests or constraining each parameter one at a time, how do I write the MODEL TEST language to test for specific parameter differences? (For instance, if I want to see whether the parameters of "Sg ON Rm" and "Rm with De" are equivalent across groups?)
You can label the two slopes you want to compare and create a difference parameter in MODEL CONSTRAINT using the labels. Or you can use the labels directly in MODEL TEST.
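For a single equality constraint, MODEL TEST carries out a Wald test, and its arithmetic can be sketched in a few lines (a standard-library illustration under that assumption, not Mplus output; the function name and the numbers are mine):

```python
import math

def wald_equality_test(b1, b2, v1, v2, cov12=0.0):
    """Wald chi-square test (1 df) of H0: b1 = b2, given two estimates,
    their sampling variances, and their sampling covariance. This is
    the calculation behind a one-constraint MODEL TEST."""
    var_diff = v1 + v2 - 2.0 * cov12       # Var(b1 - b2)
    w = (b1 - b2) ** 2 / var_diff          # Wald statistic
    # chi-square(1) upper tail equals erfc(sqrt(w / 2))
    p = math.erfc(math.sqrt(w / 2.0))
    return w, p

# e.g. slope 0.50 (SE 0.10) in one group vs 0.30 (SE 0.10) in another
w, p = wald_equality_test(0.50, 0.30, 0.10 ** 2, 0.10 ** 2)
```

In Mplus the covariance between the two labeled slopes is handled internally; here it must be supplied (it is zero across independent groups).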
Daniel Lee posted on Wednesday, January 18, 2017 - 5:19 pm
Makes perfect sense. Thank you!
Lily Assaad posted on Tuesday, November 28, 2017 - 12:44 pm
I am testing measurement invariance across race (4 groups) within an ESEM framework. My model has 5 latent factors and 25 indicators. I achieved scalar invariance, so I wanted to compare the means of my 4 groups on each of the 5 factors. I did so by changing my reference group multiple times so as to get all the possible two-way tests. However, I got different results depending on which group was the reference group. For example, when Asians were the reference group and Americans (as well as Blacks and Hispanics) were in the model, the mean estimate for factor 1 was significant for Americans. However, when Americans were the reference group and Asians (along with Blacks and Hispanics) were in the model, the mean estimate for factor 1 (between Asians and Americans) was no longer significant. Thus, I have 2 questions: 1) Do you know why this is happening? 2) Is there a way to run all the possible two-way tests between the means across all races?
1. It shouldn't happen. Send your example to firstname.lastname@example.org. As you change the reference group, make sure the log-likelihood value stays the same; if it is not the same, the models are not comparable in that way.
2. You can use model constraint to form the differences between any two parameters. See User's Guide example 9.1 for how you can use model constraint. You can also run just one group with dummy covariates for each race (it is not exactly the same model but worth looking into).
Lily Assaad posted on Tuesday, November 28, 2017 - 5:29 pm
From the files you sent I can see that when you changed the reference group from A to W factor 4 and 5 switched places, so keep this in mind when you are comparing the means.
Also, here is what happens when you change the reference group. Suppose A is the reference group, and in group W the factor mean is M and the factor variance is V. If you switch A and W so that W is the reference group, the factor mean in A will be M/sqrt(V) and the factor variance will be 1/V.
In one case you are testing M=0 and in the other you are testing M/sqrt(V)=0. Both tests are logically equivalent and they will always yield the same conclusion asymptotically, however, they can have different p-values for finite sample size (this happens with maximum-likelihood estimation - it doesn't happen with Bayes). In most cases though the conclusion about significance doesn't change.
You can verify this yourself using code along these lines
model: f1-f2 by y1-y6 (*1);
model g2:
  f1-f2 (v1-v2);
  [f1-f2] (m1-m2);
model constraint:
  new(a1-a2);
  a1 = m1/sqrt(v1);
  a2 = m2/sqrt(v2);
You will be able to see that the significance of m1 and m2 is different from that of a1 and a2 which is the same as the one with reversed reference group.
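The finite-sample discrepancy between testing M = 0 and testing M/sqrt(V) = 0 can also be seen with a quick delta-method calculation outside Mplus. A standard-library sketch, with all estimates and standard errors below hypothetical:

```python
import math

def z_for_mean(m, var_m):
    """z statistic for H0: M = 0."""
    return m / math.sqrt(var_m)

def z_for_scaled_mean(m, v, var_m, var_v, cov_mv=0.0):
    """Delta-method z statistic for H0: M / sqrt(V) = 0.
    The gradient of g(M, V) = M * V**-0.5 is
    (dg/dM, dg/dV) = (V**-0.5, -0.5 * M * V**-1.5)."""
    g = m / math.sqrt(v)
    dm = v ** -0.5
    dv = -0.5 * m * v ** -1.5
    var_g = dm * dm * var_m + dv * dv * var_v + 2.0 * dm * dv * cov_mv
    return g / math.sqrt(var_g)

# Hypothetical estimates: factor mean 0.40 (SE 0.20),
# factor variance 1.5 (SE 0.60), covariance of the two estimates 0.03
z1 = z_for_mean(0.40, 0.20 ** 2)
z2 = z_for_scaled_mean(0.40, 1.5, 0.20 ** 2, 0.60 ** 2, cov_mv=0.03)
```

Here z1 and z2 differ slightly, so one can cross a significance threshold the other does not, exactly as described above; asymptotically the two tests agree.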
Hello, I am conducting a longitudinal multigroup analysis and I would like to compare my two groups, males and females, on the intercept and slopes, as well as on the associations between a time-varying predictor and the outcome variable. I have therefore done the following:
model constraint:
  new(ifm);  ifm = if - im;     ! difference in intercept
  new(s1fm); s1fm = s1f - s1m;  ! difference in slope 1
  new(s2fm); s2fm = s2f - s2m;  ! difference in slope 2
  new(d1fm); d1fm = d1f - d1m;  ! difference in assoc between divorce1 and dep1
  new(d2fm); d2fm = d2f - d2m;  ! difference in assoc between divorce2 and dep2
  new(d3fm); d3fm = d3f - d3m;  ! difference in assoc between divorce3 and dep3
I'm running a theory-based mediation model with fully aggregated variables. I want to compare this model across 3 groups, but I'm not sure of the best way to do it. Basically, I want to see whether the same theoretical model can be applied to (or fits the data well in) 3 different groups. The GROUPING option doesn't really answer my question because it doesn't tell me whether the model fits the data in each group. Should I fit the model in each group separately?