You mention factor loadings, so I assume that x, y, and z are latent variables. If so, you should first establish measurement invariance before testing structural parameters. How to do this is discussed at the end of the multiple group discussion in Chapter 13 of the Mplus User's Guide, which is available on the website. This is also shown in the Day 1 handout, along with how to test for structural parameter differences.
tommy lake posted on Tuesday, June 12, 2007 - 11:54 am
Sorry I did not make it clear enough. Let's assume X and Y are latent variables, and Z is not.
I have established measurement invariance by constraining all parameters to be equal across the two groups. Using chi-square difference tests, I found that the model fits best when the constraints on the coefficients and factor loadings are relaxed.
I understand this means the coefficients and factor loadings, as a WHOLE, are significantly different across groups. My question is: based on such information, can I say one INDIVIDUAL coefficient, from Y to X, is also significantly different across the two groups? Can I say one individual indirect effect, from Z to X, is significantly different across the two groups?
Measurement invariance is established by looking at measurement parameters -- intercepts and factor loadings in most cases. If these are not the same for both groups, then you do not have measurement invariance. Only after establishing measurement invariance, would one compare structural parameters -- means, variances, covariances, and regression coefficients of the factors. One would not constrain both measurement and structural parameters equal at the same time to test measurement invariance.
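A minimal two-group sketch of the two models being compared (the variable and group names are hypothetical; in Mplus multiple-group analysis, loadings and intercepts are held equal across groups by default):

```
! Model 1: measurement invariance (the multiple-group default)
VARIABLE:  NAMES = y1-y3 g;
           GROUPING = g (1 = female 2 = male);
MODEL:     f BY y1-y3;

! Model 2: loadings and intercepts free in the second group
MODEL male:
           f BY y2 y3;    ! y1's loading stays fixed at 1 to set the metric
           [y1-y3];       ! free the intercepts
           [f@0];         ! fix the factor mean at zero for identification
```

A chi-square difference test between these two runs then tests measurement invariance.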
tommy lake posted on Wednesday, June 13, 2007 - 1:32 am
Sorry for my confusion about the concepts. I re-read Chapter 13 of the Mplus User's Guide and tried several model tests, yet I still have problems.
My model is: f1 by y1 y2 y3; f2 by u4 u5 u6; f1 on f2 x1; f2 on x2;
It is estimated in two groups (female and male). My purpose is to compare the coefficient from x2 to f2 across groups.
As you suggested, I first test the measurement invariance of the two latent variables, f1 and f2. Then my question is:
1) Should I test the two factors as in the above model, or test them separately (without the ON statements)? I tried both but am not sure which is correct.
2) I found measurement non-invariance for f1 and f2. Does that mean I have no way to compare structural parameters? Is it possible to fix this problem?
3) In other questions I found you said: "Chi-square difference testing can be used to test the significance of any parameter. You just run a model where the parameter is held equal across groups and another model where the parameter is free across groups. " Can I use this method to test the coefficient from x2 to f2 across groups, even though there are measurement non-invariances?
1. I would do this without ON. 2. If you don't have measurement invariance, it means that the factor is not the same for both groups so it does not make sense to compare the structural parameters. 3. You can do this but it would be hard to justify its meaning.
tommy lake posted on Wednesday, June 13, 2007 - 1:10 pm
Thanks a lot! I am much clearer about the process now. I will reformulate the factors to see if I can get measurement invariance.
I have a follow-up question. In the above model, if we have measurement invariance, and if we know the coefficients from x2 to f2 and from f2 to f1 are significantly different across groups, can we say the indirect effect from x2 to f1 is also significantly different across groups? If not, how do we compare indirect effects across groups?
No. You would need to test the indirect effect using MODEL CONSTRAINT.
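For the model above, a sketch of what this might look like (the labels a1, b1, a2, b2 and the group name "male" are hypothetical, and the measurement part is omitted):

```
MODEL:       f2 ON x2 (a1);
             f1 ON f2 (b1);
MODEL male:  f2 ON x2 (a2);
             f1 ON f2 (b2);
MODEL CONSTRAINT:
             NEW (ind1 ind2 diff);
             ind1 = a1*b1;         ! indirect effect of x2 on f1, group 1
             ind2 = a2*b2;         ! indirect effect of x2 on f1, group 2
             diff = ind1 - ind2;   ! its z-test appears in the output
```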
sunyforever posted on Wednesday, June 20, 2007 - 4:03 pm
I have a question related to the above discussion. Suppose I have a model:
f1 by y1 y2 y3; f2 by u4 u5 u6;
f1 on f2 x1 x2; f2 on x1 x2;
We can see that x1 and x2 influence f1 both directly and indirectly through f2. My question is: can I compare the total effects of x1 and x2 on f1? Can I use the coefficients of the total effects to argue that one of x1 and x2 is more influential?
sunyforever posted on Wednesday, June 20, 2007 - 4:13 pm
In addition, I have obtained all the indirect effects, and thus the total effects, with MODEL INDIRECT. I just don't know how to compare them. Should I look at their p-values, structural coefficients, or standardized coefficients?
If the two x variables are on the same scale it would make sense to compare their total effects (without standardization). Model Indirect also gives standardized values so that total effects are expressed using unit x variance, making them comparable.
sunyforever posted on Thursday, June 21, 2007 - 11:43 am
Thanks for the quick answer. What if I estimate the above model in two groups (by gender), can I compare the total effects across groups?
Assume I have measurement invariance and the x variables are on the same scale. Should I compare their unstandardized coefficients or the standardized ones? Should I test whether they are significantly different or not?
I would urge across-group comparisons to be made using unstandardized coefficients. Different groups may have different covariate variances so that standardized values differ across groups even when the unstandardized do not. Unstandardized coefficients are more likely to be invariant. These are classic arguments in SEM.
If the two total effects are close in magnitude, do I need to test their difference? I know how to test the difference between direct effects (with a chi-square difference test), but I am not clear on how to handle indirect effects and total effects.
In the above model, how can I test whether the total effects of x1 and x2 on f1 are significantly different? Could you drop me several lines of commands as an example?
You can do this using Model Test, which is Wald chi-square testing.
In MODEL, you give labels to the five slopes involved, e.g.
y on m (p1); m on x1 (p2); m on x2 (p3); y on x1 (p4); y on x2 (p5);
In Model test you use
total1=p1*p2+p4; total2=p1*p3+p5; total1=total2;
sunyforever posted on Wednesday, June 27, 2007 - 10:27 am
Thank you for the detailed instruction. I tried to run this model test in a two-group SEM, but always got the error message: "Unknown group name TEST specified in group-specific MODEL command." How can I resolve this problem?
Also, can I use this method to test the difference of total effects across groups?
Hi Linda and Bengt: I have tested for and achieved partial invariance in my measurement model using CFA for the latent variables alone. I have added my covariates and wish to test for structural invariance (factor means, variances, covariances, and regression coefficients). I have found very little in the literature about the best way to do this. I have queried SEMNET and reviewed their archives without success. Can you refer me to any references as to how best to move forward? Are there recommendations similar to what is in the UG related to measurement invariance in regards to order? Can I perform the analysis using chi-square difference testing? Thank you. Sue ALSO: when I connect to www.statmodel.com I am getting a message from Norton AV indicating that a virus was blocked, specifically trojan.asprox. This has been happening for several days.
We discuss this extensively in our "Topic 1" and "Topic 2" of the 8-part Mplus Short Course series. For a web video of Topics 1 and 2, see our home page under New Mplus Web Videos. This also provides handouts. The home page also has links to information on all of our 8 topics - with Topic 3 and Topic 4 coming up next week at Johns Hopkins University.
Given the situation that X impacts Y directly and indirectly through a mediator M, I have a question about comparing total effects across groups (WLSMV estimation).
I use the following statements to compare 2 groups at a time in my 3 group scenario:
Group1: Y on X (a1); M on X (a2); Y on M (a3);
Group2: Y on X (b1); M on X (b2); Y on M (b3);
Group3: Y on X; M on X; Y on M;
MODEL TEST: 0 = (a1+a2*a3)-(b1+b2*b3);
! The following statements that Bengt gave on June 24, 2007 did not work for me:
!total1=a1+a2*a3;
!total2=b1+b2*b3;
!total1=total2;
! They always resulted in the error message: "A parameter label or the constant 0 must appear on the left-hand side of a MODEL TEST statement."
Is this approach correct? Secondly, is the Wald test applicable with WLSMV estimation, as I generally use the DIFFTEST option to compare models estimated with WLSMV?
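For reference, the DIFFTEST option mentioned above follows a two-step setup (the file name deriv.dat is hypothetical): run the less restrictive (H1) model first, saving its derivatives, then run the nested (H0) model against them:

```
! Run 1: the less restrictive (H1) model, saving its derivatives
SAVEDATA:  DIFFTEST IS deriv.dat;

! Run 2: the nested (H0) model with the equality constraints imposed
ANALYSIS:  ESTIMATOR = WLSMV;
           DIFFTEST IS deriv.dat;
```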
Dave posted on Tuesday, October 25, 2011 - 9:50 am
I have found a significant interaction in a two-step mediation model like X -> M1 -> M2 -> DV, with the interaction predicting M1. The interaction is between X and a second variable (IV1). Other points to note: X, M1, and M2 are latent variables, and DV is a 0/1 binary variable. I am trying to figure out how to interpret the interaction and would appreciate any guidance you can provide.
I have considered splitting the sample into groups (low/high) on the moderator and estimating the mediated model for each group. I tried this, and the indirect effect appears to be different across the low and high groups. Does it make sense to use the multi-group analysis approach to test for a significant difference in the indirect effects? Reading the earlier posts, I believe I could do this using constraints. Also, does the presence of the significant interaction change the need to show invariance across groups prior to using the multi-group approach to test differences in the indirect effect?
The model:
Usevariables are X1 X2 X3 X4 X5 M11 M12 M13 M21 M22 M23 M24 M25 IV1 DV Control;
Categorical is DV;
Missing is .;
ANALYSIS: TYPE IS Random;
MODEL:
int | F1 xwith IV1;
F1 By X1 X2 X3 X4 X5;
F2 By M11 M12 M13;
F5 By M21 M22 M23 M24 M25;
F2 on F1 IV1 int;
DV on F1 F5 Control;
F5 on F2;
I think you are dichotomizing your IV1 covariate to get a 2-group analysis. That's fine if you don't think you lose too much information. The 2-group analysis highlights the usual assumption that all other parameters, such as residual variances, are the same at these low and high values; the significant interaction captures only part of what may differ across them.
But why abandon the XWITH approach? Interpreting an interaction effect uses the same thinking as in regression (see e.g. the Aiken-West book).
Xu, Man posted on Monday, March 19, 2012 - 4:10 pm
Could I just follow up on this thread? I have a two-group SEM model too, with the measurement model constrained to be equal across groups. The structural paths are the key points of difference testing. I tried MODEL TEST and specified the two sets of structural coefficients (say 2*n) to be equal, but the output only gave an overall Wald test given the number of parameter constraints.
Is there a way to get the Wald test for each parameter, please? Or do I have to manually create n models to test each constraint individually?
I also tried setting the structural parameters to be equal across groups in the MODEL command and looked at the modification indices, but it was not very obvious to me which paths needed to be freed.
If modification indices don't guide you, you would need to do each test separately.
Xu, Man posted on Tuesday, March 20, 2012 - 2:07 am
Thank you, Linda. Or maybe I am not very good at using the modification indices. I will watch some relevant Mplus teaching films on measurement invariance (I believe it is the latter half of Topic 1) and see if I can clarify something.
Xu, Man posted on Tuesday, March 20, 2012 - 6:11 am
I have just tested each pair of the multiple-group structural parameters using MODEL TEST. I found that, although the previous overall Wald test of the structural paths was statistically significant, when the paths are tested one by one, none is significantly different across the two groups. I wonder if there is anything inconsistent between the two approaches, and which one is better. Thanks a lot for any thoughts and advice!
brianne posted on Tuesday, November 13, 2012 - 7:45 am
Am I correctly understanding the multiple groups analysis to say that if I keep all of the paths free and then use the Wald test to test whether the "a" path is the same, I can just interpret the moderator model below?
GROUPING IS CRISK (1=HIGH RISK 0=LOW RISK);
MODEL: EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 ; EMOR36 on INTR24 ; CDIST14 CESDBL INTR24;
MODEL HIGH RISK (dichotomized):
EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P1); EMOR36 on INTR24; CDIST14 CESDBL INTR24;
MODEL LOW RISK (dichotomized):
EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P2); EMOR36 on INTR24; CDIST14 CESDBL INTR24;
I have a set of two-wave longitudinal data, and I'd like to see whether the associations among several variables vary across time. Specifically, there are 6 paths I'd like to test.
It appeared to me that both Wald tests and chi-square difference tests would be applicable, but could you tell me what the difference is? I tried both approaches but got different results. According to the Wald tests, only 1 of the 6 paths was different from T1 to T2. But when I did chi-square difference tests, it looked like 4 of the 6 paths had changed.
Here's how I did the chi-square difference tests: I started by constraining all 6 paths to be equal across time and treated it as the baseline model. Then I freed one path at a time, compared each new model's chi-square value with the baseline model's, and checked whether the difference was larger than 3.84 (the critical value for df=1). For 4 of the 6 models, the chi-square difference was significant, which seemed to be inconsistent with what the Wald tests suggested.
Another question I had was: when I do Wald tests, should I start with testing all 6 paths at once and treat it as an omnibus test (and only move on to test specific paths once the overall test is significant)?
Wald tests and likelihood-ratio tests are expected to give similar results when they have the same df. One is not generally better than the other. It is unclear whether you did the Wald tests one at a time, as you did for the LR chi-square.
I would test all 6 paths at once.
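A sketch of the omnibus version, assuming the six T1 paths are labeled p1-p6 and the corresponding T2 paths q1-q6 (all labels hypothetical):

```
MODEL TEST:
  0 = p1 - q1;
  0 = p2 - q2;
  0 = p3 - q3;
  0 = p4 - q4;
  0 = p5 - q5;
  0 = p6 - q6;
```

This gives a single Wald chi-square test with 6 degrees of freedom.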
Jenny L. posted on Monday, May 13, 2013 - 11:29 pm
Thank you for your prompt reply, Prof. Muthen. Yes, I was doing the Wald tests one at a time, so the results should be similar to those of the LR chi-square tests. However, while 2 of the paths showed similar results in the two tests, the other 4 were inconsistent.
Here's the code I wrote:
[Baseline model]
model:
SCC_T1 on fdbck_T1 SR_T1 auth_T1 (1);
fdbck_T1 on bth_T1 dth_T1 pos_T1 auth_T1 (2-5);
SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1 (6);
SCC_T2 on fdbck_T2 SR_T2 auth_T2 (1);
fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2 (2-5);
SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2 (6);

[Model for comparison; the path of interest is fdbck on bth]
model:
SCC_T1 on fdbck_T1 SR_T1 auth_T1 (1);
fdbck_T1 on bth_T1 dth_T1 pos_T1 auth_T1 (2-4);
SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1 (6);
SCC_T2 on fdbck_T2 SR_T2 auth_T2 (1);
fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2 (2-4);
SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2 (6);
The chi-square difference between the two models was 5.382, whereas the Wald test value was 2.041. All 4 paths that showed inconsistencies between the two tests involved the variable "fdbck."
Could you kindly tell me where the code went wrong? Thank you for your advice.
The z-tests that you obtain in the results section of the output compare the regression coefficient to zero. The equality test compares the regression coefficients to each other. A coefficient may be significantly different from zero but not significantly different from another coefficient.
Picture the estimates on a number line:

0 ...... b1 ...... b3 .. b2

b3 and b2 may be different from zero but not from each other.