Comparison of coefficients across groups
 tommy lake posted on Monday, June 11, 2007 - 1:31 pm
Dear Linda,

If I estimate a two-group SEM:

X on Y
Y on Z

I estimate this model in three steps:
(1) constrain all parameters to be equal across groups
(2) relax constraints on structural coefficients
(3) relax constraints on coefficients and factor loadings.

Then I conduct a Chi-square difference test, finding that the three models are significantly different from each other, and (3) fits best.

My question is: if I report model (3), can I say the coefficient of X on Y is significantly different across the two groups?

And can I say the indirect effect from Z to X is significantly different across the two groups?

Thanks a lot!
 Linda K. Muthen posted on Tuesday, June 12, 2007 - 9:32 am
You mention factor loadings so I assume that x, y, and z are latent variables. If so, you should first establish measurement invariance before testing structural parameters. How to do this is discussed in Chapter 13 of the Mplus User's Guide available on the website at the end of the multiple group discussion. This is also shown in the Day 1 handout along with how to test for structural parameter differences.
 tommy lake posted on Tuesday, June 12, 2007 - 11:54 am
Dear Linda,

Sorry I did not make it clear enough. Let's assume X and Y are latent variables, and Z is not.

I have established measurement invariance by constraining all parameters equal across the two groups. After a chi-square difference test, I found that the model fits best when the constraints on the coefficients and factor loadings are relaxed.

I understand this means the coefficients and factor loadings, as a WHOLE, are significantly different across groups. My question is: based on such information, can I say one INDIVIDUAL coefficient, from Y to X, is also significantly different across the two groups? Can I say one individual indirect effect, from Z to X, is significantly different across the two groups?

Thanks
 Linda K. Muthen posted on Tuesday, June 12, 2007 - 2:08 pm
Measurement invariance is established by looking at measurement parameters -- intercepts and factor loadings in most cases. If these are not the same for both groups, then you do not have measurement invariance. Only after establishing measurement invariance, would one compare structural parameters -- means, variances, covariances, and regression coefficients of the factors. One would not constrain both measurement and structural parameters equal at the same time to test measurement invariance.
 tommy lake posted on Wednesday, June 13, 2007 - 1:32 am
Dear Linda,

Sorry for my confusion about the concepts. I re-read Chapter 13 of the Mplus User's Guide and tried several model tests. Yet I still have problems.

My model is:
f1 by y1 y2 y3;
f2 by u4 u5 u6;
f1 on f2 x1;
f2 on x2;

It is estimated in two groups (female and male). My purpose is to compare the coefficient from x2 to f2 across groups.

As you suggested, I first test the measurement invariance of the two latent variables, f1 and f2. Then my question is:

1) should I test the two factors as in the above model, or test them separately (without ON statements)? I tried both but am not sure which one is correct.

2) I found measurement non-invariance of f1 and f2. Does that mean I have no way to compare structural parameters? Is it possible to fix this problem?

3) In other questions I found you said: "Chi-square difference testing can be used to test the significance of any parameter. You just run a model where the parameter is held equal across groups and another model where the parameter is free across groups." Can I use this method to test the coefficient from x2 to f2 across groups, even though there is measurement non-invariance?

Thank you very much for your nice help!
 Linda K. Muthen posted on Wednesday, June 13, 2007 - 5:49 am
1. I would do this without ON.
2. If you don't have measurement invariance, it means that the factor is not the same for both groups so it does not make sense to compare the structural parameters.
3. You can do this but it would be hard to justify its meaning.
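
For illustration, a minimal sketch of how the invariance steps might be set up for the two factors alone (the group labels, and the assumption that gender is coded 1 = female, 2 = male, are placeholders; Mplus holds loadings and intercepts equal across groups by default):

VARIABLE:
  GROUPING = gender (1 = female 2 = male);

MODEL:                 ! default = scalar model: loadings and intercepts equal
  f1 BY y1 y2 y3;
  f2 BY u4 u5 u6;

MODEL male:            ! adding these lines relaxes the model to configural invariance
  f1 BY y2 y3;         ! loadings freed (the first loading stays fixed at 1)
  f2 BY u5 u6;
  [y1-y3 u4-u6];       ! intercepts freed
  [f1@0 f2@0];         ! factor means fixed at 0 here too (already 0 in the first group by default)

The metric (loadings-equal) model in between would keep only the intercept and factor-mean lines in MODEL male:, and the models can then be compared with chi-square difference tests.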
 tommy lake posted on Wednesday, June 13, 2007 - 1:10 pm
Dear Linda,

Thanks a lot! I am much clearer about the process now. I will reformulate the factors to see if I can get measurement invariance.

I have a follow-up question. In the above model, if we have measurement invariance, and if we know the coefficients from x2 to f2 and from f2 to f1 are significantly different across groups, can we say the indirect effect from x2 to f1 is also significantly different across groups? If not, how do we compare indirect effects across groups?
 Linda K. Muthen posted on Wednesday, June 13, 2007 - 2:00 pm
No. You would need to test the indirect effect using MODEL CONSTRAINT.
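
For the model above, a hedged sketch of that approach with two groups (the group label "male" and the parameter labels are placeholders):

MODEL:                  ! labels for the first group
  f1 BY y1 y2 y3;
  f2 BY u4 u5 u6;
  f1 ON x1;
  f1 ON f2 (b1);
  f2 ON x2 (a1);

MODEL male:             ! separate labels for the second group
  f1 ON f2 (b2);
  f2 ON x2 (a2);

MODEL CONSTRAINT:
  NEW(ind1 ind2 diff);
  ind1 = a1*b1;         ! indirect effect x2 -> f2 -> f1 in group 1
  ind2 = a2*b2;         ! indirect effect x2 -> f2 -> f1 in group 2
  diff = ind1 - ind2;   ! its z-test addresses the group difference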
 sunyforever posted on Wednesday, June 20, 2007 - 4:03 pm
Prof. Muthen,

I have a question related to the above discussion. Suppose I have a model:

f1 by y1 y2 y3;
f2 by u4 u5 u6;

f1 on f2 x1 x2;
f2 on x1 x2;

We can see x1 and x2 influence f1 both directly and indirectly through f2. My question is: can I compare the total effects of x1 and x2 on f1? Can I use the total-effect coefficients to argue that one of x1 and x2 is more influential than the other?

Thanks
 sunyforever posted on Wednesday, June 20, 2007 - 4:13 pm
In addition, I have obtained all the indirect effects and thus the total effects with "model indirect". I just don't know how to compare them. Should I look at their p-values, their structural coefficients, or their standardized coefficients?

Thanks
 Bengt O. Muthen posted on Wednesday, June 20, 2007 - 6:02 pm
If the two x variables are on the same scale it would make sense to compare their total effects (without standardization). Model Indirect also gives standardized values so that total effects are expressed using unit x variance, making them comparable.
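
For reference, a minimal sketch of requesting those effects for the model above; the STANDARDIZED output option adds the standardized versions mentioned here:

MODEL:
  f1 BY y1 y2 y3;
  f2 BY u4 u5 u6;
  f1 ON f2 x1 x2;
  f2 ON x1 x2;

MODEL INDIRECT:
  f1 IND x1;     ! total, total indirect, and specific indirect effects of x1 on f1
  f1 IND x2;

OUTPUT: STANDARDIZED;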
 sunyforever posted on Thursday, June 21, 2007 - 11:43 am
Prof. Muthen,

Thanks for the quick answer. What if I estimate the above model in two groups (by gender)? Can I compare the total effects across groups?

Assume I have measurement invariance and the x variables are on the same scale. Should I compare their unstandardized coefficients or the standardized ones? Should I test whether they are significantly different?
 Bengt O. Muthen posted on Thursday, June 21, 2007 - 8:32 pm
I would urge across-group comparisons to be made using unstandardized coefficients. Different groups may have different covariate variances so that standardized values differ across groups even when the unstandardized do not. Unstandardized coefficients are more likely to be invariant. These are classic arguments in SEM.
 sunyforever posted on Sunday, June 24, 2007 - 11:46 am
Prof. Muthen,

If the two total effects are close in magnitude, do I need to test their difference? I know how to test the difference between direct effects (with a chi-square difference test), but I am not clear on how to handle indirect effects and total effects.

In the above model, how can I test whether the total effects of x1 and x2 on f1 are significantly different? Could you drop me several lines of commands as an example?

thanks
 Bengt O. Muthen posted on Sunday, June 24, 2007 - 12:00 pm
You can do this using Model Test, which is Wald chi-square testing.

In Model, you give labels to the slopes involved, e.g.

y on m (p1);
m on x1 (p2);
m on x2 (p3);
y on x1 (p4);
y on x2 (p5);

In Model test you use

total1=p1*p2+p4;
total2=p1*p3+p5;
total1=total2;
 sunyforever posted on Wednesday, June 27, 2007 - 10:27 am
Prof. Muthen,

Thank you for the detailed instruction. I tried to run this model test in a two-group SEM, but always got the error message: "Unknown group name TEST specified in group-specific MODEL command." How can I resolve this problem?

Also, can I use this method to test the difference of total effects across groups?

Many thanks
 Linda K. Muthen posted on Wednesday, June 27, 2007 - 10:28 am
Please send your input, data, output, and license number to support@statmodel.com.
 Nathan Vandergrift posted on Friday, September 21, 2007 - 3:03 pm
I have a similar issue (wanting to compare parameters across groups). What I would like is to be able to test differences between parameters in much the same way as can be done in a linear model.

From the following:
...
G1
acadach ON hrs2_int*-.02(p1);
...
G2
acadach ON hrs2_int*-.02(p9);
...
G3
acadach ON hrs2_int*-.02(p17);
...
MODEL TEST:
P1 = P9; P9 = P17;

I get:
Wald Test of Parameter Constraints
Value 2.234
Degrees of Freedom 2
P-Value 0.3273

Is this comparable to a multi-df F-test from a linear model?
In order to test specific contrasts, do I have to run the model with JUST each contrast in the MODEL TEST statement?

Thanks.
 Linda K. Muthen posted on Friday, September 21, 2007 - 3:55 pm
No. Yes.
 Nathan Vandergrift posted on Friday, September 21, 2007 - 4:28 pm
MODEL TEST:
P1 = P9;

However, the above is equivalent to a 1df contrast, no?
 Linda K. Muthen posted on Friday, September 21, 2007 - 4:39 pm
Yes.
 Susan Seibold-Simpson posted on Monday, August 11, 2008 - 7:12 am
Hi Linda and Bengt:
I have tested for and achieved partial invariance in my measurement model using CFA for the latent variables alone. I have added my covariates and wish to test for structural invariance (factor means, variances, covariances, and regression coefficients). I have found very little in the literature about the best way to do this. I have queried SEMNET and reviewed their archives without success. Can you refer me to any references as to how best to move forward? Are there recommendations similar to what is in the UG related to measurement invariance in regards to order? Can I perform the analysis using chi-square difference testing? Thank you. Sue
ALSO: when I connect to www.statmodel.com I am getting a message from Norton AV indicating that a virus was blocked, specifically trojan.asprox. This has been happening for several days.
 Bengt O. Muthen posted on Wednesday, August 13, 2008 - 3:08 pm
We discuss this extensively in our "Topic 1" and "Topic 2" of the 8-part Mplus Short Course series. For a web video of Topics 1 and 2, see our home page under New Mplus Web Videos. This also provides handouts. The home page also has links to information on all of our 8 topics - with Topic 3 and Topic 4 coming up next week at Johns Hopkins University.
 Susan Seibold-Simpson posted on Thursday, August 14, 2008 - 8:47 am
Thank you Bengt. Sue
 Johannes Meier posted on Wednesday, November 18, 2009 - 10:17 am
Dear Mplus-Team,

Given the situation that X impacts Y directly and indirectly through a mediator M, I have a question on comparing the total effects across groups (WLSMV estimation).

I use the following statements to compare 2 groups at a time in my 3 group scenario:

Group1:
Y on X (a1);
M on X (a2);
Y on M (a3);

Group2:
Y on X (b1);
M on X (b2);
Y on M (b3);

Group3:
Y on X;
M on X;
Y on M;


MODEL TEST:
0 = (a1+a2*a3)-(b1+b2*b3);

! The following statements that Bengt gave on June 24, 2007 did not work for me
!total1=a1+a2*a3;
!total2=b1+b2*b3;
!total1=total2;
! This always resulted in the error message: "A parameter label or the constant 0 must appear on the left-hand side of a MODEL TEST statement."

Is this approach correct?
Secondly, is the Wald test applicable with WLSMV estimation, as I generally use the DIFFTEST option to compare models estimated with WLSMV?

Thanks in advance,
Majom
 Bengt O. Muthen posted on Wednesday, November 18, 2009 - 5:43 pm
Yes, this looks correct. Yes, the Wald test is applicable also with WLSMV estimation.
 Johannes Meier posted on Monday, November 23, 2009 - 1:19 am
Thanks a lot, Bengt.

As I have quite a few parameter constraints to test, I wonder if there is an option to include several MODEL TEST statements at once. For example:

MODEL TEST 1:
x_group1 = x_group2

MODEL TEST 2:
x_group1 = x_group3

MODEL TEST 3:
x_group2 = x_group3

Is there any workaround? Or do I have to test each effect in a separate model estimation (in my case this would be more than 100 model tests)?

Thanks in advance.
 Bengt O. Muthen posted on Monday, November 23, 2009 - 2:34 pm
There is no such option.
 Dave posted on Tuesday, October 25, 2011 - 9:50 am
I have found a significant interaction in a two-step mediation model like X -> M1 -> M2 -> DV with the interaction predicting M1. The interaction is between X and a second variable (IV1). Other points to note: X, M1, and M2 are latent variables; DV is a 0/1 binary variable. I am trying to figure out how to interpret the interaction and I would appreciate any guidance you can provide.

I have considered splitting the sample into groups (low/high) on the moderator and estimating the mediated model for each group. I tried this and the indirect effect appears to be different across the low and high groups. Does it make sense to use the multi-group analysis approach to test for a significant difference in the indirect effects? Reading the earlier posts, I believe I could do this using constraints. Also, does the presence of the significant interaction change the need to show invariance across groups prior to using the multi-group approach to test differences in the indirect effect?

The model:
Usevariable are
X1 X2 X3 X4 X5
M11 M12 M13 M21 M22 M23 M24 M25
IV1
DV
Control;
Categorical is DV;
Missing is .;
ANALYSIS:
TYPE IS Random;
MODEL:
int | F1 xwith IV1;
F1 By X1 X2 X3 X4 X5;
F2 By M11 M12 M13;
F5 By M21 M22 M23 M24 M25;
F2 on F1 IV1 int;
DV on F1 F5 Control;
F5 on F2;
 Bengt O. Muthen posted on Tuesday, October 25, 2011 - 10:40 am
I think you are dichotomizing your IV1 covariate to get a 2-group analysis. That's fine if you don't think you lose too much information. The 2-group analysis highlights the usual assumption of all parameters being the same at these low and high values. The significant interaction is only part of what's assumed invariant, such as residual variances.

But why abandon the XWITH approach? Interpreting an interaction effect uses the same thinking as in regression (see e.g. the Aiken-West book).
 Xu, Man posted on Monday, March 19, 2012 - 4:10 pm
Could I just follow up on this thread? I have a two-group SEM model too, with the measurement model constrained to be equal across groups. The structural paths are the key focus of the difference testing. I tried MODEL TEST and specified the two sets of structural coefficients (say 2*n) to be equal, but I found the output only gave an overall Wald test for the full set of parameter constraints.

Is there a way to get the Wald test for each parameter, please? Or do I have to manually create n models to test each constraint individually?

I also tried setting the structural parameters to be equal across groups in the model part and looked at modification indices, but it was not very obvious to me which paths needed to be freed.

Thanks!
 Linda K. Muthen posted on Monday, March 19, 2012 - 6:51 pm
If modification indices don't guide you, you would need to do each test separately.
 Xu, Man posted on Tuesday, March 20, 2012 - 2:07 am
Thank you, Linda. Or maybe I am not very good at using the modification indices. I will watch some relevant Mplus teaching films on measurement invariance (I believe it is the latter half of Topic 1) and see if I can clarify something.
 Xu, Man posted on Tuesday, March 20, 2012 - 6:11 am
Dear Linda,

I have just tested each pair of the multiple-group structural parameters using MODEL TEST. I found that, although the earlier overall Wald test of the structural paths is statistically significant, when the paths are tested one by one, none is significantly different across the two groups. I wonder if there is anything inconsistent between the two approaches and which one is better. Thanks a lot for any thoughts and advice!

The model uses the MLR estimator.

Kate
 Linda K. Muthen posted on Tuesday, March 20, 2012 - 9:22 am
The overall test can have more power than the individual tests. This is not inconsistent.
 Xu, Man posted on Tuesday, March 20, 2012 - 10:07 am
Thanks! In this case, would it be safer to base model interpretation on the model with all paths freely estimated across groups?
 Linda K. Muthen posted on Tuesday, March 20, 2012 - 6:47 pm
I would report the model with the parameters free and report that an overall test of all parameters equal was rejected.
 Yijie Wang posted on Monday, July 09, 2012 - 7:35 am
Hello,

I plan to compare a path coefficient among four groups and want to know the overall difference among the four groups. But I ran into an error message with the following syntax:

y on x (a1);

model g2:
y on x (a2);

model g3:
y on x (a3);

model g4:
y on x (a4);

model test:
0 = a1-a2;
0 = a1-a3;
0 = a1-a4;
0 = a2-a3;
0 = a2-a4;
0 = a3-a4;

error message:
WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX.

Could you help me find out the problems in the syntax? Thanks a lot!

Yijie
 Bengt O. Muthen posted on Monday, July 09, 2012 - 8:46 pm
Please send your output to Support.
 brianne posted on Tuesday, November 13, 2012 - 7:45 am
Am I correctly understanding the multiple-group analysis: if I keep all of the paths free and then use the Wald test to test whether the "a" path is the same, can I just interpret the moderator model below?


GROUPING IS CRISK (1=HIGH RISK 0=LOW RISK);

MODEL:
EMOR36 on CDIST14
PROGRAM
CESDBL;
INTR24 on CDIST14 ;
EMOR36 on INTR24 ;
CDIST14 CESDBL INTR24;

MODEL HIGH RISK (dichotomized):

EMOR36 on CDIST14 PROGRAM CESDBL;
INTR24 on CDIST14 (P1);
EMOR36 on INTR24;
CDIST14 CESDBL INTR24;

MODEL LOW RISK (dichotomized):

EMOR36 on CDIST14 PROGRAM CESDBL;
INTR24 on CDIST14 (P2);
EMOR36 on INTR24;
CDIST14 CESDBL INTR24;

MODEL TEST:
0 = (P1-P2);

ANALYSIS:
ESTIMATOR IS MLR;

OUTPUT: SAMPSTAT MODINDICES(ALL) STANDARDIZED;
 Bengt O. Muthen posted on Tuesday, November 13, 2012 - 1:25 pm
Yes.

MODEL TEST tells you if the moderation is significant.
 Jenny L.  posted on Monday, May 13, 2013 - 8:12 pm
Dear Professors,

I have a set of two-wave longitudinal data, and I'd like to see whether the associations among several variables vary across time. Specifically, there are 6 paths I'd like to test.

It appeared to me that both Wald tests and chi-square difference tests would be applicable, but could you tell me what the difference is? I tried both approaches but got different results. According to the Wald tests, only 1 of the 6 paths was different from T1 to T2. But when I did chi-square difference tests, it looked like 4 of the 6 paths had changed.

Here's how I did the chi-square difference tests: I started by constraining all 6 paths to be equal across time and treated that as the baseline model. Then I freed one path at a time, compared each new model's chi-square value with the baseline model's, and checked whether the difference was larger than 3.84 (the critical value for df=1). For 4 of the 6 models, the chi-square difference was significant, which seemed to be inconsistent with what the Wald tests suggested.

Another question I had was: when I do Wald tests, should I start with testing all 6 paths at once and treat it as an omnibus test (and only move on to test specific paths once the overall test is significant)?

Thank you in advance for your help.
 Bengt O. Muthen posted on Monday, May 13, 2013 - 8:28 pm
Wald tests and likelihood-ratio tests are expected to give similar results when they have the same df. One is not generally better than the other. It is unclear if you did the Wald test one at a time like you did for the LR chi-2.

I would test all 6 paths at once.
 Jenny L.  posted on Monday, May 13, 2013 - 11:29 pm
Thank you for your prompt reply, Prof. Muthen. Yes I was doing Wald tests one at a time, so the results should be similar to those of LR chi-2. However, while 2 of the paths showed similar results in the two tests, the other 4 were inconsistent.

Here's the code I wrote:

[Baseline model]
model:
SCC_T1 on fdbck_T1 SR_T1
auth_T1(1);
fdbck_T1 on bth_T1 dth_T1 pos_T1 auth_T1(2-5);
SR_T1 on bth_T1 dth_T1 fdbck_T1
int_T1(6);

SCC_T2 on fdbck_T2 SR_T2
auth_T2(1);
fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2(2-5);
SR_T2 on bth_T2 dth_T2 fdbck_T2
int_T2(6);

[model for comparison: The path of interest is fdbck on bth]:
model:
SCC_T1 on fdbck_T1 SR_T1
auth_T1(1);
fdbck_T1 on bth_t1
dth_T1 pos_T1 auth_T1(2-4);
SR_T1 on bth_T1 dth_T1 fdbck_T1
int_T1(6);

SCC_T2 on fdbck_T2 SR_T2
auth_T2(1);
fdbck_T2 on bth_T2
dth_T2 pos_T2 auth_T2(2-4);
SR_T2 on bth_T2 dth_T2 fdbck_T2
int_T2(6);

The chi-square difference between the two models was 5.382, whereas the Wald test value was 2.041. All 4 paths that showed inconsistencies between the two tests involved the variable "fdbck."

Could you kindly tell me where the code went wrong? Thank you for your advice.
 Linda K. Muthen posted on Tuesday, May 14, 2013 - 9:34 am
I don't see MODEL TEST. Send the relevant outputs and your license number to support@statmodel.com.
 Danyel A.Vargas posted on Tuesday, March 04, 2014 - 9:41 am
Hello,

I want to test whether the means are significantly different from one another between classes and have asked for a wald test. However, I'm receiving an error. Can you please help me?

Thanks so much.

Danyel

MODEL ESTIMATION TERMINATED NORMALLY

WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX.
 Linda K. Muthen posted on Tuesday, March 04, 2014 - 10:09 am
Please send the output and your license number to support@statmodel.com.
 Chie Kotake posted on Thursday, May 08, 2014 - 2:52 pm
Hi,

I have a question regarding the covariance coefficients. I have a multi-group model, and I want to test whether the covariance between the two factors (f1 and f2) is equal across groups.

When I have the covariance freely estimated (at the strong factorial invariance step), group 2 showed a statistically significant covariance between f1 and f2, while groups 1 and 3 didn't.

BUT, when I constrain the covariance between f1 and f2 to be equal across groups, the chi-square difference test comes out non-significant, meaning there is no group difference in the covariance between f1 and f2.

Can this happen, where I see a difference in coefficient significance across groups when the coefficients are freely estimated, but no group difference when they are constrained equal across groups?

Thank you for your help!
 Linda K. Muthen posted on Friday, May 09, 2014 - 8:53 am
The z-tests that you obtain in the results section of the output compare the regression coefficient to zero. The equality test compares the regression coefficients to each other. A coefficient may be significantly different from zero but not significantly different from another coefficient.

0 b1 b3 b2

b3 and b2 may be different from zero but not from each other.
 Ari J Elliot posted on Wednesday, January 14, 2015 - 5:11 pm
Hello Drs. Muthen,

I am conducting multigroup analyses in which I would like to test differences in path coefficients between two groups. I have established that factor loadings are invariant across groups, but intercepts are not.

In the model I have set up, factor loadings and intercepts are constrained by default, and I then use the MODEL TEST command to compare specific parameters.

Is it appropriate to compare path coefficients obtained using a multigroup model in which intercepts are constrained to be equal when they are in fact different? Instead, could one compare path coefficients when only (invariant) factor loadings are constrained? Whenever I try to free the intercepts the model is no longer identified.

Wang's book on MPLUS states that "equality restrictions have to be imposed on item intercepts in order to make the mean structure part of the model identifiable." This seems to imply that intercepts need to be constrained equal for the model to be identified. However, another SEM program (AMOS) appears able to provide estimates as well as parameter comparisons with only loadings constrained (as well as fully unconstrained when identified).

To summarize, when intercepts are not invariant, does it make more sense to compare path coefficients with intercepts constrained equal or not, and is the latter possible in general and in MPLUS?

Thank you for your help!
 Bengt O. Muthen posted on Wednesday, January 14, 2015 - 5:31 pm
Path coefficients can be compared with only loading (metric/weak) invariance. For factor mean comparisons the additional intercept invariance is needed (scalar/strong invariance).

You can free the intercepts when you fix the factor means to zero in all groups. Although the distortions may not be large, I wouldn't hold intercepts invariant if that is rejected by testing.

The configural metric and scalar invariance models can be automatically set up by using the Model = configural metric scalar option of the Analysis command.

You can also investigate partial invariance wrt the intercepts.
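
A hedged sketch of the second point for a two-group case (factor, indicator, and group names are placeholders): the intercepts are freed in the second group while the factor means are fixed at zero in all groups.

MODEL:
  f1 BY y1-y4;
  [f1@0];          ! factor mean fixed at zero in every group

MODEL g2:
  [y1-y4];         ! intercepts freed in the second group only

With Mplus 7.1 or later, the configural, metric, and scalar models can also be requested in a single run with ANALYSIS: MODEL = CONFIGURAL METRIC SCALAR;, which prints the three solutions and their difference tests.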
 Ari J Elliot posted on Wednesday, January 14, 2015 - 6:03 pm
Ok I see, the model was identified with the factor means fixed to zero. Thank you so much Dr. Muthen for your quick reply.
 Simon Schus posted on Wednesday, May 20, 2015 - 6:08 am
Hi there,

I note that MODEL TEST cannot be combined with bootstrap.

Are you able to advise what one could do if I wanted to test some direct and indirect effects across multiple groups, whilst retaining the bootstrapping procedure?

Simon
 Linda K. Muthen posted on Wednesday, May 20, 2015 - 6:14 am
You could try to use MODEL CONSTRAINT to specify the indirect effects.
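
A hedged sketch of that suggestion for a simple two-group mediation (the variable names x, m, y, the group label g2, and the number of bootstrap draws are placeholders):

ANALYSIS:
  BOOTSTRAP = 5000;

MODEL:
  m ON x (a1);
  y ON m (b1);
  y ON x;

MODEL g2:
  m ON x (a2);
  y ON m (b2);
  y ON x;

MODEL CONSTRAINT:
  NEW(ind1 ind2 diff);
  ind1 = a1*b1;
  ind2 = a2*b2;
  diff = ind1 - ind2;

OUTPUT:
  CINTERVAL(BOOTSTRAP);   ! bootstrap confidence intervals for the NEW parameters, including diff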
 Jinxin ZHU posted on Friday, July 03, 2015 - 9:54 am
Dear Prof. Muthen,

I found a DIF item in my analysis and decided to keep it. To examine the effect of keeping the DIF item, I want to compare the results of the path analyses with and without the DIF item. Both analyses used a two-step approach employing plausible values and a Rasch model.

1. Would you please suggest whether there is any method I can use to test the differences between the two sets of path coefficients from the path analyses with and without the DIF item?


2. I have used the Wald test for coefficient comparison in a multiple-group analysis in another study before. However, this time the analyses with and without the DIF item are two separate analyses, so the scales for the two analyses are different. Is the Wald test still applicable in this case?

3. Do you think it is appropriate to treat the analyses with and without the DIF item as a two-group comparison in one analysis? (Still, my concern is that the analyses with and without the DIF item are actually based on two different data sets.)

Thank you so much.
 ehrbc1 posted on Tuesday, March 01, 2016 - 3:50 am
Hello,

I am trying to compare the total effect of pride on other-focused wellbeing with the total effect of pride on self-focused wellbeing. Two mediators are involved. I have included my syntax below. I am wondering why I am getting completely different answers depending on whether I use a summing technique for wellbeing (i.e., adding all the individual items together) versus an average score. The model's standardised/unstandardised estimates are coming up exactly the same. I also find the exact same problem when I use the MODEL TEST/Wald analysis.

Thanks for your help,
Elizabeth

MODEL:

COMMUNAL on PRIDE (cp);
COMMUNAL on COMP (cc);
AGENTIC on PRIDE (ap);
AGENTIC on COMP;
OTHERWB on PRIDE (op);
OTHERWB on COMP;
OTHERWB on COMMUNAL (oc);
OTHERWB on AGENTIC (oa);
SELFWB on PRIDE (sp);
SELFWB on COMP;
SELFWB on COMMUNAL (sc);
SELFWB on AGENTIC (sa);


MODEL CONSTRAINT:

NEW (TOTAL1 TOTAL2 DIFFERENCE);
TOTAL1= ap*sa+cp*sc+sp;
TOTAL2= ap*oa+cp*oc+op;
DIFFERENCE = TOTAL1-TOTAL2;
 Bengt O. Muthen posted on Tuesday, March 01, 2016 - 6:00 pm
The sum and the average are on different scales so you should not expect the same estimates. Only the standardized solutions are comparable, so effect sizes are the same.
 ehrbc1 posted on Tuesday, March 01, 2016 - 7:34 pm
Thanks for your response. So is there a way to run the model constraint option above on the standardised solutions so that the same results are produced irrespective of whether scales are averaged or summed?
 Bengt O. Muthen posted on Wednesday, March 02, 2016 - 3:03 pm
No, but you can standardize the effects in Model Constraint by doing the right multiplying and dividing by SDs.

But why not settle on one or the other - sum or average. It won't matter in the interpretations.
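
For a single path, the rescaling described above might look like the following sketch (a simple y-on-x regression with placeholder names; labeling the variance of x brings it into the model as a parameter):

MODEL:
  y ON x (b);
  x (vx);          ! variance of x
  y (vyres);       ! residual variance of y

MODEL CONSTRAINT:
  NEW(stdb);
  stdb = b*SQRT(vx)/SQRT(b*b*vx + vyres);   ! b * SD(x) / SD(y), using the model-implied variance of y

The same idea extends to indirect and total effects: multiply by the SD of the predictor and divide by the model-implied SD of the final outcome.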
 ehrbc1 posted on Wednesday, March 02, 2016 - 7:24 pm
Hello,

Yes I plan to just use one or the other - I stumbled across this problem when I was switching one study over to summed scores for consistency purposes.

However, it does appear to be affecting the interpretation of the MODEL CONSTRAINT difference scores (total effects or mediation pathways). Not in all cases, but in some it does. Using the following syntax, if I use summed scores for wellbeing (WB), 0 lies within the CI (but it doesn't when I use the average).

COMMUNAL on PRIDE (cp);
COMMUNAL on COMP (cc);
AGENTIC on PRIDE (ap);
AGENTIC on COMP;
OTHERWB on PRIDE (op);
OTHERWB on COMP;
OTHERWB on COMMUNAL (oc);
OTHERWB on AGENTIC (oa);
SELFWB on PRIDE (sp);
SELFWB on COMP;
SELFWB on COMMUNAL (sc);
SELFWB on AGENTIC (sa);


MODEL CONSTRAINT:

NEW (MED1 MED2 DIFFERENCE);
MED1= cc*oc;
MED2= cc*sc;
DIFFERENCE = MED2-MED1;

Is this to be expected given that the difference score is based on unstandardised estimates, or am I making some type of error? The correlation between summed and average scores is perfect, however. I do note that the IV is average-scored and the outcome is summed...

Thank you,
Elizabeth
 Bengt O. Muthen posted on Thursday, March 03, 2016 - 6:43 pm
Sounds like some sort of error.

You can also request bootstrap CIs and see if you get different results.
 ehrbc1 posted on Thursday, March 03, 2016 - 9:31 pm
Thank you! My interpretations are now consistent across summed and average.

I would now like to compare the difference in total effects of pride to SELFWB and pride to OTHERWB, preferably using the model constraint.

So:
MODEL CONSTRAINT:
NEW (TOTAL1 TOTAL2 DIFFERENCE);
TOTAL1=sp+ap*sa+cp*sc;
TOTAL2=op+ap*oa+cp*oc;
DIFFERENCE = TOTAL1-TOTAL2;

I'm unsure of what ANALYSIS and OUTPUT commands to state for these total effect comparisons.

Thanks.
 Bengt O. Muthen posted on Friday, March 04, 2016 - 10:30 am
None unless you want bootstrapping. But I am not sure I understand your question.
 ehrbc1 posted on Saturday, March 05, 2016 - 7:53 pm
Hi Bengt,

Unfortunately, I am still experiencing some issues depending on whether I sum or average variables.

For example, using the constraint below, the unstandardised estimate divided by the standard error remains pretty much the same (a couple of decimals' difference) for each indirect effect, regardless of summing or averaging. It's the new difference score testing the difference between the indirect effects that has a different significance level and est./S.E. ratio when I switch from average to summed scores.

MODEL CONSTRAINT:

NEW (MED1 MED2 DIFFERENCE);
MED1= cc*oc;
MED2= cc*sc;
DIFFERENCE = MED2-MED1;

I should note that the outcome variables in the mediation pathways above (the "o" and "s" variables) have different numbers of individual items (36 versus 9), although both are measured on a 1-7 Likert scale.

Do you have any insight into what could be producing this discrepancy between the summed and averaged scales?

Thank you.
 Linda K. Muthen posted on Sunday, March 06, 2016 - 6:45 am
Please send the outputs and your license number to support@statmodel.com.
 Pia H. posted on Wednesday, April 13, 2016 - 2:26 am
Hello,

I think this has been written about before, but just to make sure I got this right:

I have a SEM with two groups and two latent factors with categorical indicators, one of which is regressed on the other. To find out if the regression coefficient is significantly different between the two groups, do I use one model where the regression between the factors is free and another model where it is equal across groups, and compare the model fit using DIFFTEST? I'm not sure, but I thought I read that it is not possible to constrain an ON statement.

Thank you very much,
Pia
 Linda K. Muthen posted on Wednesday, April 13, 2016 - 6:59 am
ON statements can be constrained.
 anonymous Z posted on Monday, July 25, 2016 - 10:29 am
Dear Drs. Muthen,

I am doing a multiple group comparison (two groups) for the structural paths. Below is a simplified model.

X -> M1 -> M2 -> Y
Model indirect:
Y ind x;

The results showed that in group 1, x was NOT significantly associated with M1, but in group 2 it was. A chi-square test showed that this path was significantly different between the two groups. The rest of the paths were significant but did not differ across groups.

The bootstrapping results showed that the 95% CI of the indirect effect excluded zero for group 2. In this case, can I conclude that the mediating effect held in group 2, while no mediating effect was found in group 1? Or do I still have to test the difference between the indirect effects? I don't think I need to test the difference, because in group 1, x was NOT significantly associated with M1, so the indirect effect was not significant anyway. However, I want to check with you.

Also, as for reporting results, I read your conversation with Xu, Man on March 20, 2012; it seems that you suggest reporting the model with parameters free along with the path difference test. I wondered if that applies to my case too. My results look somewhat different when I constrain all the equal paths.

Thanks so much,
Jing
 Bengt O. Muthen posted on Monday, July 25, 2016 - 3:57 pm
These choices are more or less up to personal taste in how to present results. The indirect effect difference might be of interest to test. I would try SEMNET for these general analysis/presentation strategy choices so you get many opinions.
 anonymous Z posted on Thursday, July 28, 2016 - 9:29 am
Dear Drs. Muthen,

I am stuck on an Mplus syntax question. I am trying to constrain the equal paths for model parsimony while also creating indirect effects with MODEL CONSTRAINT in order to compare the indirect effect difference. But it seems that I cannot do both at the same time, because I cannot put both (1) and (a1)/(a2) after the same path. How should I resolve this problem?

Group 1
ippa on care(1)(a1);
aggre on ippa(b1);

Group 2
ippa on care(1)(a1);
aggre on ippa(b1);

MODEL CONSTRAINT:

new (a1b1 a2b2 diff);
a1b1=a1*b1;
a2b2=a2*b2;
diff=a1b1-a2b2;

Thanks so much!
 Linda K. Muthen posted on Thursday, July 28, 2016 - 10:05 am
You cannot both hold the parameters equal and get different indirect effects for each group. If you hold them equal, the indirect effects are by definition equal.
 anonymous Z posted on Thursday, July 28, 2016 - 10:17 am
Dr. Muthen,

Thanks so much. I have two paths here, one of them is equivalent, but the other is not. So the indirect effects should be different.

So do you mean that if I want to use MODEL CONSTRAINT, I need to keep the paths freely estimated?

Thanks!
 Linda K. Muthen posted on Thursday, July 28, 2016 - 12:01 pm
If you have them equal, you will get the same indirect effect for both groups. If you have them unequal, you will get a different indirect effect for each group. This is your choice.
 anonymous Z posted on Thursday, July 28, 2016 - 12:50 pm
Dr. Muthen,

I apologize that I got the syntax wrong.

Group 1
ippa on care(1)(a1);
aggre on ippa(b1);

Group 2
ippa on care(1)(a2);
aggre on ippa(b2);

MODEL CONSTRAINT:

new (a1b1 a2b2 diff);
a1b1=a1*b1;
a2b2=a2*b2;
diff=a1b1-a2b2;

So the path care -> ippa is equal across groups; however, the path ippa -> aggre is not. So even with care -> ippa constrained, the indirect effects will still be different.

So my question is not about whether I get the same indirect effect for both groups, but whether I can put "(1)" and "(a1)" after "ippa on care" so that I can equate the path and at the same time create the indirect effects.

Thanks so much!
 Linda K. Muthen posted on Thursday, July 28, 2016 - 3:07 pm
No. When you put a1 after two coefficients, they are held equal. You can use a label or a number for an equality.
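
In other words, a sketch of the model above with the care -> ippa path equated by giving it the same label in both groups (the group heading g2 is a placeholder):

MODEL:                  ! first group
  ippa ON care (a);     ! same label in both groups holds this path equal
  aggre ON ippa (b1);

MODEL g2:               ! second group
  ippa ON care (a);
  aggre ON ippa (b2);

MODEL CONSTRAINT:
  NEW(a1b1 a2b2 diff);
  a1b1 = a*b1;
  a2b2 = a*b2;
  diff = a1b1 - a2b2;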
 ehrbc1 posted on Sunday, January 01, 2017 - 8:06 pm
Hello Mplus,

I believe potential suppression may be explaining a significant negative link in my model.

ORIGINAL MODEL:
M1 on X1;
M2 on X1;
Y1 on X1;
Y1 on M1;    ! Beta = -0.13, p = .05
Y1 on M2;
Y2 on X1;
Y2 on M1;
Y2 on M2;

REDUCED MODEL (where the suppressor variable M2 is removed):
M1 on X1;
Y1 on X1;
Y1 on M1;    ! Beta = -0.02, ns.
Y2 on X1;
Y2 on M1;

Is there a way in Mplus to compare the two regression coefficients from the different models that use the same sample (i.e., to test whether the suppression effect is significant)? Alternatively, would you suggest the z-score equation: z = (b1 - b2) / sqrt(SE(b1)^2 + SE(b2)^2)?

Thank you.
 Bengt O. Muthen posted on Monday, January 02, 2017 - 2:23 pm
I don't know how that can be done. The Z-score doesn't take into account the dependence caused by using the same sample. Maybe ask on SEMNET.
 ehrbc1 posted on Saturday, January 07, 2017 - 8:21 pm
Hi Bengt,

Does the fact that the models are nested make this possible in Mplus? For example, could I use DIFFTEST or MODEL CONSTRAINT? I.e., in the reduced model specify M2 on X1@0, Y1 on M2@0, and Y2 on M2@0, and then test whether Y1 on M1 differs between the original and reduced models?

Thank you.
 Daniel Lee posted on Friday, February 24, 2017 - 12:44 pm
Hi Dr. Muthen, if I ran a path analysis (1 mediator) and found that the indirect effect was significant for females but not males, but then also realized that there wasn't a significant difference (I used MODEL CONSTRAINT to test the difference), how do I interpret that finding?

Does that mean sex doesn't condition the mediation model, even though the indirect effect is significant for one group but not the other?

Thank you as always!
 Bengt O. Muthen posted on Friday, February 24, 2017 - 5:06 pm
This is a perfectly possible outcome that is not strange. Say that the indirect effect estimates are

0.1 for males (ns)

0.2 for females (sig)

Then the difference of 0.1 need not be significant.

But you may want to discuss these types of general matters with folks on SEMNET.
 Filipa Alexandra da Costa Rico Cala posted on Thursday, May 18, 2017 - 3:41 pm
Dear Linda/Bengt,

For my PhD research, I am running a cross-cultural study, and I am using multi-group analysis to check the invariance of structural parameters between two different countries. My dependent variable is dichotomous, so I am using the WLSMV estimator. After starting to run my models, I have a few questions that I would like to ask you: could you please let me know whether the Wald test will be calculated automatically when we include the MODEL TEST command? In addition, could you please let me know whether the following syntax is correct for checking the structural invariance of a regression path across my two groups:

Model
dsm ON sk gamexp ilusctrl
predctrl inagamb
inbias zanga
VincPos Vincpos2;

Group 1:
dsm ON sk gamexp ilusctrl
predctrl inagamb
inbias zanga
VincPos Vincpos2 (a1);

Group 2:
dsm ON sk gamexp ilusctrl
predctrl inagamb
inbias zanga
VincPos Vincpos2 (a2);

Model test:
0 = a2 - a1;

Finally, I also have a mediation in my model. Could you please tell me if I need to use separate runs to examine the invariance of each path? For instance, do I need one run for checking the invariance of the regression effect I mentioned above, another for checking the invariance of the total effect, and another for checking the invariance of the indirect effect?

Many thanks for your help
 Bengt O. Muthen posted on Thursday, May 18, 2017 - 4:30 pm
You cannot say for instance:

VincPos Vincpos2 (a1);

but instead have to put the labeled parameter on its own row:

VincPos
Vincpos2 (a1);


Yes, you have to do different runs for these different tests.
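
So, hedging on the rest of the setup, the group-specific part might look like the sketch below, with the labeled predictor on its own line and the Wald test requested afterwards (the group names must match the labels defined in the GROUPING option):

MODEL group1:
  dsm ON sk gamexp ilusctrl predctrl inagamb inbias zanga VincPos
         Vincpos2 (a1);      ! the label applies only to the slope on this line

MODEL group2:
  dsm ON sk gamexp ilusctrl predctrl inagamb inbias zanga VincPos
         Vincpos2 (a2);

MODEL TEST:
  0 = a2 - a1;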
 Filipa Alexandra da Costa Rico Cala posted on Friday, May 19, 2017 - 4:56 am
Dear Bengt,

Thank you very much for your response and for your help. Could you please tell me if the Wald test will automatically be calculated if we write the model test command?
Once again, many thanks
 Bengt O. Muthen posted on Friday, May 19, 2017 - 11:14 am
Yes.
 Jesper Ingvardson posted on Wednesday, August 23, 2017 - 10:54 am
Dear Bengt and Linda,

I'm doing an analysis of transit satisfaction with respondents from different cities, hence a multiple-group analysis. However, in my model I have a set of binary variables indicating which mode the respondent most often uses, e.g., metro. Not all cities have all modes, so for some cities only a subset of these binary variables is included. How can I run such an analysis in Mplus?

The overall model structure is:

OBS1 ON LV1-LV7
OBS2 ON LV1-LV7
OBS3 ON OBS1-OBS2 MetroUser TramUser BusUser

I did the analysis on each city first, and observed different coefficients for some of the LVs (LV1-LV7). The main goal is therefore to estimate a model where these are not the same for all cities, and to include all relevant binary variables for each city.
 Bengt O. Muthen posted on Wednesday, August 23, 2017 - 4:32 pm
If the coefficients for

MetroUser TramUser BusUser

are the same across cities (for categories that exist) in the regressions

OBS3 ON OBS1-OBS2 MetroUser TramUser BusUser

then I think you can handle the problem by bringing these 3 variables into the model (mention their means, for instance) and using Knownclass to handle the multiple groups.
 Jesper Ingvardson posted on Wednesday, August 23, 2017 - 10:41 pm
Dear Bengt,

Thank you for the prompt reply.

The coefficients for

MetroUser TramUser BusUser

also vary across cities, especially tram which is positive for some and negative for others.

When I try having them in the general MODEL statement, I get the following error (due to Metro not being present in GVA group).

*** ERROR
One or more variables have a variance of zero.
Check your data and format statement.
...
Group GVA
...
**METROU 6965 0.000

Is there another way of modelling this?

Thanks again.
 Bengt O. Muthen posted on Thursday, August 24, 2017 - 6:47 pm
I can't think of an easy way to handle this. But see our FAQ:

Different number of variables in different groups
 A.K.E. Holl posted on Tuesday, August 29, 2017 - 4:30 am
I want to compare two paths in my multi-group model, and have a question regarding the result of my Wald Test.
When I test two paths in my multi-group model (girls and boys) to be equal, I get the value ********** for the Wald test, and .0000 for the p-value.

So the test is significant, and the two paths differ significantly between girls and boys, but why is there no value for the Wald test?
Or does it mean the test is not valid in this situation?

I received values for other Wald tests in the same model, so I am not quite sure how to interpret this one.

I would appreciate, if you could help me.

Thank you!
 Linda K. Muthen posted on Tuesday, August 29, 2017 - 6:45 am
Are you using MODEL TEST? If so, you should not set the paths of boys and girls equal. You should give the parameters different labels, for example, p1 and p2. The Wald test is then:

MODEL TEST:
0 = p1 - p2;
 A.K.E. Holl posted on Tuesday, August 29, 2017 - 6:54 am
Thank you for your reply. Sorry for not writing clearly. That is exactly what I did.
Here is the part from my syntax:


model girls:
t2TMCQ on t1LAggrel (59);

model boys:
t2TMCQ on t1LAggrel (j59);


model test:
0 = 59 - j59;
 Jesper Ingvardson posted on Wednesday, August 30, 2017 - 1:02 pm
I tried adding the statement VARIANCES=NOCHECK;
It did help with estimating some of my models. However, now I get this error:

*** FATAL ERROR
THE SAMPLE COVARIANCE MATRIX FOR THE INDEPENDENT VARIABLES IN THE MODEL CANNOT BE INVERTED. THIS CAN OCCUR IF A VARIABLE HAS NO VARIATION OR IF TWO VARIABLES ARE PERFECTLY CORRELATED. CHECK YOUR DATA.

I believe it is due to some binary variables not being present for all cities. Do you agree?
I also tried estimating the same model for each city separately (I removed binary variables if they were not present for that city). This works out well. Could you comment (or point me to relevant literature) on whether this approach would be appropriate?
 Bengt O. Muthen posted on Wednesday, August 30, 2017 - 4:00 pm
Q1: Right

Q2: Sure. But that doesn't solve the problem of your simultaneous analysis. You may also try SEMNET.
 Angela Sorgente posted on Saturday, November 25, 2017 - 3:52 am
Hi,
I have to run two SEPARATE Wald tests in the same model.

I tried using this input:

model test:
0=m1-m2;
0=m1-m3;

model test:
0=m4-m5;
0=m4-m6;

But as result I obtained only one test:

Wald Test of Parameter Constraints
Value 13.923
Degrees of Freedom 4
P-Value 0.0075


In other words, I would like to obtain two separate Wald tests (each with 2 df) and not just one with 4 df.

What is the right syntax to obtain what I need?

Thank you!
 Bengt O. Muthen posted on Sunday, November 26, 2017 - 4:33 pm
You have to run it twice, each Model Test separately.
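
That is, keep everything else identical and change only the MODEL TEST command between the two runs, e.g.:

! run 1
MODEL TEST:
  0 = m1 - m2;
  0 = m1 - m3;

! run 2 (a separate estimation)
MODEL TEST:
  0 = m4 - m5;
  0 = m4 - m6;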
 dummyvariable123 posted on Sunday, February 25, 2018 - 12:23 pm
Dear Dr. Muthen,

Can I hold two paths equal and simultaneously model the indirect effect? I tried this syntax (as well as putting the c and b labels after the equality constraints), but I get error messages.

%BETWEEN id%
Y;
s;
s WITH Y;

s ON X(2);
s ON Z(2);

Z ON X(a);
Y ON Z(c)(1);
Y ON X(b)(1);

MODEL CONSTRAINT:
new (direct, indirect, total);
indirect = a*b;
direct = c;
total = c+a*b;
 Bengt O. Muthen posted on Sunday, February 25, 2018 - 5:33 pm
The problem is where you give double labels:

Y ON Z(c)(1);
Y ON X(b)(1);

Instead, just say

Y ON Z(c);
Y ON X(c);
 dummyvariable123 posted on Monday, February 26, 2018 - 12:33 am
Dear Dr. Muthen,

To estimate the indirect effect with the MODEL CONSTRAINT command, I have to specify which path is (b) and which is (c); i.e., they cannot both be named the same.

At the same time, I would like to hold them equal. Is that possible?
 Bengt O. Muthen posted on Monday, February 26, 2018 - 11:14 am
Why couldn't they be named the same if they are constrained to have the same value?

An equivalent alternative is to keep calling them b and c and then use

Model Constraint:

0 = b - c;

That's the same thing - gives the same results.
 dummyvariable123 posted on Monday, February 26, 2018 - 1:18 pm
Dear Dr. Muthen,

I see your point. So in a case where I would also like to include the random slope in the indirect effect, the syntax could be:

%BETWEEN id%
Y;
S;
S WITH Y;

Z ON X(a);

S ON X(c);
S ON Z(c);

Y ON X(b);
Y ON Z(b);

MODEL CONSTRAINT:
new (direct, indirect, total);
indirect = a*c;
direct = c;
total = c+a*c;

Is this correct?
 Bengt O. Muthen posted on Monday, February 26, 2018 - 3:03 pm
Looks fine.
 Ellen Houben posted on Monday, June 18, 2018 - 8:04 am
Hi, I want to test whether pairs of paths differ in strength.

I did the following:

PEIT2 ON WRLFT1 (PIWF);
PEIT2 ON WRLIT1 (PIWI);
PEIT2 ON WRLNT1 (PIWN);

PEET2 ON WRLFT1 (PEWF);
PEET2 ON WRLIT1 (PEWI);
PEET2 ON WRLNT1 (PEWN);

Model constraint:

new (PIFI, PEFI, PIFN, PEFN, PIIN, PEIN);

PIFI= PIWF-PIWI;
PEFI= PEWF-PEWI;

PIFN= PIWF-PIWN;
PEFN= PEWF-PEWN;

PIIN= PIWI-PIWN;
PEIN= PEWI-PEWN;

Results show that:

Estimate S.E. Est./S.E. P-Value
PIFI 0.029 0.038 0.763 0.445
PEFI 0.040 0.037 1.087 0.277
PIFN 0.134 0.041 3.247 0.001
PEFN 0.040 0.036 1.113 0.266
PIIN 0.104 0.036 2.905 0.004
PEIN 0.000 0.036 -0.010 0.992

So you could state that, based on a 95% CI, some of these relationships significantly differ; for instance, for PIFN, PEIT2 ON WRLFT1 (PIWF) and PEIT2 ON WRLNT1 (PIWN) differ, as the difference does not equal zero.

But now I saw that the path coefficient of PEIT2 ON WRLNT1 (PIWN) is negative.

Can I still perform this difference test? And how do I interpret it?

Thank you very much!
 Bengt O. Muthen posted on Monday, June 18, 2018 - 9:53 am
Draw a horizontal line marking zero in the middle. Mark the negative value of PIWN and the positive value of PIWF. The distance between them is what is being evaluated for significance; it doesn't matter that one of the estimates is negative.
 Joy Thompson posted on Saturday, September 14, 2019 - 11:14 pm
Hi,

I conducted a multi-group analysis where the outcome is dichotomous. While some of the probit path coefficients are non-significant, the z-test comparison from the MODEL CONSTRAINT command indicates that the paths significantly differ from each other. I understand that the comparisons are different, and that it is possible for paths to differ significantly from zero but not from each other, but is it also possible for paths to differ from each other when one or both of them are not significantly different from zero? My inclination would be not to compare paths that are non-significant, but I'm not sure given that the tests are different. Any insights are appreciated.

Thanks!

Joy
 Bengt O. Muthen posted on Monday, September 16, 2019 - 4:08 pm
I can imagine that this would happen e.g. if one estimate is -0.5 say and the other +0.5. The distance between them is longer than either from zero.

If there are non-significant coefficients in the groups, it wouldn't be of much interest to test that they are the same.
 Joy Thompson posted on Friday, November 01, 2019 - 4:49 pm
I just realized that you responded, thanks so much! It seems that I may have used the MODEL CONSTRAINT command incorrectly, as it seems it should be used to define or constrain parameters. Am I correct that it is appropriate to use MODEL TEST to compare path coefficients across groups (i.e., path1_groupa = path1_groupb)? My understanding is that I'd need to run separate models with the MODEL TEST specified for each comparison of interest and get a corresponding Wald's test; that is, I can't compare all paths simultaneously. As you noted, it probably does not make sense to compare coefficients if they were non-significant for one or both groups of interest.
 Bengt O. Muthen posted on Friday, November 01, 2019 - 5:28 pm
With Model Test, you can include several tests - but they will be evaluated all together, not one at a time. For one at a time, you have to run it one at a time.