You mention factor loadings, so I assume that x, y, and z are latent variables. If so, you should first establish measurement invariance before testing structural parameters. How to do this is discussed in Chapter 13 of the Mplus User's Guide, available on the website, at the end of the multiple group discussion. It is also shown in the Day 1 handout, along with how to test for differences in structural parameters.
tommy lake posted on Tuesday, June 12, 2007 - 11:54 am
Sorry I did not make it clear enough. Let's assume X and Y are latent variable, and Z is not.
I have established measurement invariance by constraining all parameters to be equal across the two groups. Chi-square difference testing showed that the model fits best when the constraints on the coefficients and factor loadings are relaxed.
I understand this means the coefficients and factor loadings, as a WHOLE, are significantly different across groups. My question is: based on this information, can I say that one INDIVIDUAL coefficient, from Y to X, is also significantly different across the two groups? Can I say that one individual indirect effect, from Z to X, is significantly different across the two groups?
Measurement invariance is established by looking at measurement parameters -- intercepts and factor loadings in most cases. If these are not the same for both groups, then you do not have measurement invariance. Only after establishing measurement invariance, would one compare structural parameters -- means, variances, covariances, and regression coefficients of the factors. One would not constrain both measurement and structural parameters equal at the same time to test measurement invariance.
tommy lake posted on Wednesday, June 13, 2007 - 1:32 am
Sorry for my confusion about the concepts. I re-read Chapter 13 of the Mplus User's Guide and tried several model tests, yet I still ran into problems.
My model is: f1 by y1 y2 y3; f2 by u4 u5 u6; f1 on f2 x1; f2 on x2;
It is estimated in two groups (female and male). My purpose is to compare the coefficient from x2 to f2 across groups.
As you suggested, I first test the measurement invariance of the two latent variables, f1 and f2. Then my question is:
1) Should I test the two factors as in the above model, or test them separately (without the ON statements)? I tried both but am not sure which is correct.
2) I found measurement non-invariance for f1 and f2. Does that mean I have no way to compare structural parameters? Is it possible to fix this problem?
3) In other questions I found you said: "Chi-square difference testing can be used to test the significance of any parameter. You just run a model where the parameter is held equal across groups and another model where the parameter is free across groups. " Can I use this method to test the coefficient from x2 to f2 across groups, even though there are measurement non-invariances?
1. I would do this without ON. 2. If you don't have measurement invariance, it means that the factor is not the same for both groups so it does not make sense to compare the structural parameters. 3. You can do this but it would be hard to justify its meaning.
tommy lake posted on Wednesday, June 13, 2007 - 1:10 pm
Thanks a lot! I am much clearer about the process now. I will reformulate the factors to see if I can get measurement invariance.
I have a follow-up question. In the above model, if we have measurement invariance, and if we know the coefficients from x2 to f2 and from f2 to f1 are significantly different across groups, can we say the indirect effect from x2 to f1 is also significantly different across groups? If not, how do we compare indirect effects across groups?
No. You would need to test the indirect effect using MODEL CONSTRAINT.
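A minimal sketch of that MODEL CONSTRAINT approach for the model discussed above (the group names female/male and all labels here are illustrative, not from the original posts):

```
MODEL female:
  f2 ON x2 (a1);
  f1 ON f2 (b1);
MODEL male:
  f2 ON x2 (a2);
  f1 ON f2 (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2 diff);
  ind1 = a1*b1;        ! indirect effect x2 -> f2 -> f1, females
  ind2 = a2*b2;        ! same indirect effect, males
  diff = ind1 - ind2;  ! its z-test compares the groups
```

The output then reports an estimate, standard error, and z-test for diff.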
sunyforever posted on Wednesday, June 20, 2007 - 4:03 pm
I have a question related to the above discussion. Suppose I have a model:
f1 by y1 y2 y3; f2 by u4 u5 u6;
f1 on f2 x1 x2; f2 on x1 x2;
We can see that x1 and x2 influence f1 both directly and indirectly through f2. My question is: can I compare the total effects of x1 and x2 on f1? Can I use the coefficients of the total effects to argue that one of x1 and x2 is more influential?
sunyforever posted on Wednesday, June 20, 2007 - 4:13 pm
In addition, I have got all the indirect effects and thus total effects with "model indirect". I just don't know how to compare them. To see their p-values, or structural coefficients, or standardized coefficients?
If the two x variables are on the same scale it would make sense to compare their total effects (without standardization). Model Indirect also gives standardized values so that total effects are expressed using unit x variance, making them comparable.
sunyforever posted on Thursday, June 21, 2007 - 11:43 am
Thanks for the quick answer. What if I estimate the above model in two groups (by gender)? Can I compare the total effects across groups?
Assume I have measurement invariance and the x variables are on the same scale. Should I compare the unstandardized coefficients or the standardized ones? Should I test whether they are significantly different?
I would urge across-group comparisons to be made using unstandardized coefficients. Different groups may have different covariate variances so that standardized values differ across groups even when the unstandardized do not. Unstandardized coefficients are more likely to be invariant. These are classic arguments in SEM.
If the two total effects are close in size, do I need to test their difference? I know how to test the difference between direct effects (with a chi-square difference test), but I am not clear how to handle indirect effects and total effects.
In the above model, how can I test whether the total effects of x1 and x2 on f1 are significantly different? Could you drop me several lines of commands as an example?
You can do this using Model Test, which is Wald chi-square testing.
In Model, you give labels to the 3 slopes involved, e.g.
y on m (p1); m on x1 (p2); m on x2 (p3); y on x1 (p4); y on x2 (p5);
In Model test you use
total1=p1*p2+p4; total2=p1*p3+p5; total1=total2;
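As a later post in this thread notes, MODEL TEST requires a parameter label or the constant 0 on the left-hand side, so in practice the comparison is written as one statement (a sketch using the labels above):

```
MODEL TEST:
  0 = (p1*p2 + p4) - (p1*p3 + p5);  ! total effect of x1 minus total effect of x2
```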
sunyforever posted on Wednesday, June 27, 2007 - 10:27 am
Thank you for the detailed instruction. I tried to run this model test in a two-group SEM, but always got the error message: "Unknown group name TEST specified in group-specific MODEL command." How can I resolve this problem?
Also, can I use this method to test the difference of total effects across groups?
Hi Linda and Bengt: I have tested for and achieved partial invariance in my measurement model using CFA for the latent variables alone. I have added my covariates and wish to test for structural invariance (factor means, variances, covariances, and regression coefficients). I have found very little in the literature about the best way to do this. I have queried SEMNET and reviewed its archives without success. Can you refer me to any references on how best to move forward? Are there recommendations, similar to those in the UG for measurement invariance, regarding order? Can I perform the analysis using chi-square difference testing? Thank you. Sue ALSO: when I connect to www.statmodel.com I am getting a message from Norton AV indicating that a virus was blocked, specifically trojan.asprox. This has been happening for several days.
We discuss this extensively in our "Topic 1" and "Topic 2" of the 8-part Mplus Short Course series. For a web video of Topics 1 and 2, see our home page under New Mplus Web Videos. This also provides handouts. The home page also has links to information on all of our 8 topics - with Topic 3 and Topic 4 coming up next week at Johns Hopkins University.
Given the situation that X impacts Y directly and indirectly through a mediator M, I have a question on comparing the total effects across groups (WLSMV estimation).
I use the following statements to compare 2 groups at a time in my 3 group scenario:
Group1: Y on X (a1); M on X (a2); Y on M (a3);
Group2: Y on X (b1); M on X (b2); Y on M (b3);
Group3: Y on X; M on X; Y on M;
MODEL TEST: 0 = (a1+a2*a3)-(b1+b2*b3);
! The following statement that Bengt gave on June 24, 2007, did not work for me:
!total1=a1+a2*a3; !total2=b1+b2*b3; !total1=total2;
! It always resulted in the error message: "A parameter label or the constant 0 must appear on the left-hand side of a MODEL TEST statement."
Is this approach correct? Secondly, is the Wald test applicable with WLSMV estimation? I generally use the DIFFTEST option to compare models estimated with WLSMV.
Dave posted on Tuesday, October 25, 2011 - 9:50 am
I have found a significant interaction in a two-step mediation model of the form X -> M1 -> M2 -> DV, with the interaction predicting M1. The interaction is between X and a second variable (IV1). Other points to note: X, M1, and M2 are latent variables; DV is a 0/1 binary variable. I am trying to figure out how to interpret the interaction and would appreciate any guidance you can provide.
I have considered splitting the sample into groups (low/high) on the moderator and estimating the mediation model for each group. I tried this, and the indirect effect appears to be different across the low and high groups. Does it make sense to use the multi-group approach to test for a significant difference in the indirect effects? Reading the earlier posts, I believe I could do this using constraints. Also, does the presence of the significant interaction change the need to show invariance across groups prior to using the multi-group approach to test differences in the indirect effect?
The model:
USEVARIABLES ARE X1 X2 X3 X4 X5 M11 M12 M13 M21 M22 M23 M24 M25 IV1 DV Control;
CATEGORICAL IS DV;
MISSING IS .;
ANALYSIS: TYPE IS RANDOM;
MODEL:
int | F1 XWITH IV1;
F1 BY X1 X2 X3 X4 X5;
F2 BY M11 M12 M13;
F5 BY M21 M22 M23 M24 M25;
F2 ON F1 IV1 int;
DV ON F1 F5 Control;
F5 ON F2;
I think you are dichotomizing your IV1 covariate to get a 2-group analysis. That's fine if you don't think you lose too much information. Note, though, that the 2-group analysis highlights the usual assumption that all parameters are the same at these low and high values; the significant interaction captures only part of the group difference, while other parameters, such as residual variances, are still assumed invariant.
But why abandon the XWITH approach? Interpreting an interaction effect uses the same thinking as in regression (see e.g. the Aiken-West book).
Xu, Man posted on Monday, March 19, 2012 - 4:10 pm
Could I just follow up on this thread? I have a two-group SEM model too, with the measurement model constrained to be equal across groups. The structural paths are the key targets of difference testing. I tried MODEL TEST and specified the two sets of structural coefficients (say 2*n of them) to be equal, but the output only gave an overall Wald test for the full set of parameter constraints.
Is there a way to get the Wald test for each parameter, please? Or do I have to manually create n models to test each constraint individually?
I also tried setting the structural parameters equal across groups in the MODEL command and looked at modification indices, but it was not obvious to me which paths needed to be freed.
If modification indices don't guide you, you would need to do each test separately.
Xu, Man posted on Tuesday, March 20, 2012 - 2:07 am
Thank you, Linda. Or maybe I am just not very good at using the modification indices. I will watch the relevant Mplus teaching videos on measurement invariance (I believe it is the latter half of Topic 1) and see if I can clarify things.
Xu, Man posted on Tuesday, March 20, 2012 - 6:11 am
I have just tested each pair of the multiple-group structural parameters using MODEL TEST. I found that, although the previous overall Wald test of the structural paths is statistically significant, when the paths are tested one by one, none is significantly different across the two groups. I wonder whether the two approaches are inconsistent in some way and which one is better. Thanks a lot for any thoughts and advice!
brianne posted on Tuesday, November 13, 2012 - 7:45 am
Am I correctly understanding the multiple-group analysis to say that if I keep all of the paths free and then use the Wald test to test whether the "a" path is the same, I can just interpret the moderator model below?
GROUPING IS CRISK (1=HIGH RISK 0=LOW RISK);
MODEL: EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 ; EMOR36 on INTR24 ; CDIST14 CESDBL INTR24;
MODEL HIGH RISK (dichotomized):
EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P1); EMOR36 on INTR24; CDIST14 CESDBL INTR24;
MODEL LOW RISK (dichotomized):
EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P2); EMOR36 on INTR24; CDIST14 CESDBL INTR24;
I have a set of two-wave longitudinal data, and I'd like to see whether the associations among several variables vary across time. Specifically, there are 6 paths I'd like to test.
It appeared to me that both Wald tests and chi-square difference tests would be applicable, but could you tell me what the difference is? I tried both approaches but got different results. According to the Wald tests, only 1 of the 6 paths was different from T1 to T2, but the chi-square difference tests suggested that 4 of the 6 paths had changed.
Here's how I did the chi-square difference tests: I started by constraining all 6 paths to be equal across time and treated that as the baseline model. Then I freed one path at a time and checked whether each new model's chi-square value differed from the baseline model's by more than 3.84 (the critical value for df=1). For 4 of the 6 models the chi-square difference was significant, which seemed inconsistent with what the Wald tests suggested.
Another question I had was: when I do Wald tests, should I start with testing all 6 paths at once and treat it as an omnibus test (and only move on to test specific paths once the overall test is significant)?
Wald tests and likelihood-ratio tests are expected to give similar results when they have the same df. One is not generally better than the other. It is unclear if you did the Wald test one at a time like you did for the LR chi-2.
I would test all 6 paths at once.
Jenny L. posted on Monday, May 13, 2013 - 11:29 pm
Thank you for your prompt reply, Prof. Muthen. Yes I was doing Wald tests one at a time, so the results should be similar to those of LR chi-2. However, while 2 of the paths showed similar results in the two tests, the other 4 were inconsistent.
Here's the code I wrote:
[Baseline model] model: SCC_T1 on fdbck_T1 SR_T1 auth_T1(1); fdbck_T1 on bth_T1 dth_T1 pos_T1 auth_T1(2-5); SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1(6);
SCC_T2 on fdbck_T2 SR_T2 auth_T2(1); fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2(2-5); SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2(6);
[model for comparison: The path of interest is fdbck on bth]: model: SCC_T1 on fdbck_T1 SR_T1 auth_T1(1); fdbck_T1 on bth_t1 dth_T1 pos_T1 auth_T1(2-4); SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1(6);
SCC_T2 on fdbck_T2 SR_T2 auth_T2(1); fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2(2-4); SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2(6);
The chi-square difference between the two models was 5.382, whereas the Wald test value was 2.041. All 4 paths that showed inconsistencies between the two tests involved the variable "fdbck."
Could you kindly tell me where the code went wrong? Thank you for your advice.
The z-tests that you obtain in the results section of the output compare the regression coefficient to zero. The equality test compares the regression coefficients to each other. A coefficient may be significantly different from zero but not significantly different from another coefficient.
0 --------- b1 ------------------ b3 - b2
b3 and b2 may be different from zero but not from each other.
Ari J Elliot posted on Wednesday, January 14, 2015 - 5:11 pm
Hello Drs. Muthen,
I am conducting multigroup analyses in which I would like to test differences in path coefficients between two groups. I have established that factor loadings are invariant across groups, but intercepts are not.
In the model I have set up, factor loadings and intercepts are constrained by default, and I then use the MODEL TEST command to compare specific parameters.
Is it appropriate to compare path coefficients obtained using a multigroup model in which intercepts are constrained to be equal when they are in fact different? Instead, could one compare path coefficients when only (invariant) factor loadings are constrained? Whenever I try to free the intercepts the model is no longer identified.
Wang's book on MPLUS states that "equality restrictions have to be imposed on item intercepts in order to make the mean structure part of the model identifiable." This seems to imply that intercepts need to be constrained equal for the model to be identified. However, another SEM program (AMOS) appears able to provide estimates as well as parameter comparisons with only loadings constrained (as well as fully unconstrained when identified).
To summarize, when intercepts are not invariant, does it make more sense to compare path coefficients with intercepts constrained equal or not, and is the latter possible in general and in MPLUS?
You could try to use MODEL CONSTRAINT to specify the indirect effects.
Jinxin ZHU posted on Friday, July 03, 2015 - 9:54 am
Dear Prof. Muthen,
I found a DIF item in my analysis and decided to keep it. To examine the effect of keeping the DIF item, I want to compare the results of path analyses with and without the DIF item. Both analyses used a two-step approach employing plausible values and the Rasch model.
1. Could you please suggest whether there is any method I can use to test the differences between the two sets of path coefficients from the analyses with and without the DIF item?
2. I have used the Wald test for coefficient comparison in a multiple-group analysis in another study before. However, this time the analyses with and without the DIF item are two separate analyses, so the scales for the two analyses are different. Is the Wald test still applicable in this case?
3. Do you think it is appropriate to treat the analyses with and without the DIF item as a two-group comparison in one analysis? (What concerns me is that the two analyses are actually based on two different data sets.)
Thank you so much.
ehrbc1 posted on Tuesday, March 01, 2016 - 3:50 am
I am trying to compare the total effect of pride on other-focused wellbeing with the total effect of pride on self-focused wellbeing. Two mediators are involved. I have included my syntax below. I am wondering why I am getting completely different answers depending on whether I use a summing technique for wellbeing (i.e., adding all the individual items together) versus an average score. The standardised/unstandardised model estimates are coming up exactly the same. I also find the exact same problem when I use the MODEL TEST/Wald analysis.
Thanks for your help, Elizabeth
COMMUNAL on PRIDE (cp);
COMMUNAL on COMP (cc);
AGENTIC on PRIDE (ap);
AGENTIC on COMP;
OTHERWB on PRIDE (op);
OTHERWB on COMP;
OTHERWB on COMMUNAL (oc);
OTHERWB on AGENTIC (oa);
SELFWB on PRIDE (sp);
SELFWB on COMP;
SELFWB on COMMUNAL (sc);
SELFWB on AGENTIC (sa);
The sum and the average are on different scales so you should not expect the same estimates. Only the standardized solutions are comparable, so effect sizes are the same.
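The scale relation behind this reply is simple arithmetic. If the sum score is $S = k\bar{y}$ for $k$ items with average $\bar{y}$, then rescaling the outcome rescales the slope and its standard error by the same factor (notation here is illustrative, not from the posts):

```latex
\hat{\beta}_S = k\,\hat{\beta}_{\bar{y}}, \qquad
\mathrm{SE}\bigl(\hat{\beta}_S\bigr) = k\,\mathrm{SE}\bigl(\hat{\beta}_{\bar{y}}\bigr)
```

So the z-ratio $\hat{\beta}/\mathrm{SE}$, and hence the significance test of a single coefficient, is unchanged, even though the raw estimates differ.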
ehrbc1 posted on Tuesday, March 01, 2016 - 7:34 pm
Thanks for your response. So is there a way to run the model constraint option above on the standardised solutions so that the same results are produced irrespective of whether scales are averaged or summed?
No, but you can standardize the effects in Model Constraint by doing the right multiplying and dividing by SDs.
But why not settle on one or the other - sum or average. It won't matter in the interpretations.
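A sketch of that multiplying and dividing by SDs for a single path (one predictor only; the variable names are hypothetical, and with more predictors the model-implied variance of y would have additional terms):

```
MODEL:
  y ON x (b);
  x (vx);     ! variance of x
  y (vres);   ! residual variance of y
MODEL CONSTRAINT:
  NEW(bstd);
  bstd = b*SQRT(vx)/SQRT(vres + b**2*vx);  ! fully standardized slope
```

Because bstd is defined from model parameters, it is the same whether the raw variables are summed or averaged.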
ehrbc1 posted on Wednesday, March 02, 2016 - 7:24 pm
Yes I plan to just use one or the other - I stumbled across this problem when I was switching one study over to summed scores for consistency purposes.
However, it does appear to be affecting the interpretation of the MODEL CONSTRAINT difference scores (total effects or mediation pathways). Not in all cases, but in some it does. Using the following syntax, if I use summed scores for wellbeing (WB), 0 lies within the CI (but it doesn't when I use averages).
COMMUNAL on PRIDE (cp);
COMMUNAL on COMP (cc);
AGENTIC on PRIDE (ap);
AGENTIC on COMP;
OTHERWB on PRIDE (op);
OTHERWB on COMP;
OTHERWB on COMMUNAL (oc);
OTHERWB on AGENTIC (oa);
SELFWB on PRIDE (sp);
SELFWB on COMP;
SELFWB on COMMUNAL (sc);
SELFWB on AGENTIC (sa);
Is this to be expected given that the difference score is based on unstandardised estimates, or am I making some type of error? The correlation between summed and average scores is perfect, however. I do note that the IV is average-scored and the outcome is summed...
None unless you want bootstrapping. But I am not sure I understand your question.
ehrbc1 posted on Saturday, March 05, 2016 - 7:53 pm
Unfortunately, I am still experiencing some issues depending on whether I sum or average variables.
For example, using the constraint below, the unstandardised estimate divided by its standard error stays pretty much the same (a difference of a couple of decimals) for each indirect effect, regardless of summing or averaging. It's the new difference score testing the difference between indirect effects that changes its significance level and its unstandardised estimate divided by standard error when I switch from average to summed scores.
Pia H. posted on Wednesday, April 13, 2016 - 2:26 am
I think this has been written about before, but just to make sure I got this right:
I have a SEM with two groups and two latent factors with categorical indicators, one of which is regressed on the other. To find out whether the regression coefficients are significantly different between the two groups, do I use one model where the regression between the factors is free and another model where it is equal across groups, and compare the model fit using DIFFTEST? I am not sure; I thought I read somewhere that it is not possible to constrain an ON statement.
I am doing a multiple group comparison (two groups) for the structural paths. Below is a simplified model.
X -> M1 -> M2 -> Y
Model indirect: Y ind X;
The results showed that in group 1, x was NOT significantly associated with M1, but in group 2 it was. A chi-square test showed that this path was significantly different between the two groups. The rest of the paths were significant but did not differ across groups.
The bootstrapping results showed that the 95% CI of the indirect effects excluded zero for group 2. In this case, can I conclude that in group 2, mediating effects held, while the mediating effects were not found with group 1? Or do I still have to test the difference of indirect effects? I don’t think I need to test the difference because in group 1, x was NOT significantly associated with M1, so the indirect effects were not significant anyway. However, I want to check with you.
Also, as for reporting results, I read your conversation with Xu, Man on March 20, 2012; it seems that you suggest reporting the model with parameters free and reporting the path difference test. I wondered whether that applies to my case too. My results look somewhat different when I constrain all equal paths.
These choices are more or less up to personal taste in how to present results. The indirect effect difference might be of interest to test. I would try SEMNET for these general analysis/presentation strategy choices so you get many opinions.
anonymous Z posted on Thursday, July 28, 2016 - 9:29 am
Dear Drs. Muthen,
I am stuck on an Mplus syntax question. I am trying to constrain equal paths for model parsimony and, at the same time, create indirect effects with MODEL CONSTRAINT in order to compare the indirect-effect difference. But it seems that I cannot do both at the same time, because I cannot put both (1) and (a1)/(a2) after the same path. How should I resolve the problem?
Group 1 ippa on care(1)(a1); aggre on ippa(b1);
Group 2 ippa on care(1)(a1); aggre on ippa(b1);
MODEL CONSTRAINT: new (a1b1 a2b2 diff); a1b1=a1*b1; a2b2=a2*b2; diff=a1b1-a2b2;
No. When you put a1 after two coefficients, they are held equal. You can use a label or a number for an equality.
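To illustrate that reply: a shared label does double duty, imposing the across-group equality and naming the parameter for MODEL CONSTRAINT, so no separate (1) is needed. A sketch with the variable names from the post (group names g1/g2 are hypothetical):

```
MODEL g1:
  ippa ON care (a);    ! same label in both groups = equality constraint
  aggre ON ippa (b1);
MODEL g2:
  ippa ON care (a);
  aggre ON ippa (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2 diff);
  ind1 = a*b1;
  ind2 = a*b2;
  diff = ind1 - ind2;  ! reduces to a*(b1-b2) given the equality
```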
ehrbc1 posted on Sunday, January 01, 2017 - 8:06 pm
I believe potential suppression may be explaining a significant negative link in my model.
ORIGINAL MODEL:
M1 on X1;
M2 on X1;
Y1 on X1;
Y1 on M1;  ! Beta = -0.13, p = .05
Y1 on M2;
Y2 on X1;
Y2 on M1;
Y2 on M2;
REDUCED MODEL (suppressor variable M2 removed):
M1 on X1;
Y1 on X1;
Y1 on M1;  ! Beta = -0.02, ns
Y2 on X1;
Y2 on M1;
Is there a way in Mplus to compare the two regression coefficients from the different models, which use the same sample (i.e., to test whether the suppression effect is significant)? Alternatively, would you suggest the z-score equation: z = (b1 - b2) / sqrt(SE(b1)^2 + SE(b2)^2)?
I don't know how that can be done. The Z-score doesn't take into account the dependence caused by using the same sample. Maybe ask on SEMNET.
ehrbc1 posted on Saturday, January 07, 2017 - 8:21 pm
Does the fact that the models are nested make this possible in Mplus? For example, could I use DIFFTEST or MODEL CONSTRAINT? I.e., in the reduced model specify M2 on X1@0, Y1 on M2@0, and Y2 on M2@0, and then run a test of whether Y1 on M1 differs between the original and reduced models?
Daniel Lee posted on Friday, February 24, 2017 - 12:44 pm
Hi Dr. Muthen, if I ran a path analysis (1 mediator) and found that the indirect effect was significant for females but not males, but then also found that there wasn't a significant difference (I used MODEL CONSTRAINT to test differences), how do I interpret that finding?
Does that mean sex doesn't condition the mediation model...even though the indirect effect of mediation is significant for one group but not the other?
For my PhD research, I am running a cross-cultural study, and I am using multi-group analysis to check the invariance of structural parameters between two different countries. My dependent variable is dichotomous, so I am using the WLSMV estimator. After starting to run my models, I have a few questions: Could you please let me know whether the Wald test is calculated automatically when we insert the MODEL TEST command? In addition, could you please let me know whether the following syntax is correct for checking the structural invariance of a regression path in my two groups:
Model dsm ON sk gamexp ilusctrl predctrl inagamb inbias zanga VincPos Vincpos2;
Group 1: dsm ON sk gamexp ilusctrl predctrl inagamb inbias zanga VincPos Vincpos2 (a1);
Group 2: dsm ON sk gamexp ilusctrl predctrl inagamb inbias zanga VincPos Vincpos2 (a2);
Model test: 0 = a2 - a1;
Finally, I also have a mediation in my model. Could you please tell me whether I need separate commands to examine the invariance of each path? For instance, do I need one run for checking the invariance of the regression effect mentioned above, another for checking the invariance of the total effect, and another for checking the invariance of the indirect effect?
I'm doing an analysis of transit satisfaction with respondents from different cities, hence a multiple-group analysis. However, my model includes a set of binary variables indicating which mode the respondent most often uses, e.g. metro. Not all cities have all modes, so for some cities only a subset of these binary variables is included. How can I set up such an analysis in Mplus?
The overall model structure is:
OBS1 ON LV1-LV7;
OBS2 ON LV1-LV7;
OBS3 ON OBS1-OBS2 MetroUser TramUser BusUser;
I did the analysis on each city first, and observed different coefficients for some of the LV's (LV1-LV7). The main goal is therefore to estimate a model where these are not the same for all cities - and including all relevant binary variables for each city.
I can't think of an easy way to handle this. But see our FAQ:
Different number of variables in different groups
A.K.E. Holl posted on Tuesday, August 29, 2017 - 4:30 am
I want to compare two paths in my multi-group model and have a question regarding the result of my Wald test. When I test two paths in my multi-group model (girls and boys) for equality, I get the value ********** for the Wald test and .0000 for the p-value.
So the test is significant and the two paths differ significantly between girls and boys, but why is there no value for the Wald test? Or does it mean the test is not valid in this situation?
I received values for other Wald tests in the same model, so I am not quite sure how to interpret this one.
I tried adding the statement VARIANCES=NOCHECK; It helped with estimating some of my models. However, now I get this error:
*** FATAL ERROR THE SAMPLE COVARIANCE MATRIX FOR THE INDEPENDENT VARIABLES IN THE MODEL CANNOT BE INVERTED. THIS CAN OCCUR IF A VARIABLE HAS NO VARIATION OR IF TWO VARIABLES ARE PERFECTLY CORRELATED. CHECK YOUR DATA.
I believe it is because some binary variables are not present in all cities. Do you agree? I also tried estimating the same model for each city separately (removing binary variables not present in a given city). This works out well. Could you comment (or point me to relevant literature) on whether this approach would be appropriate?
Draw a horizontal line marking zero in the middle. Mark the negative value of PIWN and the positive value of PIWF. The distance between them is what is being evaluated for significance; it doesn't matter that one of the estimates is negative.
Joy Thompson posted on Saturday, September 14, 2019 - 11:14 pm
I conducted a multi-group analysis where the outcome is dichotomous. Whereas some of the probit coefficients for paths are non-significant, the z-test comparisons from the MODEL CONSTRAINT command indicate that the paths significantly differ from each other. I understand that the comparisons are different and that it is possible for paths to significantly differ from zero but not from each other, but is it also possible for paths to differ from each other while not differing significantly from zero (where one or both paths are not significantly different from zero)? My inclination would be not to compare paths that are non-significant, but I'm not sure, given that the tests are different. Any insights are appreciated.
I just realized that you responded, thanks so much! It seems that I may have used the MODEL CONSTRAINT command incorrectly, as it should be used to define or constrain parameters. Am I correct that it is appropriate to use MODEL TEST to compare path coefficients across groups (i.e., path1_groupa = path1_groupb)? My understanding is that I'd need to run separate models with MODEL TEST specified for each comparison of interest and get a corresponding Wald test; that is, I can't compare all paths simultaneously. As you noted, it probably does not make sense to compare coefficients if they were non-significant for one or both groups of interest.