Wald test calculations
Message/Author
 Jeremy Miles posted on Monday, March 09, 2009 - 10:20 pm
Hi,

I was trying to find some details on how the Wald test is calculated when the MODEL TEST option is used. I couldn't find any details on the website - are there some anywhere, or can you recommend a reference?

Thanks,

Jeremy
 Linda K. Muthen posted on Tuesday, March 10, 2009 - 8:58 am

http://en.wikipedia.org/wiki/Wald_test#cite_note-1
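For a single constraint, the Wald statistic is just the squared ratio of the estimate (minus its hypothesized value) to its standard error, referred to a chi-square with 1 df. A minimal Python sketch with hypothetical numbers:

```python
import math

def wald_test(estimate, se, null_value=0.0):
    """One-parameter Wald test: W = ((est - null) / SE)^2, df = 1."""
    z = (estimate - null_value) / se
    w = z * z
    # For df = 1, the chi-square p-value equals the two-sided normal p-value of z.
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return w, p

w, p = wald_test(0.35, 0.10)   # hypothetical estimate and SE
print(round(w, 2), round(p, 4))
```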
 Sofie Henschel posted on Wednesday, February 22, 2012 - 12:27 am
Hello,
I ran a Wald test and now I wonder whether it is possible to compute an effect size for a significant Wald test. I read something about Cohen's q (Cohen, 1988). Would that be appropriate, or is there another option you would suggest?
 Linda K. Muthen posted on Wednesday, February 22, 2012 - 12:53 pm
This question is probably more appropriate for a general discussion forum like SEMNET.
 Sofie Henschel posted on Thursday, March 08, 2012 - 6:03 am

1) The SEMNET forum suggested w = sqrt(chi-square/N), from Rosenthal & Rosnow. My problem now is that my model uses the COMPLEX option, and I don't know how Mplus calculates the "clustered" N. I thought N must lie between my observed N and the number of clusters. Is that correct? How is N calculated in a complex model?

2) I also calculated the effect size using Cohen's q because it is independent of N. I got two medium-sized effects (.44, .42) in the same sample. One effect is significant and the other is not. How is that possible?
Thanks Sofie
 Linda K. Muthen posted on Thursday, March 08, 2012 - 12:43 pm
1. How do Rosenthal and Rosnow suggest doing this?

2. Significance is based on the ratio of the parameter estimate to its standard error. One effect must have a larger standard error than the other.
 Sofie Henschel posted on Friday, March 09, 2012 - 2:02 am
1. They suggest omega = (Wald chi-square/N)^0.5, but they don't refer to a clustered sample. When I use the COMPLEX option in Mplus, the SEs and N are corrected, but I don't know how. I guess I have to use the corrected N to calculate the effect size suggested by Rosenthal and Rosnow.

2. The standard errors are approximately equal in my model.
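The effect size formula in question is simple arithmetic once an N is chosen; which N is appropriate under TYPE = COMPLEX is exactly the unresolved question in this exchange. A sketch with a hypothetical chi-square value and sample size:

```python
import math

def omega_effect_size(wald_chi2, n):
    """Effect size omega = sqrt(chi-square / N), per Rosenthal & Rosnow.

    n is the analysis sample size; how to adjust N for a clustered
    (TYPE = COMPLEX) design is the open question in this thread.
    """
    return math.sqrt(wald_chi2 / n)

print(round(omega_effect_size(24.473, 500), 3))   # hypothetical chi-square and N
```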
 Linda K. Muthen posted on Friday, March 09, 2012 - 12:18 pm
1. It sounds like they did not develop this for clustered data. I would not know how to generalize it to clustered data.

2. Significance is determined by the ratio of the parameter estimate to its standard error.
 Sofie Henschel posted on Friday, March 09, 2012 - 12:39 pm
Thanks again for your response. Can you provide or suggest any information or literature that describes how N is adjusted in a complex model?
 Linda K. Muthen posted on Friday, March 09, 2012 - 4:11 pm
I don't know. You may want to ask on a general discussion forum like SEMNET.
 Alexander Kapeller posted on Sunday, March 25, 2012 - 3:57 am
Hello,

I have a problem with the Wald test in Mplus. I want to test whether moderated effects at two different values of a moderator are different from each other.

given the following code:

model constraint:
diff=(b1+b2*1)-(b1+b2*1.01);

model test:
diff=0;
-----

Whatever constant numbers I use in the equation (above, 1 and 1.01), the Wald test always gives

Wald Test of Parameter Constraints
Value 24.473
Degrees of Freedom 1
P-Value 0.0000

The b1 and b2 parameters are labels of regression parameters from the MODEL command.
Is there a wrong specification?

thanks
Alex
 Linda K. Muthen posted on Sunday, March 25, 2012 - 11:39 am
I think diff is a new parameter, so your MODEL CONSTRAINT is wrong. I'm not sure what is happening. I would need to see your files and license number at support@statmodel.com to figure it out.

MODEL CONSTRAINT should be:

model constraint:
NEW (diff);
diff=(b1+b2*1)-(b1+b2*1.01);

The z-test of diff tells you if diff is significant.

In a Wald test, if the p-value is less than .05 diff is different from 0.
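As an aside on why the Wald value never changes here: for any constants c1 and c2, diff = (b1 + b2*c1) - (b1 + b2*c2) = b2*(c1 - c2), a scalar multiple of b2, and a Wald test of k*b2 = 0 is identical to the test of b2 = 0 because the estimate and its SE are scaled by the same k. A sketch with hypothetical numbers:

```python
def wald(est, se):
    """1-df Wald chi-square: squared ratio of estimate to SE."""
    return (est / se) ** 2

b2_est, b2_se = 0.5, 0.1   # hypothetical estimate and SE for b2
results = []
for c1, c2 in [(1.0, 1.01), (1.0, 2.0), (0.0, 5.0)]:
    k = c1 - c2
    # diff = b2*(c1 - c2); its SE scales by |k|, so the Wald value is constant.
    results.append(round(wald(k * b2_est, abs(k) * b2_se), 6))
print(results)
```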
 Huan Liu posted on Friday, September 12, 2014 - 4:04 am
Dear Linda,
I have a problem with the Wald test in Mplus. I want to test whether two mediated effects are different from each other.
given the following code:
model:zgxfg by shmyd jjqx xjqx;
qzyl by mzyl fzyl;
jtzz by jtcy jtnb jtwb jtrt;

zgxfg on jtzz(b1);
zgxfg on gtzz(b2);

zgxfg on tbyl qzyl;

jtzz on tbyl(a1);
jtzz on qzyl(a3);

gtzz on tbyl(a2);
gtzz on qzyl(a4);

gtzz with jtzz;
tbyl with qzyl;

model constraint:new(t1 t2);
t1=a1*b1;
t2=a2*b2;
model test:
t1=t2;

I wonder whether it is proper to apply the Wald test in this case, because in my experience the result of this Wald test is never significant.
Huan Liu
 Bengt O. Muthen posted on Friday, September 12, 2014 - 6:10 pm
Looks correct. Perhaps your sample is small or the effects have large SEs, so that you don't have much power.
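For intuition, the Wald statistic MODEL TEST computes for a function of parameters such as t1 - t2 = a1*b1 - a2*b2 can be sketched with the delta method. This is an illustration of the mechanics, not Mplus's actual code; the estimates, SEs, and the diagonal covariance matrix below are all hypothetical:

```python
import math

a1, b1, a2, b2 = 0.4, 0.5, 0.3, 0.2           # hypothetical path estimates
ses = [0.1, 0.1, 0.1, 0.1]                    # hypothetical SEs (diagonal covariance)

f = a1 * b1 - a2 * b2                         # the tested difference t1 - t2
g = [b1, a1, -b2, -a2]                        # gradient of f w.r.t. (a1, b1, a2, b2)
var_f = sum((gi * s) ** 2 for gi, s in zip(g, ses))   # delta-method variance g'Vg
wald = f ** 2 / var_f                         # chi-square with 1 df
p = math.erfc(math.sqrt(wald) / math.sqrt(2))  # 1-df chi-square p-value
print(round(wald, 2), round(p, 3))
```

With small products and sizable SEs, var_f is large relative to f squared, which is why this test can easily be non-significant in small samples.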
 Huan Liu posted on Friday, September 12, 2014 - 7:54 pm
Dear Bengt,

Thank you very much for your reply, I will check the sample size and SEs.

Huan Liu
 Lisa M. Yarnell posted on Wednesday, September 17, 2014 - 8:18 pm
Hello Linda and Bengt,

I am trying to use the Wald test as implemented in MODEL TEST, specifically to test equalities of parameters estimated and labeled in the MODEL section of my code. (I have also tried imposing these constraints using only parameter labels in the MODEL section, but my loglikelihood comparisons are coming out negative, as is Satorra and Bentler's strictly positive chi-square test, so I am trying to get the Wald test to work.)

I have written in my Mplus syntax:
MODEL TEST:
m1a=m1b;
m1b=m1c;

I also tried running this using MODEL CONSTRAINT rather than MODEL TEST. However, either way I run it, I receive output saying that a parameter label or the constant zero must appear on the left-hand side of a MODEL CONSTRAINT or MODEL TEST command.

I do not understand, since I have used m1a, m1b, and m1c as parameter labels in my model. Can you suggest a solution for comparing two models: one with, and one without the equality constraints?

Thank you.
 Linda K. Muthen posted on Thursday, September 18, 2014 - 11:04 am
You should say in MODEL TEST

0 = m1a - m1b;
0 = m1b - m1c;

In MODEL CONSTRAINT you should create new parameters, for example, diff1 and diff2 and say

diff1 = m1a - m1b;
diff2 = m1b - m1c;
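When MODEL TEST contains two constraints like these, Mplus reports one joint Wald test with 2 degrees of freedom, W = r'(RVR')^(-1)r. A sketch of that arithmetic, with hypothetical estimates for m1a, m1b, m1c and a hypothetical diagonal covariance matrix (a real run would use the estimated parameter covariance matrix):

```python
import math

t = [1.0, 0.8, 0.5]                            # hypothetical m1a, m1b, m1c
V = [[0.04, 0.0, 0.0], [0.0, 0.04, 0.0], [0.0, 0.0, 0.04]]  # hypothetical cov (SE = 0.2)
R = [[1, -1, 0], [0, 1, -1]]                   # constraint rows: m1a - m1b, m1b - m1c

r = [sum(Rij * tj for Rij, tj in zip(Ri, t)) for Ri in R]   # constraint values
# S = R V R'
S = [[sum(R[i][a] * V[a][b] * R[j][b] for a in range(3) for b in range(3))
      for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
W = sum(r[i] * Sinv[i][j] * r[j] for i in range(2) for j in range(2))
p = math.exp(-W / 2)                           # chi-square p-value for df = 2
print(round(W, 3), round(p, 3))
```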
 Grant Jackson posted on Wednesday, January 11, 2017 - 11:39 am
Greetings,

I am analyzing an SEM and am using Wald tests to test whether the difference in a given coefficient across two groups (n=377, n=343) is statistically significant. Specifically, I am testing 18 differences, one at a time.

When I do my Wald tests, I get an error of:

*** ERROR
P30 – P12
^
ERROR

However, I also get results for each Wald test, for example:

Wald Test of Parameter Constraints

Value 7.879
Degrees of Freedom 1
P-Value 0.0050

So, I thought that error might be something I can ignore.

However, there is a great inconsistency in my results. Very minor coefficient differences across the two groups are statistically significant, while one of my largest differences is NOT statistically significant and another large difference IS. I expected my large differences to be statistically significant and look like the results included above, and my small differences to be non-significant.

Thoughts on the error message? Or, how I may be interpreting the output incorrectly?

Thank you!
 Linda K. Muthen posted on Wednesday, January 11, 2017 - 2:28 pm
 Pia H. posted on Tuesday, February 07, 2017 - 7:54 am
Dear Linda & Bengt

I am wondering if it is possible to obtain confidence intervals for the Wald chi square test directly from the Mplus output?

Thank you!
 Bengt O. Muthen posted on Tuesday, February 07, 2017 - 5:52 pm
Chi-square tests provide p-values, not confidence intervals.
 Daniel Lee posted on Wednesday, March 08, 2017 - 1:30 pm
Hi Dr. Muthen,

Are there any other tests available (apart from the Wald test) to assess whether direct and indirect effects are different between multiple groups?

Dan
 Bengt O. Muthen posted on Wednesday, March 08, 2017 - 5:54 pm
Yes, you can do two runs and compute a likelihood-ratio chi-square. But it's not going to be much different.
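The two-run comparison works like this: fit the model once with the equality constraints and once without, then treat twice the log-likelihood difference as a chi-square with df equal to the difference in the number of free parameters. A sketch with hypothetical values, assuming plain ML (under MLR a scaling correction such as Satorra-Bentler would be needed):

```python
import math

# Hypothetical log-likelihoods and free-parameter counts from two runs.
ll_free, npar_free = -2431.6, 18          # run without equality constraints
ll_eq, npar_eq = -2434.9, 16              # run with equality constraints

lrt = 2 * (ll_free - ll_eq)               # likelihood-ratio chi-square
df = npar_free - npar_eq
p = math.exp(-lrt / 2)                    # chi-square p-value, valid for df = 2
print(round(lrt, 1), df, round(p, 3))
```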
 Daniel Lee posted on Thursday, March 09, 2017 - 10:32 am
If n = 317 and the chi-square LRT is not significant, but the difference in CFI between the nested models is .05, is that still enough to report a significant difference?

Thank you again!
 Daniel Lee posted on Thursday, March 09, 2017 - 10:54 am
Sorry, and to follow up on my first question: in a multigroup (male vs. female) mediation model, I realized that the indirect effect keeps changing as I free the direct path coefficients that were held equal.

So if X -> M -> Y, and I free X -> M, the indirect effect changes.

For these models, should we exclude the indirect effect until we begin to test group differences between indirect effects? The indirect effects don't seem to mean much (because if path a is freed, path b is constrained).

Thank you!
 Krysten Bold posted on Wednesday, April 19, 2017 - 10:08 am
Hi,

I'm running an ordered logistic regression model in Mplus version 7.4 and received an error: "error occurred in the brant wald test for proportional odds" for my outcomes. Does this mean the proportional odds assumption is violated for this outcome, so that ordinal probit or logit models are not appropriate?

 Bengt O. Muthen posted on Friday, April 21, 2017 - 5:59 pm
No, it means that the test failed so you can't use it.
 Xu, Man posted on Friday, August 11, 2017 - 9:27 am
Dear Dr. Muthens,

I would like to ask something along the lines of this thread. When I use MODEL CONSTRAINT for example the following way:

model constraint:new(t1 t2 diff);
t1=a1*b1;
t2=a2*b2;
diff=t1-t2;
model test:
t1=t2;

The z statistic of diff seems to correspond to the square root of the Wald chi-square.

Would the result of the Wald test (MODEL TEST) correspond exactly to the result for the new diff parameter under all conditions (for example, under MLR)?

Thank you!
 Bengt O. Muthen posted on Friday, August 11, 2017 - 10:07 am
Yes, they are close in large samples also for MLR.
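For a single constraint, the arithmetic correspondence is exact: the 1-df Wald chi-square is the square of the z statistic of diff, and the two p-values match. A sketch with a hypothetical estimate and SE:

```python
import math

diff_est, diff_se = 0.14, 0.07               # hypothetical estimate and SE of diff
z = diff_est / diff_se                       # z statistic of the NEW parameter diff
wald = z ** 2                                # 1-df Wald chi-square from MODEL TEST
p_z = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p for z
p_wald = math.erfc(math.sqrt(wald) / math.sqrt(2))   # chi-square(1) p: identical
print(round(wald, 2), round(p_z, 4), round(p_wald, 4))
```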
 Tyler Hatchel posted on Wednesday, December 27, 2017 - 12:17 pm
Dear Dr. Muthens,

I am somewhat new to Mplus and have a seemingly basic question that I was unable to resolve with the user's guide.

I am fitting a multigroup growth curve model and would like to use a Wald test to determine whether the differences in i and s between the two groups are significant:

Variable:
Names = h1 h2 h3 hp;
missing = all (-99);
grouping is hp (0 = noperp 1 = perp);

Model:
i s | h1@1 h2@2 h3@3;

Model test:

Output:
Stand;

I have tried a number of options but am unsure on how to differentiate the two groups.

Best,

Tyler
 Bengt O. Muthen posted on Wednesday, December 27, 2017 - 1:51 pm

Model noperp:
[i] (p01);
[s] (p02);

Model perp:
[i] (p11);
[s] (p12);

Then you can refer to these four p parameters in MODEL TEST, for example:

Model test:
0 = p01 - p11;
0 = p02 - p12;
 Alissa Mahler posted on Wednesday, February 07, 2018 - 4:08 pm
Hello,

I am using a Wald test to compare a 1-factor and a 2-factor CFA (based on this webpage: https://www.statmodel.com/download/Testing%20of%20factor%20corr=1.pdf)

I was hoping to clarify my interpretation of the output.

I ran the following code:

MODEL TEST:
0 = 1-p1;

(where p1 labels the covariance between F1 and F2; with the factor variances fixed at one, this is their correlation).

The Wald test is not significant, and I'm wondering whether this suggests the 1-factor or the 2-factor model is best. Thanks for your clarification.
 Bengt O. Muthen posted on Wednesday, February 07, 2018 - 4:35 pm
That suggests 1 factor.