Wald test calculations
Mplus Discussion > Structural Equation Modeling >
 Jeremy Miles posted on Monday, March 09, 2009 - 10:20 pm

I was trying to find some details on how the Wald test is calculated when the MODEL TEST option is used. I couldn't find any details on the website. Are there some anywhere, or can you recommend a reference?


 Linda K. Muthen posted on Tuesday, March 10, 2009 - 8:58 am
See the following link for information about the Wald test:
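For readers looking for the computation itself: the Wald statistic for q linear constraints, written R\theta = r, has the standard textbook form (a general statement of the Wald test, not a description of Mplus internals):

```latex
W = (R\hat{\theta} - r)^{\top} \bigl( R \widehat{V} R^{\top} \bigr)^{-1} (R\hat{\theta} - r) \;\sim\; \chi^2_{q}
```

where \(\hat{\theta}\) are the unrestricted parameter estimates and \(\widehat{V}\) is their estimated covariance matrix. Because the constraints are evaluated at the unrestricted estimates, the model is fit only once.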

 Sofie Henschel posted on Wednesday, February 22, 2012 - 12:27 am
I was running a Wald test and now I wonder whether it is possible to compute an effect size for a significant Wald test. I read something about Cohen's q (Cohen, 1988). Would that be appropriate, or is there another option you would suggest?
 Linda K. Muthen posted on Wednesday, February 22, 2012 - 12:53 pm
This question is probably more appropriate for a general discussion forum like SEMNET.
 Sofie Henschel posted on Thursday, March 08, 2012 - 6:03 am
Thanks, Linda, for your reply. I have two further questions.

1) The SEMNET forum suggested W = sqrt(chi-square/N), from Rosenthal & Rosnow. My problem now is that my model uses the COMPLEX option and I don't know how Mplus calculates the "clustered" N. I thought N must be between my observed N and the number of clusters. Is that correct? How is N calculated in a complex model?

2) I also calculated the effect size using Cohen's q because it is independent of N. I got two medium-sized effects (.44, .42) in the same sample. One effect is significant and the other is not. How is that possible?
Thanks, Sofie
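For reference, Cohen's q as discussed above is the difference between two Fisher-transformed correlations (Cohen, 1988):

```latex
q = z_1 - z_2, \qquad z_i = \tfrac{1}{2}\ln\frac{1 + r_i}{1 - r_i}
```

with commonly cited benchmarks of roughly .1, .3, and .5 for small, medium, and large effects.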
 Linda K. Muthen posted on Thursday, March 08, 2012 - 12:43 pm
1. How do Rosenthal and Rosnow suggest doing this?

2. Significance is based on the ratio of the parameter estimate to its standard error. One effect must have a larger standard error than the other.
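Linda's point can be written compactly. For a single parameter, the z-test and the 1-df Wald test are the same test:

```latex
z = \frac{\hat{\theta}}{SE(\hat{\theta})}, \qquad W = z^2 \;\sim\; \chi^2_1
```

Two estimates of similar size can therefore differ in significance whenever their standard errors differ.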
 Sofie Henschel posted on Friday, March 09, 2012 - 2:02 am
1. They suggest omega = (Wald chi-square/N)^0.5, but they don't refer to a clustered sample. When I use the COMPLEX option in Mplus, the SEs and N are corrected, but I don't know how. And I guess I have to use the corrected N to calculate the effect size suggested by Rosenthal and Rosnow.

2. The standard errors are approximately equal in my model.
 Linda K. Muthen posted on Friday, March 09, 2012 - 12:18 pm
1. It sounds like this was not developed for clustered data. I would not know how to generalize it to clustered data.

2. Significance is determined by the ratio of the parameter estimate to its standard error.
 Sofie Henschel posted on Friday, March 09, 2012 - 12:39 pm
Thanks again for your response. Can you provide or suggest any information or literature that describes how N is adjusted in a complex model?
 Linda K. Muthen posted on Friday, March 09, 2012 - 4:11 pm
I don't know. You may want to ask on a general discussion forum like SEMNET.
 Alexander Kapeller posted on Sunday, March 25, 2012 - 3:57 am

I have a problem with the Wald test in Mplus. I want to test whether moderator effects at two different values of a moderator are different from each other.

given the following code:

model constraint:

model test:

I can change the constant numbers (above, 1 and 1.01) in the equation, but the Wald test constantly gives:

Wald Test of Parameter Constraints
Value 24.473
Degrees of Freedom 1
P-Value 0.0000

The b1 and b2 parameters are labeled in the MODEL command (regression parameters).
Is there a wrong specification?

 Linda K. Muthen posted on Sunday, March 25, 2012 - 11:39 am
I think diff is a new variable, so your MODEL CONSTRAINT is wrong. I'm not sure what is happening; I would need to see the files and your license number at support@statmodel.com to figure it out.


model constraint:
NEW (diff);

The z-test of diff tells you whether diff is significant.

In a Wald test, if the p-value is less than .05, diff is different from zero.
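Putting Linda's two approaches side by side, a minimal sketch (y, x1, and x2 are hypothetical variable names, not from the original post):

```
model:
  y on x1 (b1);
  y on x2 (b2);

model constraint:
  new(diff);
  diff = b1 - b2;    ! z-test of diff reported in the output

model test:
  0 = b1 - b2;       ! equivalent 1-df Wald test
```

For a single linear constraint, the Wald chi-square equals the square of the z-ratio for diff, so the two approaches give the same p-value.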
 Huan Liu posted on Friday, September 12, 2014 - 4:04 am
Dear Linda,
I have a problem with the Wald test in Mplus. I want to test whether two mediation effects are different from each other.
Given the following code:
model:zgxfg by shmyd jjqx xjqx;
qzyl by mzyl fzyl;
jtzz by jtcy jtnb jtwb jtrt;

zgxfg on jtzz(b1);
zgxfg on gtzz(b2);

zgxfg on tbyl qzyl;

jtzz on tbyl(a1);
jtzz on qzyl(a3);

gtzz on tbyl(a2);
gtzz on qzyl(a4);

gtzz with jtzz;
tbyl with qzyl;

model constraint: new(t1 t2);
model test:

Is it proper to apply the Wald test in this case? In my experience, the result of this Wald test is never significant.
Thank you in advance.
Huan Liu
 Bengt O. Muthen posted on Friday, September 12, 2014 - 6:10 pm
Looks correct. Perhaps your sample is small or the effects have large SEs, so that you don't have much power.
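Huan's MODEL CONSTRAINT and MODEL TEST lines above appear truncated. A minimal sketch of the usual pattern for comparing two mediation effects, assuming t1 and t2 are meant to be the products of the corresponding a and b paths (an assumption, not taken from the original post):

```
model constraint:
  new(t1 t2);
  t1 = a1*b1;    ! indirect effect of tbyl via jtzz (assumed definition)
  t2 = a2*b2;    ! indirect effect of tbyl via gtzz (assumed definition)

model test:
  0 = t1 - t2;   ! 1-df Wald test: are the two indirect effects equal?
```

With this setup, a non-significant Wald test means the data cannot distinguish the two indirect effects, which, as Bengt notes, can simply reflect low power.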
 Huan Liu posted on Friday, September 12, 2014 - 7:54 pm
Dear Linda,

Thank you very much for your reply; I will check the sample size and SEs.

Huan Liu
 Lisa M. Yarnell posted on Wednesday, September 17, 2014 - 8:18 pm
Hello Linda and Bengt,

I am trying to use the Wald test as implemented in MODEL TEST, specifically to test equalities of parameters estimated and labeled under the MODEL section of my code. (I have also tried imposing these constraints using only parameter labels in the MODEL section, but my loglikelihood comparisons are coming out negative, as is Satorra and Bentler's strictly positive chi-square test, so I am trying to get the Wald test to work.)

I have written in my Mplus syntax:

I also tried running this using MODEL CONSTRAINT rather than MODEL TEST. However, either way I run it, I receive output saying that a parameter label or the constant zero must appear on the left-hand side of a MODEL CONSTRAINT or MODEL TEST command.

I do not understand, since I have used m1a, m1b, and m1c as parameter labels in my model. Can you suggest a solution for comparing two models: one with, and one without the equality constraints?

Thank you.
 Linda K. Muthen posted on Thursday, September 18, 2014 - 11:04 am
You should say in MODEL TEST

0 = m1a - m1b;
0 = m1b - m1c;

In MODEL CONSTRAINT you should create new parameters, for example, diff1 and diff2, and say

diff1 = m1a - m1b;
diff2 = m1b - m1c;
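One point worth keeping in mind (general MODEL TEST behavior, noted here as a side remark): all statements in a single MODEL TEST command are tested jointly, so the two constraints above produce one Wald statistic with 2 degrees of freedom rather than two separate 1-df tests. To test each equality on its own, run the model twice with one constraint at a time:

```
model test:
  0 = m1a - m1b;   ! run 1: 1-df test of m1a = m1b
                   ! (run 2 would use 0 = m1b - m1c; instead)
```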
 Grant Jackson posted on Wednesday, January 11, 2017 - 11:39 am

I am analyzing an SEM. I am using Wald tests to test whether the difference in a given coefficient across two groups (n=377, n=343) is statistically significant. Specifically, I am testing 18 differences, one at a time.

When I do my Wald tests, I get an error of:

P30 P12

However, I also get results for each Wald test, for example:

Wald Test of Parameter Constraints

Value 7.879
Degrees of Freedom 1
P-Value 0.0050

So, I thought that error might be something I can ignore.

However, my results are quite inconsistent. Very minor coefficient differences across the two groups are statistically significant, while one of my largest differences is NOT statistically significant, even though another large difference IS. I expected my large differences to be statistically significant and look like the results I included above, and my small differences not to be.

Thoughts on the error message? Or, how I may be interpreting the output incorrectly?

Thank you!
 Linda K. Muthen posted on Wednesday, January 11, 2017 - 2:28 pm
Please send the output and your license number to support@statmodel.com.
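A possible explanation for the inconsistency Grant describes (a general property of the Wald test, not a diagnosis of his output): significance of a cross-group difference depends on its standard error, not just its size. With independent groups, the 1-df Wald statistic is

```latex
W = \frac{(\hat{b}_1 - \hat{b}_2)^2}{SE_1^2 + SE_2^2} \;\sim\; \chi^2_1
```

so a large difference estimated with large standard errors can be non-significant, while a small but precisely estimated difference can be significant.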