

Hi, I was trying to find some details on how the Wald test is calculated when the MODEL TEST option is used. I couldn't find any details on the website. Are there some anywhere, or can you recommend a reference? Thanks, Jeremy


See the following link for information about the Wald test: http://en.wikipedia.org/wiki/Wald_test#cite_note1 
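In brief, for a single parameter the Wald statistic is the squared ratio of (estimate minus hypothesized value) to its standard error, referred to a chi-square distribution with 1 degree of freedom. A minimal sketch with made-up numbers (the function name and the values are illustrative, not Mplus output):

```python
import math

def wald_test(estimate, se, null_value=0.0):
    """Wald chi-square test (1 df) of H0: parameter = null_value.

    W = ((estimate - null_value) / se)^2; the p-value uses the identity
    P(chi-square_1 > W) = erfc(sqrt(W / 2)).
    """
    w = ((estimate - null_value) / se) ** 2
    p = math.erfc(math.sqrt(w / 2.0))
    return w, p

# Illustrative numbers: estimate 0.50 with standard error 0.18
w, p = wald_test(0.50, 0.18)
print(round(w, 3), round(p, 4))
```

This is the same quantity Mplus reports for a one-constraint MODEL TEST.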


Hello, I was running a Wald test and now I wonder whether it is possible to compute an effect size for a significant Wald test. I read something about Cohen's q (Cohen, 1988). Would that be appropriate, or is there another option you would suggest?


This question is probably more appropriate for a general discussion forum like SEMNET. 


Thanks, Linda, for your reply. I have two further questions. 1) The SEMNET forum suggested W = sqrt(chi-square/N), from Rosenthal & Rosnow. My problem is that my model uses the COMPLEX option, and I don't know how Mplus calculates the "clustered" N. I thought N must lie between my observed N and the number of clusters. Is that correct? How is N calculated in a complex model? 2) I also calculated the effect size using Cohen's q because it is independent of N. I got two medium-sized effects (.44, .42) in the same sample. One effect is significant and the other is not. How is that possible? Thanks, Sofie


1. How do Rosenthal and Rosnow suggest doing this? 2. Significance is based on the ratio of the parameter estimate to its standard error. One effect must have a larger standard error than the other.


1. They suggest omega = (Wald chi-square/N)^0.5, but they don't refer to a clustered sample. When I use the COMPLEX option in Mplus, the SEs and N are corrected, but I don't know how. And I guess I have to use the corrected N to calculate the effect size suggested by Rosenthal and Rosnow. 2. The standard errors are approximately equal in my model.


1. It sounds like this was not developed for clustered data. I would not know how to generalize it to clustered data. 2. Significance is determined by the ratio of the parameter estimate to its standard error.
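For what it's worth, both indices mentioned in this exchange are straightforward to compute once an N has been settled on; what effective N TYPE = COMPLEX implies is exactly the open question here and is not answered by this sketch. Function names and numbers below are made up for illustration:

```python
import math

def omega_effect_size(wald_chisq, n):
    """Rosenthal & Rosnow's index: omega = sqrt(chi-square / N).

    For clustered data, which N to use is the open question discussed
    in this thread; this sketch simply takes N as given.
    """
    return math.sqrt(wald_chisq / n)

def cohens_q(r1, r2):
    """Cohen's q: the absolute difference between two Fisher
    z-transformed correlations, an N-free effect size."""
    return abs(math.atanh(r1) - math.atanh(r2))

print(round(omega_effect_size(6.25, 400), 3))  # 0.125
print(round(cohens_q(0.5, 0.3), 3))            # 0.24
```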


Thanks again for your response. Can you provide or suggest any information or literature that describes how N is adjusted in a complex model? 


I don't know. You may want to ask on a general discussion forum like SEMNET. 


Hello, I have a problem with the Wald test in Mplus. I want to test whether the moderator effects at two different values of a moderator differ from each other, given the following code:

model constraint:
diff = (b1 + b2*1) - (b1 + b2*1.01);
model test:
diff = 0;

Whichever constant numbers I use in the equation (1 and 1.01 above), the Wald test constantly gives:

Wald Test of Parameter Constraints
Value 24.473
Degrees of Freedom 1
P-Value 0.0000

The b1 and b2 parameters are labels from the MODEL command (regression parameters). Is there a wrong specification? Thanks, Alex


I think diff is a new parameter, so your MODEL CONSTRAINT is wrong. I'm not sure what is happening; I would need to see the files and your license number at support@statmodel.com to figure it out. MODEL CONSTRAINT should be:

model constraint:
NEW (diff);
diff = (b1 + b2*1) - (b1 + b2*1.01);

The z-test of diff tells you if diff is significant. In a Wald test, if the p-value is less than .05, diff is different from 0.
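A note on why the Wald value does not change with the chosen constants: diff = (b1 + b2*c1) - (b1 + b2*c2) = b2*(c1 - c2), and the Wald statistic of a rescaled parameter k*b2 is (k*b2)^2 / (k^2 * Var(b2)) = b2^2 / Var(b2) for any nonzero k, so every pair of constants tests the same hypothesis (b2 = 0). A numeric sketch with made-up estimates:

```python
# Made-up estimate and sampling variance for the interaction slope b2:
b2, var_b2 = 0.4, 0.01

def wald_of_diff(c1, c2):
    """Wald statistic for diff = (b1 + b2*c1) - (b1 + b2*c2) = b2*(c1 - c2).

    b1 cancels, so the statistic depends only on b2 and its variance.
    """
    k = c1 - c2
    diff = b2 * k
    var_diff = (k ** 2) * var_b2
    return diff ** 2 / var_diff

print(round(wald_of_diff(1.0, 1.01), 6))  # 16.0
print(round(wald_of_diff(0.0, 5.0), 6))   # 16.0 -- same value for any constants
```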

Huan Liu posted on Friday, September 12, 2014 - 4:04 am



Dear Linda, I have a problem with the Wald test in Mplus. I want to test whether two indirect effects differ from each other, given the following code:

model:
zgxfg by shmyd jjqx xjqx;
qzyl by mzyl fzyl;
jtzz by jtcy jtnb jtwb jtrt;
zgxfg on jtzz (b1);
zgxfg on gtzz (b2);
zgxfg on tbyl qzyl;
jtzz on tbyl (a1);
jtzz on qzyl (a3);
gtzz on tbyl (a2);
gtzz on qzyl (a4);
gtzz with jtzz;
tbyl with qzyl;
model constraint:
new(t1 t2);
t1 = a1*b1;
t2 = a2*b2;
model test:
t1 = t2;

I wonder whether it is proper to apply the Wald test to this case, because in my experience the result of this Wald test is always not significant. Thank you in advance. Huan Liu


Looks correct. Perhaps your sample is small or the effects have large SEs, so that you don't have much power. 
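For intuition, MODEL TEST here amounts to a delta-method Wald test of the difference of two products, d = a1*b1 - a2*b2, based on the estimated covariance matrix of the parameters. A rough sketch with made-up estimates and, for simplicity, zero covariances between parameters (a real run would use the full covariance matrix):

```python
import math

# Made-up point estimates and sampling variances (covariances taken as 0):
a1, b1, a2, b2 = 0.30, 0.50, 0.25, 0.45
var_a1, var_b1, var_a2, var_b2 = 0.010, 0.012, 0.011, 0.013

d = a1 * b1 - a2 * b2  # difference of the two indirect effects

# Delta method: gradient of d w.r.t. (a1, b1, a2, b2) is (b1, a1, -b2, -a2),
# so with zero covariances Var(d) is a weighted sum of the variances.
var_d = (b1**2) * var_a1 + (a1**2) * var_b1 + (b2**2) * var_a2 + (a2**2) * var_b2

wald = d**2 / var_d
p = math.erfc(math.sqrt(wald / 2.0))  # chi-square(1) p-value
print(round(wald, 3), p)
```

With a small difference and variances of this size, the test is far from significant, matching the experience described above.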

Huan Liu posted on Friday, September 12, 2014 - 7:54 pm



Dear Linda, Thank you very much for your reply, I will check the sample size and SEs. Huan Liu 


Hello Linda and Bengt, I am trying to use the Wald test as implemented in MODEL TEST, specifically to test equalities of parameters estimated and labeled under the MODEL section of my code. (I have also tried imposing these constraints using only parameter labels in the MODEL section, but my loglikelihood comparisons are coming out negative, as is Satorra and Bentler's strictly positive chi-square test, so I am trying to get the Wald test to work.) I have written in my Mplus syntax:

MODEL TEST:
m1a = m1b;
m1b = m1c;

I also tried running this using MODEL CONSTRAINT rather than MODEL TEST. However, either way I run it, I receive output saying that a parameter label or the constant zero must appear on the left-hand side of a MODEL CONSTRAINT or MODEL TEST statement. I do not understand, since I have used m1a, m1b, and m1c as parameter labels in my model. Can you suggest a solution for comparing two models, one with and one without the equality constraints? Thank you.


In MODEL TEST you should say:

0 = m1a - m1b;
0 = m1b - m1c;

In MODEL CONSTRAINT you should create new parameters, for example diff1 and diff2, and say:

NEW (diff1 diff2);
diff1 = m1a - m1b;
diff2 = m1b - m1c;
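As a side note, with both constraints in MODEL TEST the output is a single joint Wald test with 2 degrees of freedom: W = r'(RVR')^-1 r, where r holds the constraint values, R the constraint matrix, and V the covariance matrix of the estimates. A self-contained sketch with made-up numbers (plain Python, no libraries assumed):

```python
import math

# Joint Wald test of m1a = m1b = m1c via the constraints m1a - m1b = 0
# and m1b - m1c = 0.  W = r' (R V R')^{-1} r ~ chi-square(2).
# All estimates and (co)variances below are made up for illustration.
e = [0.40, 0.55, 0.35]                  # m1a, m1b, m1c
V = [[0.010, 0.002, 0.001],             # covariance matrix of the estimates
     [0.002, 0.012, 0.003],
     [0.001, 0.003, 0.011]]
R = [[1, -1, 0],                        # m1a - m1b
     [0, 1, -1]]                        # m1b - m1c

def matmul(A, B):
    """Plain-Python matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

r = matmul(R, [[x] for x in e])         # constraint values, 2 x 1
S = matmul(matmul(R, V), transpose(R))  # R V R', 2 x 2
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det, S[0][0] / det]]
W = matmul(matmul(transpose(r), Sinv), r)[0][0]
p = math.exp(-W / 2.0)                  # chi-square(2) survival function
print(round(W, 3), round(p, 3))
```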


Greetings, I am analyzing an SEM. I am using Wald tests to test whether the difference in a given coefficient across two groups (n=377, n=343) is statistically significant. Specifically, I am testing 18 differences, one at a time. When I do my Wald tests, I get an error of:

*** ERROR
P30 - P12
^
ERROR

However, I also get results for each Wald test, for example:

Wald Test of Parameter Constraints
Value 7.879
Degrees of Freedom 1
P-Value 0.0050

So I thought that error might be something I can ignore. However, there is a great inconsistency in my results. Very minor coefficient differences across the two groups are statistically significant, while one of my largest differences is NOT statistically significant . . . while another large difference IS statistically significant. I expected my large differences to be statistically significant and look like the results I included above, and my small differences to not be statistically significant. Thoughts on the error message? Or on how I may be interpreting the output incorrectly? Thank you!


Please send the output and your license number to support@statmodel.com. 
