I have run across several cases where the p-values given in the output do not match what I obtain with DIFFTEST (in the case of WLSMV), or with the corrected chi-square difference test (in the case of MLR). This often happens with twin models, but not always (sometimes with growth or SEM models). I'm wondering why this would happen and which p-values I should trust.
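For what it's worth, the corrected (Satorra-Bentler scaled) chi-square difference test for MLR can be computed by hand from the two runs, following the formula documented on the Mplus website. Here is a minimal sketch; the input numbers are hypothetical, not taken from any real output:

```python
import math

def scaled_chi2_diff(T0, d0, c0, T1, d1, c1):
    """Satorra-Bentler scaled chi-square difference test.

    T0/d0/c0: scaled chi-square, df, and scaling correction factor for
    the more restrictive (nested) H0 model; T1/d1/c1 for the comparison
    model. Returns (TRd, delta_df).
    """
    # scaling factor for the difference test
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)
    # scaled difference statistic (note T*c recovers the ML chi-square)
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, d0 - d1

def chi2_sf_df1(x):
    # survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(x / 2.0))

# hypothetical values read off two MLR runs
TRd, ddf = scaled_chi2_diff(T0=25.0, d0=10, c0=1.2, T1=20.0, d1=9, c1=1.1)
p = chi2_sf_df1(TRd)  # ddf == 1 here, so the 1-df survival function applies
```

Comparing this p-value with the one implied by the output's z-test is what surfaces the discrepancy I'm describing.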
I don't see why DIFFTEST or chi-square difference tests should give the same p-values as the test of any given H0 model. The H0 model is tested against a completely unrestricted H1 model, whereas difference testing typically uses a more specific H1 model - a model that is just a little bit more general than H0. Maybe I am misunderstanding your question.
Hi. Sorry I was unclear. The p-values I am referring to are for a particular parameter, not the overall model fit. So, in my specific case, I am running an ACE model, and the output z-test for the C loading suggests it is highly significant, but setting this loading to zero results in a non-significant DIFFTEST result.
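To be concrete about what I'm comparing: the p-value in the output is a Wald test based on the estimate divided by its standard error, referred to a standard normal. A sketch (the estimate and standard error below are hypothetical, not from a real run):

```python
import math

def wald_p_two_sided(estimate, se):
    # two-sided p-value for z = estimate / se under a standard normal,
    # which is what the Est./S.E. column in the output reports
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2.0))

# hypothetical C-loading estimate and standard error
p_wald = wald_p_two_sided(0.45, 0.15)  # z = 3.0
```

My understanding is that for a parameter like an ACE loading, where setting it to zero puts a variance component on the boundary of its parameter space, this normal approximation need not agree with DIFFTEST or a likelihood-ratio difference test, which may be part of what I'm seeing.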