DIFFTEST
Mplus Discussion > Structural Equation Modeling
 Sanjoy posted on Wednesday, April 13, 2005 - 5:52 pm
Dear Professors Muthen,

Before asking the main questions related to "difftest", let me clarify a couple of things. Please correct me if I'm wrong.

1. I have Mplus version 3.12, which I believe is the most recent. I could not find example 12.12 (page 278) in the User's Guide example folder; in fact, there is not a single file from Chapter 12 of the Mplus User's Guide in that folder on the Mplus CD.

2. As an alternative, I therefore replicated your code (page 278) and tried to run it ... it fails to run with "*** FATAL ERROR
RESPECIFY THE VARIABLE AS CATEGORICAL" ... the same for the variables Y8 and Y9.

Next, I included y7-y9 as categorical in the command and tried again ... THEN it worked well. I used the same data set that you mention on page 278, though you have NOT declared y7-y9 as categorical ... I just need to make sure I have not messed up; kindly correct me if I have.

Now, coming to the DIFFTEST issues

Q1. Usually, though not always, under the null hypothesis (H0) we assume the less restrictive model, and under the alternative hypothesis (H1) we put restrictions on the model parameters (e.g., the Chow test) ... it looks as if for DIFFTEST we reverse the usual practice. Why is that?

Q2. Can you suggest an article written on the DIFFTEST procedure?

Q3. How should we use the “Difftest” result?

From the second-step result I got this:

Chi-Square Test for Difference Testing

Value 2.968
Degrees of Freedom 3**
P-Value 0.3953

Usually a p-value close to zero (0.05 is a typical threshold) signals that the null hypothesis is false and we reject the null, while a large p-value (like the 0.39 above) implies that there is no detectable difference for the sample size used, and therefore we fail to reject the null. ... However, in this DIFFTEST case, would it be the reverse?

Thanks and regards
 BMuthen posted on Wednesday, April 13, 2005 - 11:24 pm
1. The examples from Chapter 12 are not included with the Mplus CD.

2. I would have to see the full model to answer this question.

A p-value greater than .05 says that the restrictions cannot be rejected; that is, the restrictions do not worsen the fit of the model. There is currently no article written on DIFFTEST.
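For readers following along, the two-step DIFFTEST setup looks roughly like this (a sketch with hypothetical variable and file names; see example 12.12 in the User's Guide for the exact setup). Step 1 estimates the less restrictive H1 model and saves the derivatives:

ANALYSIS: ESTIMATOR = WLSMV;
MODEL: f BY y1-y6;
SAVEDATA: DIFFTEST IS deriv.dat;

Step 2 estimates the more restrictive H0 model, with the restrictions of interest added in the MODEL command, and requests the difference test:

ANALYSIS: ESTIMATOR = WLSMV;
DIFFTEST IS deriv.dat;
MODEL: f BY y1-y6;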
 Sanjoy posted on Thursday, April 14, 2005 - 9:37 am
Thank you Professor ... I will mail you the full model
 Sanjoy posted on Thursday, April 14, 2005 - 10:47 am
Dear Professor ... Why is it the case that for WLSMV the conventional approach of taking the difference between the chi-square values and the difference in the degrees of freedom is not appropriate?
I mean:
Q1. How can we show that the standard chi-square difference is NOT distributed as chi-square?

Q2. How do we ensure that DIFFTEST is doing the correct thing?

Thanks and regards
 BMuthen posted on Friday, April 15, 2005 - 1:29 am
You may want to look at the literature by Satorra and Bentler on robust chi-square difference testing with continuous non-normal outcomes. The issues are the same.

You can do a simulation study to see how well DIFFTEST performs. There will be a forthcoming paper on the DIFFTEST theory.
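For reference, the Satorra-Bentler (2001) scaled difference test mentioned here works roughly as follows. With T0 and T1 the standard (uncorrected) chi-square values of the more and less restrictive models, d0 and d1 their degrees of freedom, and c0 and c1 their scaling correction factors, the scaled difference is

cd = (d0*c0 - d1*c1) / (d0 - d1)
TRd = (T0 - T1) / cd

and TRd is referred to a chi-square distribution with d0 - d1 degrees of freedom. This is a sketch of the general idea only; DIFFTEST for WLSMV uses its own computation (see the DIFFTEST technical appendix on the Mplus website).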
 Sanjoy posted on Friday, April 15, 2005 - 9:17 am
1. Is it the article Satorra, A., & Bentler, P. M. (2001). A scaled difference chi-square test statistic for moment structure analysis. Psychometrika, 66, 507-514, that you referred me to, or something else?

2. I'm severely time-constrained; nonetheless I will try the simulation ... in the meantime, if you could kindly send me an electronic copy of the forthcoming paper on the DIFFTEST theory, that would be a tremendous help. If the authors prohibit quoting, it goes without saying that we will stick to that; reading the article would help me understand the nuances of DIFFTEST more comprehensively.

Thanks and regards
 BMuthen posted on Saturday, April 16, 2005 - 4:25 am
1. Yes.

2. The paper is not ready to be sent at this time.
 Jeremiah Schumm posted on Friday, June 02, 2006 - 12:58 pm
Dr. Muthen,
I am trying to follow example 12.12 in the Mplus version 4 manual to use the chi-square difference test in models involving the WLSMV estimator. I am receiving the following error message:


My second-step model involves constraining two regression coefficients to be equal:

y1 ON x1 (1);
y1 ON x2 (1);

These predictors were freely regressed on y1 in the model that I am using in the first step, as indicated on p. 314. My interest is to test whether constraining these regression coefficients in the second step deteriorates the model fit. Is this possible to do following example 12.12, or am I off base with regard to using the chi-square difference test for such a purpose?
Thank you.
 Linda K. Muthen posted on Friday, June 02, 2006 - 3:07 pm
It sounds like what you are doing is possible. You would need to send your input, data, output, and license number to support@statmodel.com for us to say any more.
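For an equality constraint like the one described above, the two runs would look roughly like this (a sketch following example 12.12, using the names from the post). Step 1, the H1 model with the coefficients free, saving the derivatives:

MODEL: y1 ON x1 x2;
SAVEDATA: DIFFTEST IS deriv.dat;

Step 2, the H0 model with the coefficients constrained to be equal via a shared label:

ANALYSIS: ESTIMATOR = WLSMV;
DIFFTEST IS deriv.dat;
MODEL: y1 ON x1 (1);
y1 ON x2 (1);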
 D C posted on Friday, September 24, 2010 - 2:48 pm
Hello Professors,

I am doing a multiple group analysis of a factor structure defined by categorical-ordinal indicators. I am using the WLSMV estimator, and hence I use DIFFTEST to judge whether various restrictions imposed on the model significantly worsen the fit. However, my data has a relatively large sample size (N = 3650, with 2100 in one group and 1550 in the other).

My questions are:
1. Is DIFFTEST as sensitive to large sample sizes as the chi-square test?

2. If so, I would like to use differences in CFI values (Meade et al., 2008) to help judge the difference in model fit between restricted and less restricted models.
So, in a multiple group analysis (when the GROUPING statement is used) with various restrictions imposed on a series of models, are the CFI values estimated anew each time? That is, is it advisable to take differences of CFI values between the restricted and less restricted models to judge model fit?

Thank you!
 Linda K. Muthen posted on Friday, September 24, 2010 - 4:54 pm
1. Somewhat less sensitive.

2. Each model has its own CFI. I don't believe using CFI to compare models is well established.
 Maggie Ledwell posted on Monday, October 18, 2010 - 1:54 pm
Dear Dr. Muthen,
I am running a structural equation model with categorical latent variables (both IV and DV), and I am attempting to do a chi-square difference test for my multiple group analysis. I have been running into problems when attempting the two-step chi-square test of model fit required when using the WLSMV estimator.

After saving my derivatives in step one, with every pathway constrained, I then go on to unconstrain one pathway for one group. Here is where I run into problems. I keep getting the error message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL. The chi-square in the second model (the one with one pathway unconstrained) is larger, and there are a greater number of degrees of freedom, compared to the baseline, fully constrained model.

Is there something I'm not doing properly that is making my H0 not nest in the H1 model? Any guidance you can give me would be much appreciated. I can send you my input and data if that is helpful. Thank you!
 Linda K. Muthen posted on Monday, October 18, 2010 - 2:07 pm
Please send the two outputs and your license number to support@statmodel.com.
 Sabine Spindler posted on Sunday, April 17, 2011 - 7:00 am
Dear Dr. Muthen,

2 questions:

If, with WLSMV, the Chi² values cannot be interpreted and DIFFTEST must be used to compare nested models, then:

1. Am I correct in assuming that this also implies that the other fit indices which are based on Chi² (such as RMSEA, CFI, etc.) can NOT be interpreted?

2. Is a significant DIFFTEST unidirectional, ALWAYS suggesting that the more restricted model has a worse fit?

Thank you very much, Sabine
 Linda K. Muthen posted on Sunday, April 17, 2011 - 2:10 pm
1. All fit statistics can be interpreted.
2. Yes.
 Fernando H Andrade posted on Friday, April 22, 2011 - 4:18 pm
Dear Dr Muthen
I am fitting a cross-lagged model and comparing group differences among White, Hispanic, and Black respondents. I am using categorical indicators for my latent factors.

The DIFFTEST comparing the invariance and non-invariance models gives:

Chi-Square Test for Difference Testing
Value 103.284
Degrees of Freedom 38
P-Value 0.0000

but the RMSEA (0.04), TLI (0.982), and CFI (0.98) of the more restricted model are better than the RMSEA (0.051), TLI (0.978), and CFI (0.966) of the less restricted model.

Shouldn't it be otherwise? I mean, if the DIFFTEST is significant, should I not expect the goodness-of-fit indices of the less restricted model to be better than those of the more restricted model?
thank you
 Bengt O. Muthen posted on Saturday, April 23, 2011 - 4:09 pm
I don't know why the RMSEA and CFI are so good for the more restrictive model, but I assume that the chi-square is bad also for the less restrictive model. In such cases, these fit indices don't always come out in the expected order of magnitude. I would rely more on the chi-square DIFFTEST.
 Fernando H Andrade posted on Sunday, April 24, 2011 - 2:31 pm
thank you very much,
the chi-square for the less restricted model is 1079.328*, while the chi-square for the more restricted model is 911.407*

Would you know of some literature I could cite to support what you recommend?
 Bengt O. Muthen posted on Monday, April 25, 2011 - 9:41 am
No, I think this is an open research area.
 Gabriella Melis posted on Wednesday, September 05, 2012 - 7:32 am
Dear Dr. Muthén,

My question concerns the interpretation of the chi-square difference test under the WLSMV estimator. In your post above (April 13, 2005, at 11:24 pm) you suggested that "A p-value greater than .05 says that the restrictions cannot be rejected that is the restrictions do not worsen the fit of the model". However, on the UCLA website (precisely here: http://www.ats.ucla.edu/stat/mplus/faq/difftest.htm) the opposite interpretation of the p-value seems to be followed. Am I missing something? I cannot see how the two interpretations could match.

I would be grateful if you could suggest some key references as well.

Many thanks!
 Linda K. Muthen posted on Wednesday, September 05, 2012 - 9:03 am
I think they are saying the same thing in a slightly different way.

See DIFFTEST under Technical Appendices on the website.
 Suhaer Yunus posted on Thursday, November 28, 2013 - 12:27 pm
The independent variables in my study are binary, and I have run an EFA (using Mplus version 7.1). The EFA results show four correlated first-order factors. The CFA results confirm that too.

Now I want to determine whether the four correlated factors model is better, or whether there should be one higher-order factor representing the four first-order factors, or a single factor measuring all the items that form the four factors.

I understand that the models are not really nested, so the DIFFTEST option may not be appropriate. I have estimated the three models separately, but how can I compare their results to choose the best one? Can I report the change in chi-square and change in df for these results?

The results of the models are:

Base Model - Distinct First-Order Factors
Chi-sq = 1160.660* (df = 48)
RMSEA = 0.034, CFI = 0.935, TLI = 0.911

Model A - Second-Order Model
Chi-sq = 1106.889* (df = 50)
RMSEA = 0.032, CFI = 0.938, TLI = 0.919. But it suggests a 0.000 correlation between one first-order factor and the higher-order factor.

Model B - Single Factor
Chi-sq = 9593.979* (df = 54)
RMSEA = 0.093, CFI = 0.444, TLI = 0.321
 Linda K. Muthen posted on Friday, November 29, 2013 - 9:39 am
I believe the second-order factor model is nested in the first-order correlated factor model. I would use DIFFTEST.
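A sketch of the two runs, with hypothetical indicator names. Step 1, the first-order correlated factors model (H1), saving the derivatives:

MODEL: f1 BY u1-u3;
f2 BY u4-u6;
f3 BY u7-u9;
f4 BY u10-u12;
SAVEDATA: DIFFTEST IS deriv.dat;

Step 2, the second-order model (H0), where the factor covariances are replaced by a general factor:

ANALYSIS: ESTIMATOR = WLSMV;
DIFFTEST IS deriv.dat;
MODEL: f1 BY u1-u3;
f2 BY u4-u6;
f3 BY u7-u9;
f4 BY u10-u12;
g BY f1-f4;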
 Suhaer Yunus posted on Monday, December 02, 2013 - 9:18 am
Hi Linda,

Thanks for your reply.

I have computed the DIFFTEST. The four correlated factors model is the least restrictive, the second-order model is more restrictive, and the single-factor model is the most restrictive. I am comparing the single factor to the second-order factor. With ESTIMATOR = WLSMV and PARAMETERIZATION = DELTA I get the following results:

Base Model - First Order
Chi-sq = 1160.660* (df = 48)
RMSEA = 0.034, CFI = 0.935, TLI = 0.911

Second-Order Model
Chi-sq = 1520.644* (df = 50)
Chi-square test for difference testing:
Value = 223.535, df = 2, p-value = 0.000
RMSEA = 0.038, CFI = 0.914, TLI = 0.887

Single Factor
Chi-sq = 9280.721* (df = 54)
Chi-square test for difference testing:
Value = 6388.571, df = 4, p-value = 0.000
RMSEA = 0.091, CFI = 0.462, TLI = 0.343

I have the following queries:

1. Have I used the correct PARAMETERIZATION? If I use PARAMETERIZATION = THETA I get considerably worse fit indices on all my models, i.e., for the base model:

Chi-sq = 1796.660* (df = 48)
RMSEA = 0.042, CFI = 0.898, TLI = 0.860

and WORSE for the other models.

2. Should the single factor be compared with the four correlated factors instead?

3. All p-values in the chi-square DIFFTEST are less than 0.05. Does this mean that the restrictions worsen the fit?

 Linda K. Muthen posted on Tuesday, December 03, 2013 - 9:21 am
Typically the H1 model is the least restrictive model.

See the DIFFTEST technical appendix on the website.

You should update to the current version of Mplus.
 SY Khan posted on Monday, March 24, 2014 - 6:04 am
Dear Dr. Muthen,

I am trying to see whether a higher-order factor for my independent variables is better than four correlated factors, through DIFFTEST.

Only a few of my factor indicators are binary; the rest are composite variables depicting a range.

I am getting the following message.



I have tried different convergence values:

1- default

Values 1-3 don't work, and I get the same error message. In the output I have noticed that there are two values for the convergence criterion, i.e., a convergence criterion and another convergence criterion for H1. Which value needs reducing?

If I set CONVERGENCE = 0.15 or a higher value, it gives me the DIFFTEST results. But I am not sure if it is OK to use CONVERGENCE = 0.15 or above.

Please advise.

Thanks for your guidance in advance.
 Linda K. Muthen posted on Monday, March 24, 2014 - 8:24 am
Please send the two outputs and your license number to support@statmodel.com so I can see why the models are not nested. Changing the convergence criteria will not solve an identification problem.
 Shandra Forrest-Bank posted on Thursday, July 17, 2014 - 2:22 pm
Hi there,

I conducted a chi-square difference test for two fairly complex nested models with categorical variables using WLSMV and found the statistically significant result that I expected.

BUT there were several error-term correlations specified through the CFA that I could not include in this analysis because the model would not converge.

I had previously run both models with those specifications included and written up the analysis. I wanted to add in this chi-square difference test.

Does it make sense to report the results of the chi-square difference test? Please let me know what your recommendations are.

Thank you!
 Linda K. Muthen posted on Friday, July 18, 2014 - 11:31 am
I would look into the reason for the non-convergence.

If the difference test is not on the models for which you reported results, I would not report it.
 Oliver Rizmanoski posted on Sunday, October 12, 2014 - 3:24 pm

I'm trying to compare a model with two latent factors to a model with one latent factor. I did that with DIFFTEST, since I am relying on either MLMV, ULSMV, or WLSMV.

Mplus does not report a DIFFTEST result when I fix the correlation between the two factors at 1. At the same time it reports the warning: NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NON-INVERTIBLE. CHECK YOUR STARTING VALUES.

This warning also occurs when I run the two-factor model with the correlation fixed at 1 without the DIFFTEST option.

All models work perfectly when the correlation is not fixed, or when there is only one latent variable. Could there be a specific reason for the problem? My sample size is <200. Or did I misspecify the model?

ANALYSIS: type=general;
estimator = MLMV;

MODEL: OD BY cb_16_m4 cb_12_m4
cb_11_m4 cb_1_m4
cb_17_m4 cb_6_m4;
ID BY cb_10_m4 cb_14_m4 cb_15_m4;

Is there any workaround here?
Many thanks.
 Linda K. Muthen posted on Monday, October 13, 2014 - 9:07 am
If you want to test if the covariance is one, use MODEL TEST.
 Oliver Rizmanoski posted on Monday, October 13, 2014 - 11:54 am
Okay, thank you for the advice. I will have to try that.
And is there any way to compute the chi-square test for nested models when one model has two factors and the restricted model fixes the correlation between the two factors to 1? (The problem being that there seem to be convergence problems for the model in which the factors correlate perfectly.)

 Linda K. Muthen posted on Monday, October 13, 2014 - 2:49 pm
You are fixing the covariance to one. It seems that is not what the covariance is, so it causes convergence problems. If you want to test whether the covariance is one, MODEL TEST allows you to do that.
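In Mplus syntax, that test can be sketched by labeling the factor covariance and testing it against 1 in MODEL TEST (using the factor names from the model above; note that the labeled parameter is the correlation only if both factor variances are fixed at 1, e.g. by freeing the first loadings with * and fixing OD@1 and ID@1):

MODEL: OD BY cb_16_m4* cb_12_m4
cb_11_m4 cb_1_m4
cb_17_m4 cb_6_m4;
ID BY cb_10_m4* cb_14_m4 cb_15_m4;
OD@1; ID@1;
OD WITH ID (p1);
MODEL TEST: 0 = p1 - 1;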
 Marie Nancy Seraphin posted on Friday, June 05, 2015 - 9:19 am
Hello, I am working on a mediation model which includes latent and observed variables. I got my measurement model to run fine, and the SEM model is working great as well. However, I cannot get the DIFFTEST comparing the two models to work. I keep getting the warning that the H0 model is not nested in the H1 model. I just cannot figure out where I went wrong. Please help.
Measurement model:
categorical are mn35a mn35b mn35c mn35e mn35d anemia;
CHW by mn35a mn35b mn35c mn35e mn35d;
know by vit diare Nution ebf;
chw with anemia;
savedata: difftest is first.out;

SEM Model:
CHW by mn35a mn35b mn35c mn35e mn35d;
know by vit diare Nution ebf;
anemia on know chw;
know on chw;
model indirect:
anemia ind chw;
 Linda K. Muthen posted on Friday, June 05, 2015 - 9:26 am
Please send the two outputs and your license number to support@statmodel.com.
 Bengt O. Muthen posted on Friday, June 05, 2015 - 4:33 pm
Doesn't your first model have fewer parameters than your second? The second adds "anemia ON know", which isn't in your first model. If so, your first model is nested in your second and not vice versa.
 Marie Nancy Seraphin posted on Monday, June 08, 2015 - 6:50 am
Thank you, Dr. Muthen. I will check my models again.
 Sara De Bruyn posted on Friday, April 21, 2017 - 8:21 am
Dear Prof. Muthen,

I am doing a multiple group analysis and want to test whether some paths in my structural model differ significantly between the groups. I did this by constraining all the paths except for the path I'm interested in (H1), and comparing this model with a fully constrained model (H0). Since I'm using WLSMV as the estimator, I use the DIFFTEST option to get the chi-square difference test. However, I get the following warning:



Could you explain to me what this message means and how I can fix it?

Thank you very much for your help.

 Bengt O. Muthen posted on Friday, April 21, 2017 - 5:39 pm
Add Tech5 and then send the two output files to Support along with your license number.
 Goran Pavlov posted on Friday, October 26, 2018 - 7:48 am
Is DIFFTEST with MLMV in Mplus 7 and 8 implementing T2 formula from Asparouhov and Muthen (2006) or T3 from Asparouhov and Muthen (2010)?
Thank you.
 Tihomir Asparouhov posted on Friday, October 26, 2018 - 8:41 am
We use T3 since version 6.
 Goran Pavlov posted on Friday, October 26, 2018 - 8:45 am
Thank you.
 Lisa van Zutphen posted on Thursday, April 09, 2020 - 6:34 am
Dear sir/madam,

I compared nine models to a full CLPM using the DIFFTEST function. The full CLPM had the best fit; however, many estimates in this model are not statistically significant. I find this difficult to interpret, as I would have expected that an alternative model in which these paths were constrained to 0 would have had a better fit.
I have 5 waves of data. At first I thought it might be because only one of the 4 lagged effects (a1->b2, a2->b3, etc.) was significant, but since I also have lagged associations between two variables that are not significant at any of the time intervals, I still do not understand why the full model had a better fit based on the DIFFTEST.
I was wondering what your thoughts are on this topic.

Kind regards,

 Bengt O. Muthen posted on Thursday, April 09, 2020 - 4:17 pm
If two estimates are each insignificant, it can still happen that a test of both of them being zero rejects. This is because the estimates are correlated. You can check this with a Wald test, using the Mplus MODEL TEST feature, where you can include several parameter tests.
 Lisa van Zutphen posted on Tuesday, April 21, 2020 - 4:56 am
Thank you for your response; I hope you can give an additional comment. What do you mean by "estimates are correlated"? That they are equal in size (is that what I would test in MODEL TEST: path1 = path2?)? Or can I also use a WITH statement in MODEL TEST?
 Bengt O. Muthen posted on Tuesday, April 21, 2020 - 1:11 pm
If you ask for TECH3, you see that each parameter estimate has a variance, the square root of which is the standard error (SE). You also see that different estimates are correlated.

Model Test can be used for several things:

You can test that each is zero:

0 = path1;
0 = path2;

or you can test that they are equal:

0 = path1-path2;
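The labels used in MODEL TEST refer to parameter labels given in the MODEL command; a complete sketch for two lagged paths (hypothetical variable names):

MODEL: b2 ON a1 (path1);
b3 ON a2 (path2);
MODEL TEST: 0 = path1;
0 = path2;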