Sanjoy posted on Wednesday, April 13, 2005 - 5:52 pm
Dear Professors Muthen,
Before asking the main questions about DIFFTEST, let me clarify a couple of things. Please correct me if I'm wrong.
1. I have Mplus version 3.12, which I believe is the most up-to-date. I could not find example 12.12 (page 278) in the User's Guide examples folder; in fact, there is not a single file from Chapter 12 of the Mplus User's Guide in the folder that came with the Mplus CD.
2. As an alternative, I therefore typed in your code from page 278 and tried to run it. It failed with: "*** FATAL ERROR VARIABLE Y7 CAUSES A SINGULAR WEIGHT MATRIX PART. THIS MAY BE DUE TO THE VARIABLE BEING DICHOTOMOUS BUT DECLARED AS CONTINUOUS. RESPECIFY THE VARIABLE AS CATEGORICAL." The same message appeared for Y8 and Y9.
Next, I declared y7-y9 as categorical in the VARIABLE command and tried again; this time it worked well. I used the same data set you mention on page 278, even though y7-y9 are not declared categorical there. I just want to make sure I have not messed something up; kindly correct me if I have.
Now, coming to the DIFFTEST issues:
Q1. Usually, though not always, under the null hypothesis (H0) we assume the less restrictive model, and under the alternative hypothesis (H1) we put restrictions on the model parameters (e.g., the Chow test). It looks as though for DIFFTEST we reverse the usual practice. Why is that?
Q2. Can you suggest an article written on the DIFFTEST procedure?
Q3. How should we use the DIFFTEST result?
From the second step result I got this
"Chi-Square Test for Difference Testing
Value 2.968, Degrees of Freedom 3, P-Value 0.3953"
Usually a p-value close to zero (0.05 is a typical threshold) signals that the null hypothesis is false and we reject it, while a large p-value (like the 0.3953 above) implies that there is no detectable difference at the sample size used, so we fail to reject the null. However, in this DIFFTEST case, would it be the reverse?
Thanks and regards
BMuthen posted on Wednesday, April 13, 2005 - 11:24 pm
1. The examples from Chapter 12 are not included with the Mplus CD.
2. I would have to see the full model to answer this question.
A p-value greater than .05 says that the restrictions cannot be rejected; that is, the restrictions do not significantly worsen the fit of the model. There is currently no article written on DIFFTEST.
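For readers puzzling over the arithmetic: the p-value Mplus prints can be recomputed from the chi-square value and its degrees of freedom. A minimal sketch in Python (scipy is an assumption here, not part of Mplus; the small discrepancy from the printed 0.3953 may reflect rounding of the printed chi-square value):

```python
# Recompute the DIFFTEST p-value from the chi-square value and df
# quoted in the post above. A large p-value means the restrictions
# (the H0 model) are NOT rejected.
from scipy.stats import chi2

value, df = 2.968, 3
p = chi2.sf(value, df)   # upper-tail probability of a chi-square(3)
print(round(p, 4))       # close to the 0.3953 Mplus prints
```

Since p > .05 here, the restrictions are retained, exactly as described in the reply above.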
Sanjoy posted on Thursday, April 14, 2005 - 9:37 am
Thank you, Professor. I will mail you the full model.
Sanjoy posted on Thursday, April 14, 2005 - 10:47 am
Dear Professor, why is it that for WLSMV the conventional approach of taking the difference between the chi-square values and the difference in the degrees of freedom is not appropriate? I mean:
Q1. How can we show that the standard chi-square difference is not distributed as chi-square?
Q2. How do we ensure that DIFFTEST is doing the correct thing?
Thanks and regards
BMuthen posted on Friday, April 15, 2005 - 1:29 am
You may want to look at the literature by Satorra and Bentler on robust chi-square difference testing with continuous non-normal outcomes. The issues are the same.
You can do a simulation study to see how well DIFFTEST performs. There will be a forthcoming paper on the DIFFTEST theory.
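For anyone following up on the Satorra-Bentler reference: their 2001 Psychometrika paper derives a scaled difference statistic from the two models' uncorrected chi-squares, degrees of freedom, and scaling correction factors. A sketch of that computation follows (illustrative made-up numbers; note that Mplus's WLSMV DIFFTEST is a different, mean- and variance-adjusted computation, so this shows the analogous idea, not the DIFFTEST formula itself):

```python
# Satorra-Bentler (2001) scaled chi-square difference test -- a sketch.
# T0, df0, c0: uncorrected chi-square, df, and scaling correction factor
#              of the MORE restrictive (nested) model
# T1, df1, c1: the same quantities for the LESS restrictive model
from scipy.stats import chi2

def scaled_diff_test(T0, df0, c0, T1, df1, c1):
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)  # correction for the difference
    TRd = (T0 - T1) / cd                      # scaled difference statistic
    return TRd, df0 - df1, chi2.sf(TRd, df0 - df1)

# Illustrative (made-up) numbers, not from any post in this thread:
TRd, dfd, p = scaled_diff_test(T0=110.0, df0=50, c0=1.20,
                               T1=100.0, df1=47, c1=1.15)
```

Model 0 must be the more restrictive model; if the computed correction cd comes out negative in a given sample, this scaled difference test is not usable for that pair of models.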
1. Is it the article "A scaled difference chi-square test statistic for moment structure analysis" (A. Satorra & P. M. Bentler, Psychometrika, 66, 507-514, 2001) that you were referring me to, or something else?
2. I'm severely time constrained; nonetheless, I will try the simulation. In the meantime, if you could kindly send me an electronic copy of the forthcoming paper on the DIFFTEST theory, that would be a tremendous help. If the authors prohibit quoting it, it goes without saying that we will comply; reading the article would help me understand the nuances of DIFFTEST more comprehensively.
Thanks and regards
BMuthen posted on Saturday, April 16, 2005 - 4:25 am
2. The paper is not ready to be sent at this time.
Dr. Muthen, I am trying to follow example 12.12 in the Mplus version 4 manual to carry out the chi-square difference test for models estimated with WLSMV. I am receiving the following error message:
THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.
My second-step model constrains 2 regression coefficients to be equal:
y1 ON x1 (1); y1 ON x2 (1);
y1 was freely regressed on these predictors in the model I am using in the first step, as indicated on p. 314. My interest is in testing whether constraining these regression coefficients to be equal in the second step deteriorates the model fit. Is this possible following example 12.12, or am I off base in using the chi-square difference test for such a purpose? Thank you.
It sounds like what you are doing is possible. You would need to send your input, data, output, and license number to firstname.lastname@example.org for us to say more.
D C posted on Friday, September 24, 2010 - 2:48 pm
I am doing a multiple group analysis of a factor structure defined by categorical-ordinal indicators. I am using the WLSMV estimator, and hence I use DIFFTEST to judge whether various restrictions imposed on the model significantly worsen the fit. However, my data have a relatively large sample size (N=3650, with 2100 in one group and 1550 in the other).
My questions are: 1. Is DIFFTEST as sensitive to large sample sizes as the chi-square test?
2. If so, I would like to use differences in CFI values (Meade et al., 2008) to help judge the difference in model fit between restricted and less restricted models. So, in a multiple group analysis (when the GROUPING statement is used) with various restrictions imposed on a series of models, are the CFI values estimated anew each time? That is, is it advisable to take differences of CFI values between the restricted and less restricted models to judge model fit?
Dear Dr. Muthen, I am running a structural equation model with categorical latent variables (both IVs and DVs), and I am attempting a chi-square difference test for my multiple group analysis. I have been running into problems with the two-step chi-square test of model fit required when using the WLSMV estimator. After saving my derivatives in step one, with every pathway constrained, I then unconstrain one pathway for one group. Here is where I run into problems. I keep getting the error message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL. The chi-square in the second model (the one with one pathway unconstrained) is larger, and there are more degrees of freedom, compared to the baseline, fully constrained model. Is there something I'm not doing properly that keeps my H0 model from being nested in the H1 model? Any guidance you can give me would be much appreciated. I can send you my input and data if that is helpful. Thank you!
Dear Dr. Muthen, I am fitting a cross-lagged model and comparing group differences among White, Hispanic, and Black groups. I am using categorical indicators for my latent factors.
The DIFFTEST comparing the invariance and non-invariance models gives:
Chi-Square Test for Difference Testing: Value 103.284, Degrees of Freedom 38, P-Value 0.0000
but the RMSEA (0.04), TLI (0.982), and CFI (0.98) of the more restricted model are better than the RMSEA (0.051), TLI (0.978), and CFI (0.966) of the less restricted model.
Shouldn't it be otherwise? I mean, if the DIFFTEST is significant, shouldn't I expect the goodness-of-fit indices of the less restricted model to be better than those of the more restricted model? Thank you, Fernando
I don't know why the RMSEA and CFI are so good for the more restrictive model, but I assume that the chi-square is bad for the less restrictive model as well. In such cases, these fit indices don't always come out in the expected order of magnitude. I would rely more on the chi-square DIFFTEST.
My question concerns the interpretation of the chi-square difference test under the WLSMV estimator. In your post above (April 13, 2005, 11:24 pm) you suggested that "A p-value greater than .05 says that the restrictions cannot be rejected, that is, the restrictions do not worsen the fit of the model." However, on the UCLA website (precisely here: http://www.ats.ucla.edu/stat/mplus/faq/difftest.htm) the opposite interpretation of the p-value seems to be followed. Am I missing something? I cannot see how the two interpretations can match.
I would also be grateful if you could suggest some key references.
I think they are saying the same thing in a slightly different way.
See DIFFTEST under Technical Appendices on the website.
Suhaer Yunus posted on Thursday, November 28, 2013 - 12:27 pm
The independent variables in my study are binary, and I have run an EFA (using Mplus version 7.1). The EFA results show four correlated first-order factors, and the CFA results confirm this.
Now I want to determine whether the four-correlated-factors model is best, or whether there should be one higher-order factor representing the four first-order factors, or a single factor measuring all the items that form the four factors.
I understand that the models may not really be nested, so the DIFFTEST option may not be appropriate. I have estimated the three models separately, but how can I compare their results to choose the best one? Can I report the change in chi-square and change in df for these results?
The results of the models are:
Base Model (four correlated first-order factors): chi-square = 1160.660* (df = 48), RMSEA = 0.034, CFI = 0.935, TLI = 0.911
Model A (second-order model): chi-square = 1106.889* (df = 50), RMSEA = 0.032, CFI = 0.938, TLI = 0.919. But it suggests a 0.000 correlation between one first-order factor and the higher-order factor.
Model B (single factor): chi-square = 9593.979* (df = 54), RMSEA = 0.093, CFI = 0.444, TLI = 0.321
I have computed the DIFFTEST. The four-correlated-factors model is the least restrictive, the second-order model is more restrictive, and the single-factor model is the most restrictive. I am comparing the single factor to the second-order factor. With ESTIMATOR=WLSMV and PARAMETERIZATION=DELTA I get the following results:
Base Model (first-order): chi-square = 1160.660* (df = 48), RMSEA = 0.034, CFI = 0.935, TLI = 0.911
Second-Order Model: chi-square = 1520.644* (df = 50)
Chi-square test for difference testing: Value = 223.535, df = 2, p-value = 0.000
Values 1-3 don't work, and I get the same error message. In the output I have noticed that there are two convergence criterion values, i.e., one convergence criterion and another one for H1. Which value needs reducing?
If I set CONVERGENCE=0.15 or a value higher than 0.15, it gives me the DIFFTEST results. But I am not sure whether it is OK to use CONVERGENCE=0.15 or above.
I'm trying to compare a model with two latent factors to a model with one latent factor. I did this with DIFFTEST, since I am relying on MLMV, ULSMV, or WLSMV.
Mplus does not report a DIFFTEST result when I fix the correlation between the two factors at 1. At the same time it reports the warning: NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NON-INVERTIBLE. CHECK YOUR STARTING VALUES.
This warning also occurs when I run the two-factor model with the correlation fixed at 1 without the DIFFTEST option.
All models work fine when the correlation is not fixed or when there is only one latent variable. Could there be a specific reason for the problem? My sample size is <200. Or did I misspecify the model:
ANALYSIS: type = general; estimator = MLMV;
MODEL:
OD BY cb_16_m4 cb_12_m4 cb_11_m4 cb_1_m4 cb_17_m4 cb_6_m4;
ID BY cb_10_m4 cb_14_m4 cb_15_m4 cb_5_m4;
OD WITH ID@1;
[OD@0]; [ID@0];
Okay, thank you for the advice; I will try that. Is there any way to compute the chi-square test for nested models when one model has two factors and the restricted model fixes the correlation between the two factors at 1? (The problem being that there seem to be convergence problems for the model in which the factors correlate perfectly.)
Hello, I am working on a mediation model that includes latent and observed variables. My measurement model runs fine, and the SEM model works well too. However, I cannot get DIFFTEST to compare the two models. I keep getting the warning that the H0 model is not nested in the H1 model. I just cannot figure out where I went wrong. Please help.

Measurement model:
CATEGORICAL ARE mn35a mn35b mn35c mn35e mn35d anemia;
ANALYSIS: TYPE = COMPLEX; PARAMETERIZATION = THETA; ESTIMATOR = WLSMV;
MODEL:
chw BY mn35a mn35b mn35c mn35e mn35d;
mn35d WITH mn35b;
know BY vit diare Nution ebf;
chw WITH anemia;
SAVEDATA: DIFFTEST IS first.out;

SEM model:
ANALYSIS: TYPE = COMPLEX; PARAMETERIZATION = THETA; ESTIMATOR = WLSMV; DIFFTEST = first.out;
MODEL:
chw BY mn35a mn35b mn35c mn35e mn35d;
mn35d WITH mn35b;
know BY vit diare Nution ebf;
anemia ON know chw;
know ON chw;
MODEL INDIRECT: anemia IND chw;
I am doing a multiple group analysis and want to test whether some paths in my structural model differ significantly between the groups. I did this by constraining all the paths except the path I'm interested in (H1) and comparing this model with a fully constrained model (H0). Since I'm using WLSMV as the estimator, I use the DIFFTEST option to get the chi-square difference test. However, I get the following warning:
THE MODEL ESTIMATION TERMINATED NORMALLY
THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL MAY NOT BE NESTED IN THE H1 MODEL. DECREASING THE CONVERGENCE OPTION MAY RESOLVE THIS PROBLEM.
Could you explain what this message means and how I can fix it?
I compared nine models to a full CLPM using the DIFFTEST option. The full CLPM had the best fit; however, many estimates in this model are not statistically significant. I find this difficult to interpret, as I would have expected that an alternative model in which these paths were constrained to 0 would have fit better. I have 5 waves of data. At first I thought it might be because only one of the 4 lagged effects (a1->b2, a2->b3, etc.) was significant, but since I also have lagged associations between two variables that are not significant at any time interval, I still do not understand why the full model fit better according to the DIFFTEST. I was wondering what your thoughts are on this topic.
If two estimates are each insignificant, it can still happen that a test of both of them being zero rejects. This is because the estimates are correlated. You can check this with a Wald test using the Mplus MODEL TEST feature, where you can include several parameter tests.
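A hypothetical numerical sketch of that point (numbers invented for illustration, not from any model in this thread): two estimates can each be individually insignificant while the joint Wald test of both being zero rejects, once their sampling covariance is taken into account.

```python
# Joint Wald test of H0: theta1 = 0 and theta2 = 0 when the two
# estimates are strongly (negatively) correlated. All numbers are
# hypothetical, chosen only to illustrate the point.
import numpy as np
from scipy.stats import chi2

est = np.array([0.30, 0.28])              # two parameter estimates
V = np.array([[0.0400, -0.0380],          # their sampling covariance
              [-0.0380, 0.0400]])         # matrix (correlation = -0.95)

z = est / np.sqrt(np.diag(V))             # individual z = 1.5 and 1.4:
                                          # neither exceeds 1.96
W = est @ np.linalg.solve(V, est)         # Wald statistic, ~ chi2(2) under H0
p_joint = chi2.sf(W, df=2)                # rejects decisively (p < .001)
```

This joint test, using the full covariance matrix of the estimates rather than just the two standard errors, is what MODEL TEST computes.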
Thank you for your response; I hope you can give an additional comment. What do you mean by the estimates being correlated? That they are equal in size (is that what I would test in MODEL TEST: path1 = path2?)? Or can I also use a WITH statement in MODEL TEST?