
Jon Elhai posted on Sunday, December 03, 2006  12:37 pm



Dear Drs. Muthen, From reading the documentation and discussion emails, it sounds like the DIFFTEST command is to be used for WLSMV estimation when comparing two nested models. I could not, however, find the answer to this question of mine... If I am comparing two models with DIFFTEST, how do I interpret the resulting p-value? Does a statistically significant p-value (e.g., < .05) merely mean that the less restrictive and more restrictive models are significantly different from each other, without inferring directionality? If so, if DIFFTEST results in a statistically significant difference between models, would I merely examine the two models' goodness-of-fit indices, and assume that the model with the better fit was found by DIFFTEST to be statistically better? I recall seeing one posting that suggested that a nonsignificant DIFFTEST merely means that the more restrictive model cannot be assumed to have a significantly poorer fit. This suggests to me that directionality is an issue. And if this is the case, I wonder how to test, using DIFFTEST, the hypothesis that the more restrictive model is significantly better than the less restrictive model in terms of fit. 


Using DIFFTEST, the order is predetermined. The least restrictive model is fit first. So if the p-value is significant, it means that the restriction worsens model fit. 
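As a sketch, the two-run workflow looks like this; the model, estimator, and file names below are illustrative placeholders, not taken from any poster's analysis:

```
! Run 1: the LESS restrictive (H1) model; save derivatives
ANALYSIS: ESTIMATOR = WLSMV;
SAVEDATA: DIFFTEST IS deriv.dat;
MODEL:    f1 BY y1-y5;
          f2 BY y6-y10;

! Run 2: the MORE restrictive (H0) model; request the test
ANALYSIS: ESTIMATOR = WLSMV;
          DIFFTEST IS deriv.dat;
MODEL:    f1 BY y1-y5;
          f2 BY y6-y10;
          f1 WITH f2@0;   ! the restriction being tested
```

A significant p-value on the resulting Chi-Square Test for Difference Testing then indicates that the restriction (here, the zero factor correlation) significantly worsens fit relative to the H1 model. 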


I am using DIFFTEST to check if parameters can be equated across multiple groups. I have gotten a significant result (p = 0.0184) from the Chi-Square Test for Difference Testing, which I understand indicates that the unrestricted model (fit first, used to generate deriv.dat) fits better. However, the other fit statistics suggest that the constrained model fits relatively better (i.e., CFI and TLI are greater, and RMSEA is lower, for the restricted model). Which indicator (DIFFTEST versus CFI/TLI/RMSEA) should I use to choose the best model? 


You should use DIFFTEST to compare nested models. 

Chad Gundy posted on Thursday, December 03, 2009  2:42 am



Dear Drs. Muthen, I have a question about testing nested models using the DIFFTEST function for WLSMV estimators. I tried to directly compare several models which I had thought were nested in each other, and DIFFTEST had no complaints: everything seemed to work well. However, a colleague pointed out that one of my models did not seem to be nested in another one. Namely, both models were two-dimensional CFA models, and the "nested" model was clearly more restricted, for it had two extra fixed parameters. However, in the two models, an observed variable was allowed to load on a different factor. My colleague also objected to directly comparing a first-order CFA with a higher-order CFA. My question is whether I would be justified in using DIFFTEST in these cases, noting that it doesn't complain about any problems? If so, how can I explain this to my colleague? If not, why not? Thanks for your time and insight. 


Mplus checks that the nested model, the more restrictive model, has a worse fitting function value and fewer parameters than the other model. This does not totally ensure that the model is nested. I don't think the model with an observed variable loading on a different factor is nested. I think the other model is nested because it restricts the psi matrix, but there may be something else I do not know of that would make it not nested. 

Catherine posted on Friday, February 25, 2011  8:22 am



Dear Drs. Muthen, I want to use the Difftest option to compare a two-factor model with the same model but with measurement errors allowed to correlate. But all I get is this: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL. I am wondering what's wrong with my model? 


Please send the relevant outputs and your license number to support@statmodel.com. 

Kurt Beron posted on Friday, April 15, 2011  12:23 pm



Dear Drs. Muthen, I am running CFAs with categorical data and using DIFFTEST for my nested models. Things work fine when I use WLSMV. However, I have some models with many parameters and receive the program's advice to try ULSMV with them, given the extraordinarily long time for convergence otherwise. I've tried this and also used DIFFTEST with it, based on the program output that says: * The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used for chi-square difference testing in the regular way. MLM, MLR and WLSM chi-square difference testing is described on the Mplus website. MLMV, WLSMV, and ULSMV difference testing is done using the DIFFTEST option. However, when I run ULSMV with DIFFTEST I get the message: *** WARNING in ANALYSIS command DIFFTEST is valid only for estimators WLSMV and MLMV. Request for DIFFTEST will be ignored. I'm missing something here. The manual seems silent on ULSMV for this. Would you explain what the proper difference test is to use here and how I should implement it? I am using v6.1. Thanks. 


How many factors do you have? 

Kurt Beron posted on Saturday, April 16, 2011  4:04 pm



I am comparing a two-factor model to a one-factor model. The code is identical to my successful runs using WLSMV. All I change is the addition of estimator=ulsmv. 


And how many items do you have? 

Kurt Beron posted on Sunday, April 17, 2011  12:01 pm



Bengt, I have 18 indicators for one latent variable and 8 for the second in one time period, then I have the same setup for a different time period, and then I constrain across time periods. For example, the actual code for the constrained model is:

socvic9 by bb2seq2* bb2seq3 bb2seq6 bb2seq14 bb2seq16 bb2seq17 (1-6)
    bb2seq21 bb2seq23 bb2seq25 bb2seq26 bb2seq28 bb2seq29 (7-12)
    bb2seq31 bb2seq32 bb2seq34 bb2seq37 bb2seq39 bb2seq40 (13-18);
ovrtvic9 by bb2seq5* bb2seq8 bb2seq10 bb2seq12 bb2seq19 bb2seq24 (19-24)
    bb2seq27 bb2seq41 (25-26);
socvic9@1; ovrtvic9@1;
socvic10 by tcbb32* tcbb33 tcbb36 tcbb314 tcbb316 tcbb317 (1-6)
    tcbb321 tcbb323 tcbb325 tcbb326 tcbb328 tcbb329 (7-12)
    tcbb331 tcbb332 tcbb334 tcbb337 tcbb339 tcbb340 (13-18);
ovrtvic10 by tcbb35* tcbb38 tcbb310 tcbb312 tcbb319 tcbb324 (19-24)
    tcbb327 tcbb341 (25-26);
socvic10@1; ovrtvic10@1; 

Kurt Beron posted on Sunday, April 17, 2011  12:04 pm



And one addendum to the previous post: this is my test file, which still works with WLSMV but doesn't with ULSMV. However, the time-consuming one has this over five time periods, not just two. Thanks. 


So with 5 time periods you have 10 factors and 130 categorical items. That's a tough model to fit in either WLSMV or ML (which is also available in Mplus). WLSMV takes a long time due to the large weight matrix for many variables, and ML takes a long time due to the numerical integration over 10 dimensions. With ML, Monte Carlo integration could possibly be used, but LRT testing is problematic with Monte Carlo due to only approximate loglikelihoods. I don't think ULSMV helps here given that you need DIFFTEST. In version 6.1, ULSMV is inadvertently shut off in connection with DIFFTEST (which will be fixed in the new 6.11 version coming shortly), but my testing of a 72-item example shows that ULSMV isn't faster than WLSMV. This is because you can't use NOSERR and NOCHI since you need TECH3-type information for the second step of DIFFTEST. I guess I would try WLSMV and not work with all 5 time points together, in order to reduce the size of the problem. 

Kurt Beron posted on Sunday, April 17, 2011  2:48 pm



Thanks, Bengt. I have worked on cutting the problem into pieces but wanted to make sure the DIFFTEST issue with ULSMV wasn't suggesting some other issue I needed to be aware of. With your information I'll keep going with the current splitting process and not worry about 6.11 fixing the "feature" of 6.1. Thanks again. 

Jo Brown posted on Friday, June 01, 2012  4:00 am



Hi Bengt, I was planning to use the DIFFTEST option to estimate the difference in parameters between boys and girls in my sample. However, the girls and boys files are separate, as I ran multiple imputation on boys and girls separately. Is there a way to still use DIFFTEST when the groups you want to compare are not in the same file, or should I consider alternatives? 


See page 431 of the user's guide. 


Dear Drs. Muthén, I want to compare two nested models, but I was wondering whether the chi-square difference test using the WLSMV and MLMV estimators (DIFFTEST) is, just like the regular chi-square test, dependent on sample size? Thanks in advance for your help! Ank 


The issues of sample size would be the same. 

Jo Brown posted on Wednesday, June 06, 2012  6:15 am



Thanks for your earlier reply, Linda. I had a look at the example and see how to apply it to my data. I want to compare model fit for boys and girls (whose missing data have been imputed separately). So, following the example, I could use:

File (male) = "D:\male.txt";
File (female) = "D:\female.txt";

with the text files listing the actual imputed datasets. However, looking at some earlier board posts, it does not seem that I could use DIFFTEST on imputed data, and I wonder whether it would actually make sense? Many thanks 


DIFFTEST is not available for imputed data. 

Jo Brown posted on Wednesday, June 06, 2012  8:52 am



Thanks! 

Walt Davis posted on Wednesday, January 16, 2013  4:04 pm



Is it possible to run a DIFFTEST "directly"? Or multiple DIFFTESTs in one run? Or can DIFFTEST results be added together? I have a series of nested models; they don't take a long time to run, but long enough that I don't want to run them repeatedly to test against a series of less restricted models. So, for example:

H0: most restricted
H1: less restricted
H2: least restricted

So H0 is nested in H1, which is nested in H2. I've saved the derivatives from H1 and H2, but I'd rather not have to run the H0 model twice, first testing against H1 and then H2. 


No, there is currently no option to run DIFFTEST directly or to do multiple DIFFTESTs in one run. 
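In practice that means one Mplus run per comparison. With derivatives already saved from the H1 and H2 runs, the same H0 input simply has to be run twice, changing only the DIFFTEST file (the file names here are hypothetical):

```
! H2 run (least restricted):  SAVEDATA: DIFFTEST IS deriv_h2.dat;
! H1 run (less restricted):   SAVEDATA: DIFFTEST IS deriv_h1.dat;

! H0 run A, tested against H1:
ANALYSIS: DIFFTEST IS deriv_h1.dat;

! H0 run B (identical input otherwise), tested against H2:
ANALYSIS: DIFFTEST IS deriv_h2.dat;
```

Each run produces its own Chi-Square Test for Difference Testing; the resulting statistics are not additive across runs. 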

JMC posted on Thursday, June 13, 2013  7:53 pm



Dear Drs. Muthen, I am trying to compare models that I believe are nested, but Mplus is saying they are not. I am unclear on why; can you lend some insight?

h0:
SAVEDATA: DIFFTEST IS deriv.dat;
MODEL:
EFF BY EFF1-EFF7;
VAL BY VAL1-VAL7;
COG BY COG1-COG12;
VAL WITH EFF;
ITC ON VAL; ITC ON EFF; ITC ON COG;
ITC ON FARMS; ITC ON ETH; ITC ON GENDER;
COG ON VAL; COG ON EFF;

h1:
ANALYSIS: DIFFTEST IS C:\Users\Jenna Red\Desktop\deriv.dat;
MODEL:
EFF BY EFF1-EFF7;
COG BY COG1-COG12;
VAL BY VAL1-VAL7;
ITC ON EFF; ITC ON COG;
ITC ON FARMS; ITC ON ETH; ITC ON GENDER;
COG ON EFF;

h2:
ANALYSIS: DIFFTEST IS C:\Users\Jenna Red\Desktop\deriv.dat;
MODEL:
EFF BY EFF1-EFF7;
VAL BY VAL1-VAL7;
COG BY COG1-COG12;
VAL WITH EFF;
VAL WITH COG@0;
ITC ON VAL; ITC ON EFF; ITC ON COG;
ITC ON FARMS; ITC ON ETH; ITC ON GENDER;
COG ON EFF;

Thank you again! JC 


Please send the outputs and your license number to support@statmodel.com. 


Dear Discussion Community, I am running a multiple-group CFA with 4 binary indicators for one continuous factor using WLSMV. The goal is to compare nested models using the DIFFTEST option in order to identify measurement non-invariance. I have established the configural invariance model as a baseline for the DIFFTEST using the model constraints described in the UG (referent loading fixed at 1, all other loadings free, all thresholds free, all scale factors fixed at 1, and factor means fixed at 0). Scalar invariance was rejected, so I estimated partial invariance models based on modification indices. When freeing the loading and threshold of a non-invariant item, I set its scale factor to 1 according to the UG. For one non-invariant item the DIFFTEST option worked. The model fit was still not satisfactory, however. I therefore released the threshold and loading of another item, again setting its scale factor to 1. When running the model, I receive the message that DIFFTEST could not be used because H0 is not nested in H1. I do not see how this is possible; as far as I can see, the model is perfectly nested in the configural model. I am wondering if I need to set factor means to zero in this partial invariance model because I am releasing loadings and thresholds for half of my indicators. Thank you very much for your help. 
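For reference, a minimal sketch of the kind of group-specific specification described here (freeing one item's loading and threshold while fixing its scale factor to 1, per the UG rules for the Delta parameterization); the item and group names are hypothetical:

```
MODEL:    f BY u1-u4;   ! u1 is the referent; loading fixed at 1
MODEL g2: f BY u2;      ! free the non-invariant loading in group 2
          [u2$1];       ! free its threshold
          {u2@1};       ! fix its scale factor to 1
```

Whether the factor means must additionally be fixed at 0 in such a partial-invariance model, as the poster asks, depends on how many items remain invariant for identification. 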


Please send the two outputs and your license number to support@statmodel.com. 
