DIFFTEST
 Jon Elhai posted on Sunday, December 03, 2006 - 12:37 pm
Dear Drs. Muthen,
From reading the documentation and discussion emails, it sounds like the DIFFTEST command is to be used for WLSMV estimation when comparing two nested models.

I could not, however, find the answer to this question of mine...
If I am comparing two models with DIFFTEST, how do I interpret the resulting p-value? Does a statistically significant p-value (e.g., < .05) merely mean that the less restrictive and more restrictive models are significantly different from each other, without implying directionality? If so, when DIFFTEST results in a statistically significant difference between models, would I merely examine the two models' goodness-of-fit indices and assume that the model with the better fit was found by DIFFTEST to be statistically better?

I recall seeing one posting suggesting that a non-significant DIFFTEST merely means that the more restrictive model cannot be assumed to have a significantly poorer fit. This suggests to me that directionality is an issue. If this is the case, I wonder how one would use DIFFTEST to test the hypothesis that the more restrictive model fits significantly better than the less restrictive model.
 Linda K. Muthen posted on Monday, December 04, 2006 - 8:27 am
Using DIFFTEST, the order is predetermined. The least restrictive model is fit first. So if the p-value is significant, it means that the restriction worsens model fit.
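A minimal sketch of the two-step setup (hypothetical categorical items y1-y6; the restriction tested here, equal loadings, is just an example):

! Step 1: fit the less restrictive H1 model and save the derivatives
ANALYSIS: ESTIMATOR = WLSMV;
MODEL: f BY y1-y6*;
f@1;
SAVEDATA: DIFFTEST IS deriv.dat;

! Step 2: fit the more restrictive H0 model against the saved file
ANALYSIS: ESTIMATOR = WLSMV;
DIFFTEST IS deriv.dat;
MODEL: f BY y1-y6* (lam);
f@1;

The second run's output then shows the Chi-Square Test for Difference Testing; a significant p-value means the H0 restrictions (here, equal loadings) significantly worsen fit.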
 Jaime Derringer posted on Tuesday, April 14, 2009 - 1:30 pm
I am using DIFFTEST to check whether parameters can be equated across multiple groups. I have gotten a significant result (p = 0.0184) from the Chi-Square Test for Difference Testing, which I understand indicates that the unrestricted model (fit first, used to generate deriv.dat) fits better. However, the other fit statistics suggest that the constrained model fits relatively better (i.e., CFI and TLI are greater, and RMSEA is lower, for the restricted model). Which indicator (DIFFTEST versus CFI/TLI/RMSEA) should I use to choose the best model?
 Linda K. Muthen posted on Thursday, April 16, 2009 - 5:52 pm
You should use DIFFTEST to compare nested models.
 Chad Gundy posted on Thursday, December 03, 2009 - 2:42 am
Dear Drs. Muthen,

I have a question about testing nested models using the DIFFTEST function for WLSMV estimators.

I tried to directly compare several models which I had thought were nested in each other, and DIFFTEST had no complaints: everything seemed to work well.

However, a colleague pointed out that one of my models did not seem to be nested in another one. Namely, both models were two-dimensional CFA models, and the "nested" model was clearly more restricted, for it had two extra fixed parameters. However, across the two models, an observed variable was allowed to load on a different factor.

My colleague also objected to directly comparing a 1st order CFA with a higher order CFA.

My question is whether I would be justified in using DIFFTEST in these cases, given that it doesn't complain about any problems. If so, how can I explain this to my colleague? If not, why not?

Thanks for your time and insight.
 Linda K. Muthen posted on Thursday, December 03, 2009 - 9:57 am
Mplus checks that the nested model, the more restrictive model, has a worse fitting-function value and fewer parameters than the other model. This does not totally ensure that the model is nested.

I don't think the model with an observed variable loading on a different factor is nested. I think the other model is nested because it restricts the psi matrix, but there may be something else I do not know about that would make it not nested.
 Catherine posted on Friday, February 25, 2011 - 8:22 am
Dear Drs Muthen,

I want to use the DIFFTEST option to compare a two-factor model with the same model but with measurement errors allowed to correlate.
But all I get is this:
THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.

I am wondering what's wrong with my model?
 Linda K. Muthen posted on Friday, February 25, 2011 - 8:35 am
Please send the relevant outputs and your license number to support@statmodel.com.
 Kurt Beron posted on Friday, April 15, 2011 - 12:23 pm
Dear Drs. Muthen,

I am running CFAs with categorical data and using DIFFTEST for my nested models. Things work fine when I use WLSMV. However, I have some models with many parameters and receive the program's advice to try ULSMV with them, given the extraordinarily long time for convergence otherwise. I've tried this, and also used DIFFTEST with it, based on the program output that says:

* The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used
for chi-square difference testing in the regular way. MLM, MLR and WLSM chi-square difference testing is described on the Mplus website. MLMV, WLSMV, and ULSMV difference testing is done using the DIFFTEST option.

However when I run ULSMV with DIFFTEST I get the message:

*** WARNING in ANALYSIS command
DIFFTEST is valid only for estimators WLSMV and MLMV.
Request for DIFFTEST will be ignored.

I'm missing something here. The manual seems silent on ULSMV for this.

Would you explain what the proper difference test is to use here and how I should implement it? I am using v6.1.

Thanks.
 Bengt O. Muthen posted on Saturday, April 16, 2011 - 2:49 pm
How many factors do you have?
 Kurt Beron posted on Saturday, April 16, 2011 - 4:04 pm
I am comparing a two-factor model to a one-factor model. The code is identical to my successful runs using WLSMV. All I change is the addition of ESTIMATOR = ULSMV.
 Bengt O. Muthen posted on Sunday, April 17, 2011 - 11:31 am
And how many items do you have?
 Kurt Beron posted on Sunday, April 17, 2011 - 12:01 pm
Bengt,

I have 18 indicators for one latent variable and 8 for the second in one time period; I have the same setup for a different time period, and I constrain across time periods. For example, the actual code for the constrained model is:

socvic9 by bb2seq2* bb2seq3 bb2seq6 bb2seq14 bb2seq16 bb2seq17 (1-6)
bb2seq21 bb2seq23 bb2seq25 bb2seq26 bb2seq28 bb2seq29 (7-12)
bb2seq31 bb2seq32 bb2seq34 bb2seq37 bb2seq39 bb2seq40 (13-18);

ovrtvic9 by bb2seq5* bb2seq8 bb2seq10 bb2seq12 bb2seq19 bb2seq24 (19-24)
bb2seq27 bb2seq41 (25-26);


socvic9@1;
ovrtvic9@1;

socvic10 by tcbb32* tcbb33 tcbb36 tcbb314 tcbb316 tcbb317 (1-6)
tcbb321 tcbb323 tcbb325 tcbb326 tcbb328 tcbb329 (7-12)
tcbb331 tcbb332 tcbb334 tcbb337 tcbb339 tcbb340 (13-18);

ovrtvic10 by tcbb35* tcbb38 tcbb310 tcbb312 tcbb319 tcbb324 (19-24)
tcbb327 tcbb341 (25-26);


socvic10@1;
ovrtvic10@1;
 Kurt Beron posted on Sunday, April 17, 2011 - 12:04 pm
One addendum to the previous post: this is my test file, which still works with WLSMV but not with ULSMV. However, the time-consuming one has this setup over five time periods, not just two.

Thanks.
 Bengt O. Muthen posted on Sunday, April 17, 2011 - 2:22 pm
So with 5 time periods you have 10 factors and 130 categorical items. That's a tough model to fit in either WLSMV or ML (which is also available in Mplus). WLSMV takes a long time due to the large weight matrix for many variables and ML takes a long time due to the numerical integration over 10 dimensions. With ML, Monte Carlo integration could possibly be used but LRT testing is problematic with Monte Carlo due to only approximate loglikelihoods.

I don't think ULSMV helps here given that you need DIFFTEST. In version 6.1, ULSMV is inadvertently shut off in connection with DIFFTEST (which will be fixed in the new 6.11 version coming shortly), but my testing of a 72-item example shows that ULSMV isn't faster than WLSMV. This is because you can't use NOSERR and NOCHI since you need "TECH3-type" information for the second step of DIFFTEST.

I guess I would try WLSMV and not work with all 5 time points together in order to reduce the size of the problem.
 Kurt Beron posted on Sunday, April 17, 2011 - 2:48 pm
Thanks, Bengt. I have worked on cutting the problem into pieces but wanted to make sure the DIFFTEST issue with ULSMV wasn't suggesting some other issue I needed to be aware of. With your information I'll keep going with the current splitting process and not worry about 6.11 fixing the "feature" of 6.1.

Thanks again.
 Jo Brown posted on Friday, June 01, 2012 - 4:00 am
Hi Bengt,

I was planning to use the DIFFTEST option to test for differences in parameters between boys and girls in my sample.

However, the girls and boys files are separate as I ran multiple imputation on boys and girls separately.

Is there a way to still use DIFFTEST when the groups you want to compare are not in the same file, or should I consider alternatives?
 Linda K. Muthen posted on Friday, June 01, 2012 - 5:57 am
See page 431 of the user's guide.
 Ank Ringoot posted on Friday, June 01, 2012 - 7:12 am
Dear Drs. Muthén,

I want to compare two nested models, but I was wondering whether the chi-square difference test using the WLSMV and MLMV estimators (DIFFTEST) is, just like the regular chi-square test, dependent on sample size?
Thanks in advance for your help!
Ank
 Linda K. Muthen posted on Friday, June 01, 2012 - 9:53 am
The issues of sample size would be the same.
 Jo Brown posted on Wednesday, June 06, 2012 - 6:15 am
Thanks for your earlier reply Linda. I had a look at the example and see how to apply it to my data.

I want to compare model fit for boys and girls (whose missing data were imputed separately). So, following the example, I could specify:

FILE (male) = "D:\male.txt";
FILE (female) = "D:\female.txt";

with the text files listing the actual imputed datasets.

However, looking at some earlier board posts, it does not seem that I can use DIFFTEST on imputed data, and I wonder whether it would actually make sense?

Many thanks
 Linda K. Muthen posted on Wednesday, June 06, 2012 - 8:28 am
DIFFTEST is not available for imputed data.
 Jo Brown posted on Wednesday, June 06, 2012 - 8:52 am
Thanks!
 Walt Davis posted on Wednesday, January 16, 2013 - 4:04 pm
Is it possible to run a DIFFTEST "directly"? Or multiple DIFFTESTs in one run? Or can DIFFTEST results be added together?

I have a series of nested models -- they don't take a long time to run, but long enough that I don't want to run them repeatedly to test against a series of less restricted models.

So for example:

H0: most restricted
H1: less restricted
H2: least restricted

So H0 nested in H1 nested in H2. I've saved the derivatives from H1 and H2 but I'd rather not have to run the H0 model twice, first testing against H1 and then H2.
 Linda K. Muthen posted on Thursday, January 17, 2013 - 7:29 am
No, there is currently no option to run DIFFTEST in one run or to do multiple DIFFTESTs.
 JMC posted on Thursday, June 13, 2013 - 7:53 pm
Dear Drs. Muthen,

I am trying to compare models that I believe are nested, but Mplus is saying they are not. I am unclear on why; can you lend some insight?

h0
SAVEDATA: DIFFTEST IS deriv.dat;

MODEL:
EFF BY EFF1-EFF7;
VAL BY VAL1-VAL7;
COG BY COG1-COG12;
VAL WITH EFF;

ITC ON VAL;
ITC ON EFF;
ITC ON COG;
ITC ON FARMS;
ITC ON ETH;
ITC ON GENDER;
COG ON VAL;
COG ON EFF;



h1
ANALYSIS:
DIFFTEST IS C:\Users\Jenna Red\Desktop\deriv.dat;

MODEL:
EFF BY EFF1-EFF7;
COG BY COG1-COG12;
VAL BY VAL1-VAL7;

ITC ON EFF;
ITC ON COG;
ITC ON FARMS;
ITC ON ETH;
ITC ON GENDER;
COG ON EFF;



h2
MODEL:
EFF BY EFF1-EFF7;
VAL BY VAL1-VAL7;
COG BY COG1-COG12;
VAL WITH EFF;
VAL WITH COG@0;

ITC ON VAL;
ITC ON EFF;
ITC ON COG;
ITC ON FARMS;
ITC ON ETH;
ITC ON GENDER;
COG ON EFF;

ANALYSIS: DIFFTEST IS C:\Users\Jenna Red\Desktop\deriv.dat;

Thank you again!

JC
 Linda K. Muthen posted on Thursday, June 13, 2013 - 8:22 pm
Please send the outputs and your license number to support@statmodel.com.
 Jochen Stueber posted on Wednesday, August 07, 2013 - 8:41 am
Dear Discussion Community,

I am running a multigroup CFA with 4 binary indicators for one continuous factor using WLSMV. The goal is to compare nested models using the DIFFTEST option in order to identify measurement non-invariance.

I have established the configural invariance model as a baseline for the DIFFTEST using the model constraints described in the UG (Referent loading @1, all other loadings free, all thresholds free, all scaling matrices@1 and factor means@0).
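For reference, a minimal input sketch of that configural baseline (hypothetical binary items u1-u4, two groups, default Delta parameterization):

VARIABLE: CATEGORICAL = u1-u4;
GROUPING = g (1 = g1 2 = g2);
ANALYSIS: ESTIMATOR = WLSMV;
MODEL: f BY u1-u4; ! referent loading fixed at 1 by default
MODEL g2: f BY u2-u4; ! free the non-referent loadings in group 2
[u1$1 u2$1 u3$1 u4$1]; ! free the thresholds in group 2
{u1-u4@1}; ! scale factors fixed at 1 (group 1 default)
[f@0]; ! factor mean fixed at 0 (group 1 default)
SAVEDATA: DIFFTEST IS deriv.dat;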

Scalar invariance was rejected, so I estimated partial invariance models based on modification indices. When freeing the loading and threshold of a non-invariant item, I set its scaling factor to 1 according to the UG.
For one non-invariant item the DIFFTEST option worked. The model fit was still not satisfactory, however. I therefore released the threshold and loading of another item, again setting its scaling factor to 1.
When running the model, I receive the message that DIFFTEST could not be used because H0 is not nested in H1. I do not see how this is possible; as far as I can see, the model is perfectly nested in the configural model. I am wondering whether I need to set factor means to zero in this partial invariance model because I am releasing loadings and thresholds for half of my indicators.

Thank you very much for your help.
 Linda K. Muthen posted on Wednesday, August 07, 2013 - 10:40 am
Please send the two outputs and your license number to support@statmodel.com.
 Yvonne LEE posted on Wednesday, July 09, 2014 - 5:04 pm
I am new to Mplus and am running an SEM with WLSMV estimation on a rather small offender sample (N = 175). In the end, the chosen model has fit indices: RMSEA = 0.076; WRMR = 0.773; CFI = 0.736; TLI = 0.705. I did not refer to the chi-square test because of the small sample. I would like to seek your expert advice on the following:

1. Based on RMSEA and WRMR, the model fit is acceptable. However, CFI and TLI are not good. Is that OK? Some literature suggests RMSEA <= .08 or <= .06 and WRMR <= 1 as cut-offs.

2. When compared to the alternative model, the DIFFTEST shows a significant result even though the less restrictive model shows RMSEA = 0.072. For small-sample research, is it proper to rely on DIFFTEST, which is based on a chi-square test?
 Bengt O. Muthen posted on Wednesday, July 09, 2014 - 6:35 pm
That is a poor CFI. Also, I think RMSEA is a little high - you want it to be at most 0.05. RMSEA is also based on chi-2. I would use chi-2 as a rough approximation - and also DIFFTEST in the same way. Seems like the model needs more work.
 Lauren Brumley posted on Tuesday, September 29, 2015 - 10:04 am
I want to compare a two-factor (less restrictive model) to a one-factor (more restrictive model) CFA using 10 binary indicators and WLSMV estimator. I am using the Add Health dataset.

On the two-factor model, 5 items load onto a nonviolent antisocial behavior factor and 5 items load onto a violent antisocial behavior factor. On the one-factor model, all 10 items load onto one factor of antisocial behavior.

I used the DIFFTEST option to compare the models, and there is only one degree of freedom for the Chi Square Test for Difference Testing. Is the one parameter difference because 1 factor variance parameter is estimated in the one-factor model instead of 2 factor variances in the two-factor model?

Thank you very much for your help.
 Tihomir Asparouhov posted on Tuesday, September 29, 2015 - 10:13 pm
The additional parameter is the correlation parameter between the two factors. When the correlation is 1 the two factor model reduces to the one factor model.
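That nesting can be made explicit (a minimal sketch with hypothetical binary items u1-u10; factors standardized so the WITH parameter is a correlation):

! H1, run first: two correlated factors
MODEL: f1 BY u1-u5*;
f2 BY u6-u10*;
f1-f2@1;
SAVEDATA: DIFFTEST IS deriv.dat;

! H0, run second: the same model with the correlation fixed at 1,
! which is equivalent to the one-factor model
ANALYSIS: DIFFTEST IS deriv.dat;
MODEL: f1 BY u1-u5*;
f2 BY u6-u10*;
f1-f2@1;
f1 WITH f2@1;

Written this way, the single degree of freedom is visible: the only added restriction is the correlation fixed at 1.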
 Lauren Brumley posted on Wednesday, September 30, 2015 - 4:59 pm
Ok great, thank you so much! That's very helpful.
 Brett Holfeld posted on Wednesday, November 04, 2015 - 10:52 am
Hi,

I ran a simple path model where I have two continuous variables predicting a binary outcome with several covariates, using the WLSMV estimator. I ran a multigroup model to determine whether the model differs by sex, using the DIFFTEST command; however, I am having some trouble with the interpretation. Comparing the less restrictive (unconstrained) model with the more restrictive (constrained) model results in a non-significant chi-square difference. Does this mean that the model does not differ by sex? Any help would be greatly appreciated!

Brett
 Linda K. Muthen posted on Wednesday, November 04, 2015 - 12:43 pm
Yes, this is what it means.
 Brett Holfeld posted on Wednesday, November 04, 2015 - 12:53 pm
Great, thanks Linda!
 Evelyn Hall posted on Wednesday, January 27, 2016 - 7:14 am
Hello, I am trying to run DIFFTEST but am a little confused about the least restrictive versus most restrictive model.

Could anybody tell me which model to run first, and explain why?

Model 1:

MODEL:
F1 BY B1 B5 B10 B12;
F2 BY B4 B7 B9 B15;
F3 BY B2 B3 B6 B8 B11 B13 B16;


Model 2:

MODEL:
F1 BY B1 B5 B10 B12;
F2 BY B4 B7 B9 B15;
F3 BY B2 B6 B11 B13;
F4 BY B3 B8 B16;

Many thanks
 Linda K. Muthen posted on Wednesday, January 27, 2016 - 12:03 pm
The least restrictive model is the model with the most parameters. The most restrictive model is the model with the fewest parameters.

DIFFTEST is for testing nested models. Your models are not nested.
 Evelyn Hall posted on Wednesday, January 27, 2016 - 1:43 pm
Thank you for your quick response.

Also, sorry, I thought they were nested since Mplus allowed me to run the DIFFTEST.
 Bengt O. Muthen posted on Wednesday, January 27, 2016 - 3:19 pm
Mplus just tests for necessary conditions of nesting, not sufficient ones. That is, it checks that the nested model has fewer parameters and worse fit.
 wayne smith posted on Wednesday, March 16, 2016 - 8:20 am
Hello,

I would like to perform a DIFFTEST between a bifactor model and a CFA model, but when I do I get the message below:


THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE FILE CONTAINING INFORMATION ABOUT THE H1 MODEL HAS INSUFFICIENT DATA.

Can you tell me where I am going wrong with this model?




!H0

!Bifactor model

MODEL:

General BY I01* I02 I03 I04 I05 I06-I12;

! 3 specific factors:
f1 BY I01* I06 I08;
f2 BY I10* I11 I12;
f3 BY I02* I07;

General@1; f1-f3@1;

General WITH f1-f3@0;

SAVEDATA: DIFFTEST IS C:\Users\James Smith \Desktop\James.dat;


!H1

ANALYSIS: DIFFTEST IS C:\Users\James Smith\Desktop\James.dat;

MODEL:

!CFA, three factors:
f1 BY I01 I06 I08;
f2 BY I09 I10 I11 I12;
f3 BY I02 I07;


Thanks in advance for your help

Marc
 Linda K. Muthen posted on Wednesday, March 16, 2016 - 2:50 pm
Please send the two outputs and your license number to support@statmodel.com.
 Dr. Kashdan's Lab posted on Wednesday, June 22, 2016 - 7:03 am
Hi Muthens,

I noticed in an earlier post on this thread (Jaime Derringer, Tuesday, April 14, 2009 - 1:30 pm) that you suggested the p-value from the DIFFTEST results should be used for model selection over comparing the model fit indices (CFI/TLI and RMSEA) when comparing nested models using the WLSMV estimator.

I have a similar situation. The p-value from the DIFFTEST results suggests the constrained model fits WORSE, but the model fit indices suggest the constrained model fits BETTER.

I did what you suggested in a manuscript; however, a reviewer is confused by this. He says that I should go by the model fit indices and not the DIFFTEST results, 1) because the model fit indices better take into account model parsimony, and 2) because my sample size is very large (N = 7617) and the DIFFTEST results are highly influenced by sample size.

Is he right? If not, how would you recommend I reply to him?

Thank you,
David
 Bengt O. Muthen posted on Wednesday, June 22, 2016 - 5:06 pm
I don't think I've seen a study of how well changes in model fit indices perform with categorical outcomes. Even if DIFFTEST is powerful at this n, you can see whether the model changes that are indicated are substantively important.
 Kelly Minor posted on Thursday, August 18, 2016 - 4:13 pm
Hello,

I am trying to run a multiple group analysis on nested data with a categorical outcome. I am reading conflicting suggestions on what to use to test for differences between groups: Satorra-Bentler, Wald, and DIFFTEST have all been in discussion. Please advise the correct test.

I am examining predictors (x1, x2, x3) of college enrollment (0=none, 1=delayed, 2=immediate) using WLSMV. I want to see if these differ based on SES quartile. My N = 14,018, and the (condensed) input is:

CLUSTER is SCH_ID ;
WEIGHT is F2BYWT;
CATEGORICAL is college ;
GROUPING is SESquart (1=FIRST, 2=SECOND, 3=THIRD, 4=FOURTH);

ANALYSIS:
TYPE = COMPLEX;

!For the first input ;
MODEL:
college ON x1 ;
college ON x2;
college ON x3;

college ON covariates;

!estimate variances to ensure no deletion ;

!REPEAT FOR MODEL SECOND, MODEL THIRD, MODEL FOURTH;
----------------------------
!For the second input file;
MODEL:
college ON x1 (1);
college ON x2 (2);
college ON x3 (3);

college ON covariates ;

!estimate variances to ensure no deletion ;

!REPEAT FOR MODEL SECOND, MODEL THIRD, MODEL FOURTH;


Any information on how to compare groups and simultaneously accommodate weighting, nesting, and categorical outcomes is appreciated.

Thanks!
 Bengt O. Muthen posted on Thursday, August 18, 2016 - 6:12 pm
I would use MLR and likelihood-ratio chi-square testing of group differences (such as the ones you indicate with numbers in parentheses). See

Difference Testing Using the Loglikelihood

on our web page:

http://www.statmodel.com/chidiff.shtml
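For reference, the scaled difference test described on that page uses the H0 (more restrictive) and H1 loglikelihoods L0 and L1, their scaling correction factors c0 and c1, and their numbers of free parameters p0 and p1 (with p0 < p1):

cd = (p0*c0 - p1*c1) / (p0 - p1)
TRd = -2*(L0 - L1) / cd

TRd is then referred to a chi-square distribution with p1 - p0 degrees of freedom.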
 Margarita  posted on Friday, August 19, 2016 - 2:41 am
Dear Dr. Muthen,

I am a bit confused regarding the fit indices given by the H0 model during DIFFTEST.

I initially tested a model, e.g.:

Model 1: Y ON A B;
with 123 free parameters and chi2(471) = 1684.571, RMSEA = .038 (.037-.040), CFI = .947, TLI = .941.

Then I decided to check the impact of a covariate (Model 2: Y on A B Covariate;) and then to statistically compare the two models using DIFFTEST.

So Model 1 became the H0 model, where I constrained the covariate's effect to zero:
Y ON A B Covariate@0;
with again 123 free parameters but different fit indices than Model 1:
chi2(504) = 3014.608, RMSEA = .056 (.054-.058), CFI = .862, TLI = .846.

I thought that Model 1 and the H0 model would have similar fit indices given that the covariate's effect is constrained to 0? Should the fit indices of the H0 model be taken into consideration?

Which of the two models (H0 vs.H1) should then be reported in a paper?

Thank you!
 Bengt O. Muthen posted on Friday, August 19, 2016 - 11:53 am
You need to have the same variables in the analyses for which you do DIFFTEST.

The model with Y ON A B Cov@0; says that Cov doesn't influence Y, which may lead to misfit. That's different from Y ON A B. Just run the model Y ON A B Cov; and see if the Cov slope is significant.
 Kelly Minor posted on Friday, August 19, 2016 - 1:48 pm
Thank you for your reply! I changed my input to specify ESTIMATOR = MLR but I got the following error:

"ALGORITHM=INTEGRATION is not available for multiple group analysis. Try using the KNOWNCLASS option for TYPE=MIXTURE."

I am unsure how to handle this as it is not a mixture model. Is there another way to request MLR that will allow the model to run?
 Linda K. Muthen posted on Friday, August 19, 2016 - 4:27 pm
For your model, multiple group analysis is obtained by using TYPE=MIXTURE and the CLASSES and KNOWNCLASS options. When all classes are known, it is the same as multiple group analysis.
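A minimal sketch of that setup, reusing the variable names from the earlier input (covariates and variance statements omitted):

VARIABLE: CLUSTER = SCH_ID;
WEIGHT = F2BYWT;
CATEGORICAL = college;
CLASSES = cg (4);
KNOWNCLASS = cg (SESquart = 1 SESquart = 2 SESquart = 3 SESquart = 4);

ANALYSIS: TYPE = MIXTURE COMPLEX;
ESTIMATOR = MLR;

MODEL:
%OVERALL%
college ON x1 x2 x3;

%cg#2%
college ON x1 x2 x3; ! restating the slopes frees them in this class;
! repeat for %cg#3% and %cg#4%

Slopes mentioned only in %OVERALL% are held equal across the known classes, so restating them class-specifically frees them; equality can then be tested with the loglikelihood difference test described at the link above.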
 Margarita  posted on Monday, August 22, 2016 - 8:49 am
Thank you for your reply. The slope is statistically significant for the first time points (I am running a 3x3 cross-lag model).

I understand that to compare two models using chi-square, the two models need to be nested. Would it then be wrong to compare the fit of the initial model (Y ON A B) to that of the H1 model (Y ON A B Cov) if one is not interested in the chi-square difference?

I have seen both in published papers, and I am just trying to understand what is the best practice from a statistical point of view (i.e., comparing nested models using chi-square vs. non-nested models without chi-square).

Thank you for your help.
 Bengt O. Muthen posted on Monday, August 22, 2016 - 9:52 am
Yes, that would be wrong with the WLSMV estimator. You can do it with ML because then only the DVs need to be the same.
 Yoosun Chu posted on Sunday, August 06, 2017 - 7:56 pm
Hello,
I would like to do a DIFFTEST on my two models. I thought that the two models were nested, but the output says they are not.
My H1 has 20 items, and I assume four factors.
My H0 has 18 items (a subset of the 20 items in H1), and I assume three factors.
I assume the issue might be the different sample sizes: the N in H0 is smaller than the N in H1 due to cases missing on all variables. Any advice would be appreciated. Thanks.
 Linda K. Muthen posted on Monday, August 07, 2017 - 5:42 am
At a minimum, for two models to be nested they must share the same set of dependent variables.
 Hillary Gorin posted on Thursday, December 14, 2017 - 10:55 am
Hi,

When conducting diff testing for hierarchical models, I am getting the following warning:

WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE
DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A
LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT
VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE ILIFANT.

I attempted to impose model constraints to avoid Heywood cases, but diff testing will not run with the model constraints; it does run without them. Can my diff-testing results without the constraints be trusted?

Thanks!
Hillary
 Bengt O. Muthen posted on Thursday, December 14, 2017 - 12:14 pm
Model constraints on residual variances typically don't solve the problem behind the warning.

If this doesn't help, send your relevant outputs to Support along with your license number.
 Hillary Gorin posted on Friday, December 15, 2017 - 11:06 am
Thanks for your response! Model constraints resolved the warning for the hierarchical models. Do you have other recommendations for resolving the warnings in the diff testing?

Thanks!
Hillary
 Bengt O. Muthen posted on Friday, December 15, 2017 - 5:22 pm
See if TECH4 can tell you why you get the message. Typically you have to modify the model, sometimes with the help of Modindices. Note that the message isn't due to Difftesting per se but due to one of the two models being analyzed.
 Natalia Conforti posted on Wednesday, February 14, 2018 - 12:20 pm
Hi,
We are testing several models; in all cases we use WLSMV (ordinal variables).
We have identified some models with adequate fit according to some indices (RMSEA, CFI, TLI), but they are not nested.
So, how can we compare them?
In all cases chi2 is significant (we have samples > 350).
Thanks!
 Bengt O. Muthen posted on Wednesday, February 14, 2018 - 4:07 pm
I don't think there is a great way to compare well-fitting models using WLSMV given that it doesn't have BIC. Perhaps you can see which model gets the lowest Modindices.
 Lu posted on Thursday, October 25, 2018 - 8:09 am
I wanted to perform a diff test comparing a 6-factor model with a two-bifactor model. Is this possible? I wasn't sure whether these two are nested models, but when I tried it anyway, the Mplus output does say these are not nested models. Thanks.
 Bengt O. Muthen posted on Thursday, October 25, 2018 - 4:13 pm
Try our NESTED option. We have a section on this in our new paper on our website:

Asparouhov, T. & Muthén, B. (2018). Nesting and equivalence testing for structural equation models. Structural Equation Modeling: A Multidisciplinary Journal. DOI:10.1080/10705511.2018.1513795 (Download scripts).
 Lisa van Zutphen posted on Thursday, November 21, 2019 - 1:01 pm
I have compared several alternative models to one main model, using DIFFTEST. Only one of the alternative models had a better fit according to the DIFFTEST. However, all the fit indices (RMSEA, CFI, TLI, Chi2) are better in all alternative models. Although I read on the forum that I should rely on the DIFFTEST, I am not sure how to interpret it. Can you give some advice on how to explain why all the alternative models had better fit indices than the main model, but only one of these alternative models had a better fit based on the DIFFTEST?

Thank you
 Tihomir Asparouhov posted on Thursday, November 21, 2019 - 8:14 pm
RMSEA, CFI, and TLI don't have statistical significance attached, so even though they are better we can't say that they are statistically significantly better. Chi2 is better by definition: if the models are nested, the chi-square will always be better for the less restricted model. Statistical significance is obtained by the DIFFTEST command. You can also use MODEL TEST to test between the various models as an alternative.
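For example, when the restricted model can be expressed through constraints on labeled parameters, a Wald test via MODEL TEST avoids the second DIFFTEST run (a minimal sketch with hypothetical loadings):

MODEL: f BY y1
y2 (l2)
y3 (l3);

MODEL TEST: l2 = l3;

The output then includes a Wald chi-square test of the stated constraint.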
 Lisa van Zutphen posted on Friday, November 22, 2019 - 4:51 am
So, the models improve, but not enough to be a real improvement? Is there also a role for the number of parameters in the models? As fewer parameters are usually preferred, does the relation between model fit and number of parameters not improve enough to prefer the simpler model?
 Tihomir Asparouhov posted on Friday, November 22, 2019 - 10:09 am
Yes. DIFFTEST has a number-of-parameters penalty embedded in it. Just like the standard LRT, the test statistic is compared to a chi-square distribution with degrees of freedom equal to the difference in the number of parameters.
 Joanna Davies posted on Monday, February 10, 2020 - 7:37 am
Hello,

I'm using DIFFTEST to compare nested models, to see if a less restrictive model (grouping by sex) is better. In my H1 model, is it OK to free each path one at a time to make a sort of iterative comparison? Or should I free all paths at the same time to compare with the H0 model where all paths are restricted?

When I compare H1 with all paths unrestricted to H0, DIFFTEST is non-significant. But when I free certain paths but leave others restricted in H1 and compare with the fully restricted H0, I get a significant DIFFTEST, suggesting there are significant differences between men and women for some paths but not for others.

Is it OK to take this iterative approach?

Thank you.
 Bengt O. Muthen posted on Tuesday, February 11, 2020 - 3:26 pm
I think so.
 Daniel Lee posted on Thursday, October 22, 2020 - 4:02 pm
Hi, I am trying to examine group differences in the "a", "b", and "c" paths in a simple mediation. I understand that bootstrapped standard errors and confidence intervals are important when calculating indirect effects. Is there a way to compare group differences when employing bias-corrected bootstrapping?

I have tried this for my model and was told that the MODEL TEST command and DIFFTEST option are not available in conjunction with BOOTSTRAP.

Thank you for your help!
 Bengt O. Muthen posted on Friday, October 23, 2020 - 4:02 pm
Did you use version 8.4?
 Daniel Lee posted on Sunday, October 25, 2020 - 8:56 am
Yes. I am using version 8.4
 Bengt O. Muthen posted on Sunday, October 25, 2020 - 5:19 pm
Use Model Constraint.
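For example, a minimal sketch for a two-group mediation (hypothetical variables x, m, y; groups g1 and g2 from a GROUPING option; the labels define the group-specific paths):

ANALYSIS: BOOTSTRAP = 5000;

MODEL: m ON x;
y ON m x;
MODEL g1: m ON x (a1);
y ON m (b1);
MODEL g2: m ON x (a2);
y ON m (b2);

MODEL CONSTRAINT:
NEW(ind1 ind2 diff);
ind1 = a1*b1; ! indirect effect in group 1
ind2 = a2*b2; ! indirect effect in group 2
diff = ind1 - ind2; ! group difference in the indirect effects

OUTPUT: CINTERVAL(BCBOOTSTRAP);

The bias-corrected bootstrap confidence interval for diff then gives the group comparison: if it excludes zero, the indirect effects differ across groups.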