Obtaining scaled chi-square difference tests
 S. Hoult posted on Friday, December 28, 2001 - 9:52 am
I am attempting to obtain chi-square difference tests of nested models using the S-B scaled chi-square. I see the formulas for calculating this difference test, but want to clarify what is meant by the comparison model. Is this (or can this be) an independence model? In that case, would I specify the independence model (where the covariances of all ksi = 0 and the error variances for all observed vars = 0) for each subgroup and use the ML and MLM chi-square results from this as my comparison model for each nested model? Thank you for any assistance.
 bmuthen posted on Tuesday, January 01, 2002 - 6:00 pm
The comparison model is an "H1" model, that is a model that is less restrictive than the model you are considering.
 Anonymous posted on Friday, February 18, 2005 - 12:51 pm
I apologize for the seemingly silly question, but ...

I am trying to compute a chi-square difference test using the Satorra-Bentler scaled chi-square.

Step 4 requires the use of the "regular chi-square values" T0 and T1. Where are these values on the output?

Then, once step 4 is calculated, is the resulting number the significance of the chi-square difference test?

Thank you for your help.
 bmuthen  posted on Friday, February 18, 2005 - 5:10 pm
Regular chi-square values are those using the ML estimator.

The calculations give a chi-square value and its df - you have to look up the significance.
 Anonymous posted on Friday, February 18, 2005 - 6:16 pm
Okay. If I may follow-up ...

I am using the MLR estimator for a TYPE=COMPLEX MEANSTRUCTURE MISSING analysis. If I re-run with the ML estimator, I get an error message that it defaults to MLR due to the COMPLEX command. If the COMPLEX command is removed, I get an error message that it cannot run with my CLUSTER variable. If I remove the CLUSTER variable, is it still correct to compare the ML chi-square with the MLR chi-square? Thank you!
 bmuthen posted on Friday, February 18, 2005 - 11:13 pm
Stay with the MLR estimator. The MLR chi-square multiplied by the scaling correction factor is the ML chi-square.
 Anonymous posted on Wednesday, February 23, 2005 - 4:07 pm
I am trying to compute a chi-square difference test using the Satorra-Bentler scaled chi-square.

You said that the calculations give a chi-square value and its df, and that you have to look up the significance. I would like to make sure: is the TRd value a chi-square value? There is no df value in your step 4, so how do we then get the df value to look up the significance? Thank you!
 bmuthen posted on Saturday, February 26, 2005 - 4:50 pm
Yes, TRd is the chi-square value. The df is the usual difference in dfs for the H0 and H1 models you are considering.
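For readers following the calculation, here is a minimal sketch (not part of the original posts) of the chi-square-based steps described at http://www.statmodel.com/chidiff.shtml, written in Python with illustrative names; T0/T1 are the regular ML chi-squares, TR0/TR1 the scaled (MLM or MLR) chi-squares, and d0/d1 the degrees of freedom of the nested (H0) and comparison (H1) models:

from scipy.stats import chi2

def sb_scaled_diff(T0, TR0, d0, T1, TR1, d1):
    # Satorra-Bentler scaled chi-square difference test (chi-square version).
    # With MLM/MLR output, the regular chi-square T can also be recovered as
    # scaled chi-square * scaling correction factor.
    c0, c1 = T0 / TR0, T1 / TR1           # scaling correction factors
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)  # scaling correction for the difference
    TRd = (T0 - T1) / cd                  # scaled difference test statistic
    df = d0 - d1                          # usual df difference, as noted above
    return TRd, df, chi2.sf(TRd, df)      # look up the significance

The same df (d0 - d1) is used whether the chi-squares come from MLM or MLR output.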
 Anonymous posted on Tuesday, March 01, 2005 - 7:05 pm
I would like to know how to cite your instructions on "Chi-square difference testing using the S-B scaled chi-square" from this website:
http://www.statmodel.com/chidiff.html
Thank you for your help
 Linda K. Muthen posted on Wednesday, March 02, 2005 - 8:18 am
Something like the following should work:

Muthen, L. and Muthen, B. (2005) Chi-square difference testing using the S-B scaled chi-square. Note on Mplus website, www.statmodel.com.
 Anonymous posted on Thursday, March 03, 2005 - 7:47 pm
I am trying to compute a chi-square difference test using the Satorra-Bentler scaled chi-square.
Now I get TR0 > TR1, which is expected, but I have a problem: T0 < T1, which makes TRd negative. I really would like to have your advice because I am stuck here. I used the Normal Theory Weighted Least Squares Chi-Square from the ML estimation for the T values and the S-B chi-square from the DWLS estimation for the TR values.
Thank you so much for all your responses.
 bmuthen posted on Friday, March 04, 2005 - 6:16 am
Are you saying that your regular ML chi-square for the H0 model fits better than that for the H1 model? If so, it doesn't seem that H0 is nested within (is a special case of) H1. Or am I misunderstanding something?
 Anonymous posted on Friday, March 04, 2005 - 5:20 pm
Yes, the regular ML chi-square for the H0 model fits better than that for the H1 model.
But why does the S-B chi-square for the H1 model fit better than that for the H0 model?
My problem is that T0 < T1:

H0: 152.90(TR0) 500.40(T0) 128(df0)
H1: 149.77(TR1) 500.44(T1) 125(df1)

Then, TRd became negative
C0: 3.27
C1: 3.34
cd: 0.41
TRd: -0.10

Thanks very much for your time.
 bmuthen posted on Friday, March 04, 2005 - 6:21 pm
If you are saying that you have an ML analysis with chi-square (cs) = 500.40 with 128 df for an H0 model that is nested within an H1 model with cs = 500.44 and 125 df, then I would guess that the cs difference is due to numerical imprecision and that the extra restrictions are completely harmless. But to be able to tell for sure, send your input, output, and data to the Mplus office at support@statmodel.com and give your license number.
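As a usage example of the sketch given earlier in the thread, plugging in the numbers posted above reproduces the reported values and shows how the tiny T0 < T1 reversal drives the negative statistic:

# Values posted above: H0 (df0 = 128) and H1 (df1 = 125)
T0, TR0, d0 = 500.40, 152.90, 128
T1, TR1, d1 = 500.44, 149.77, 125
c0, c1 = T0 / TR0, T1 / TR1               # about 3.27 and 3.34, as posted
cd = (d0 * c0 - d1 * c1) / (d0 - d1)      # about 0.41
TRd = (T0 - T1) / cd                      # about -0.10: negative, so not usable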
 Anonymous posted on Saturday, March 05, 2005 - 5:09 pm
Thanks Bengt.
Does your instruction for computing the chi-square difference test using the Satorra-Bentler scaled chi-square apply to the LISREL package? I am using LISREL for this project and will learn how to use Mplus after this one. Thank you so much for your responses.
 Linda K. Muthen posted on Sunday, March 06, 2005 - 6:10 am
If you are using the correct chi-square, then it should apply.
 Dr. Adam C. Carle posted on Monday, March 07, 2005 - 3:36 am
Bengt,
I too have occasionally run into situations where the models are nested and this same problem with the cs difference arises. I see your note above regarding numerical imprecision, i.e., that the additional constraints are harmless. My interpretation has been similar to yours with regard to the constraints, but I had not articulated the numerical imprecision hypothesis that you note. I find it logical; however, I'm wondering if there is a citation that discusses the numerical precision of the cs, so that I could more forcefully make this argument in publications?
Best,
Adam
 bmuthen posted on Monday, March 07, 2005 - 10:42 am
I think you want to avoid the T0, T1 reversal simply by changing the convergence criterion, making it stricter until T1 is no larger than T0 - that's the imprecision that I had in mind.
 Dr. Adam C. Carle posted on Tuesday, March 08, 2005 - 5:44 am
Bengt,
Perhaps on the West coast you heard the head-slapping "Duh" that I performed here in DC in response to your post. Tightening the convergence criterion did indeed resolve the issue, with a resulting cs difference that was not significant, much as we both suspected. For those following the discussion, it may also be necessary to increase the number of iterations in order to reach the convergence criterion. Thanks Bengt.
Best,
Adam
 Anonymous posted on Monday, March 14, 2005 - 4:55 pm
Hi Bengt,

I have got T0 > T1 and TR0 > TR1. However, d0*c0 - d1*c1 < 0, which makes cd < 0. Thus, TRd becomes negative. Could I have your suggestions on this problem? Thank you.
 BMuthen posted on Monday, March 14, 2005 - 6:44 pm
This has been discussed in the literature by Bentler and Satorra. It is a failure of the asymptotic approximation. There is nothing you can do. You cannot use the test in that case.
 Anonymous posted on Monday, March 14, 2005 - 8:42 pm
Bengt,
You said that the regular chi-square values are those using the ML estimator. I would like to make sure which one we should use: the Normal Theory Weighted Least Squares Chi-Square or the Minimum Fit Function Chi-Square?
Thank you for your responses.
 bmuthen posted on Monday, March 14, 2005 - 10:24 pm
Normal theory.
 Anonymous posted on Tuesday, June 14, 2005 - 10:46 am
Just to clarify my understanding of the DIFFTEST option. If I follow the example on pg. 278 of the user's manual, I am testing a less restrictive model (H1) versus a more restrictive model (H0), and a nonsignificant chi-square tells me that the restrictive model does not significantly worsen the fit, whereas a significant chi-square tells me that the less restrictive model better fits the data. So if I were looking to compare models and I thought the more restrictive model to be more interpretable, I would like to see a nonsignificant chi-square?
 Anonymous posted on Tuesday, June 14, 2005 - 5:51 pm
Hi Bengt,

Back to the problem that I had with the chi-square difference test using the S-B chi-square. I encountered T0 > T1 and TR0 > TR1. However, d0*c0 - d1*c1 < 0 makes cd < 0.
You have said that this has been discussed in the literature by Bentler and Satorra. It is a failure of the asymptotic approximation.
Do you have the reference of Bentler and Satorra's discussion on this problem?
Thank you
 Linda K. Muthen posted on Wednesday, June 15, 2005 - 12:54 am
Re 10:46

You want the more restrictive model not to worsen the fit significantly for it to be the accepted model.
 Barbara  posted on Wednesday, October 05, 2005 - 12:21 am
When TRd is negative and this test cannot be used to compare models, how should we compare models in this case?
Thank you
 bmuthen posted on Saturday, October 08, 2005 - 11:53 am
I don't think there is a good answer to that. You can accept the rough approximation of treating the variables as normal using the ML estimator and do a usual chi-square difference test. But this may not be very accurate given strong non-normality.
 Chan Wai Yen posted on Wednesday, December 14, 2005 - 6:41 pm
Hi Bengt,

Regarding the S-B scaled chi-square test: I am using TYPE = COMPLEX MGROUP to test for factor and structural invariance. You recommended in one of your posts to use the chi-square from MLR and multiply it by the correction factor produced in the output to get the ML chi-square. May I clarify with you regarding the df: should I also use the df from MLR, or should I run another analysis with ML and use that df?

Thanks.
 Linda K. Muthen posted on Wednesday, December 14, 2005 - 6:47 pm
You can use the degrees of freedom from MLR.
 Marie Marekwica posted on Monday, April 03, 2006 - 5:12 am
I have a question concerning multigroup analysis and scaled chi-square testing:
I am trying to test how far my fit improves after relaxing some correlations and allowing them to differ across groups.
If I follow the formulas given on the Mplus homepage for chi-square difference testing (calculating MLM and then ML chi-squares and dividing them), I obtain very different correction factors than in the Mplus output when running MLM.
How do I calculate the correct difference test in this case?

From Chan Wai Yen's posting above I would assume that I multiply the MLM chi-square by the scaling correction factor to obtain my ML chi-square values and then use those in the same way as usual, but I am not entirely sure whether I am right about that.
I hope my problem is understandable. Thank you.
 Linda K. Muthen posted on Monday, April 03, 2006 - 7:36 am
What you say in the second paragraph is correct. If you need further help, please send your outputs, computations, and license number to support@statmodel.com.
 Nina Zuna posted on Wednesday, September 27, 2006 - 8:31 am
Dear Drs. Muthen,

I have satisfactorily conducted the MLR difference test utilizing your website instructions to test for a difference between the measurement model and the 2nd-order model. My change in the Y-B chi-square was 5.859, p = .053.
1. Is the Y-B chi-square (and the change in the Y-B chi-square) also impacted by sample size (n = 566), thereby having the same impact as the traditional chi-square (usually significant with large Ns)?
When I compared the CFI, TLI, RMSEA, and SRMR from the MLR measurement model with the MLR 2nd-order model, I noticed very little change: .001 less in CFI and SRMR with the 2nd-order model; no change was observed in TLI and RMSEA, which were exactly the same.
2. Given the borderline significance, am I still safe to use other fit statistics to determine whether the 2nd-order model explains the data as well as the lower-order measurement model?
Since my readings have indicated that rescaled chi-squares are not distributed like chi-squares, I wasn't certain whether I could make the same assumption about sample size impacting the p-value, or whether it was advisable to use other fit statistics in making this determination.
Thank you for any advice you may have on this matter.

Sincerely,

Nina
 Bengt O. Muthen posted on Sunday, October 01, 2006 - 11:05 am
1. Yes.

2. Yes.
 Nina Zuna posted on Sunday, October 01, 2006 - 11:26 am
Thank you!
 Katherine A. Johnson posted on Tuesday, July 03, 2007 - 6:30 pm
I would like to test whether the fit of my model improves significantly when I add a quadratic term. I am using MLR. In order to do difference testing, my models must be nested and thus must include the same set of observed variables.

My comparison (quadratic) model is:
i s q | x1@0 x2@2 x3@4 x4@6 x5@8;

Would my nested (linear) model be:
i s q | x1@0 x2@2 x3@4 x4@6 x5@8;
q ON x1@0 x2@0 x3@0 x4@0 x5@0;
q@0;

Thank you for your help
 Bengt O. Muthen posted on Tuesday, July 03, 2007 - 6:51 pm
A good way to see if a quadratic term is needed is to simply estimate the quadratic model

i s q | x1@0 x2@2 x3@4 x4@6 x5@8;
q@0;

and see if the mean of q is significantly different from zero.

The model you specified,

Would my nested (linear) model be:
i s q | x1@0 x2@2 x3@4 x4@6 x5@8;
q ON x1@0 x2@0 x3@0 x4@0 x5@0;
q@0;

should not be used (it is not what you want).
 Katherine A. Johnson posted on Wednesday, July 04, 2007 - 7:46 am
Thank you for your reply.

So to clarify, I do not need to do chi-square difference testing to determine if the model fits better with a quadratic term?

If I specify the model as you suggested above, and the mean of the quadratic term is indeed significant, what exactly does that mean? The model with a quadratic term fits significantly better than the linear model?

Sorry for the elementary questions.
 Bengt O. Muthen posted on Wednesday, July 04, 2007 - 7:58 am
When two models differ with respect to only one parameter, in this case the quadratic growth factor mean, the z test addresses the same question as the chi-square test you get from the loglikelihood difference. Squaring the z value gives you the chi-square value.

Yes on your second question.
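To make the z-squared equivalence above concrete, here is a small illustrative check (the z value is a hypothetical placeholder, not output from this thread):

from scipy.stats import norm, chi2

z = 2.31                       # hypothetical Est./S.E. for the quadratic growth factor mean
p_z = 2 * norm.sf(abs(z))      # two-sided p-value from the z test
p_chi2 = chi2.sf(z ** 2, 1)    # p-value from the 1-df chi-square (z squared)
# p_z and p_chi2 agree (about 0.021), so either form answers the same question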
 Katherine A. Johnson posted on Wednesday, July 04, 2007 - 8:00 am
Thank you very much for your quick reply. This discussion board has been a life saver!
 Eser Sekercioglu posted on Wednesday, February 13, 2008 - 10:42 am
Hi
I would like to conduct a chi-square difference test for a multigroup CFA model with correlated errors. All my variables are categorical. As far as I know, the MLR estimator cannot work when there are correlated errors in the model, and MLM does not work for non-continuous variables. I tried to estimate the model using ML, but I get an error message that says "algorithm = integration is not available". How can I possibly estimate my models using ML? Do I have any other options besides MLR, MLM, or ML?
Thank you
 Linda K. Muthen posted on Wednesday, February 13, 2008 - 10:54 am
You can include residual covariances in a model that is estimated using maximum likelihood. How to do this is shown in Example 7.16. Each residual covariance requires one dimension of integration. We recommend no more than four dimensions of integration. Alternatively, you can use weighted least squares where residual covariances are easily estimated. If you use WLSMV, you can use the DIFFTEST option for difference testing.
 Guy Cafri posted on Friday, November 07, 2008 - 12:32 pm
Hi,
I am running a CFA with nested data (n=1089) with 100 clusters using the MLR estimator. I want to compare nested models but get negative cd values, and in turn a negative Satorra-Bentler chi-square difference test statistic. Here are my results:

comparison: chi-square=403.22 df=83
scaled-correction=1.18
nested: chi-square= 682.99 df=84
scaled-correction=1.15

cd=-1.34
SB test statistic=-208.78

Am I doing something wrong with the calculations, or is it, as you noted in responses to others, that the test fails with small sample sizes? If so, is there an alternative? You mentioned the Wald test in passing.
 Bengt O. Muthen posted on Friday, November 07, 2008 - 6:04 pm
This test is asymptotic and can fail in a given sample. 2 ways out of this:

- look up the UCLA Statistics Series preprints, where a new Satorra-Bentler paper describes how to use a modified version when negativity happens

- since you have only a 1-df difference, you can simply use the z test which with MLR is also robust to non-normality (and z**2 is approx chi-square).
 Guy Cafri posted on Friday, November 07, 2008 - 7:46 pm
Great, thank you for the advice.
 Britain A Mills posted on Thursday, February 26, 2009 - 8:23 am
I am running a multiple groups analysis with a continuous dv and a series of predictors, using MLR as an estimator. The data is complex with clustering and stratification. There are no latent variables. I am fitting nested models separately - e.g., a model with no constraints, a model constraining weights, a model constraining weights and intercepts, etc. Since I don't have the difftest option, I am trying to compute scaled difference tests, and I have two questions:

1. There are a total of 3 scaled correction factors (SCF) listed in each output: one under test of model fit, and two under the loglikelihood section (for Ho and H1). They all have different values. What is the difference between them?

2. Which exactly should I plug into the formula for computing the scaled difference test? For example, if I wanted to compare a pattern-constrained model with a saturated model?

Thanks in advance. This website is extremely helpful. I've already answered many of my questions by simply perusing the archives.
 Linda K. Muthen posted on Thursday, February 26, 2009 - 11:07 am
1. They are different because they apply to different statistics, one for chi-square and two for loglikelihoods.

2. You should use the one associated with the statistic you are using in the calculation of the test. You can do difference testing using either chi-square or loglikelihood. Both tests are described at http://www.statmodel.com/chidiff.shtml and in Chapter 13 of the Mplus User's Guide.
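For the loglikelihood route mentioned above, here is a minimal sketch (not part of the original posts) of the calculation described on that page, where L0/L1 are the H0 loglikelihood values, c0/c1 the H0 scaling correction factors printed with them, and p0/p1 the numbers of free parameters of the nested and comparison models:

from scipy.stats import chi2

def sb_loglik_diff(L0, c0, p0, L1, c1, p1):
    # Scaled difference test computed from loglikelihoods (MLM/MLR).
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)  # scaling correction for the difference
    TRd = -2 * (L0 - L1) / cd             # scaled difference test statistic
    df = p1 - p0                          # difference in free parameters
    return TRd, df, chi2.sf(TRd, df)

As noted in the reply above, use the scaling correction factor printed next to the statistic you are working with: the chi-square correction with chi-squares, the H0 loglikelihood correction with loglikelihoods.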
 Calvin D. Croy posted on Monday, April 06, 2009 - 2:01 pm
I ran two regression models with 96 obs using MLR. In run #1 the dependent continuous variable was regressed on 7 continuous predictors. In run #2 I added the square of one of the predictors. Both models were estimated using FIML.

Results:
Rsquare quadratic = .250
Rsquare linear = .220.
(Suggests quadratic model fit better)

Linear model H1 LL = -598.242
Quadratic model H1 LL = -637.278 (Suggests linear model fit better)

1) Could you please explain why examining fit with the Rsquare and loglikelihood statistics gives contradictory results about which model fit better? Which should I believe?

Following
www.statmodel.com/chidiff.shtml for the Satorra-Bentler difference test using loglikelihoods, my value of TRd was negative! This was because L0 - L1 = (LL linear model - LL quadratic) = -598.242 - (-637.278) = a positive value, which when multiplied by -2 yields a negative value of TRd.
(2) Have I done something wrong, or do I just use the absolute value of the TRd statistic to test against the chi-square distribution? If I use the absolute value, I will be testing TRd = 105.5 with 1 df. (Details: H1 scaling factor for linear model = 1.16, for quadratic = 1.118; linear model parameters estimated = 9, quadratic model = 10 parameters.)

Thanks for your help!
 Bengt O. Muthen posted on Tuesday, April 07, 2009 - 10:16 am
1) I assume you mean H0 and not H1 in your reporting of those 2 LLs. The LL's are not in a comparable metric due to having different covariate sets in the 2 runs. In this analysis you are in the SEM framework where the covariates contribute to the LL. You can avoid this and make the metrics comparable by saying Type = Random, which conditions on covariates so the metric is determined only by the DVs.

2) This question is resolved by the answer to 1).

Note also that R-square is not a measure of model fit in the SEM sense. The regression model is just-identified (fits trivially). Over-identified models that fit well (in the SEM sense) can have poor R-squares and vice versa. But of course R-square is informative about how well the regression equation represents the data.
 Calvin D. Croy posted on Tuesday, April 07, 2009 - 1:14 pm
Thanks for your response.

1. I know that in the SEM sense, "fit" refers to how well the estimated variances and covariances match the sample variances and covariances. But to evaluate fit in terms of the accuracy of the predicted values from a multiple regression, should I use the Rsquare value or the reported loglikelihood (H1 or H0)?

2. With MLR reg models, are you saying that I can only compare the loglikelihoods of models that a) contain the same number of covariates or, b) contain exactly the same covariates (the predictor variables are exactly the same in both models)? For b) in terms of residuals, I would think the fits would be identical -- they're the same model.

3. From your first comment, it seems I've used the wrong loglikelihoods in my attempt to use the Satorra-Bentler test. Could you please explain what the difference is between the H0 and H1 loglikelihoods and which loglikelihood (H0 or H1) I should use for the more restrictive model? For the less restrictive model?

Your assistance is gratefully appreciated!
 Bengt O. Muthen posted on Tuesday, April 07, 2009 - 2:01 pm
1. You can use both in different ways. You use R-square in the usual way. You use the LL (taken from H0 - which is your model) by computing 2 times the H0 LL difference between your two models: the model with the quadratic effects fixed at zero and the model with the quadratic effects freely estimated. This is a chi-square variate.

2. b) But if you say Type = Random you can compare even if you change the covariate set.

3. For a given run, H0 is your model. H1 is an "unrestrictive" model in the SEM sense - a free mean vector and covariance matrix model. For a simple linear regression model H0 and H1 are one and the same.
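A small illustrative sketch of the plain (unscaled) version of point 1 above, using H0 loglikelihoods; the numbers are placeholders rather than values from this thread:

from scipy.stats import chi2

LL_restricted = -612.40   # hypothetical H0 loglikelihood, quadratic effect fixed at zero
LL_free = -610.15         # hypothetical H0 loglikelihood, quadratic effect estimated
lr = -2 * (LL_restricted - LL_free)   # 2 times the H0 LL difference, a chi-square variate
p = chi2.sf(lr, df=1)                 # one extra parameter, so 1 df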
 Calvin D. Croy posted on Friday, April 10, 2009 - 5:43 pm
Thanks for the clarification. It is very much appreciated.
 Sanja Franic posted on Wednesday, April 15, 2009 - 6:35 am
When I run a model using the ML estimator (on summary data), I do not get the scaled chi-square values and am therefore wondering how to conduct the Satorra-Bentler difference test.

Thank you in advance.
 Amir Sariaslan posted on Wednesday, April 15, 2009 - 6:45 am
Hi Sanja,

This page might solve your problem: http://www.statmodel.com/chidiff.shtml

Sincerely,
Amir
 Sanja Franic posted on Wednesday, April 15, 2009 - 6:49 am
Thanks Amir. I had in fact been using that page, but the formulas require the regular chi-square values (T0 and T1, the ML chi-square values) and the scaled chi-square values (TR0 and TR1, which I cannot find in the output).
 Alexandre Morin posted on Wednesday, April 15, 2009 - 6:57 am
Hi Sanja,

The scaling factors are obtained when you use one of the robust estimators: MLM (Satorra-Bentler) or MLR (asymptotically equivalent to Yuan-Bentler), not when you use the ML estimator.
 Sanja Franic posted on Wednesday, April 15, 2009 - 7:04 am
Hi Alexandre,
Hm, I thought so. Do you know how I can conduct chi-square difference testing when using the ML estimator then?
Thanks for replying,
Sanja
 Alexandre Morin posted on Wednesday, April 15, 2009 - 7:36 am
Hi Sanja,

With regular ML, you do it the standard way.
The difference (subtraction) between two chi-squares is distributed like a chi-square with degrees of freedom (df) equal to the difference between the dfs of the two models.
 Sanja Franic posted on Wednesday, April 15, 2009 - 7:43 am
Thanks!
 Alexandre Morin posted on Wednesday, April 15, 2009 - 7:52 am
But dont be "afraid" to use MLM or MLR if you suspect that you need a robut estimator (non normal data, clustered data, etc.).
 Sanja Franic posted on Wednesday, April 15, 2009 - 10:38 am
The thing is, I only have summary data and not individual data. For that I can only use ML, ULS, or GLS.

The data are a summary of ordinal data, for which I would normally use WLSMV. However, since I ran the polychoric correlation matrix of this data through Mx to get a genetic factor model decomposition (a decomposition of covariance matrix into 3 matrices - one due to additive genetics (A), one due to common environment (C), and one due to unique environment (E)), what I end up with is a summary statistic, i.e. the A, C, and E covariance matrices. So I only have covariance matrices of ordinal data derived by decomposing a polychoric correlation into 3 components. Now I need to do a CFA on these summary data, and I can't use WLSMV anymore (nor do I know whether I should). I can only use ML, ULS, or GLS. The ML and ULS seem to provide identical results. But my chi-squares are generally quite large, and other fit statistics indicate quite a bad fit as well (regardless of which model I specify in the CFA). I am wondering whether this is due to using ML on a covariance matrix of ordinal data. I am not simulating data to try to find this out, but if you have any insights I would greatly appreciate them, though this is a bit of a different topic from chi-square difference testing.

Thanks a lot!
 Sanja Franic posted on Wednesday, April 15, 2009 - 10:38 am
Btw the original (ordinal) data are not normally distributed.
 Bengt O. Muthen posted on Friday, April 17, 2009 - 3:14 pm
It sounds like you are analyzing an estimated covariance/correlation matrix from an ACE model. You are right in suspecting this - your chi-square and SEs are not correct here. All you can trust are the parameter estimates.
 jks posted on Friday, October 16, 2009 - 10:52 pm
Hi Muthen,
you said,
"Stay with the MLR estimator. The MLR chi-square multiplied by the scaling correction factor is the ML chi-square."

Instead of this, if I estimate the regular chi-squares (T0 and T1) using the ML estimator and then do the difference tests, am I OK?

As you know, CFI is also used in measurement invariance tests (i.e., for metric, scalar, and complete invariance).
Is any correction required to use the difference in CFI values for constrained and unconstrained models?
 Linda K. Muthen posted on Saturday, October 17, 2009 - 12:13 pm
You can do difference tests with ML the regular way. Note that ML is not robust to non-normality.

I am not aware of using CFI for difference testing.
 jks posted on Sunday, October 18, 2009 - 1:12 am
Thanks for your reply.
The comparative fit index (CFI) has also been used to assess measurement equivalence (Yuan, 2005). It is recommended that a change in CFI of 0.01 or less indicates that the null hypothesis of measurement equivalence should not be rejected (Bentler, 1990).

So, my concern is: can I apply this in my measurement equivalence study, where I am using the MLR estimator for complex survey data?
 Linda K. Muthen posted on Sunday, October 18, 2009 - 10:21 am
If this is appropriate for ML, it would be appropriate for MLR.
 Camille Ferdenzi posted on Thursday, October 22, 2009 - 8:13 am
Hi,

I want to perform chi-square difference tests (with the MLM estimator) between several models that I tested on one given set of data. I compare two 6-factor models, a 2-factor model, and a 4-factor model. My problem is that the two 6-factor models have the same degrees of freedom, so I can't compute the cd index because it requires dividing by the difference in df (zero). What alternative do I have to determine whether the difference in fit between these two models is significant?

Thanks in advance for your answer.
 Linda K. Muthen posted on Thursday, October 22, 2009 - 10:07 am
The two six factor models are not nested so difference testing would not be appropriate.
 Nick Lee posted on Friday, April 09, 2010 - 9:12 am
Dear Bengt/Linda

I am testing a multi-group complex sample model for latent mean invariance across genders, using MLR. As have many others, I have encountered the negative TRd issue using the loglikelihood chisq difference test for MLR. I have read the Satorra/Bentler paper using the modification. However, before putting this into practice I'd like to ask a question.

Specifically, if I can gain the ML chisq values by multiplying the MLR ones by the correction factor given in the MLR output, am I able to use the (rather simpler) method for computing the scaled chisq test, rather than the loglikelihood? Of course, I can't gain the ML chisq by running an ML model, because the COMPLEX model defaults to MLR.

Judging by some of the earlier comments on this thread, it seems implied that I can indeed create ML chisq from the MLR one, which would enable this.

Or have I got something backwards here?

Thanks, Nick
 Linda K. Muthen posted on Saturday, April 10, 2010 - 8:14 am
It is true that the MLR chi-square can be converted to ML using the scaling correction factor. However, using ML to do the difference test together with the MLR standard errors would not be correct. You should just estimate the model with ML if you want to do the ML difference test.
 Nick Lee posted on Tuesday, April 13, 2010 - 10:08 am
Hi Linda

Thanks for the advice. The problem is that using the complex samples option does not allow me to use ML at all. Thus I can't do this.

So am I correct to draw the conclusion that using complex samples (therefore prohibiting the use of ML) means the only way to do the difference test is to use the loglikelihood method (which in my case gives a negative Trd), even though I can get the ML chi-squares using the correction factor?

Thanks, Nick
 Linda K. Muthen posted on Tuesday, April 13, 2010 - 11:02 am
You can try using the Wald test in MODEL TEST. See the user's guide.
 Edelyn Verona posted on Tuesday, April 13, 2010 - 12:11 pm
I have a nested models problem. I have 3 models:

One factor model
x
x
x
x
x
x
x
x
x

Two factor model
x 0
x 0
x 0
0 x
0 x
0 x
0 x
0 x
0 x

Three factor model
x 0 0
x 0 0
x 0 0
0 x 0
0 x 0
0 x 0
0 x 0
0 0 x
0 0 x

I thought these models were not nested, but a reviewer suggests they are. Can you tell me what I am missing to make these into nested models? Thanks!
 JackBox posted on Thursday, April 15, 2010 - 5:05 am
I'm also having the negative cd problem with the chi-square difference test in group invariance testing with MLR. I've read the newest Satorra-Bentler (2009) article, "Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic," and I wonder whether it is possible to do the new scaled difference test with the present version of Mplus?

The article says that "This can be obtained by creating a model setup M10 that contains the parameterization of M1 with start values taken from the output of model M0. Model M10 is run with zero iterations, so that the parameter values do not change before output including test statistics is produced". It seems that the issue of the starting values is simple, but what could be the solution in Mplus for the zero iterations? Is it possible to use zero iterations in Mplus, and if so, which iteration procedure should be constrained to zero in this case?
 Linda K. Muthen posted on Thursday, April 15, 2010 - 10:15 am
We will post a note about how to do this after Version 6 comes out. You can't have zero iterations but instead should have extremely lax convergence criteria so they are fulfilled already by the starting values.
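For orientation while waiting for that note, here is a hedged sketch of the arithmetic behind the strictly positive statistic as I read the Satorra-Bentler paper; the assumption (not confirmed in this thread) is that the extra M10 run simply supplies a replacement scaling correction, c10, for the less restrictive model evaluated at the M0 estimates:

from scipy.stats import chi2

def sb_strictly_positive_diff(T0, c0, d0, T1, d1, c10):
    # Assumption: same formula as the ordinary scaled difference test, but with
    # the H1 scaling correction c1 replaced by c10 from the M10 run (the H1
    # parameterization held at the H0 estimates), which keeps cd positive.
    cd = (d0 * c0 - d1 * c10) / (d0 - d1)
    TRd = (T0 - T1) / cd
    df = d0 - d1
    return TRd, df, chi2.sf(TRd, df)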
 Hana Shin posted on Sunday, May 23, 2010 - 1:32 pm
Hello Drs. Muthen & Muthen,

I'm new to Mplus and have run a model using ML with BOOTSTRAP, although I have a positively skewed continuous dependent variable. I had hoped that bootstrapping would assist with the observed non-normal distribution, but would MLR be a more appropriate estimator (since MLR is robust to non-normality but does not allow for bootstrapping)?

Many thanks for your support.
 Bengt O. Muthen posted on Sunday, May 23, 2010 - 9:31 pm
I think it is more straightforward to use MLR. This will give you proper standard errors and chi-square test of model fit.
 Jen posted on Tuesday, June 22, 2010 - 10:38 am
Hello,

I wondered if there is any solution when the MLR difference test yields a very large (seemingly unreasonable) chi-square value due to big differences in the SCFs. I am working with a relatively small sample, and this difference test result arises when I constrain a path that is clearly positive and significant to 0 (I am constraining it purely for purposes of model comparison).

The chi-squares of the two models are
Model 1: 14.011, 4 df, SCF=1.088
Model 2: 2.042, 3 df, SCF=1.368

This results in cd = .248 and a chi-square difference of 50.20(!). I feel that reporting this 50.20 might not go over so well (and I am also suspicious of such a large SCF in the case of Model 2).

Thanks for any advice!
 Bengt O. Muthen posted on Tuesday, June 22, 2010 - 6:09 pm
It may be that the large-sample approximation needed for this difference test is not good in this small-sample setting. Because the models differ by only 1 parameter, you can compare to what you get by the approximate z test printed as Est/SE. The SE here derived by MLR also takes into account the non-normality.
 Syd posted on Monday, July 26, 2010 - 4:12 am
Hi,

I am trying to test the comparative fit of two models. As I have interaction terms in the model, I am using the MLR estimator with numerical integration. Thus, I need to use log-likelihood difference testing.

It is stated on the website that I need to know p0 (number of parameters in the nested model) and p1 (number of parameters in the comparison model). I wasn't sure whether p0 and p1 were referring to the "Number of Free Parameters" noted under the Information Criteria heading, as this is the only parameter noted in the output. I would greatly appreciate it if you could confirm this.

Thank you,
 Linda K. Muthen posted on Monday, July 26, 2010 - 7:41 am
Yes, it is.
 Syd posted on Saturday, July 31, 2010 - 11:34 pm
Hi Linda,

You had noted that in log-likelihood difference testing, the p0 and p1 values referred to the number of free parameters noted in the output.

I was just looking at two nested models estimated using MLR, for which both chi-square and log-likelihood values were provided by Mplus. If I do the chi-square difference testing, TRd=-339.90 with d0-d1 = -192. If I do the log-likelihood difference testing, TRd=-2659.98 with p0-p1 = -21. I thought both tests were used to test change in fit. Yet, using the formulas provided at http://www.statmodel.com/chidiff.shtml, I'm calculating different results for the tests. Am I doing something wrong here?

The nested model is:
f6 ON f5 f4 f2 f1;

The comparison model is:
f3 ON f2 f1;
f6 ON f5 f4 f3 f2 f1;

The values for the two models are as follows.

Nested Model:
MLR Chi-sq=527.965
Chi-sq scaling corr. factor=1.082
df=336
Log-likelihood H0 value=-18915.632
H0 scaling corr. factor=1.522
Number of free parameters=101

Comparison Model:
MLR Chi-sq=863.315
Chi-sq scaling corr. factor=1.013
df=528
Log-likelihood H0 value=-26294.418
H0 scaling corr. factor=2.215
Number of free parameters=122
 Linda K. Muthen posted on Sunday, August 01, 2010 - 10:06 am
It is not clear from the information you give that the models are nested. Please send the two full outputs and your license number to support@statmodel.com.
 Joan W. posted on Thursday, March 10, 2011 - 3:48 pm
Dr. Muthen,

Is the formula for chi-square difference test from http://www.statmodel.com/chidiff.shtml still valid when "algorithm=integration" is used with MLR?

Thank you.
 Bengt O. Muthen posted on Thursday, March 10, 2011 - 5:57 pm
Yes.
 Joe King posted on Thursday, December 01, 2011 - 8:13 pm
Drs. Muthen,

I am running a multiple group comparison, one model with a constraint on one of the paths, another model without the constraint. I want to see whether the model fit differs significantly. I know how to do the ratio test, but in the likelihood section of the model fit output there is a value for H0 and H1 for each of the models. The H1 likelihood doesn't move at all between the models when the parameter is released, but H0 does. Why is this, and should I use H0 for model comparison?
 Linda K. Muthen posted on Friday, December 02, 2011 - 5:36 am
You should compare the H0 models. The H1 models are the same because they are the unrestricted model.
 Geneviève Taylor posted on Wednesday, March 21, 2012 - 7:58 am
Dear Drs. Muthén,

I would like to compare two models - one nested within the other - in a path analysis, and I am using MLR as the estimator. I followed the procedure you describe in Web Note No. 12 to compute the strictly positive Satorra-Bentler chi-square test in Mplus syntax, but I think I am doing something wrong, because it says in your text that the log-likelihood value of the M10 model should be the same as that for the M0 model. However, when I look at my outputs, it is rather the log-likelihood of the M1 model that is equal to the M10 value. Would you be able to tell me what I am doing wrong?
Many thanks in advance for your help!
Geneviève
 Bengt O. Muthen posted on Wednesday, March 21, 2012 - 8:55 pm
You would have to send your 3 outputs to Support.
 Geneviève Taylor posted on Wednesday, March 28, 2012 - 12:54 pm
Thank you! I think I have figured out the problem.
 caroline masquillier posted on Monday, November 26, 2012 - 3:21 am
Dear Prof. Muthen,
I want to compare a model without an interaction to a model with an interaction (between a factor and a manifest variable using XWITH option, MLR and algorithm is integration). The only difference in the models is the interaction term.

In this regard I wondered whether these models are nested. Can I use a loglikelihood ratio test? Or should I compare the models based on the AIC or BIC?

Thanks in advance,
Kind regards,
 Linda K. Muthen posted on Monday, November 26, 2012 - 8:31 am
The significance of the interaction term is the same as doing a difference test between the model with the interaction and the model without the interaction.
 M Tasavori posted on Monday, October 05, 2015 - 5:48 am
I have run a CFA with MLM.
Do I also have to use MLM for the SEM model?
If I use MLM, the correction factor is 0.9 (below 1).
The SEM model also fits better with ML, so I am confused.
Does running the CFA with MLM and adding some WITH statements change the normality?
 Bengt O. Muthen posted on Monday, October 05, 2015 - 2:33 pm
Q1. If you have non-normal variables you should use MLM or MLR for both CFA and SEM.

Q2. No.

You may want to ask these general SEM questions on SEMNET.
 Chelsea Derlan posted on Friday, February 12, 2016 - 10:10 am
I am running a multigroup model that uses the MLR estimator because of non-normal data. I am currently adding constraints to test for significant differences based on my grouping variable, and I was looking at both the Satorra-Bentler chi-square difference test and the change-in-CFI difference test. These two tests indicate different results in some cases on whether a path is significantly different. Specifically, sometimes the Satorra-Bentler test indicated a nonsignificant result, so that the path can be constrained to be equal for the two groups, but the CFI significantly decreases and, based on the change-in-CFI test, the constraints are inappropriate. I was wondering if it is okay to just use the change-in-CFI difference test (and not the chi-square difference test) when the MLR estimator is used?
 Bengt O. Muthen posted on Friday, February 12, 2016 - 5:07 pm
I would a priori trust S-B chi-2 diff testing more than CFI diff testing.
 Sophie posted on Wednesday, May 11, 2016 - 7:23 am
Dear prof. Muthen,

I am running four multigroup latent growth curve models that use the MLR estimator because I controlled for the multilevel structure by specifying TYPE = COMPLEX. I am currently checking whether model fit improves significantly after adding the quadratic slope. I calculated the scaled difference chi-square test statistic using the Satorra & Bentler formula. For three of my models those statistics were fine; however, for one model I get a negative value (-6). Therefore, I tried to follow the steps for calculating the scaled difference chi-square test statistic ensuring positiveness. However, my quadratic model is the nested model, as this model has more degrees of freedom than the linear model (due to more negative residual variances causing errors to be constrained to zero). When I get the svalues for the quadratic model, I do not know what I need to change to obtain the same loglikelihood value as the linear model. Do you have a suggestion for this?

Thanks in advance!

Kind regards,

Sophie
 Bengt O. Muthen posted on Wednesday, May 11, 2016 - 7:14 pm
When models differ due to some variances being fixed at zero in one model but not the other, the chi-square difference testing is invalid due to parameters at the border of their admissible space (zero variance).

Instead, investigate why you get negative residual variances. Perhaps you can hold them equal across time instead.
 Yoosun Chu posted on Monday, July 31, 2017 - 11:20 am
Hello,
I am wondering about the chi-square difference test when using factor analysis with ordinal indicators.
Since the chi-square difference test compares two nested models, it gives evidence on which model has a better fit to the data.
I am wondering whether there is a way to assess the model fit itself, without comparing with other nested models.
Thanks.
 Bengt O. Muthen posted on Monday, July 31, 2017 - 5:39 pm
You can use TECH10.
 Yoosun Chu posted on Monday, July 31, 2017 - 6:11 pm
Hello Dr. Muthen,
Thanks. But as I mentioned, my indicators are ordinal. I also want to use MLR. In this case, is TECH10 still applicable?
Thanks.
 Bengt O. Muthen posted on Tuesday, August 01, 2017 - 5:10 pm
I think so - try it.