Chi-square difference
Message/Author
 minyuedong posted on Wednesday, December 07, 2005 - 2:14 am
I am doing a two-level SEM by using MLR estimator.
For the chi-square difference analysis, I refer to the four steps suggested by Bentler as shown on the web. I got a higher df (+7) and a higher adjusted chi-square value (+29.04) when comparing model 2 (with restrictions) to model 1. So my question is: in order to check whether the chi-square difference is significant, should I refer to the chi-square distribution or to something else?

 Linda K. Muthen posted on Wednesday, December 07, 2005 - 6:04 am
You should refer to the chi-square distribution.
 Leif Edvard Aaroe posted on Wednesday, January 18, 2006 - 12:14 am
I have carried out a series of SEM analyses with one dependent variable (latent) and a number of predictors (a combination of latent variables and observed). I am using the complex and cluster options. The output provides the MLR estimator.

Can I use the Satorra-Bentler 4-step procedure (steps 3 and 4) for testing differences between models in this case?

I use the MODEL CONSTRAINT option and compare models by constraining parameters (one by one) to be equal to zero, then comparing with the unconstrained model. Does this sound right?

Is it possible to get negative Chi-square values?
 Linda K. Muthen posted on Wednesday, January 18, 2006 - 9:28 am
Yes, you can use the Satorra-Bentler four-step procedure.

You can use MODEL CONSTRAINT to fix parameters to zero, but it might be easier to do these simple constraints in the MODEL command using @0.

Negative chi-square values are possible and have been discussed by Bentler in the literature.
 Leif Edvard Aarø posted on Tuesday, February 07, 2006 - 11:10 pm
Dear Linda,

I have read carefully all the dialogues on the Satorra-Bentler test for differences between nested models. There is, however, one point that is still unclear to me.

The T0 and T1 values that I use when I apply the formula: are these the chi-square values that I obtain when running the analysis with the MLR estimation procedure, or do I have to multiply the chi-square values by their respective scaling correction factors first?

Best regards

Leif
 Linda K. Muthen posted on Wednesday, February 08, 2006 - 9:08 am
Those are the ML values, so yes, you need to multiply the MLR chi-square by its scaling correction factor to obtain the ML chi-square.
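For readers following along, the scaled difference test discussed above can be sketched in Python. This is an illustration of the standard Satorra-Bentler difference formulas, not code from the thread, and the values in the usage example are made up:

```python
def sb_scaled_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test.

    T0, T1 : MLR (scaled) chi-square values for the nested (more
             restricted) and comparison models.
    c0, c1 : their scaling correction factors.  Multiplying a scaled
             chi-square by its correction factor recovers the regular
             ML chi-square, as noted in the thread.
    Returns the scaled difference statistic TRd and its degrees of
    freedom; TRd is referred to a chi-square distribution with
    df0 - df1 degrees of freedom.
    """
    # Scaling correction for the difference test
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    # Scaled difference: difference of the ML chi-squares, rescaled
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, df0 - df1
```

With equal correction factors of 1 this reduces to the ordinary chi-square difference; for example, `sb_scaled_diff(100, 20, 1.0, 80, 15, 1.0)` gives `(20.0, 5)`.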
 Sophie Potter posted on Tuesday, October 23, 2018 - 1:01 am
Hi,

I ran a chi-square difference test using the loglikelihood on two models: one where the residual variances for my latent growth model were held invariant (the nested model) and one where they were allowed to vary (the comparison model).

Nested Model:
Loglikelihood value: -26148.950
No. of parameters: 55
Scaling Correction Factor: 1.1884

Comparison model:
Loglikelihood value: -26130.583
No. of parameters: 61
Scaling Correction Factor: 1.3295

RESULTS:
Test Scaling Correction Difference (cd): 2.6229
Chi-Square Difference (TRd): 14.0050
Number of Parameters Difference: 6

I was wondering how to interpret these results (i.e., how do I tell which model is best to use)?

Any help is appreciated,
Sophie
 Tihomir Asparouhov posted on Tuesday, October 23, 2018 - 2:39 pm
Since the p-value is 0.03 (< 0.05), you would typically reject the nested model. You can get the p-value using an online chi-square calculator or the CHIDIST(14,6) function in Excel.
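Sophie's output can be reproduced with a short Python sketch of the loglikelihood-based difference formulas (standard library only; note the closed-form chi-square tail below is exact only for even degrees of freedom):

```python
import math

# Values from the post above
L0, p0, c0 = -26148.950, 55, 1.1884   # nested model (invariant residual variances)
L1, p1, c1 = -26130.583, 61, 1.3295   # comparison model (free residual variances)

# Difference-test scaling correction and scaled difference statistic
cd  = (p0 * c0 - p1 * c1) / (p0 - p1)   # ~2.6229, matching the output
TRd = -2 * (L0 - L1) / cd               # ~14.005, matching the output
ddf = p1 - p0                           # 6

def chi2_sf(x, df):
    """Upper-tail chi-square probability; exact closed form for even df."""
    assert df % 2 == 0
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k)
                                  for k in range(df // 2))

p = chi2_sf(TRd, ddf)   # ~0.030, the p-value quoted in the reply
```

The same p-value is what CHIDIST(14,6) in Excel or an online chi-square calculator returns.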
 Sophie Potter posted on Wednesday, October 24, 2018 - 4:38 am
Ah, thank you Tihomir!
 Sangeeta Mani posted on Wednesday, May 15, 2019 - 10:37 am
Dear Drs. Muthen, I wonder if you could help me with a model fit and chi-square difference testing problem. I am running an SEM and trying to assess model fit.

Since 2 of the 3 latent variables on my endogenous factor are categorical, Mplus will not produce RMSEA, CFI fit statistics for me. (As a test, when I made the 2 latent variables continuous, I was then able to produce RMSEA and CFI fit statistics, so I think that is the problem).

I am aware that a second way to test model fit is to compare model fit with nested models, using difference testing using chi-square or loglikelihood. Since no chi-square statistic is produced, I thought I would use loglikelihood.

However, I think because I am using multiple imputation, my output produces a loglikelihood value L0 but no scaling correction factor. (As a test, I ran it without multiple imputation data, and a scaling correction factor was then produced, so I think multiple imputation is the problem.)

I wonder if there are other ideas about how I can assess model fit? Here is my model syntax. Thank you so much for your help!! I sincerely appreciate it.

CATEGORICAL ARE SelfRate4 Functimp4;
CLUSTER = clus;
ANALYSIS: ESTIMATOR = MLR;
TYPE = complex;

MODEL: f1 BY SelfRate4 Functimp4 CountD4;
Zconstr3 ON consc1;
f1 ON Zconstr3;
f1 ON consc1;
f1 ON T12pfc;
ZConstr3 ON sex;
f1 ON health1;
MODEL INDIRECT: f1 ind Zconstr3 consc1;
 Bengt O. Muthen posted on Thursday, May 16, 2019 - 5:01 pm
I assume that when you say

" 2 of the 3 latent variables on my endogenous factor are categorical"

you are referring to the factor indicators, not the factors themselves (that is, you are not talking about latent class variables).

You can use WLSMV or Bayes to get overall fit measures.

Chi-square difference testing with Multiple Imputation is not readily available. See also our Short Course Topic 9.
 Sangeeta Mani posted on Wednesday, June 05, 2019 - 9:37 am
Thank you Dr. Muthen! Yes, I'm sorry about my incorrect wording in the last post; I was referring to the indicators! As a follow-up to my question of Wednesday, May 15, 2019 - 10:37 AM in this discussion thread: I have considered using WLSMV to get fit indices. However, in my endogenous latent variable "f1", 2 of my indicators are categorical but one is a count variable ("CountD4"). Since Mplus cannot do multiple imputation on count data but CountD4 has a positive skew, I decided to treat CountD4 as "non-normal continuous" so I could do multiple imputation. I also prefer this approach since I think it will be easier to interpret the output. However, I know that WLSMV is not robust to non-normal continuous data.

I think one option may be for me to run the analysis with WLSMV in order to get the fit indices, and then as a "sensitivity analysis," re-run the analysis with MLR estimator to confirm whether the regression coefficients/p values are around the same. So far, they do seem pretty similar!

I am curious whether you think this is a good approach? Since the factor loadings and p-values seem fairly similar between WLSMV and MLR, the results appear consistent. However, given that WLSMV is not robust to non-normal continuous data, will the fit indices it produces end up being very incorrect?

Thank you so much!
 Tihomir Asparouhov posted on Friday, June 07, 2019 - 1:05 pm
WLSMV is robust to non-normality, and you can get average fit indices across the imputations. Using estimator=wlsmv; parameterization=theta or estimator=MLR; link=probit produces models that are directly comparable, i.e., they estimate the same model. Both methods would be suitable.