cd  = (d0*c0 - d1*c1)/(d0 - d1) = (951*5.750 - 950*5.722)/(951 - 950)
TRd = (T0*c0 - T1*c1)/cd = (2728.083*5.750 - 2717.094*5.722)/cd
COMPARISON MODEL:
Loglikelihood
  H0 Value                     -154318.940
  H0 Scaling Correction Factor       5.722
    for MLR
  H1 Value                     -151645.790
  H1 Scaling Correction Factor       2.887
    for MLR
...
Chi-Square Test of Model Fit
  Value                           2717.094*
  Degrees of Freedom                   950
  P-Value                           0.0000
  Scaling Correction Factor          1.968
    for MLR

NESTED MODEL:
Loglikelihood
  H0 Value                     -154323.099
  H0 Scaling Correction Factor       5.750
    for MLR
  H1 Value                     -151645.790
  H1 Scaling Correction Factor       2.887
    for MLR
...
Chi-Square Test of Model Fit
  Value                           2728.083*
  Degrees of Freedom                   951
  P-Value                           0.0000
  Scaling Correction Factor          1.963
    for MLR
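As a numerical check, the arithmetic in the formulas above can be reproduced in Python (the values are copied from the output above; this is just a verification of the hand computation, not Mplus output):

```python
# Satorra-Bentler scaled chi-square difference test, using the chi-square
# values, degrees of freedom, and scaling correction factors quoted above.
d0, c0, T0 = 951, 5.750, 2728.083   # nested model: df, correction, chi-square
d1, c1, T1 = 950, 5.722, 2717.094   # comparison model

cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # difference-test scaling correction
TRd = (T0 * c0 - T1 * c1) / cd         # scaled chi-square difference

print(cd, TRd)  # cd ~ 32.35, TRd ~ 4.304958
```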
Thanks for your confirmation. Let me ask a follow-up question. TRd was found to be 4.304958... Since the material says, "For MLM and MLR the products T0*c0 and T1*c1 are the same as the corresponding ML chi-square values," am I supposed to use 3.841 as the critical value to determine whether the calculated S-B scaled chi-square difference is significant at the .05 level? That is, is the difference (4.304958...) significant because TRd > 3.841?
I forgot to ask another question. The equality constraint was imposed on a single parameter (which measures the effect of child maltreatment on violent offenses) for two ethnic groups, whites and Asian Americans. In the comparison model, the coefficient was found to be .031 (SE = .027) for whites, whereas it was .654 (SE = 1.430) for Asian Americans. As you can see, neither coefficient is significant, although the S-B scaled chi-square difference is larger than 3.841. Am I supposed to say the coefficient is significantly different between whites and Asian Americans even though it was found to be non-significant in each ethnic group?
2nd post: Testing whether each coefficient is significantly different from zero is not the same as testing that the two coefficients are equal. Typically, if you use the independent-sample z test of equality based on your SEs, you get the same result as the chi-square.
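That z test can be sketched as follows, using the two estimates and standard errors quoted above (an illustration of the computation, not Mplus output):

```python
import math

# Wald z test for equality of two coefficients from independent groups:
# z = (b1 - b2) / sqrt(SE1^2 + SE2^2); compare |z| to 1.96 for a
# two-sided test at the .05 level.
b_white, se_white = 0.031, 0.027    # whites
b_asian, se_asian = 0.654, 1.430    # Asian Americans

z = (b_asian - b_white) / math.sqrt(se_white**2 + se_asian**2)
print(z)  # roughly 0.44
```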
Step 1 on the Mplus website (http://www.statmodel.com/chidiff.shtml) for Difference Testing Using the Loglikelihood is: 1. Estimate the nested and comparison models using MLR. The printout gives loglikelihood values L0 and L1 for the H0 and H1 models, respectively, as well as scaling correction factors c0 and c1 for the H0 and H1 models, respectively.
Does this refer to H0 and H1 values given for the SAME model (i.e., in the same output file); or for DIFFERENT models (estimated in separate runs, with separate output files)?
I ask because, while I have seen BOTH H0 and H1 values in some output files, I only see H0 in a model I estimated using an NBI dependent variable, as seen below. There is no H1 value offered. Can I still use the steps on the website to compare the fit of this model with that of another nested model, using the H0 values only (the ones provided for each distinct model, because I did not get H0 and H1 values together in one output file)? Thanks.
MODEL FIT INFORMATION
Number of Free Parameters             14
Loglikelihood
  H0 Value                      -1189.806
  H0 Scaling Correction Factor     1.1902
    for MLR
To do difference testing you need to run two analyses. The first is the least restrictive model; it is referred to as H1 in the write-up. The nested model is referred to as H0 in the write-up. In both cases, the H0 values from the output are the ones used in the computations.
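The loglikelihood-based steps from the website can be sketched in Python as follows. The nested-model numbers are taken from the output above; the comparison-model numbers are hypothetical, for illustration only:

```python
# Satorra-Bentler scaled difference test computed from MLR loglikelihoods,
# following the steps at statmodel.com/chidiff.shtml.
# Nested (H0) model values are from the output above; the comparison (H1)
# model values below are hypothetical.
L0, c0, p0 = -1189.806, 1.1902, 14   # nested model: loglik, correction, free parameters
L1, c1, p1 = -1187.320, 1.2050, 16   # comparison model (hypothetical)

cd = (p0 * c0 - p1 * c1) / (p0 - p1)   # difference-test scaling correction
TRd = 2 * (L1 - L0) / cd               # scaled chi-square difference, i.e. -2(L0 - L1)/cd
df = p1 - p0                           # degrees of freedom for the test
print(cd, TRd, df)
```

TRd is then referred to a chi-square distribution with df degrees of freedom.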
EFried posted on Tuesday, April 02, 2013 - 1:55 pm
When comparing 2 models using the MLR estimator, each model provides 3 scaling correction factors and 2 loglikelihoods. I don't find it specified which one to use for model comparison (http://www.statmodel.com/chidiff.shtml).
Looks like the only difference is that model 2 has a direct effect from m to y2.
ri ri posted on Saturday, August 30, 2014 - 4:08 pm
Yes, as far as I know one needs to do a chi-square difference test to compare the two models. Ordinarily, one just uses the chi-square values. But since I have categorical data, I suppose I have to do it differently? I used the DIFFTEST command, but could not find the scaling correction to calculate the difference with the formula provided at the website.
With only one parameter difference you can just look at the z-test for that parameter in the model that is less restrictive.
In the general case you use DIFFTEST, first running the less restrictive model and then the more restrictive model. You don't need the scaling correction factors or the computations on the website. DIFFTEST does it for you.
ri ri posted on Tuesday, September 02, 2014 - 12:16 am
I tried DIFFTEST to compare the constrained and unconstrained models, and it worked wonderfully!
I just have another methodological question. In the user's guide multiple group analysis, you give an example fixing the means of the variables in Group 2 to zero. If I compare constrained and unconstrained models, is it necessary to fix the means to zero? I have also seen some people center the means of the continuous variables in order to minimize multicollinearity. If I compare two path models (such as the model comparison mentioned above), is mean centering needed?
To compare means across groups, use the model with means zero in all groups versus the model with means zero in one group and free in the others. Centering is not needed.
Ari J Elliot posted on Wednesday, January 07, 2015 - 7:44 pm
Hello Drs. Muthen,
Regarding chi square difference testing with MLR, please confirm that the H0 scaling correction factor that should be used in the calculation is the one listed under Loglikelihood, NOT the one listed under the MLR chi-square test of model fit.
Thus in the following output for the nested model, I would use 1.5862 as the scaling correction factor.
Loglikelihood
  H0 Value                      -15770.576
  H0 Scaling Correction Factor      1.5862
    for MLR
  H1 Value                      -15733.949
  H1 Scaling Correction Factor      1.5549
    for MLR
...
Chi-Square Test of Model Fit
  Value                             50.133*
  Degrees of Freedom                     5
  P-Value                           0.0000
  Scaling Correction Factor         1.4612
    for MLR
If you use chi-square for the difference testing, you should use the scaling correction factor under chi-square. If you use the loglikelihood for the difference testing, you should use the scaling correction factor under loglikelihood.
Ari J Elliot posted on Thursday, January 08, 2015 - 11:37 am
Ok thanks. To further clarify, the instructions on the webpage for difference testing using chi-square state: "Be sure to use the correction factor given in the output for the H0 model."
Under the chi-square I only see one scaling correction factor, whereas for the loglikelihood there are correction factors provided for both H0 and H1. Given that correction factors for both the nested and comparison models are used in the calculation, I'm not sure what the reference to the H0 model in the instructions refers to.
You should use the H0 scaling correction factors. If only one is specified, it is the H0. Don't use the one for the baseline model. If you have further questions, send the output and your license number to firstname.lastname@example.org and we can tell you which number to use.
Dear Dr. Muthen, I have an SEM model with these relationships: (1) one latent variable and its three observed indicators; (2) five observed variables involved in path relationships with the latent variable in (1); (3) a correlation between two of the observed variables in (2). I use the MLR estimator and would like to know the chi-square p-value. I follow the instructions given on your website on difference testing using chi-square to compute the scaled difference in chi-square. My question is: for the H0 (restricted) model, which of the relationships in (1) to (3) do I need to constrain? Should I just constrain the path relationships in (2), which are my main interest? Our goal in this test is to get a non-significant p-value, right? Like the ML estimator's chi-square result for the model fit test?
Is there any information on how the Satorra-Bentler scaled chi-square difference test is influenced by large Ns (e.g., N > 30,000)? Is it influenced at all? I am investigating measurement invariance with large subpopulation samples and complex survey data (students nested in teachers), and therefore I use the S-B chi-square to test nested models.
Dear Linda, in an ESEM invariance test for 2 groups, when I compare the metric (loading) invariance model with the configural (baseline) model using the MLR estimator, can I still calculate the chi-square difference in Mplus using MLR? I have a problem determining the constrained/more restrictive model for my configural and metric invariance ESEM models, and the same for the other invariance tests (scalar, error variance). Can you give some advice? Thanks.
Jiangang Xia posted on Thursday, November 02, 2017 - 9:41 am
Dear Linda, I am confused by the instructions from the website and your previous responses to some of the questions above regarding "Difference Testing Using Chi-square".
For example, for the very first question above, you said "Yes, you are doing this correctly." However, in that question, the "H0 Scaling Correction Factor for MLR" (5.722 and 5.750) under "Loglikelihood" was used, not the "Scaling Correction Factor for MLR" (1.968 and 1.963) under "Chi-Square Test of Model Fit".
In another question above, posted by Ari J Elliot on January 07, 2015, you responded that "If you use chi-square for the difference testing, you should use the scaling correction factor under chi-square."
So I am not sure which "scaling correction factor" we should use for "Difference Testing Using Chi-square".
In order to avoid the confusion, here I want to use the output from the first question. If I want to compare the two models using chi-square, should I use
H0 Scaling Correction Factor 5.722 for MLR ?
or should I use
Scaling Correction Factor 1.968 for MLR ?
A related question: when should we use chi-square and when should we use Loglikelihood?
The "Scaling Correction Factor for MLR" should be used with the "Difference Testing Using Chi-square".
There should be no confusion about this since that correction factor is printed just below the chi-square value.
In every case you can use either the chi-square or the loglikelihood. Both should give you exactly the same result (subject to round-off error).
Derek Boy posted on Saturday, January 20, 2018 - 9:44 am
Dear Dr. Muthen, in order to compare the two models below, I have tried to do the deviance test using the loglikelihood. However, I could not figure out the numbers of parameters (i.e., p0 and p1) required by the formula for computing the scaling correction, cd = (p0*c0 - p1*c1)/(p0 - p1), where p0 is the number of parameters in the nested model and p1 is the number of parameters in the comparison model. Would you kindly help, please? Best regards.
-------------------------------
MODEL A
Number of Free Parameters             17
Loglikelihood
  H0 Value                       -215.987
  H0 Scaling Correction Factor     0.4637
    for MLR
  H1 Value                       -215.987
  H1 Scaling Correction Factor     0.4637
    for MLR
...
Chi-Square Test of Model Fit
  Value                             0.000*
  Degrees of Freedom                     0
  P-Value                           0.0000
  Scaling Correction Factor         1.0000
    for MLR
-------------------------------
MODEL B
Number of Free Parameters             19
Loglikelihood
  H0 Value                       -214.709
  H0 Scaling Correction Factor     0.4195
    for MLR
  H1 Value                       -214.715
  H1 Scaling Correction Factor     0.4195
    for MLR
...
Chi-Square Test of Model Fit
  Value                             0.000*
  Degrees of Freedom                     0
  P-Value                           1.0000
  Scaling Correction Factor         1.0000
-------------------------------
You have zero degrees of freedom for both models so neither model is testable and no model comparison can be done.
Derek Boy posted on Saturday, January 20, 2018 - 6:37 pm
Dear Dr. Muthen, so my two models are saturated, with zero degrees of freedom. Many thanks for pointing this out to me. But may I also ask whether fitting a saturated model is a bad thing to do? Should I do something to make it unsaturated, and if so, what would you suggest? Please kindly advise. With best regards.
I intended to compare two nested models with the Satorra-Bentler scaled chi-square difference test (both models used MLR as the estimator), and based on the formula, the computed S-B scaled chi-square difference is a negative number. I wonder if this is possible and how I should resolve this issue?
I am comparing models in a TWOLEVEL MIXTURE (MLR) analysis to test if particular covariates should be included in the model. I am using the Loglikelihood to calculate the TRd value, as per the website (the Chi-square values do not appear in my output for some reason).
I can do the calculation, however, how do I calculate the degrees of freedom in order to obtain the critical value of the TRd value?
I computed two CFAs, testing a three-factor model and a three-factor model with a general factor, using the Mplus editor 7.3. Now I'd like to do a DIFF test to see which model is better. However, my outputs do not give me a correction factor. How do I get the correction factor?
A quick look says that the only difference between the models is that you put a second-order factor behind the 3 first-order factors. If that's the case, the models are the same because 3 indicators of 1 factor is a just-identified model - you are not restricting the covariance matrix for the first-order factors. You need 4 or more first-order factors for that.
Q1: There is no way in Mplus or any software. The 2 models are the same when you have only 3 first-order factors. "The same" means that they produce the exact same covariances among the observed variables. The models have the same number of parameters and when you have estimated one, you can transform its parameter values to the parameter values of the other.
Q2: There is no change that you can make - just accept the fact that this can't be tested - you can present the second-order factor model but you can't say that it fits better or worse.
I am comparing two Path Analysis models using the chi-square difference testing (Satorra-Bentler scaled chi-square difference test) - I got the formula for it from this website: https://www.statmodel.com/chidiff.shtml - the first set of tests listed on the page.
The nested model has 8 degrees of freedom, whereas the comparison model is saturated, with 0 degrees of freedom and a chi-square value of 0.
Is it okay to carry out the chi-square difference testing between the two models, given that one of the models is saturated?
If it is indeed okay, I wonder if there is a citation you are aware of that I could cite to justify doing this (I am getting pushback on this from others).
Hello, I am doing a factor analysis, and my goal is to test whether a hierarchical model with a general factor or just a three-factor model fits my data best.
When I estimate the hierarchical model, the run terminates normally; however, the factor loadings and the standard errors leave me with questions. Why are the loadings and errors so high? Is there a problem in my code?
Code:

DATA: FILE IS Dr_E.inp;
VARIABLE:
  NAMES ARE EP1 EP2 EP3 EP4 EP5 EP6 EP7 EP8 EP9 EP10;
  USEVARIABLES ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
  CATEGORICAL ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
MODEL:
  F1 BY EP1* EP2;
  F2 BY EP3* EP4 EP5;
  F3 BY EP7* EP8 EP9 EP10;
  [F1-F3@0]; F1-F3@1;
  Y BY F1* F2 F3;
  [Y@0]; Y@1;
OUTPUT: SAMPSTAT TECH1 MODINDICES;
PLOT: TYPE = PLOT2;

Output:

Y BY
  F1       1.059    0.170    6.218    0.000
  F2       5.140    7.062    0.728    0.467
  F3       2.544    0.931    2.732    0.006
Means
  Y        0.000    0.000  999.000  999.000
Intercepts
  F1       0.000    0.000  999.000  999.000
  F2       0.000    0.000  999.000  999.000
  F3       0.000    0.000  999.000  999.000

Thank you for your insights!
I need to compare nested models. I am using ML, and I got the following loglikelihood statistics for the two models I want to compare. If I understood correctly, for ML I don't need the scaling correction factors:
Model 1: -6369.028 (67 free parameters)
Model 2: -6368.841 (69 free parameters)
Likelihood-ratio statistic (-2 times the loglikelihood difference): 0.374. Difference in degrees of freedom: 2.
How do I now know if the difference is significant or not?
Thank you very much for the help. Best regards Dinah
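For plain ML, the computation in the post above is the standard likelihood-ratio test, which can be checked as follows (values taken from the post above):

```python
import math

# Likelihood-ratio test for two nested models estimated with plain ML
# (no scaling correction needed), using the values quoted above.
ll_nested, p_nested = -6369.028, 67   # Model 1 (more restrictive)
ll_full,   p_full   = -6368.841, 69   # Model 2 (less restrictive)

lr = -2 * (ll_nested - ll_full)       # likelihood-ratio statistic, 0.374
df = p_full - p_nested                # 2

# For df = 2 the chi-square survival function has the closed form exp(-x/2);
# for other df, use e.g. scipy.stats.chi2.sf(lr, df).
p_value = math.exp(-lr / 2)
print(lr, df, p_value)  # p roughly 0.83
```

Since 0.374 is well below 5.99, the .05 critical value of the chi-square distribution with 2 degrees of freedom, the difference would not be significant.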