

Please let me know whether I'm using the formulas correctly, found in "Difference Testing Using Chi-Square" (http://www.statmodel.com/chidiff.shtml):

cd = (d0*c0 - d1*c1)/(d0 - d1) = (951 x 5.750 - 950 x 5.722)/(951 - 950)
TRd = (T0*c0 - T1*c1)/cd = (2728.083 x 5.750 - 2717.094 x 5.722)/cd

COMPARISON MODEL
Loglikelihood
H0 Value -154318.940
H0 Scaling Correction Factor 5.722 for MLR
H1 Value -151645.790
H1 Scaling Correction Factor 2.887 for MLR
...
Chi-Square Test of Model Fit
Value 2717.094*
Degrees of Freedom 950
P-Value 0.0000
Scaling Correction Factor 1.968 for MLR

NESTED MODEL
Loglikelihood
H0 Value -154323.099
H0 Scaling Correction Factor 5.750 for MLR
H1 Value -151645.790
H1 Scaling Correction Factor 2.887 for MLR
...
Chi-Square Test of Model Fit
Value 2728.083*
Degrees of Freedom 951
P-Value 0.0000
Scaling Correction Factor 1.963 for MLR


Yes, you are doing this correctly. 
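For readers following along, the arithmetic in the first post can be sketched in a few lines of Python; the numbers are the poster's, and the cd and TRd formulas are the ones quoted from the chidiff page:

```python
# Satorra-Bentler scaled chi-square difference test, as set up in the post above.
d0, c0, T0 = 951, 5.750, 2728.083  # nested model: df, scaling factor used, chi-square
d1, c1, T1 = 950, 5.722, 2717.094  # comparison model: df, scaling factor used, chi-square

# difference test scaling correction
cd = (d0 * c0 - d1 * c1) / (d0 - d1)

# scaled chi-square difference, referred to a chi-square with d0 - d1 df
TRd = (T0 * c0 - T1 * c1) / cd

print(round(cd, 2), round(TRd, 4))  # 32.35 4.305
```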


Thanks for your confirmation. Let me ask a follow-up question. TRd was found to be 4.304958... Since the material says, "For MLM and MLR the products T0*c0 and T1*c1 are the same as the corresponding ML chi-square values," am I supposed to use 3.841 as the critical value to determine whether the calculated SB scaled chi-square difference is significant at the .05 level or not? That is, is the difference (4.304958...) significant since TRd > 3.841?


I forgot to ask another question. The equality constraint was imposed on a single parameter (which measures the effect of child maltreatment on violent offenses) for two ethnic groups, whites and Asian Americans. In the comparison model, the coefficient was found to be .031 (SE = .027) for whites, whereas it was .654 (SE = 1.430) for Asian Americans. As you can see, neither coefficient is significant, although the SB scaled chi-square difference is larger than 3.841. Am I supposed to say the coefficient is significantly different between whites and Asian Americans even though the coefficient was found to be not significant in each ethnic group?


1st post: Right. 2nd post: Whether each coefficient is significantly different from zero or not is not the same as testing that the two coefficients are equal. Typically, if you use the independent-samples z test of equality using your SEs, you get the same thing as the chi-square.
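A minimal sketch of that independent-samples z test of equality, using the two coefficients and standard errors quoted in the post above (this is the usual large-sample formula; it is asymptotically equivalent to the chi-square difference test but need not match it exactly in finite samples):

```python
import math

# coefficients and SEs for the maltreatment effect, from the post above
b_white, se_white = 0.031, 0.027
b_asian, se_asian = 0.654, 1.430

# z test of H0: the two coefficients are equal
z = (b_asian - b_white) / math.sqrt(se_white**2 + se_asian**2)

print(round(z, 3))  # 0.436; compare |z| to 1.96 for a two-sided .05-level test
```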


Hi Linda and Bengt, Step 1 on the Mplus website (http://www.statmodel.com/chidiff.shtml) for Difference Testing Using the Loglikelihood is: "1. Estimate the nested and comparison models using MLR. The printout gives loglikelihood values L0 and L1 for the H0 and H1 models, respectively, as well as scaling correction factors c0 and c1 for the H0 and H1 models, respectively." Does this refer to the H0 and H1 values given for the SAME model (i.e., in the same output file), or for DIFFERENT models (estimated in separate runs, with separate output files)? I ask because while I have seen BOTH H0 and H1 values in some output files, I only see H0 in a model I estimated using an NBI dependent variable, as seen below. There is no H1 value offered. Can I still use the steps on the website to compare the fit of this model with that of another nested model, using the H0 values only (the ones provided for each distinct model, because I did not get H0 and H1 values together in one output file)? Thanks.

MODEL FIT INFORMATION
Number of Free Parameters 14
Loglikelihood
H0 Value -1189.806
H0 Scaling Correction Factor for MLR 1.1902


To do difference testing you need to run two analyses. The first is the least restrictive model; it is referred to as H1 in the write-up. The nested model is referred to as H0 in the write-up. In both cases, the H0 values are taken from the output to use in the computations.
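The loglikelihood-based computation described here follows the same pattern as the chi-square version; a sketch with made-up illustration values (the L, c, and p numbers below are hypothetical, not from any output in this thread):

```python
# Scaled loglikelihood difference test (sketch, hypothetical values).
# H0 run = nested (more restrictive) model; H1 run = comparison model.
# From each run, take the H0 loglikelihood value and H0 scaling correction factor.
L0, c0, p0 = -2615.0, 1.45, 10  # hypothetical nested model: loglikelihood, scaling, parameters
L1, c1, p1 = -2606.0, 1.30, 12  # hypothetical comparison model

# difference test scaling correction
cd = (p0 * c0 - p1 * c1) / (p0 - p1)

# scaled difference, referred to a chi-square with p1 - p0 df
TRd = -2 * (L0 - L1) / cd
df = p1 - p0

print(round(cd, 2), round(TRd, 3), df)  # 0.55 32.727 2
```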

EFried posted on Tuesday, April 02, 2013 - 1:55 pm



When comparing 2 models using the MLR estimator, each model provides 3 scaling correction factors and 2 loglikelihoods. I can't find it specified which ones to use for model comparison (http://www.statmodel.com/chidiff.shtml). Thank you.


The one for the H0 model - what is posted above.

ri ri posted on Friday, August 29, 2014 - 3:21 am



I would like to compare two mediation models. Model 1: x-m-y1-y2-y3 and x-m-y1-y4. Model 2: x-m-y1-y4 and x-m-y2-y3. y3 and y4 are binary. Both models seem to work in terms of mediation, so I need to show that one model is better than the other. Can I follow the instructions for doing the chi-square test for WLSMV? I did not find any description regarding the H0 and H1 models or scaling correction info in the output.


Looks like the only difference is that model 2 has a direct effect from m to y2. 

ri ri posted on Saturday, August 30, 2014 - 4:08 pm



Yes, as far as I know one needs to do a chi-square difference test to compare the two models. The regular way, one just uses the chi-square values. But since I have categorical data, I suppose I should do it differently? I used the DIFFTEST command, but could not find the scaling correction to calculate the difference with the formula provided on the website.


With only one parameter difference you can just look at the z-test for that parameter in the less restrictive model. In the general case you use DIFFTEST, first running the less restrictive model and then the more restrictive model. You don't need the scaling correction factors or the computations on the website; DIFFTEST does it for you.

ri ri posted on Tuesday, September 02, 2014 - 12:16 am



I tried DIFFTEST to compare the constrained and unconstrained models, and it worked wonderfully! But I have another methodological question. In the user's guide multiple group analysis example, the means of the variables in group 2 are fixed to zero. If I compare constrained and unconstrained models, is it necessary to fix the means to zero? I have also seen people center the means of the continuous variables in order to minimize multicollinearity. If I compare two path models (such as the model comparison mentioned above), I wonder whether mean centering is needed? Thank you very much!


To compare means across groups, use the model with means zero in all groups versus the model with means zero in one group and free in the others. Centering is not needed. 

Ari J Elliot posted on Wednesday, January 07, 2015 - 7:44 pm



Hello Drs. Muthen, Regarding chi-square difference testing with MLR, please confirm that the H0 scaling correction factor that should be used in the calculation is the one listed under Loglikelihood, NOT the one listed under the MLR chi-square test of model fit. Thus, in the following output for the nested model, I would use 1.5862 as the scaling correction factor.

Loglikelihood
H0 Value -15770.576
H0 Scaling Correction Factor 1.5862 for MLR
H1 Value -15733.949
H1 Scaling Correction Factor 1.5549 for MLR
...
Chi-Square Test of Model Fit
Value 50.133*
Degrees of Freedom 5
P-Value 0.0000
Scaling Correction Factor 1.4612 for MLR

Thank you!


If you use chi-square for the difference testing, you should use the scaling correction factor under chi-square. If you use the loglikelihood for the difference testing, you should use the scaling correction factor under loglikelihood.

Ari J Elliot posted on Thursday, January 08, 2015 - 11:37 am



Ok thanks. To clarify further, the instructions on the webpage for difference testing using chi-square state: "Be sure to use the correction factor given in the output for the H0 model." Under the chi-square I only see one scaling correction factor, whereas for the loglikelihood there are correction factors provided for both H0 and H1. Given that correction factors for both the nested and comparison models are used in the calculation, I'm not sure what the reference to the H0 model in the instructions refers to.


You should use the H0 scaling correction factors. If only one is given, it is for the H0 model. Don't use the one for the baseline model. If you have further questions, send the output and your license number to support@statmodel.com and we can tell you which number to use.

Cheng posted on Sunday, April 05, 2015 - 6:37 pm



Dear Dr. Muthen, I have an SEM model with these relationships: (1) one latent variable and its three observed indicators; (2) five observed variables involved in path relationships with the latent variable in (1); (3) a correlation between two of the observed variables in (2). I use the MLR estimator and would like to know the chi-square p-value. I followed the instructions given on your website on difference testing using chi-square to compute the scaled difference in chi-square. My question is, for H0 (the restricted model), which relationships in (1) to (3) do I need to constrain? Should I just constrain the path relationships in (2), which are my main interest? Our goal in this test is to get a nonsignificant p-value, right? Like the ML estimator's chi-square result for the model fit test?


These are choices the researcher has to make. You may want to discuss it on SEMNET. 

Cheng posted on Monday, April 06, 2015 - 5:51 pm



Thank you. 

Noa Cohen posted on Tuesday, June 23, 2015 - 5:26 am



I am trying to calculate the values for a chi-square difference test with ML. However, the output does not include the scaling correction factor. What can I do? Thank you.


With ML you don't need a scaling correction factor. 

Noa Cohen posted on Tuesday, June 23, 2015 - 8:31 am



Thank you. Other than that, should I use the instructions that are stated on the website? Because they do not specify ML.


See pages 486-487 of the user's guide.


Dear Mplus Team, is there any information on how the Satorra-Bentler scaled chi-square difference test is influenced by large Ns (e.g., n > 30,000)? Is it influenced at all? I am investigating measurement invariance with large subpopulation samples and complex survey data (students nested in teachers), and therefore I use the SB chi-square to test nested models. Thanks a lot in advance.


The performance improves with larger sample sizes. See Table 3, under MLR: http://statmodel.com/download/webnotes/mplusnote72.pdf

Cheng posted on Tuesday, March 29, 2016 - 6:41 pm



Dear Linda, In an ESEM invariance test for 2 groups, when I compare the metric (loading) invariance model with the configural (baseline) model using the MLR estimator, can I still calculate the chi-square difference in Mplus using MLR? I have a problem determining the constrained/more restrictive model for my configural and metric invariance ESEM models. The same goes for the other invariance tests (scalar, error variance). Can you give some advice? Thanks.


Please send the output that shows the problem and your license number to support@statmodel.com. 

Jiangang Xia posted on Thursday, November 02, 2017 - 9:41 am



Dear Linda, I am confused by the instructions from the website and your previous responses to some of the above questions regarding "Difference Testing Using Chi-Square". For example, for the very first question above, you said "Yes, you are doing this correctly." However, in that question, the "H0 Scaling Correction Factor for MLR" (5.722 and 5.750) under "Loglikelihood" was used, not the "Scaling Correction Factor for MLR" (1.968 and 1.963) under "Chi-Square Test of Model Fit". In another question above, posted by Ari J Elliot on January 07, 2015, you responded that "If you use chi-square for the difference testing, you should use the scaling correction factor under chi-square." So I am not sure which "scaling correction factor" we should use for the "Difference Testing Using Chi-Square". To avoid confusion, here I want to use the output from the first question. If I want to compare the two models using chi-square, should I use the H0 Scaling Correction Factor of 5.722 for MLR, or should I use the Scaling Correction Factor of 1.968 for MLR? A related question: when should we use chi-square and when should we use the loglikelihood? Thank you!


The "Scaling Correction Factor for MLR" should be used with the "Difference Testing Using Chi-Square". There should be no confusion about this, since that correction factor is printed just below the chi-square value. In every case you can use chi-square or the loglikelihood. Both should give you exactly the same result (subject to round-off error).

Derek Boy posted on Saturday, January 20, 2018 - 9:44 am



Dear Dr. Muthen, In order to compare the two models below, I have tried to do the deviance test using the loglikelihood. However, I could not figure out the numbers of parameters (i.e., p0 and p1) required by the formula for computing the scaling correction, cd = (p0*c0 - p1*c1)/(p0 - p1), where p0 is the number of parameters in the nested model and p1 is the number of parameters in the comparison model. Would you kindly help, please? Best regards.

MODEL A
Number of Free Parameters 17
Loglikelihood
H0 Value -215.987
H0 Scaling Correction Factor 0.4637 for MLR
H1 Value -215.987
H1 Scaling Correction Factor 0.4637 for MLR
...
Chi-Square Test of Model Fit
Value 0.000*
Degrees of Freedom 0
P-Value 0.0000
Scaling Correction Factor 1.0000 for MLR

MODEL B
Number of Free Parameters 19
Loglikelihood
H0 Value -214.709
H0 Scaling Correction Factor 0.4195 for MLR
H1 Value -214.715
H1 Scaling Correction Factor 0.4195 for MLR
...
Chi-Square Test of Model Fit
Value 0.000*
Degrees of Freedom 0
P-Value 1.0000
Scaling Correction Factor 1.0000


You have zero degrees of freedom for both models so neither model is testable and no model comparison can be done. 

Derek Boy posted on Saturday, January 20, 2018 - 6:37 pm



Dear Dr. Muthen, So my two models are saturated, with zero degrees of freedom. Many thanks for pointing it out to me. But may I also ask whether fitting a saturated model is a bad thing to do? Should I do something to make it unsaturated? What is that something you would suggest I do? Please kindly advise. With best regards.


No, it is not a problem to have saturated models. To learn more about this you may want to post on SEMNET. 


Dear Drs. Muthen, I intended to compare two nested models with the Satorra-Bentler scaled chi-square difference test (both models used MLR as the estimator), and based on the formula, the computed SB chi-square difference is a negative number. I wonder if this is possible and how I should resolve this issue? Chong.


Drs. Muthen Please ignore my question above; I have figured out the problem. Thanks, Chong. 


Hi all, I am comparing models in a TWOLEVEL MIXTURE (MLR) analysis to test whether particular covariates should be included in the model. I am using the loglikelihood to calculate the TRd value, as per the website (the chi-square values do not appear in my output for some reason). I can do the calculation; however, how do I calculate the degrees of freedom in order to obtain the critical value for TRd? Thanks in advance.


The df is obtained as the difference in the number of parameters. That is also discussed on that website page. 


Dear all, I computed two CFAs, testing a three-factor model and a three-factor model with a general factor, using the Mplus editor 7.3. Now I'd like to do a DIFFTEST to see which model is better. However, my outputs do not give me a correction factor. How do I get the correction factor? Thanks for any help!


You get them with Estimator = MLR. 


My output uses WLSMV as the estimator. These are the two models I'd like to do a DIFFTEST with. Do I have to run the models in a different analysis to get MLR and be able to do a DIFFTEST?

Title: CFA 1
Data: FILE is File1;
Variable: NAMES ARE S1 S2 S3 S4 S5 S6 S7 S8 S9 S10;
USEVARIABLES ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
CATEGORICAL ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
MODEL:
F1 BY S1* S2;
F2 BY S3* S4 S5;
F3 BY S7* S8 S9 S10;
[F1-F3@0]; F1-F3@1;
OUTPUT: SAMPSTAT tech1 MODINDICES;
Plot: TYPE = PLOT2;

Title: CFA 2
Data: FILE is File1;
Variable: NAMES ARE S1 S2 S3 S4 S5 S6 S7 S8 S9 S10;
USEVARIABLES ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
CATEGORICAL ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
MODEL:
F1 BY S1* S2;
F2 BY S3* S4 S5;
F3 BY S7* S8 S9 S10;
[F1-F3@0]; F1-F3@1;
Y BY F1 F2 F3;
OUTPUT: SAMPSTAT tech1 MODINDICES;
Plot: TYPE = PLOT2;


There are 2 different things here. DIFFTEST is for WLSMV, and for this testing you don't need to use a scaling correction factor - see the UG for how to do DIFFTEST. For chi-square difference testing with MLR, you use scaling correction factors to get the test as we describe on our website (see left column).


Thank you Bengt! Thus, I am doing a DIFFTEST. However, when running it I get the error message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.

Title: CFA
Data: FILE is Dr.E.dat.inp;
Variable: NAMES ARE S1 S2 S3 S4 S5 S6 S7 S8 S9 S10;
USEVARIABLES ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
CATEGORICAL ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
MODEL:
F1 BY S1* S2;
F2 BY S3* S4 S5;
F3 BY S7* S8 S9 S10;
Y BY F1 F2 F3;
[F1-Y@0]; F1-Y@1;
OUTPUT: SAMPSTAT tech1 MODINDICES;
SAVEDATA: DIFFTEST IS dtest.dat;

Title: CFA
Data: FILE is Dr.E.dat.inp;
Variable: NAMES ARE S1 S2 S3 S4 S5 S6 S7 S8 S9 S10;
USEVARIABLES ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
CATEGORICAL ARE S1 S2 S3 S4 S5 S7 S8 S9 S10;
ANALYSIS: DIFFTEST = dtest.dat;
MODEL:
F1 BY S1* S2;
F2 BY S3* S4 S5;
F3 BY S7* S8 S9 S10;
[F1-F3@0]; F1-F3@1;
OUTPUT: SAMPSTAT tech1 MODINDICES;

Why do I get the error message? I appreciate your help!


A quick look says that the only difference between the models is that you put a second-order factor behind the 3 first-order factors. If that's the case, the models are the same, because 3 indicators of 1 factor is a just-identified model - you are not restricting the covariance matrix of the first-order factors. You need 4 or more first-order factors for that. See also the NESTED testing option in http://www.statmodel.com/download/Version%208.1%20Language%20Addendum.pdf If I am right, it will tell you that the 2 models are equivalent.


Hi Bengt, Thank you! The option described in the link says the models are equivalent. Is there thus no way in Mplus to check whether the hierarchical model is better than the 3-factor model? Do I perhaps need to change something in the covariance matrix?


Q1: There is no way, in Mplus or any software. The 2 models are the same when you have only 3 first-order factors. "The same" means that they produce the exact same covariances among the observed variables. The models have the same number of parameters, and when you have estimated one, you can transform its parameter values into the parameter values of the other. Q2: There is no change that you can make - just accept the fact that this can't be tested. You can present the second-order factor model, but you can't say that it fits better or worse.


Hello! I am comparing two path analysis models using chi-square difference testing (the Satorra-Bentler scaled chi-square difference test); I got the formula from this website: https://www.statmodel.com/chidiff.shtml (the first set of tests listed on the page). The nested model has 8 degrees of freedom, whereas the comparison model is saturated and has 0 degrees of freedom and a chi-square value of 0. Is it okay to carry out chi-square difference testing between the two models, given that one of the models is saturated? If it is indeed okay, I wonder if there is a citation you are aware of that I could cite to justify doing this (I am getting pushback on this from others). Thank you so much for your help!


When one of the models is the saturated one, Mplus automatically provides this test in the regular Model Fit section of the output. 


Hello, I am trying to perform a DIFFTEST between two CFA models with all categorical indicators.

Model 1: Unconstrained model
f1 BY x1 x2 x3;
f2 BY x4 x5 x6;
f3 BY x7 x8 x9;
f4 BY x10 x11 x12;

Model 2: Restricted model
f1 BY x1 x2 x3;
f2 BY x4 x5 x6;
f3 BY x7 x8 x9 x10 x11 x12;

In Model 1 the correlation between f3 and f4 was 0.75, so I wanted to test whether a three-factor model has a good fit. I have run the DIFFTEST in Mplus. My question: for the DIFFTEST executed in Mplus, are my assumptions correct, and is Model 2 nested within Model 1 so that the comparison can be performed?


Check whether the two models are nested using the new NESTED option introduced in Mplus version 8.1. Read about it here (see Papers, SEM or Recent Papers): Asparouhov, T. & Muthén, B. (2018). Nesting and equivalence testing in Mplus. Technical Report. Version 2. August 13, 2018. (Download scripts).


Hello, I am doing a factor analysis, and my goal is to test whether a hierarchical model with a general factor or just a three-factor model fits my data best. When running the hierarchical model, the input reading terminates normally; however, the factor loadings and the standard errors leave me with questions. Why are the loadings and errors so high? Is there a problem in my code?

Code:
Data: FILE is Dr_E.inp;
Variable: NAMES ARE EP1 EP2 EP3 EP4 EP5 EP6 EP7 EP8 EP9 EP10;
USEVARIABLES ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
CATEGORICAL ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
MODEL:
F1 BY EP1* EP2;
F2 BY EP3* EP4 EP5;
F3 BY EP7* EP8 EP9 EP10;
[F1-F3@0]; F1-F3@1;
Y BY F1* F2 F3;
[Y@0]; Y@1;
OUTPUT: SAMPSTAT tech1 MODINDICES;
PLOT: TYPE = PLOT2;

Output (estimate, S.E., est./S.E., p-value):
Y BY
F1 1.059 0.170 6.218 0.000
F2 5.140 7.062 0.728 0.467
F3 2.544 0.931 2.732 0.006
Means
Y 0.000 0.000 999.000 999.000
Intercepts
F1 0.000 0.000 999.000 999.000
F2 0.000 0.000 999.000 999.000
F3 0.000 0.000 999.000 999.000

Thank you for your insights!


To make the model identified, you need to add: f1-f3 with y@0;


Dear Bengt, thank you! I have added this code line. The factor loadings and errors are still the same. Am I doing something incorrectly?

Data: FILE is Dr_E.inp;
Variable: NAMES ARE EP1 EP2 EP3 EP4 EP5 EP6 EP7 EP8 EP9 EP10;
USEVARIABLES ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
CATEGORICAL ARE EP1 EP2 EP3 EP4 EP5 EP7 EP8 EP9 EP10;
MODEL:
F1 BY EP1* EP2;
F2 BY EP3* EP4 EP5;
F3 BY EP7* EP8 EP9 EP10;
[F1-F3@0]; F1-F3@1;
Y BY F1* F2 F3;
[Y@0]; Y@1;
F1-F3 with Y@0;
OUTPUT: SAMPSTAT tech1 MODINDICES;

Y BY
F1 1.059 0.170 6.218 0.000
F2 5.140 7.062 0.728 0.467
F3 2.544 0.931 2.732 0.006


Send your output to Support along with your license number. 


Hello, I need to compare nested models. I am using ML, and I got the following loglikelihood statistics for the two models I want to compare. If I understood correctly, for ML I don't need the scaling correction factors:

Model 1: -6369.028 (67)
Model 2: -6368.841 (69)
Loglikelihood ratio statistic: 0.374
Difference in degrees of freedom: 2

How do I now know whether the difference is significant or not? Thank you very much for the help. Best regards, Dinah


With df = 2, the 5% critical value for chi-square is 5.991, so the difference is not significant at the 5% level. You know that 0.374 is small for df = 2 because the expected value of a chi-square variable equals its df.
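For df = 2 the chi-square tail probability has a closed form, P(X >= x) = exp(-x/2), so the comparison above is easy to verify by hand; a quick check using the numbers from this exchange:

```python
import math

TRd = 0.374  # 2 x (6369.028 - 6368.841), the LR statistic from the post above
df = 2       # difference in number of parameters

# for df = 2, the chi-square survival function is exp(-x/2)
p_value = math.exp(-TRd / 2)
print(round(p_value, 3))  # 0.829 -- far above .05, so not significant

# sanity check: the 5% critical value 5.991 indeed gives p close to .05
print(round(math.exp(-5.991 / 2), 3))  # 0.05
```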
