Chi-Square Diff Testing Using the Satorra-Bentler Scaled Chi-Square
 Leonard Burns posted on Friday, May 17, 2002 - 9:12 am
I have a copy of the instructions from your web site for chi-square difference testing using the Satorra-Bentler Scaled Chi-Square. I would like to perform such tests with nested models in the context of CFA. I am currently using EQS version 6.0. With this version, the robust estimation option provides the SB Scaled Chi-Square along with the robust CFI and the robust RMSEA measures of fit. This is my question. Is the MLM procedure in M-Plus 2 (maximum likelihood parameter estimates with robust standard errors and a mean-adjusted chi-square test statistic) the same estimation procedure as the robust procedure in EQS 6.0?

If this is so, then I can follow the instructions in the paper on your WEB site to calculate the
correct values for the SB Scaled Chi Square for comparing nested models.

Thanks for your feedback.

Len Burns
 Linda K. Muthen posted on Friday, May 17, 2002 - 10:02 am
MLM is the Satorra-Bentler chi-square test statistic, so it stands to reason that the directions would work for the EQS Satorra-Bentler chi-square as well as MLM.
 Wim Beyers posted on Monday, September 30, 2002 - 7:15 am
Hi there,
I'm just asking a question here, because I did not receive any helpful feedback on SEMNET. And here I find a specific forum on the topic, so...

I calculated the Satorra-Bentler Scaled Chi-Square Difference Test by hand several times, but I sometimes come out with results that seem strange to me...

For instance (n = 600):
Comparison Model (79 df):
- unscaled chi-square = 249.18
- scaled chi-square = 220.42
Nested Model (82 df):
- unscaled chi-square = 337.18
- scaled chi-square = 308.24

So, a traditional chi-square difference test favours the Comparison Model, with chi-square-diff = 88.00 (df = 3). OK.
A SB scaled difference test also favours the Comparison Model, but following the calculations on http://www.statmodel.com/chidiff.html, the test value is 675.14. It's so huge I really hesitate to report it. Am I doing something wrong?

Thanks for all help,
--
Wim Beyers
Belgium
 bmuthen posted on Monday, September 30, 2002 - 9:42 am
It looks like you have done the calculations correctly according to our web site. The test value of 675 is certainly much larger than 88, but the p values are both very small and therefore similar so this could all be ok.

Two other thoughts:

You might have run into a local optimum in one of your 4 runs.

You can run this in Mplus to get the scaling correction factors directly to see if they agree.
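
For reference, the arithmetic can be reproduced in a few lines. This is a minimal sketch assuming the formulas described at http://www.statmodel.com/chidiff.shtml; the function name and the recovery of the scaling correction factors from the unscaled and scaled chi-squares are illustrative, not part of the instructions.

# Satorra-Bentler scaled chi-square difference test, following the
# formulas on http://www.statmodel.com/chidiff.shtml (a sketch).
def sb_scaled_diff(T0, c0, d0, T1, c1, d1):
    """T0, c0, d0: scaled chi-square, scaling correction factor, and df of the
    nested (more restricted) model; T1, c1, d1: the same for the comparison model."""
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # difference-test scaling correction
    TRd = (T0 * c0 - T1 * c1) / cd         # scaled chi-square difference
    return TRd, d0 - d1, cd

# Numbers from the post above (n = 600); each scaling correction factor is
# the ratio of the unscaled to the scaled chi-square.
c0 = 337.18 / 308.24   # nested model, 82 df
c1 = 249.18 / 220.42   # comparison model, 79 df
TRd, df, cd = sb_scaled_diff(308.24, c0, 82, 220.42, c1, 79)
print(round(TRd, 1), df, round(cd, 3))   # about 674.9, 3, and 0.130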
 Kaja LeWinn posted on Thursday, June 23, 2005 - 3:05 pm
Hello,
We are trying to use the Satorra-Bentler scaled chi-square test following the instructions on this site. We have found that when we run our comparison model in MLM and ML we get different degrees of freedom (the same model, using two different estimation techniques, gives us two different dfs). This does not happen in the nested model. We are confused by this. One characteristic of our model that might be of significance is that we are looking for measurement invariance using the grouping command.
We were hoping this could be explained to us, as well as how we should approach the equation under point 3 of the online instructions (i.e., which degrees of freedom do we choose, those from the ML or the MLM model). Thanks for your help,
Kaja
 Linda K. Muthen posted on Friday, June 24, 2005 - 1:48 am
You need to send your outputs, data, and license number to support@statmodel.com. What you are describing has not been seen by us. There may be something else going on.
 Leif Edvard Aaroe posted on Friday, January 06, 2006 - 2:59 pm
Dear colleagues,

I need to test out differences between nested models, and wonder if I can use the Satorra-Bentler scaled chi square in this particular case:

I have a model with a set of predictors that are all regarded as metric (although a couple of them are dichotomies). Some of them could be analysed as latent variables, but I have chosen to start out using simple sum scores. It is therefore simply a path analysis that I am doing. I have two outcome variables; the most dependent one is a dichotomy. The other one, which is also regarded as a mediator, is an ordered categorical variable. Since the sample is based on clusters, I am using the cluster option in order to adjust for the design effect.

I understand that the S-B formula is based on ML and MLM estimators. The programme, however, does not permit ML and MLM estimators with this analysis. It only provides WLSMV estimation.

Can my commands be changed in such a way that it allows for ML and MLM estimation? Or is there an alternative procedure that can be used for testing the difference between models?

Most grateful for any suggestions.

Leif
 Linda K. Muthen posted on Friday, January 06, 2006 - 3:43 pm
MLM is the Satorra-Bentler chi-square. It is available only when all outcomes are continuous.
 Leif Edvard Aaroe posted on Friday, January 06, 2006 - 9:15 pm
Thanks for quick response.

How do I test differences between nested path analysis models when I have categorical outcome variables and clustered data? Is this described in the Mplus manual or anywhere else?

If such testing is not possible: What procedure would you recommend for identifying a "good" model?

Leif
 Linda K. Muthen posted on Saturday, January 07, 2006 - 6:53 am
In Chapter 15 under the ESTIMATOR option, there is a table that shows the estimators available for TYPE=COMPLEX and TYPE=TWOLEVEL. I'm not sure which you want to use for your clustered data. With the WLS estimator, difference testing is done in the usual way. With WLSM and MLR, you need to use a scaling correction factor, which is given in the output; how to use it is shown on the website. With WLSMV, use the DIFFTEST option; see the description in the Mplus User's Guide. When you do not obtain chi-square, you can use -2 times the loglikelihood difference.
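
When you work from the loglikelihood instead of chi-square, the correction uses the scaling correction factors and the numbers of free parameters. A minimal sketch, assuming the loglikelihood formulas on the same web page; the function name and the numbers in the example are made up for illustration.

from scipy.stats import chi2

def mlr_loglik_diff(L0, c0, p0, L1, c1, p1):
    """L0, c0, p0: H0 loglikelihood, scaling correction factor, and number of free
    parameters for the nested model; L1, c1, p1: the same for the comparison model."""
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)   # difference-test scaling correction
    TRd = -2.0 * (L0 - L1) / cd            # behaves as chi-square when cd > 0
    df = p1 - p0
    return TRd, df, chi2.sf(TRd, df)

# Hypothetical values for illustration only:
print(mlr_loglik_diff(-12200.5, 1.10, 60, -12190.2, 1.12, 63))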
 Sophie van der SLuis posted on Thursday, February 02, 2006 - 5:52 am
I'm performing a CFA with type=complex as the independence assumption is violated in my data set [gathered within families].

I used the Satorra-Bentler scaled chi-square corrections for MLR as described on the Mplus website.

My restricted model has an unscaled chi-square of 19.696 with df=13, and scaling correction factor of 1.123.

My less restricted model has an unscaled chi-square of 10.704 with df=12, and scaling correction factor of 1.213.

Calculating the diff test scaling correction:
(13*1.123-12*1.213)/1=.043

the corrected chi-sq diff test would then be:
(19.696-10.704)/.043=209.116

I do not think this is correct.
Can someone help me out?

Thanks
Sophie
 bmuthen posted on Friday, February 03, 2006 - 9:47 am
It looks like you have done this correctly. The asymptotics of this correction do not always work out in small samples, as has been noted by the authors, although note that the p value will be very small for both the uncorrected and the corrected chi-square difference test. In Mplus Version 4 you will also have access to a Wald test which avoids these problems.
 Sophie van der SLuis posted on Monday, February 06, 2006 - 2:23 am
Thank you for the swift response.
I'll see if I can lay my hands on Mplus 4.

I presume however that I can also report the CFI, RMSEA, standardized residuals etc. to substantiate the improvement in model fit?

Kinds regards
Sophie
 Anna Kryziek posted on Monday, February 06, 2006 - 7:50 am
Once you have computed the Satorra-Bentler scaled chi-square difference test, how can you determine whether the two models are significantly different or not?
Thanks,
Anna
 Linda K. Muthen posted on Monday, February 06, 2006 - 8:44 am
The other fit measures should be okay.
 Linda K. Muthen posted on Monday, February 06, 2006 - 8:45 am
You compare it to the chi-square table value for the number of degrees of freedom in the difference test.
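
If you would rather compute the p value than look it up in a table, one line does it. A minimal sketch; the TRd and df values below are placeholders, not taken from any post in this thread.

from scipy.stats import chi2

TRd = 12.4       # hypothetical scaled chi-square difference
df_diff = 3      # degrees of freedom of the difference test
p_value = chi2.sf(TRd, df_diff)   # survival function, i.e., 1 - CDF
print(p_value)   # the restrictions worsen fit significantly if this is below your alpha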
 Anna Kryziek posted on Monday, February 06, 2006 - 10:08 am
Thank you!
Anna
 Chris Aberson posted on Monday, February 20, 2006 - 2:16 pm
I've done a series of these calculations and I get some cd values that are negative (d0*c0 is less than d1*c1).

What should I do in this case? Use the absolute value of the CD?
 Linda K. Muthen posted on Monday, February 20, 2006 - 5:43 pm
In this case, the difference testing is not working and should not be interpreted.
 Chris Aberson  posted on Monday, February 20, 2006 - 6:22 pm
What is my option here?

Just a straight test based on the ML values?

Also, to be clear -- if the test is not working -- is that attributable to some data characteristics or user error? I'm confident I'm doing the test correctly (as it comes out positive for about 50% of my comparisons). Thanks!
 bmuthen posted on Monday, February 20, 2006 - 6:30 pm
The asymptotics of the test fail to kick in - this has been observed by the creators of the test (Satorra and Bentler) in several applications. No user error, and no fault of the data (apart from perhaps not having a large enough n). An alternative is to use a Wald test, which is part of the upcoming Mplus Version 4 (see the announcement on the home page). This test is robust to the same types of violations as MLR/MLM.
 Chris Aberson posted on Tuesday, February 21, 2006 - 9:09 am
Would the Wald result here be similar to what EQS calls the Lagrange Multiplier test?

For example, if I were to take my model and constrain a covariance to one - would the Lagrange value associated with freeing that parameter give me a test of the difference between a comparison model and a model with that constraint (given that is the only constraint I'm testing)?

Thanks again - great comments.
 bmuthen posted on Tuesday, February 21, 2006 - 3:16 pm
No. Wald testing pertains to testing restrictions on a given H0 model, whereas LM tests (= modification indices) pertain to relaxing restrictions on a given H0 model.

Yes. The Wald test, however, would make it unnecessary for you to run that second run with your covariance restricted.
 jennybr posted on Monday, February 27, 2006 - 6:56 pm
HI,
I want to compute a chi-square difference test, however, my two df's are the same. Does this mean that my models are not nested?
 Linda K. Muthen posted on Monday, February 27, 2006 - 7:26 pm
What estimator are you using?
 anna kryzicek posted on Wednesday, March 22, 2006 - 10:01 am
There was some discussion above about the conditions under which the Satorra-Bentler chi-square test fails. Could you direct me to papers that discuss this further?

Thanks,
Anna
 Linda K. Muthen posted on Wednesday, March 22, 2006 - 12:32 pm
I think Peter Bentler has written about this. I don't know of the exact references. If you can't find it doing a literature search, you can contact Peter Bentler.
 henry nyabuto posted on Monday, July 17, 2006 - 12:27 pm
I have run a multi-group (4 groups) CFA using LISREL. The output indicates that the factor loadings are equivalent across the four groups - only one set of loadings in the output. However, a chi-square difference test (Satorra-Bentler chi-square) is significant - constrained model minus unconstrained model. Does this sometimes happen? If so, what is the explanation and how does one proceed?

Thanks
Henry
 Linda K. Muthen posted on Monday, July 17, 2006 - 1:20 pm
This would indicate that constraining the factor loadings to be equal across groups significantly worsens the fit of the model. There must be some factor loadings that are not invariant across groups. You can read about testing for measurement invariance in Chapter 13 of the Mplus User's Guide which is on the website. It is at the end of the multiple group discussion.
 Christine McWayne posted on Thursday, August 10, 2006 - 1:20 pm
I am searching for the formula for calculating degrees of freedom when using WLSMV, but cannot find the correct info in the latest User's Guide or in the Technical Appendices. I am using MPlus version 3.0 for this CFA. Can I interpret the degrees of freedom as accurate in the output?
 Linda K. Muthen posted on Thursday, August 10, 2006 - 2:57 pm
It is formula 110 in Technical Appendix 4 on the website. The only interpretable value for WLSMV is the p-value for chi-square. The degrees of freedom are not calculated in the regular way. Difference testing of nested models can be carried out using the DIFFTEST option.
 Jason Prenoveau posted on Thursday, July 24, 2008 - 9:39 am
I know that when using the Satorra-Bentler chi-square (MLM), you must calculate a corrected chi-square difference test. It appears from the discussion above that this SAME procedure should be used for the Yuan-Bentler T2 test statistic (MLR). Is this true, or when using MLR (Yuan-Bentler T2 test statistic) is it possible to just use a regular difference test?

Thank you for your help!
 Linda K. Muthen posted on Thursday, July 24, 2008 - 10:21 am
MLM, MLR, and WLSM all need to use a scaling correction factor for difference testing.
 M C Schilpzand posted on Wednesday, October 22, 2008 - 2:11 pm
Dear colleagues,

I would like to use the chi-square difference test but I am unsure if my two models are nested. I would like to compare a CFA with conflict as 1 scale (items 1, 2, 3, 4, 5, 6, 7, 8) and a CFA with conflict as 2 scales (task conflict: items 3, 6, 7; relationship conflict: items 1, 2, 5). The last model has 1 df less. Could I consider these models nested? If so, could I use the Satorra-Bentler Scaled Difference Test?

Thanks so much!
Maria

P.S. great forum, I have learned a lot from it!
 Linda K. Muthen posted on Thursday, October 23, 2008 - 9:30 am
If you don't have the same set of observed dependent variables, the models are not nested.
 M C Schilpzand posted on Thursday, October 23, 2008 - 1:30 pm
Thanks for your quick reply.
I think I understand. So if I were to make a second-order factor (conflict) of the two first-order factors (task conflict and relationship conflict) and compare this to a CFA of one factor (conflict), the models would be nested and thus I could use the S-B Scaled Difference Test?

Maria
 Linda K. Muthen posted on Thursday, October 23, 2008 - 5:49 pm
A second-order factor model with two indicators is not identified. To be nested, each model would need to include the same set of dependent variables. Even if identified, I don't think your suggestion has that.
 Janine Neuhaus posted on Wednesday, January 14, 2009 - 2:46 am
I ran a multilevel CFA with continuous variables. I would like to compare two models: (1) 2 factors within and 1 factor between vs. (2) 2 factors within and 2 factors between. As far as I understand, Model 2 is not nested within Model 1, so I can't use the S-B Scaled Difference Test. Regarding my chi-square value and the fit indices, I cannot decide which model fits better, because they are nearly the same. My question:
Is there another way to test if they are different? If not, can I conclude both models are equivalent although I couldn't really prove it?
Thanks very much!
Janine
 Bengt O. Muthen posted on Wednesday, January 14, 2009 - 8:05 am
BIC could be helpful to balance parsimony against improving the loglikelihood. If the fit information is about the same, I would decide based on the interpretation and usefulness of the model; I wouldn't be so concerned about testing. If the model with 2 between factors doesn't add interpretational value, parsimony speaks for the 1-factor model on between.
 Janine Neuhaus posted on Thursday, January 15, 2009 - 2:21 am
Thank you very much for your advice - very helpful!
Janine
 Sunny Liu posted on Monday, March 09, 2009 - 12:28 am
I am a little confused with the Satorra-Bentler Scaled Chi-Square.

For example,

Baseline model:
Chi-square Test of Model Fit:
Value: 2
Degree of Freedom: 2
Scaling Correction Factor for MLR: 2

Model 1:
Chi-square Test of Model Fit:
Value: 1
Degree of Freedom: 1
Scaling Correction Factor for MLR: 1

Then

# Compute the difference test scaling correction where d0 is the degrees of freedom in the nested model and d1 is the degrees of freedom in the comparison model.

cd = (d0 * c0 - d1*c1)/(d0 - d1)
= (2*2 - 1*1)/(2-1) = 3

Note that the ML chi-square is equal to the MLM or MLR chi-square times the scaling correction factor.

# Compute the Satorra-Bentler scaled chi-square difference test (TRd) as follows:

TRd = (T0 - T1)/cd
= (2*2-1*1)/3 = 1

The chi-square difference test then gives a chi-square of 1 with 1 df, which is not significant.

Or should it be

TRd = (T0 - T1)/cd
= (2-1)/3 = 1/3

The chi-square difference test then gives a chi-square of 1/3 with 1 df, which is not significant.

Which way is correct?
 Andrea Vocino posted on Monday, November 09, 2009 - 5:28 pm
Is the S-B chi^2 in Mplus going to be changed according to the latest estimator provided by Satorra and Bentler (2009)? See, e.g.,
http://www.springerlink.com/content/k716217434q71737/fulltext.pdf

Thanks

Andrea
 Bengt O. Muthen posted on Monday, November 09, 2009 - 5:44 pm
We have on our list to include that twist in our next version.
 Christoph Weber posted on Friday, January 29, 2010 - 6:57 am
Dear Drs. Muthen,

I'm doing a chi-square difference test for MLR. I get a negative cd value (= -.016). I think this is due to the complex model (M1: df=1085, M2: df=1086; M1: scaling correction = 1.070, M2: 1.069). It follows that the scaled chi-square difference is also negative (-240.639). Is this result interpretable?

How should I interpret it?

Do I have to use the modulus of cd?
Or does it mean that the model with df=1086 fits better?

best regards
Christoph Weber
 Linda K. Muthen posted on Friday, January 29, 2010 - 8:45 am
The result is not interpretable. This is a flaw with the method.
 Michael Spaeth posted on Friday, February 05, 2010 - 7:20 am
I have the same problem with MLR testing as described above (a negative scaled chi-square difference). In this situation, is it OK to use the Wald test in combination with MLR and COMPLEX/CLUSTER in order to test some simple 1 df constraints?
 Linda K. Muthen posted on Friday, February 05, 2010 - 9:09 am
Yes.
 kathrin weidacker posted on Sunday, September 19, 2010 - 9:36 am
Dear Drs. Muthen,
I am trying the formula for strictly positive SB difference statistic values for invariance testing across groups (3). Since the method requires estimating an M10 model with the M0 output as starting values, I do not know which starting values to use in the case of several groups. Can you help me further, please?

Cheers,
kathrin
 Bengt O. Muthen posted on Sunday, September 19, 2010 - 10:04 am
You want to study examples 3 and 4 and their Mplus scripts at

http://www.statmodel.com/examples/webnote.shtml#web12

You would use the M0 output for each group to form the multi-group M10 start values.
 kathrin weidacker posted on Sunday, September 19, 2010 - 6:07 pm
Thank you for the fast response! I will check those notes.

Cheers,
kathrin
 Samantha Anders posted on Wednesday, October 20, 2010 - 9:51 am
Hi there -

I am trying to run a fully unconstrained multiple group CFA model to compare to my constrained model, but I am not able to figure out the syntax for this. For the constrained model, I am using the syntax below. What do I need to add to make it unconstrained? Thank you!

Title: CFA; low.hi; current; King
DATA: FILE IS low.hi.current.csv;
VARIABLE: NAMES ARE b1-b5 c1-c2 c3-c7 d1-d5 g;
CATEGORICAL ARE b1-b5 c1-c2 c3-c7 d1-d5;
GROUPING IS g (1 = low 2 = high);
MODEL: f1 BY b1-b5;
f2 BY c1-c2;
f3 BY c3-c7;
f4 BY d1-d5;
 Linda K. Muthen posted on Wednesday, October 20, 2010 - 10:03 am
See Slides 169 and 170 of the Topic 2 course handout on the website.
 Samantha Anders posted on Wednesday, October 20, 2010 - 12:54 pm
Thank you for your prompt response! I realize this is probably a really easy question, but the syntax I posted above - is this for a fully unconstrained model, and is the syntax on slides 169 and 170 for a fully constrained model?

Thank you so much!
 Linda K. Muthen posted on Wednesday, October 20, 2010 - 3:07 pm
The syntax above is for a model with factor loadings and intercepts constrained to be equal across groups. This is the Mplus default. See Chapter 14 of the user's guide for a full description of multiple group analysis in Mplus.


On Slides 169 and 170, the first syntax is for a fully constrained model. The second is for an unconstrained model. And the third is for a partially constrained model.
 Samantha Anders posted on Saturday, October 23, 2010 - 2:45 pm
Thanks again. We got a bit closer, but now are running into this problem -

When the fully unconstrained model is run, we're getting error messages like this:

THE MODEL ESTIMATION TERMINATED NORMALLY

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE
COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL.
PROBLEM INVOLVING PARAMETER 81.

THE CONDITION NUMBER IS -0.244D-17.

We're not able to get fit statistics because the parameter estimates couldn't be computed. When I check the output, I can find that, in this model, parameter 81 is d5. When I delete the line of code freeing d5, the model will run. I'm not sure what the condition number is yet. Can you tell us what the condition number indicates?

Thanks!
 Linda K. Muthen posted on Saturday, October 23, 2010 - 3:29 pm
The condition number is related to identification. Please send the full output and your license number to support@statmodel.com.
 Jak posted on Tuesday, November 02, 2010 - 4:02 am
Dear Linda or Bengt,

Is the method described in Webnote 12 ("Computing the Strictly Positive
Satorra-Bentler Chi-Square Test in Mplus") also applicable to the Yuan-Bentler T2 statistic obtained when using MLR estimation?

Thanks in advance,

Suzanne
 Tihomir Asparouhov posted on Tuesday, November 02, 2010 - 8:45 am
Yes.
 Ian Clara posted on Tuesday, December 07, 2010 - 6:39 am
Good morning. I am trying to conduct a chi-square difference test using the WLSMV estimation and type=complex. I have used the difftest option and for the two models that I want to test it says that they are not nested (although there is a single path that is different -- set to zero in one model and freely estimated in the other). I wanted to try to conduct the chi-square difference test by hand, but I can't determine how to get the required output in Mplus v5. How can I obtain the scaling correction factors, or the log likelihood?

Warm regards,
Ian
 Linda K. Muthen posted on Tuesday, December 07, 2010 - 8:00 am
Difference testing using WLSMV cannot be done by hand. If you use WLSM, you will obtain a scaling correction factor.
 Fred Mueller posted on Wednesday, February 22, 2012 - 6:08 am
Dear Linda and Bengt,

I would like to compare two nested models and I am using MLR as an estimator. Instead of using the (Satorra-Bentler scaled) chi-square difference test, I would like to use the small-difference-in-fit test by MacCallum, Browne, and Cai (2006, Psych Methods).
For this test, I also have to indicate the chi-square of both models. How do I have to proceed? Can I just multiply the (regular) chi-square with the scaling correction factor to get the correct chi-square, or is it more complicated?

Thank you very much in advance!
Best,
Fred
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 1:51 pm
I don't know if the MacCallum et al approach needs the usual, uncorrected, chi-square or can use the MLR chi-square. You may want to approach the authors with that question.
 Fred Mueller posted on Wednesday, February 22, 2012 - 5:22 pm
Thank you very much for your quick reply!
 Malte Jansen posted on Monday, March 19, 2012 - 6:23 am
Dear Mplus Team,

I am comparing 2 measurement models for 18 items using MLR. It would be great if you could help me with some answers:

The first model is a one-factor model where all items are explained by the same factor, and the second model is a three-factor model where each factor explains 6 items.

(1) Is it right to say that these models are nested, as the 3-factor model with the correlations between the factors set to 1 would equal the 1-factor model?

(2) When I compute the S-B scaled chi-square difference test, the result is negative. Therefore I tried computing the strictly positive S-B scaled chi-square difference test. Thus I tried to estimate the more restrictive M0 model first, which would be the 3-factor model with between-factor correlations set to one. However, the estimation did not converge. What am I doing wrong? Here's the input:

MODEL:
factor1 by f10-f16;
factor2 by f20-f26;
factor3 by f30-f36;
factor1 WITH factor2 @1;
factor2 WITH factor3 @1;
factor3 WITH factor1 @1;

Best regards and thank you in advance.
 Bengt O. Muthen posted on Tuesday, March 20, 2012 - 1:57 pm
The M0 model is just the one-factor model, right?
 Malte Jansen posted on Tuesday, March 27, 2012 - 5:33 am
Yes, but from the examples in the web note describing the strictly positive S-B test I thought that in order to compare the models I would need the same notation in the syntax (i.e., 3 correlations set to one instead of just "factor1 by f10-f36"), so that I can save the start values for the M10 model and then free the correlations between the factors.
 Tihomir Asparouhov posted on Tuesday, March 27, 2012 - 6:57 pm
If you are using the MLR estimator with categorical data you should use the unscaled likelihood ratio test. The S-B is designed to be used for the case when you are treating the variables as continuous.

Strictly speaking, there is a bit of a problem in using the LRT for this purpose (overfactoring); see

http://www.statmodel.com/download/Schmitt%202011-Jour%20of%20Psychoed%20Assmt%20-%20EFA%20and%20CFA.pdf

and

Hayashi, K., Bentler, P. M., & Yuan, K.-H. (2007). On the likelihood ratio test for the number of factors in exploratory factor analysis.

You might want to consider using BIC as well.
 Malte Jansen posted on Wednesday, March 28, 2012 - 8:17 am
Dear Tihomir,

Thanks for your detailed reply. I am using MLR because the complex sampling option (type=complex) requires it. Aside from that, I am not treating the data (5-point scales) as categorical (yet).

Best regards,

Malte
 Linda K. Muthen posted on Wednesday, March 28, 2012 - 10:26 am
We need to see the relevant files and your license number at support@statmodel.com to help you further.
 Johan Korhonen posted on Monday, December 03, 2012 - 5:33 am
Hi
I have been working with ESEM models with MLR estimator. To compare nested models I have been using S-B x^2 and it has worked fine until I got a negative estimate when comparing two models. I found your web note on how to compute the strictly positive S-B x^2 but when I tried to run the M0 model with the svalues command to get the starting values for the M10 model I got the following warning text in Mplus:

*** WARNING in MODEL command
The SVALUES option in the OUTPUT command is not available with the use of
EFA factors (ESEM). Request for SVALUES will be ignored.

Do you have any advice for me how to proceed?

Best regards
Johan
 Tihomir Asparouhov posted on Monday, December 03, 2012 - 3:09 pm
You can just use the final results in the M0 output and put these values in (manually) as starting values for the M10 run. The SVALUES option is just a convenience feature that does this for you, but you can do it manually as well.

The strictly positive S-B chi-square is not very easy to do with ESEM. You have to work with the unrotated model. We may eventually update the web note to include a step by step computation for the strictly positive S-B for ESEM. Consider using Model Test as a simpler alternative.
 Johan Korhonen posted on Monday, December 03, 2012 - 10:30 pm
Thank you for your quick response. I think I will use the Model Test command.

Have a nice day
 Paula Vagos posted on Friday, December 07, 2012 - 9:20 am
Hello,
I have been trying to compute the SB chi-square difference test, but my cd is negative, and so I need to create the M10 model. But I don't understand how to do it.
Can anyone help me?
Thanks
 Linda K. Muthen posted on Sunday, December 09, 2012 - 5:39 pm
Send the outputs from the m0 and m1 analyses along with your attempt at the m10 model. Include your license number.
 Sabrina posted on Saturday, April 27, 2013 - 3:04 pm
Hello. I computed the chi-square difference test for 2 nested models and it was negative so I followed the instructions in WebNote 12, but I received the following error message and can't figure out what to do next?

THE MODEL ESTIMATION TERMINATED NORMALLY

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 131.

THE CONDITION NUMBER IS -0.494D-04.
THE ROBUST CHI-SQUARE COULD NOT BE COMPUTED.
 Linda K. Muthen posted on Sunday, April 28, 2013 - 10:26 am
Please send the output and your license number to support@statmodel.com.
 Jacqueline Homel posted on Monday, June 24, 2013 - 3:43 pm
Hello,

I have a longitudinal cross-lagged path model with four outcomes over three assessment points. Three of the outcomes are continuous and the fourth is a count, so I am using MLR. I also am using the knownclass method to compare invariance across males and females. My analysis command looks like this:

Type = mixture ;
Algorithm = integration ;
Integration =montecarlo ;

I want to test whether two parameters in the model are equal to each other (although they are invariant across sex), so I constrained them to be equal and compared this to a model where they were free. However, I get a negative scaled chi-square difference value. I read webnote #12 and tried to estimate model 10, but when I pasted the start values from model 0 in, the new model does not converge. I'm sure I'm doing something wrong but am not sure what.
 Linda K. Muthen posted on Tuesday, June 25, 2013 - 10:21 am
Use MODEL TEST. Label the parameters in the MODEL command.

MODEL TEST:

0 = p1 - p2;
 jtw posted on Friday, September 20, 2013 - 5:52 am
Hi there,

I am comparing the fit of nested models (i.e., bifactor, correlated traits, higher-order, uni-dimensional) using difftest. I have clustered data and obtain different conclusions when type=COMPLEX is used as compared to when I do not adjust for clustering. Specifically, the bifactor model is preferred when I do not account for clustering, whereas the correlated traits model seems to be preferred when I account for clustering.

It may be helpful to know that I obtain all chi-square difference test statistics when not adjusting for clustering. However, I do not obtain the chi-square difference test statistic for the correlated traits model when adjusting for clustering (because the chi-square value in this case is actually lower than the value for the bifactor model; i.e., the models are not technically nested). Also, it is worth noting that there are no real substantive differences with respect to any of the factor loadings in either case; so, the only issue at hand is which model is preferred.

Any recommendations of which results should be reported in this case? Thank you in advance for your time.
 Linda K. Muthen posted on Friday, September 20, 2013 - 11:58 am
If you have clustered data, you need to do the analysis taking that into account.
 jtw posted on Tuesday, October 08, 2013 - 9:07 am
Hi there,

I understand nested models require two things: 1) more restrictions (higher degrees of freedom) AND 2) worse fit (which can be examined via Tech 5 output).

When a model is more restricted yet has slightly better fit, MPlus does not execute the DIFFTEST and returns the error indicating that the models are not nested. In this specific situation, is the conclusion to be drawn: 1) the more restricted model (e.g., correlated traits) is preferred over the less restricted model (e.g., bifactor model); or 2) no firm conclusions should be drawn because the DIFFTEST is not executed. I'm thinking the answer is #1. Am I right? However, if it is #2, what can be done?

Thank you in advance for your time.
 Linda K. Muthen posted on Tuesday, October 08, 2013 - 9:40 am
You cannot compare the chi-square values of the estimators that end in V, WLSMV and MLMV. You would need to compare their fitting functions which you will find in TECH5 at the bottom of the first column. The lower the fitting function the better.

The conditions that you mention are necessary but not sufficient conditions for nesting. Please send the outputs and your license number to support@statmodel.com for further information.
 Ariane Descheneaux-Buffoni posted on Tuesday, July 08, 2014 - 10:07 am
Hello,

I have a longitudinal cross-lagged path model with six outcomes over three assessment points. All outcomes are continuous and non-normally distributed, so I used MLR. I am also using the grouping method to compare invariance across males and females, and therefore doing Satorra-Bentler scaled chi-square difference testing. Of the 141 parameters tested for invariance, 19 resulted in negative values. I read the articles and web notes about computing the strictly positive Satorra-Bentler chi-square difference test and I have a few questions:

1- When I use the SVALUES option to get all the final parameter estimates from model M0, estimates regarding the structural links (the "ONs") are not provided. Is that normal? Does this mean we do not have to include these estimates in the model M10?

2- I read that we must halt iterations in model M10 (i.e., the number of iterations has to be equal to 0) and that we have to increase the convergence criterion in order to do so. I did that and looked at the TECH5 part of the output to verify that the iterations are zero. I was wondering which type of iterations to look at? Is it the quasi-Newton type? These seem to be at 0, whereas the other ones (for instance, the EM algorithm iterations) never are.

Best regards and thank you in advance,

Ariane Descheneaux-Buffoni
 Linda K. Muthen posted on Wednesday, July 09, 2014 - 9:18 am
1-2. Please send the outputs and your license number to support@statmodel.com.
 Kaisa Perko posted on Monday, January 12, 2015 - 1:25 am
Hello
I'm a novice in SEM and would highly appreciate any expert advice. I'm struggling with a discrepancy between the chi-square test and a path coefficient.

I'm comparing two nested models with the scaled Satorra-Bentler chi-square difference test. In the restricted model, two paths between latent variables (from one IV to two DVs) are fixed at zero. In the unrestricted model, these paths are freely estimated. (For information, there are also other IVs in the model.)

The scaled chi-square test favors the restricted model (nonsignificant result). However, in the unrestricted model, one of the freed paths is significant (p < .05). Thus, the chi-square test seems to reject a significant path. Accordingly, the conclusions are different depending on whether I estimate all the hypothesized paths at once or step by step using nested models. I wonder if I'm making a mistake if I follow the chi-square test and accordingly consider the restricted model as the final model?
 Bengt O. Muthen posted on Monday, January 12, 2015 - 11:14 am
These tests can give different results, particularly when the sample isn't large. As a check, you also want to use the Wald test of both coefficients being zero - you do that using the Model Test command.
 Kaisa Perko posted on Tuesday, January 13, 2015 - 4:34 am
Thank you very much for responding.

The sample size is 549. Wald test yielded a significant result (p=0.0365) for the path which is significant in the unrestricted model, and a nonsignificant result for the path which is nonsignificant in the unrestricted model.

As the discrepancy with the scaled chi-square remains, may I ask your opinion: would I stay on safer ground if I 1) follow the scaled chi-square test, which suggests the restricted model as the final one, or
2) adopt an "all at once" approach (all paths freely estimated) and ignore the chi-square test?
My aim is to evaluate whether one of the IVs is redundant to the other IVs, and the conclusions differ between alternatives 1 and 2.
 Bengt O. Muthen posted on Tuesday, January 13, 2015 - 7:57 am
You should test both paths jointly when doing the Wald Model Test. If you only do one at a time you are not doing anything differently than the printed z test (chi-2 = z*z).
 Kaisa Perko posted on Wednesday, January 14, 2015 - 5:29 am
Of course, thank you. Testing both paths jointly yielded a nonsignificant result, in accordance with the scaled chi-square difference test. I additionally found out that the chi-square difference test also suggests the unconstrained model if I fix and free the single path instead of both of the paths. So the original discrepancy appears more understandable now, and I need to rethink my analysis strategy against this background.

I am grateful for your advice and patience!
 Tim Powers posted on Sunday, May 31, 2015 - 2:56 am
Hello,
I'm computing a chi-square diff test using MLR for the first time in Mplus. The website instructions seem pretty straightforward. In calculating the difference test scaling correction cd, it warns "Be sure to use the correction factor given in the output for the H0 model."
Should I use the 'Scaling Correction Factor for MLR' found under Chi-Square Test of Model Fit in my calculations? There is no mention of H0 there. Alternatively, should I be using the 'H0 Scaling Correction Factor for MLR' found under the Loglikelihood section?
 Linda K. Muthen posted on Sunday, May 31, 2015 - 11:52 am
If you are doing the test using chi-square, use the one under chi-square. If you are doing the difference testing using the loglikelihood, use the one under the loglikelihood.
 Tim Powers posted on Sunday, May 31, 2015 - 6:39 pm
Many thanks, Linda.
Kind regards,
Tim
 Simon L Chrétien posted on Tuesday, February 09, 2016 - 12:08 pm
Hello all,

I'm having a hard time figuring out how to calculate the SB chi-square difference even though it seems simple. I can't seem to find the chi-square and df values for the comparison and nested models in my output. Here is my output:

MODEL FIT INFORMATION

Number of Free Parameters 65

Loglikelihood

H0 Value -12117.514
H0 Scaling Correction Factor for MLR 1.0140

H1 Value -11921.932
H1 Scaling Correction Factor for MLR 1.0794


Chi-Square Test of Model Fit

Value 352.740*
Degrees of Freedom 144
P-Value 0.0000
Scaling Correction Factor for MLR 1.1089




Chi-Square Test of Model Fit for the Baseline Model

Value 5109.169
Degrees of Freedom 171
P-Value 0.0000

Thank you.
 Linda K. Muthen posted on Tuesday, February 09, 2016 - 2:18 pm
You need two outputs to do the test. One for the less restricted model and one for the model nested in the less restricted model. You use the H0 values from those two outputs.
 Simon L Chrétien posted on Thursday, February 11, 2016 - 12:01 pm
If I understand correctly, according to the examples given on the website, I'd have to use those two inputs:
title: M0 model
variable: NAMES ARE y1-y5;
data: file=1.dat;
analysis: estimator=mlr;
model: f1 BY y1-y5; y1 with y2;
output: svalues;

title: M1 model
variable: NAMES ARE y1-y5;
data: file=1.dat;
analysis: estimator=mlr;
model: f1 BY y1-y5; y1 with y2; y3 with y2;

I'm not sure what to do if I have more than 1 factor and 2 factors that have 2 variables...
Here is my model:

ESC BY esc01 esc02 esc03;
EXP BY exp01 exp02;
GS BY gs01 gs02;
RS BY rs01 rs02 rs03;
FG BY fg01 fg02 fg03;
DEV BY dev01 dev02 dev03;
RD BY rd01 rd02 rd03;
GRAND BY EXP GS FG;
VULN BY ESC RD RS DEV;

Thank you,
 Bengt O. Muthen posted on Thursday, February 11, 2016 - 6:22 pm
As you see on our web page

http://www.statmodel.com/chidiff.shtml

you only need to work with the degrees of freedom, the scaling correction factor, and the log likelihood or chi-2 values from your two models. The details of the model are not needed.
 Eric Deemer posted on Tuesday, February 16, 2016 - 12:59 pm
Hello,
I'm trying to compute the strictly positive SB chi-square value for a test of the difference between a 1- and 2-factor model. I'm following web note 12 but I'm not sure if my models are correct. The model statement for my restrictive model is:

MODEL: F1 BY y1-y6;
F2 BY y7-y10;
F1 with F2@1;

So I obtain starting values from this model and plug them into a model with these specifications but also with a freely estimated F1-F2 covariance? Thank you.

Eric
 Bengt O. Muthen posted on Tuesday, February 16, 2016 - 6:39 pm
Why bother with chi-2? You can just look at the z-test for F1 WITH F2 being different from 1 (subtract 1 from the estimate and divide by the SE). With 1 df the squared z-test should be close to the chi-2 and it does take the non-normality into account in the SEs if you use MLR or MLM.
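
As a rough illustration of that suggestion, with made-up numbers (the estimate and standard error below are hypothetical, not from any output in this thread):

from scipy.stats import chi2

est, se = 0.85, 0.06     # hypothetical F1 WITH F2 estimate and its SE from an MLR/MLM run
z = (est - 1.0) / se     # z-test of the correlation against 1
wald = z ** 2            # the squared z is approximately chi-square with 1 df
p = chi2.sf(wald, 1)
print(z, wald, p)        # here z = -2.5, Wald chi-square = 6.25, p is about .012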
 Eric Deemer posted on Tuesday, February 16, 2016 - 6:57 pm
Ah, I see. Thank you, Bengt!

Eric
 Tim Powers posted on Friday, February 19, 2016 - 4:29 pm
Hello Eric and Bengt,
I have also struggled with this particular issue of comparing 1- and 2-factor models.
With a large sample size (and resultant small SEs), using the z-test, everything is significantly different from 1. My chi-square values (df=1) are much higher with this approach than the chi-square corrected for non-normality.
I have struggled with the 'strictly positive SB chi-square' approach, but may have to come back to this.
In the meantime, I have used the Fornell and Larcker AVE (average variance extracted) approach to analyse differences between factors. But this seems somewhat unsophisticated.

Wondering on best approach.

Tim
 Bengt O. Muthen posted on Friday, February 19, 2016 - 6:17 pm
BIC is always useful. It doesn't test but it guides.
 Vaiva Gerasimaviciute posted on Tuesday, March 21, 2017 - 5:29 am
Dear Muthen,

I am running a 3-wave cross-lagged model with categorical observed variables using WLSMV estimator, theta parameterization.
The chi-square of the baseline model is 541.317. When I add a second-order path (from T1 to T3), the chi-square is 485.851.
To my understanding, the difference in chi-square should be 55.466 (541.317 - 485.851). However, DIFFTEST in Mplus gives 75.560. Why could that be?
 Bengt O. Muthen posted on Tuesday, March 21, 2017 - 3:01 pm
See the intro in UG ex13.12. WLSMV chi-square behaves differently.
 Daniel Lee posted on Saturday, December 23, 2017 - 10:30 pm
Hello Dr. Muthen, would you still recommend doing an LRT or Wald test for a multigroup model if the regression coefficient in one group is not significant, while the regression coefficient in another group is significant? Thank you!
 Bengt O. Muthen posted on Sunday, December 24, 2017 - 4:26 pm
Yes.
 Nicky de Vries posted on Tuesday, March 19, 2019 - 1:32 am
Hello,

I'm performing a multilevel CFA with the MLR estimator, following the five-step approach (Muthén, 1994). So I have been using the SB scaled chi-square difference test in the way that is explained on this website. However, I got some negative chi-square values, which is why I want to perform the strictly positive SB chi-square test for these comparisons. The web note about this was already very useful, but I'm not sure yet how to obtain the M10 model in my case, because I'm comparing models with different numbers of factors instead of just adding a parameter.

For example, for a one-level model my nested model looks like this:
C BY y1-y27;

Whereas, my comparison model looks like this:
C BY Y1 Y2 Y4 Y5 Y6 Y9 Y11 Y13 Y17 Y18 Y20 Y22 Y25 Y27;
NC BY Y3 Y7 Y8 Y10 Y12 Y14 Y15 Y16 Y19 Y21 Y23 Y24 Y26;

I've read in the web note that I have to use SVALUES to obtain the Mplus language for M10 and that I need to add the commands for the extra parameters in the comparison model. I just do not understand what that would be in this case. Can you help me with that?

Or would it be better not to compare these models with the chi-square difference test, but only with the AIC, for example?

Thanks in advance!
 Tihomir Asparouhov posted on Tuesday, March 19, 2019 - 2:41 pm
The easiest way to do that is to fix the scale of the factor by fixing the variance of the factor to 1.

So if the M0 model is
C BY y1-y27*1; C@1;

and the M1 model is
C BY Y1*1 Y2 Y4 Y5 Y6 Y9 Y11 Y13 Y17 Y18 Y20 Y22 Y25 Y27; C@1;
NC BY Y3*1 Y7 Y8 Y10 Y12 Y14 Y15 Y16 Y19 Y21 Y23 Y24 Y26; NC@1;

then the M10 model is
C BY Y1* Y2 Y4 Y5 Y6 Y9 Y11 Y13 Y17 Y18 Y20 Y22 Y25 Y27; C@1;
NC BY Y3* Y7 Y8 Y10 Y12 Y14 Y15 Y16 Y19 Y21 Y23 Y24 Y26; NC@1;
C with NC*1;
and the rest of the starting values for the loadings, as well as the intercepts and residual variances, have to be picked up from the M0 run.


In your particular case, the easiest way to test the two models is to look at the z-test for whether the factor correlation is significantly different from 1, or equivalently use
Model test: 0=1-corr;
where
C with NC (corr);

You can also use BIC or the SB test. Multiple tests can be useful if one is not very decisive.
 Olev Must posted on Tuesday, April 16, 2019 - 7:57 am
Dear Mplus team,

I am conducting invariance testing in a multigroup situation: binary items, 2 groups, WLSMV. The configural model was OK. DIFFTEST showed that the metric model was significantly worse than the configural model. I decided to free one loading (and afterwards some thresholds). I tried to estimate the difference between the metric and the revised metric model (MetricA), but I got the message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.
I understand that the metric model is more constrained than MetricA. But is it impossible to estimate the difference between those models? My aim is to estimate the differences in latent means. I would use a partial invariance model, as evidently some loadings and intercepts are not invariant.
Please advise on the analytical steps needed to free loadings and thresholds and to estimate the consequences of these freeing steps.

Thank you!

Olev
 Bengt O. Muthen posted on Tuesday, April 16, 2019 - 5:43 pm
We need to see your full output for the run with the message - send to Support along with your license number.