Use the WLS estimator for difference testing. Use the WLSMV estimator for the final model.
Daniel posted on Monday, March 03, 2003 - 12:12 pm
For a theoretical model with all categorical indicators, the fit indices are:
CFI=0.996 TLI=0.996 RMSEA=0.046 SRMR=0.070
which are acceptable. But the chi-square difference test between the theoretical model and the measurement model is significant. In this case, does the chi-square difference test really matter? Is it a crucial index among all fit indices, such that a model must pass it first of all? Any suggestions?
bmuthen posted on Monday, March 03, 2003 - 2:41 pm
Both of your models fit the data reasonably well. However, if the two models differ in important ways substantively, the result of the chi-square difference testing is important. The difference testing is a powerful way to distinguish between models. Note, however, that you should make sure that the two models are nested.
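For estimators with regularly behaved chi-squares (e.g., ML or WLS), the difference test described here amounts to subtracting the two chi-square values and the two degrees of freedom and referring the result to a chi-square distribution. A minimal sketch in Python with hypothetical fit statistics (note that this simple subtraction is not valid for WLSMV, where the DIFFTEST option is needed, as discussed later in this thread):

```python
import math

def chi2_sf(x, df):
    # Survival function P(X > x) of a chi-square variable with integer df,
    # built from the upper-incomplete-gamma recurrence
    # Q(a+1, z) = Q(a, z) + z^a * exp(-z) / Gamma(a+1), with z = x/2.
    z = x / 2.0
    if df % 2 == 0:
        a, q = 1.0, math.exp(-z)             # base case Q(1, z): df = 2
    else:
        a, q = 0.5, math.erfc(math.sqrt(z))  # base case Q(1/2, z): df = 1
    while a < df / 2.0:
        q += z ** a * math.exp(-z) / math.gamma(a + 1.0)
        a += 1.0
    return q

# Hypothetical fit results; H0 is the more restricted (nested) model.
chi2_h0, df_h0 = 155.4, 128
chi2_h1, df_h1 = 140.2, 120

diff_chi2 = chi2_h0 - chi2_h1   # difference in chi-square values
diff_df = df_h0 - df_h1         # difference in degrees of freedom
p_value = chi2_sf(diff_chi2, diff_df)
print(round(diff_chi2, 1), diff_df, round(p_value, 3))
```

A non-significant p-value here means the restrictions of H0 do not significantly worsen fit relative to H1.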
These two models are indeed nested. The chi-square difference test becomes non-significant only after allowing residual covariances between the error terms of some indicators that belong to the same latent variable. Can I modify the model in this way? What are the concerns about this practice?
Daniel posted on Friday, March 07, 2003 - 11:41 am
Introducing residual covariances between the error terms of some indicators belonging to the same latent variable can greatly improve the chi-square difference test. However, these residual covariances are not significant. As I understand it, in the output of a final model that one deems acceptable, all factor loadings and path coefficients should be significantly different from zero. However, we are allowed to keep residual covariances of error terms in the model if they offer better fit, no matter whether they are significant or not. Am I right?
bmuthen posted on Monday, March 10, 2003 - 7:35 am
Typically, if a residual covariance gives an improvement in chi-square it is also significant. A good practice is to include in your model only residual covariances that are both significant and make substantive sense.
Daniel posted on Sunday, March 16, 2003 - 10:14 am
According to your suggestion, I use the WLS estimator for chi-square difference testing and the WLSMV estimator for the final model. More generally, however, there are also many fit indices that are based on the chi-square value or the degrees of freedom, e.g.,
NFI (normed fit index), PR (parsimony ratio), PNFI (parsimonious normed fit index), RNFI (relative normed fit index), RPR (relative parsimony ratio), and RPFI (relative parsimonious fit index).
Should I use the chi-square values and degrees of freedom from the WLSMV estimates, or those from the WLS estimates?
If I use those from WLSMV, a problem occurs when calculating PR = dfj/df0, where dfj is the degrees of freedom of the testing model and df0 is that of the baseline model. Theoretically PR should be smaller than one, but the actual calculation gives a value greater than one.
If we should use the chi-square values from the WLS estimates, under what circumstances can the values from WLSMV be used?
For difference testing, use WLS. For all else, I would use WLSMV. RMSEA, CFI, and TLI have been investigated for WLSMV by one of Bengt's students, and they seem to work well. The other measures have not been studied as far as I know. I would definitely not use PR with WLSMV because the degrees of freedom are not calculated in the regular way and do not have the same meaning. I am sure that PR was developed for degrees of freedom calculated in the regular way.
Anonymous posted on Thursday, October 07, 2004 - 8:22 am
Sorry, but what does the chi-square difference test mean? WLSMV is not accepted for this test. Does the difference test mean that, perhaps using WLS to assess the quality of your model, you compute [(chi-square of the 0-model minus df of the 0-model) minus (chi-square of the hypothesized model minus df of the hypothesized model)] / (chi-square of the 0-model minus df of the 0-model) to obtain CFI? If the values of the hypothesized model are lower than those of the 0-model, this means a high CFI. Is it not allowed to use WLSMV for a difference test because the df are computed in another way (User's Guide, p. 358)? But if that is the case, what does the CFI mean when WLSMV is used? Or does the chi-square difference test mean something completely different from what I wrote?
Mplus Version 3 now has a procedure for doing chi-square difference testing using WLSMV. Yes, the problem was that the degrees of freedom are not computed in the regular way.
Anonymous posted on Wednesday, October 13, 2004 - 12:38 am
So, does the difference test mean the comparison between the 0-model and the hypothesized model, and is the comparative fit index used to check the quality of your model such a difference test?
bmuthen posted on Thursday, October 14, 2004 - 11:40 am
The difference test refers to a comparison between H0 and H1, where H0 is nested within H1. Here, H0 is the model you are focusing on. H1 can be any less restricted model. With CFI, H1 is the completely unrestricted model.
Anonymous posted on Monday, December 06, 2004 - 10:44 am
I want to use the DIFFTEST command in Mplus 3 to perform a difference test of two nested models (with categorical data). Can you point me to the reference that describes the analytics of the procedure (I will need to cite it)? Has any simulation work been done that I can cite as well? Thank you.
bmuthen posted on Monday, December 06, 2004 - 11:38 am
This method is equivalent to the difference testing that was previously implemented in Mplus for H1/H0 model testing (see Muthen, DuToit, Spisic). We have conducted many simulations but there is no reference yet. Simulations of this kind are unfortunately not very easy to do currently in Mplus.
I am using the chi-square difference test under WLSMV. Can you briefly explain why the two-step procedure is necessary? Are there any references on this?
My second question concerns invariance testing in MGA with categorical outcomes under WLSMV. For correct interpretation of group means, invariance of intercepts and loadings must hold, right? But what about the thresholds? Can the thresholds be free while the loadings are constant over groups? In other words, is it possible to test separately for invariance of thresholds and invariance of loadings? Millsap & Tein (2004) talk about a minimum number of thresholds being held constant for identification, while others let all thresholds vary over groups. What do you think is the best way of doing invariance testing? And in terms of identification, is it possible to set all thresholds free?
Technical appendix 4 discusses estimators in Mplus. You can request the Muthen, DuToit, and Spisic paper from firstname.lastname@example.org.
We recommend invariance testing where thresholds and intercepts are held equal, or not, in tandem. One reason is that item characteristic curves are based on both of these parameters. Not all thresholds can be freed.
I have added a section to Chapter 13 of the Mplus User's Guide that describes the steps we recommend for testing measurement invariance for continuous and for categorical outcomes.
When I report a chi-square for a CFA (or another kind of model), I routinely use the little correction of "chi-square value divided by its df" (Bollen, 1989). Although purists think it is not the best practice, to me it is a little better than the "always" significant chi-square.
My question is: can this correction be applied to the DIFFTEST in Mplus? Because it is a difference test, this may militate against such a correction. I am not sure how it is calculated, so I wonder what you would think about this practice. Thanks.
Let me backtrack on the last part of my answer. In DIFFTEST, the df printed is not the difference between the number of parameters in the two models compared. Nevertheless, the value printed is a chi-square for the df printed (so the p-value is correct as a function of those two). This would suggest that the chi-square/df descriptive approach you mention is equally motivated here, although I am not saying that I particularly endorse it.
When I tried to use the DIFFTEST procedures for WLSMV to compare 1- and 2-factor EFA models, I got this error: "THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE FILE CONTAINING INFORMATION ABOUT THE H1 MODEL HAS INSUFFICIENT DATA."
Should the WLSMV diff testing method work for EFA?
The DIFFTEST option cannot be used with TYPE=EFA. You would need to do the EFA in a CFA framework to use DIFFTEST.
kc blackwell posted on Friday, September 01, 2006 - 11:35 am
When comparing two nested models (as in a series of invariance analyses) using the DIFFTEST option with WLSMV, is it possible that the chi-square value for the more restrictive model (i.e., with loadings constrained across groups) will be smaller than the chi-square value for the less restrictive model (i.e., a baseline model without these loading constraints), or does this indicate an error on my part?
I have been doing some multiple group analyses using WLSMV and have run into some instances where, according to the Satorra-Bentler scaled (mean-adjusted) chi-square, the CFI, the TLI, and the RMSEA, the constrained model fits better than the unconstrained model. In addition, the estimated df for the constrained model is less than the estimate for the unconstrained model. The difference test runs -- that is, it accepts that the models are nested -- and shows a nonsignificant change in chi-square. It is also the case that the estimates from the unconstrained models (i.e., thresholds and factor loadings) are very close to one another across groups. Here are the results for a CFA model with 4 groups, 3 latent variables, and 2 measured variables.
To clarify, the Satorra-Bentler scaled (mean-adjusted) chi-square is part of the MLM estimator not WLSMV. If you are using WLSMV, the chi-square values cannot be used for difference testing without using the DIFFTEST option which it appears you are using. With WLSMV, you don't expect the chi-square value and the degrees of freedom to behave as with ML, for example.
Thank you for clearing up my confusion about the Satorra-Bentler scaled chi-square. Where can I find an explanation of the calculation of chi-square and degrees of freedom when using the WLSMV estimator? The fact that the df do not seem to correspond in a straightforward way with the number of measured variables and the number of estimated associations among variables is confusing to me.
I am helping prepare a manuscript that reports on some of the analyses described in the post above using WLSMV to accommodate categorical variables in an SEM. In the methods section of the paper, we say that we used WLSMV estimates of chi-square and the derivatives difference test for change in model fit with nested models. Given that some of our reported results will still seem odd to readers familiar with ML estimates, I am considering adding the following footnote:
"Degrees of freedom for the model fit chi-square test are themselves mean- and variance-adjusted when using the WLSMV estimator and do not correspond in a straightforward way with the numbers of measured variables and estimated parameters. This leads to some values that may appear counterintuitive (e.g., nested models where the estimated degrees of freedom for the constrained model are the same as or fewer than those for the unconstrained model). Also, the difference in model fit for nested models that is based on the derivatives difference test does not correspond directly with the differences in estimated chi-square and degrees of freedom between the constrained and unconstrained models."
Is this accurate? Even after reading the technical appendix and the Muthen, du Toit, and Spisic paper, I am still a little fuzzy on what is going on with the DIFFTEST and the chi-square and df estimates.
I would like to use DIFFTEST to examine the chi-square difference for two models, with all categorical data and using WLSMV estimation. The H0 model has five first-order correlated factors, and the H1 model is the more restricted model with a second-order factor (see below). Technically these models are nested, but the software keeps giving me the message "THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL". Is there a solution for calculating the chi-square difference between these two models when using WLSMV estimation?
Here are the two models I would like to compare:
H1 MODEL:
ANALYSIS: ESTIMATOR = WLSMV;
MODEL:
F1 BY y2 y5 y6 y7;
F2 BY y8 y11 y15 y16;
F3 BY y27 y32;
F4 BY y18 y21 y23 y24;
F5 BY y35 y36 y37 y38;
F BY family friends living school self;
SAVEDATA: DIFFTEST IS test.dat;
H0 MODEL:
ANALYSIS: ESTIMATOR = WLSMV;
DIFFTEST IS test.dat;
MODEL:
F1 BY y2 y5 y6 y7;
F2 BY y8 y11 y15 y16;
F3 BY y27 y32;
F4 BY y18 y21 y23 y24;
F5 BY y35 y36 y37 y38;
Linda, thanks for pointing that out. I had indeed incorrectly saved the derivatives of the more restrictive model first. The difftest works fine when based on the derivatives of the less restrictive model. Thanks again.
I am performing a DIFFTEST to assess the extent to which there are gender differences in individual regression parameters in a path model. My H1 model is a saturated model. In my H0 model, I have constrained 1 parameter to be equal across groups by including "y1 ON x1 (1);" in the model command. My output reads:
Chi-square test for difference testing
Value 0.566
Degrees of Freedom 1**
P-Value 0.4519
Does the p-value of .4519 indicate that this individual regression parameter is not significantly different by gender?
The interpretation of difference testing is described in Chapter 13 under Model Difference Testing. A non-significant result indicates that constraining the parameter to be equal in both groups does not significantly worsen model fit. This indicates that the parameter is not different across the two groups.
I have a path model with 6 continuous endogenous variables being estimated using ML. I am interested in testing the extent to which there is a significant improvement in model fit with the addition of a single dichotomous mediating variable. I run into a problem because my nested model is estimated using ML while my comparison model must be estimated using WLSMV because of the categorical mediating variable. Which chi-square difference testing procedure would be appropriate in this situation?
On a related note, is it possible to test the significance of the change in R square of the outcome variables individually rather than testing the change in the overall fit of the model ?
A necessary condition for models to be nested is that they have the same set of observed variables. You should include the same set of observed variables in both models and fix the regression of the distal to zero in one and allow it to be estimated in the other. With WLSMV, note that the DIFFTEST option is needed for chi-square difference testing. See Example 12.12 in the Mplus user's guide.
R-square is not a test of model fit so I would not use it in that way.
I have a data set of dichotomous variables, have been analysing them using WLSMV, and have compared the goodness of fit of a number of different models relative to a baseline model using the chi-square difference test. Following your comments to Julien on March 20-21, 2006, I have been using the DIFFTEST chi-square divided by its degrees of freedom as the comparison value when comparing the fit of models. You suggested in your response that the df for this measure makes this division valid. (It also makes a difference to my results whether or not the chi-square value is reduced in this way.)
I have a couple of questions:
- Could you please confirm that the difference test chi-square values can be validly compared when divided by the df? A reviewer of our paper has strongly criticised our use of this measure.
- Is it possible to derive an AIC or similar value to allow us to compare the relative fit of pairs of non-nested models?
- Is there any way in which I could obtain a 95% confidence interval for the RMSEA when doing these analyses?
Following on from your response: when I use the difference test to compare the goodness of fit of two non-nested models relative to the baseline model, the DIFFTEST chi-square values are associated with different numbers of degrees of freedom. Can these chi-square values (both highly significant) be compared without any adjustment? That's where I had thought the division by degrees of freedom was appropriate. Is there any other way I can make a quantitative statement about their relative goodness of fit? (The variables in the data set are all dichotomous.)
If you are using the DIFFTEST option, you must be comparing two nested models and using WLSMV. With WLSMV, only the p-value is meaningful. The chi-square and degrees of freedom are adjusted to obtain a correct p-value. That is why you need to use the DIFFTEST option for chi-square difference testing.
There are more fit measures than chi-square to consider. I would look at those in addition to chi-square. If you have a very large sample, it may be that chi-square is sensitive to model misfit.
I am doing a MGFA using WLSM (delta parameterization) to test invariance across 6 groups; the responses are all categorical. I am doing a chi-square difference test comparing a baseline model (thresholds, factor loadings, factor variances, and covariances freed across groups) with a restricted model (parameters constrained equal across groups). Number of groups = 6; number of observations: Group SE = 321, Group GN = 166, Group PN = 196, Group PT = 161, Group DM = 160, Group PPP = 251; number of dependent variables = 36; number of independent variables = 0; number of continuous latent variables = 3. The chi-square for the baseline model is 6478.376 (df = 3561) and the chi-square for the restricted model is 6465.057 (df = 3726). Is it possible to have a baseline model with a higher chi-square and a lower number of df than the restricted model? It seems strange to have a negative chi-square and positive df for a chi-square difference test.
To continue from the above message: I am having the same issue with only 2 groups (same data set -- I thought it might be the complexity of the model). My chi-square for the baseline model is greater than the chi-square for the restricted model, but the df for the baseline model is less than the df for the restricted model. I don't think it is my syntax or coding, because I use the exact same syntax on a different data set and don't have the issue. Any ideas would be greatly appreciated.
Hi, I'm fitting a CFA model with binary indicators, and I test nested models using the DIFFTEST option.
I have two questions:
1. Mplus prints an overall fit of the model, including a chi-square. In my case, chi-square(128) = 155.389, p = .05.
Can I interpret this overall fit as I would in the case of continuous data (even though I can't use it for chi-square difference testing)? I mean: for continuous data, I would conclude that, given that my sample size is large, this chi-square indicates a well-fitting model.
(Note that CFI=.999, RMSEA=.013; this indicates good fit as well, but I particularly want to know whether the overall chi-square fit can be interpreted as usual).
2. I struggle with how to report the results of the chi-square difference tests that I get when I use the DIFFTEST option because the degrees of freedom are not equal to the number of parameters that I constrain...
Hi, I'm fitting genetic models with binary data. I use WLSMV and the MODEL CONSTRAINT option. I can't interpret the chi-square or use it for model comparison, but I also can't use the DIFFTEST option because it is incompatible with the MODEL CONSTRAINT option.
The sample size is not that large for categorical outcomes. I would need to see the whole picture to comment further. If you send the output and your license number to email@example.com, I can take a look at it.
Hi Linda and/or Bengt, I have run into a problem conducting difference tests for models including categorical indicators. We have compared a number of nested models successfully (that use identical measurement models but differ in terms of the paths included in the structural model) but with one comparison in particular Mplus tells us that the models are not nested and we are certain that they are (the one simply frees up 4 paths in the structural model to be estimated that aren't included in the comparison model). Please let me know if you need me to send you the outputs from each of the two models, and the data file and the derivatives. Thanks! Rick
You should first run the model that you want to be the H1 model as the analysis model, using the DIFFTEST option in the SAVEDATA command. Then run the H0 model using the DIFFTEST option in the ANALYSIS command.
The following paper may discuss the saturated model in Mplus:
Prescott, C.A. (2004). Using the Mplus computer program to estimate models for continuous and categorical data from twins. Behavior Genetics, 34, 17-40.
Jason Bond posted on Tuesday, December 08, 2009 - 12:42 pm
I'm attempting to assess whether the addition of a single dichotomous indicator of a factor to a number of other dichotomous indicators improves model fit. Alternatively, I guess this question could be formulated as "is such a question answerable using a typical chi-squared difference test?"
If so, then from the first post above, your suggestion is to use WLS. But you have also mentioned in a post above that, when using chi-squared difference testing (specifically the DIFFTEST option, which only applies to the WLSMV estimator), the same set of variables should be in both models. So should I keep the single additional craving variable in the model when considering it, or simply exclude it from consideration (i.e., not include it in the USEVARIABLES list, the CATEGORICAL list, or the MODEL statement)?
My concern is that, when I do the latter, the H1 model chi-square produced is larger than the H0 model chi-square and has more degrees of freedom (due to all of the excluded covariances, I imagine, plus the excluded path), whereas traditional chi-squared difference testing of nested models exhibits the reverse. But in doing the former, should I also fix all other model parameters associated with craving to zero as well (i.e., tau)?
I would take the former approach but still include parameters for the mean (or threshold) and variance (if any) of the variable.
Jason Bond posted on Wednesday, December 09, 2009 - 10:45 am
So when you refer to the variance for the variable, I'm assuming that you are referring to the additional dichotomous manifest variable (i.e., craving) and not the factor variance? However, in looking through the output and the TECH1 output parameters, I don't see variances anywhere for the dependent dichotomous indicators (i.e., the THETA matrix). Is this something that can be allowed?
My goal is to assess the contribution of an additional variable above and beyond the other variables in the model. As it is correlated with the other variables already in the model, the chi-squared produced by fixing its factor loading to 0 is massive. However, the question of whether the additional variable contributes anything to the model fit above and beyond the other variables doesn't seem to quite be answered by this approach.
Others considering this question have performed analyses with and without the additional variable included and compared the usual fit measures (BIC, RMSEA, information curves, etc.) across the two models. Which do you think might be more relevant? Thanks much again Bengt.
In this case there is no variance - that's what I meant by "if any".
Depends on the question - see below.
I think the problem is formulated in an awkward way - I don't think one should think about whether or not adding an indicator improves model fit. Instead, think about whether or not it adds important information for the factor (assuming the model still fits). That question could be answered by information functions. And could be answered by reduction of SEs in structural relations that the factor is involved in.
I read the comments regarding the calculation of degrees of freedom for chi-square difference testing using WLSMV. I understand that the chi-square and degrees of freedom are adjusted to obtain a correct p-value, and that they are not what one would perhaps expect to see.
However, what is the formula for calculating the degrees of freedom for chi-square difference testing using WLSMV? (The Technical Appendices do not state it, and neither do Satorra and Bentler, unless I am missing something; nor is it the difference in the degrees of freedom of the two nested models.)
I understand that the degrees of freedom for chi-square for model fit are calculated according to Appendix 4, (110).
When you stated on Monday, June 19, 2006 that "The DIFFTEST option cannot be used with TYPE=EFA; You would need to do an EFA in a CFA framework to use DIFFTEST," does that mean that I should run the resulting EFA models in CFA in order to get the correct MLR scaling factor to use in the Satorra-Bentler modification?
Or can I use the MLR scaling factor that is automatically displayed below the chi-square in my categorical EFA with the default estimator WLSM?
The technical appendix at http://statmodel.com/chidiff.shtml does not specify what to do for WLSM estimation although the output warning states that "MLM, MLR and WLSM chi-square difference testing is described in the Mplus Technical Appendices at www.statmodel.com. See chi-square difference testing in the index of the Mplus User's Guide."
Hi, I want to test measurement invariance between nested models using DIFFTEST. The problem is, I have a massive dataset (250+K), so the tiniest differences are going to come up as significant. Is there any way I can specify the amount the models need to be different by? E.g. test the probability that the models differ by more than say, 10%?
Or alternatively, would any other tests (say, change in CFI or TLI) be useful? (And how does one request these in Mplus syntax?)
My other option is to take random samples from my whole sample, but I'd like to attempt it on the whole sample if possible. Thanks!
I have 19 ordinal items as indicators of 3 latent factors using WLSMV (as determined by EFA, promax rotation). I would like to test this model against a different sample, to see if the 3 factor structure holds. From the postings and the Mplus manual, it seems that I would run one CFA using both groups, specifying 3 factors, and would then run the same model but specify that the factor covariances across both groups are equivalent [i.e., f1 WITH f2 f3 (1)]. I would then use the difftest function to determine if the two models are significantly different. Is this correct? I am very new to Mplus, so I have also included a snippet of my code:
For the H0 model:
GROUPING IS group (0=n561 1=n562);
MODEL:
F1 BY B10_in B19_in B28_in B54_in B61_in B66_in B71_in B79_in B80_in;
F2 BY B9_sb B18_sb B27_sb B36_sb;
F3 BY B3_wm B39_wm B48_wm B63_wm B73_wm B78_wm;
ANALYSIS: ESTIMATOR = WLSMV;
SAVEDATA: DIFFTEST IS deriv_561_562_9in_6wm_4sh.dat;
For the H1 model:
GROUPING IS group (0=n561 1=n562);
MODEL: !same as above plus next line
F1 WITH F2 F3 (1);
ANALYSIS: DIFFTEST IS deriv_561_562_9in_6wm_4sh.dat;
This looks correct. You can also consider comparing variances. See multiple group analysis in the Topic 1 and Topic 2 course handouts for measurement invariance and the comparison of structural parameters.
Thank you for your reply. For this same data set, I would like to test the invariance of factor loadings between these two independent samples (19 categorical indicators, 3 continuous latent variables). From the course handouts you mention, it seems that the default in Mplus is to hold the factor loadings equal between groups. In order to test the factor loadings between groups, it seems that I would create an overall analysis model for both groups (as I did in my H0 model above) and run this as a CFA, saving the derivatives (using DIFFTEST). For my second CFA (and the subsequent chi-square difference test), I would include a group-specific model in addition to the overall analysis model, but the group-specific model would need to specify that the factor loadings are to be freely estimated. According to the manual, it looks like I would list each indicator with a * to allow the factor loadings to be freely estimated. However, on the slides (#212, Topic 1) it appears that I should list the indicators in brackets. Can you please explain?
I went ahead and modeled my code after slides #212-213. However I receive the following error when attempting to run the second model and difftest: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.
Hello. I am testing measurement and structural invariance across two groups using WLSMV. DIFFTEST works for the measurement invariance tests, but as I move from scalar to factor variance invariance, DIFFTEST gives the error: the chi-square difference test could not be computed because the H0 model is not nested in the H1 model. Could you please advise?
I have just started using Mplus (and doing SEM and path analysis). I am a bit confused about the fit of the model. My model consists of a categorical outcome variable and mostly categorical variables, with only one continuous variable.
Usevariables are x1 x2 x3 x4 x5 x6 y1 x7 x8;
Categorical are x1 x2 x3 x4 x5 x6;
MODEL:
aliend BY x4 x5 x6;
alienp BY x1 x2;
alienp WITH y1;
alienp WITH aliend;
y1 WITH aliend;
alienp y1 aliend ON x7 x8;
x3 ON alienp x8 y1 aliend;
Chi Square test of model: 105.518* df: 16 p: 0.000
RMSEA Estimate: 0.022 90% C.I.: 0.018 0.026
CFI: 0.983 TLI: 0.963
Chi-Square Test of Model fit for the Baseline Model: 5274.423 df: 35 p: 0.000
Since the chi-square test is significant, does that mean that the model is not fitting well? However, I thought the CFI and TLI were showing good fit.
CFI is a less stringent fit statistic than chi-square. If you are new to both Mplus and SEM, I suggest listening to our Topic 1 course video on the website and getting an SEM book. A good one for beginners is the one by Rex Kline.
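As an aside, the CFI and TLI reported in the output quoted above can be reproduced by hand from the model and baseline chi-squares using the standard formulas (CFI = 1 − max(χ²_T − df_T, 0) / max(χ²_T − df_T, χ²_B − df_B, 0); TLI compares the χ²/df ratios). A sketch in Python:

```python
# Values taken from the output quoted in the post above.
chi2_t, df_t = 105.518, 16      # tested model
chi2_b, df_b = 5274.423, 35     # baseline (independence) model

# Comparative fit index.
cfi = 1.0 - max(chi2_t - df_t, 0.0) / max(chi2_t - df_t, chi2_b - df_b, 0.0)

# Tucker-Lewis index.
tli = ((chi2_b / df_b) - (chi2_t / df_t)) / ((chi2_b / df_b) - 1.0)

print(round(cfi, 3), round(tli, 3))  # matches the reported 0.983 and 0.963
```

This illustrates why a significant chi-square can coexist with high CFI/TLI: the model chi-square is tiny relative to the baseline chi-square even when it exceeds its own df.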
Tanya posted on Tuesday, September 27, 2011 - 9:58 am
will do that... thanks a lot!
Kathy posted on Tuesday, October 04, 2011 - 2:08 pm
In conducting a MGFA I found non-invariance of the factor loadings/thresholds across groups (p<.001) but the CFI and RMSEA values were unchanged between the baseline model and the loading/threshold model. In other words, the difference test indicated that constraining the loadings/threshold equal across groups resulted in a decrease in the fit of the model, but the goodness-of-fit values suggest no such decrease in model fit. The same thing has happened in several other analyses. Why would the goodness-of-fit indicate no change? Which values do you pay attention to, i.e. is there really a decrease in model fit?
The default in Mplus is for the thresholds and factor loadings to be held equal across classes. So you should be relaxing, not imposing, these constraints. See the Topic 2 course handout under multiple group analysis to see how to do this. See also the multiple group discussion in Chapter 14 of the user's guide.
Kathy posted on Tuesday, October 04, 2011 - 5:21 pm
In accordance with Chapter 14, my baseline model has the loadings/thresholds freed across groups, and in what I called the "loading/threshold" model the parameters were held equal (the Mplus default). Is this not right? At any rate, I found non-invariance between these two models according to DIFFTEST (p<.001), but the CFI and RMSEA values were unchanged between the two models. My question pertains to the discrepancy between DIFFTEST and the CFI and RMSEA. That is, DIFFTEST suggests that constraining the loadings/thresholds to be equal decreased the fit of the model, while the CFI and RMSEA suggest that the fit of the model did not change. Why would the goodness-of-fit values indicate no change when DIFFTEST suggests that the model fit decreased? Which values do you pay attention to?
I have been asked by a reviewer to explain how the df are calculated for the chi square difference test (in assessing invariance between a less restrictive CFA model using ordered categorical data and a more restrictive model). I have read the technical appendix for chi-square difference testing on the website, but I am afraid that I do not completely understand it. I have two questions about it. First, I do not see the scaling correction factor for either the less restrictive model (c0) or for the more restrictive model (c1) as part of my Mplus output. Second, I am hoping you can clarify how the scaling correction factor is estimated or calculated. My current understanding is that using the scaling correction is helpful for ensuring that the obtained chi square difference test value approximates a chi square distribution. But I am not entirely sure that I am correct or how the scaling correction is obtained.
The degrees of freedom for a chi-square difference test are the difference in degrees of freedom between the two models. If you don't find the scaling correction factor, you must be using an old version of the program. The formula for the scaling correction factor is in Technical Appendix 4. This cannot be computed by hand.
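For estimators that do print a scaling correction factor (e.g., MLM or MLR), the difference-test arithmetic described on statmodel.com/chidiff.shtml can be sketched in a few lines. The numbers below are hypothetical, not taken from any model in this thread:

```python
def scaled_chisq_diff(T0, c0, df0, T1, c1, df1):
    """Satorra-Bentler scaled chi-square difference test.

    Model 0 is the nested (more restrictive) model, model 1 the
    comparison model; T = printed chi-square value, c = printed
    scaling correction factor, df = degrees of freedom.
    """
    # difference-test scaling correction
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    # scaled difference statistic, referred to a chi-square
    # distribution with df0 - df1 degrees of freedom
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, df0 - df1

# hypothetical chi-squares, scaling factors, and dfs
TRd, df = scaled_chisq_diff(T0=410.5, c0=1.20, df0=80,
                            T1=390.2, c1=1.15, df1=76)
# df = 4; the .05 critical value for 4 df is 9.488
```

Note the general point in the thread: this formula applies to MLM/MLR-type chi-squares, not to WLSMV, where DIFFTEST must be used instead.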
David Kosson posted on Thursday, November 10, 2011 - 1:22 pm
Linda, thanks. I am guessing you are saying that this is the case even if I am using the WLSMV estimator (which I am). But this does not seem to be the case:
For my less restrictive model (allowing the groups to differ on all loadings and thresholds, no mean structure): Chi-Square Value = 251.196*, Degrees of Freedom = 79**
For my more restrictive model (allowing the groups to differ on loadings but not thresholds, no mean structure): Chi-Square Value = 226.604*, Degrees of Freedom = 76**
But for the chi-square difference test: Chi-Square Value = 19.033, Degrees of Freedom = 9**, P-Value = 0.0249
In case it helps, there were 13 indicators, all latent factor means were set at 0, and all scale factors of the indicators were fixed at 1.
If you are using a version before Version 6, the degrees of freedom for WLSMV are not calculated in the regular way. Both chi-square and the degrees of freedom are adjusted to obtain a correct p-value. Neither chi-square nor the degrees of freedom should be interpreted. To do difference testing with WLSMV, you must use the DIFFTEST option. There is no scaling correction factor involved. The difference in the number of free parameters can be used instead of the difference in degrees of freedom.
Eric Chen posted on Wednesday, December 14, 2011 - 12:54 am
Dear Dr. Muthen,
I conduct a multiple group categorical CFA using WLSMV as estimator.
I wonder how to carry out the chi-square difference test when the difference between my H0 and H1 models is a nonlinear constraint.
When completing an EFA on categorical data using the WLSMV estimator, the output includes a section titled 'FACTOR STRUCTURE'. What rotation is used to produce this output? Is it a recalculation of the Geomin rotated loadings also provided?
Dear Mplus Team, if I treat my variables as categorical in multiple group models, taking MLR (not WLSMV) as the estimator (TYPE = MIXTURE) and using the likelihood ratio test (LRT) for model comparisons: should I use (in the case of categorical outcomes and MLR) the formulas on http://www.statmodel.com/chidiff.shtml ("Difference Testing Using the Loglikelihood with MLR"), based on loglikelihood values and scaling correction factors? Or is it better with categorical outcomes to use ML and the ordinary likelihood ratio test for model comparisons? My question is triggered by a posting from Tihomir Asparouhov: "If you are using the MLR estimator with categorical data you should use the unscaled likelihood ratio test. The S-B is designed to be used for the case when you are treating the variables as continuous." ( http://www.statmodel.com/discussion/messages/9/189.html ) Does this mean: just take the MLR likelihood values and calculate -2*(L0 - L1), without the difference-test scaling correction?
What he meant is that MLR should be used when categorical variables are treated as continuous. If categorical variables are treated as categorical, then ML should work fine. If you use MLR, the scaling correction factor is always required.
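For the MLR case, the loglikelihood-based difference test on statmodel.com/chidiff.shtml uses the H0 loglikelihoods, their scaling correction factors, and the numbers of free parameters. A minimal sketch with hypothetical values:

```python
def mlr_loglik_diff(L0, c0, p0, L1, c1, p1):
    """Loglikelihood-based Satorra-Bentler scaled difference test for MLR.

    Model 0 is the nested model (fewer free parameters p0), model 1
    the comparison model; L = loglikelihood, c = scaling correction
    factor, p = number of free parameters.
    """
    # difference-test scaling correction based on parameter counts
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)
    # scaled difference statistic with p1 - p0 degrees of freedom
    TRd = -2 * (L0 - L1) / cd
    return TRd, p1 - p0

# hypothetical loglikelihoods, scaling factors, and parameter counts
TRd, df = mlr_loglik_diff(L0=-2560.3, c0=1.45, p0=20,
                          L1=-2551.1, c1=1.41, p1=24)
# df = 4; compare TRd to the chi-square critical value for 4 df
```

With plain ML (no scaling correction) the same calculation reduces to -2*(L0 - L1) with p1 - p0 degrees of freedom.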
We are trying to conduct the DIFFTEST for a model using WLSMV, and are getting this error message:
THE MODEL ESTIMATION TERMINATED NORMALLY
THE CHI-SQUARE COMPUTATION COULD NOT BE COMPLETED BECAUSE OF A SINGULAR MATRIX.
In a search of the discussion forum, it looks like this problem usually leads to y'all asking to see the input and data, but in this case we wouldn't be able to do that because some of the data can't be shared due to a legal agreement. Are there any other options we could pursue?
Thanks, Bengt. We managed to resolve that specific problem but can't get past messages that the models aren't nested (when as far as we can tell they are), so we will send the outputs in case you can help.
I am having the same problem as a few other people in this thread where I'm trying to conduct a chi-square difference test between two models that I believe are nested, where the indicators are categorical and the estimator is ULSMV. The error message is the following:
THE MODEL ESTIMATION TERMINATED NORMALLY
THE CHI-SQUARE COMPUTATION COULD NOT BE COMPLETED BECAUSE OF A SINGULAR MATRIX.
I am using the method described on your site for ULSMV/WLSMV estimators.
To compare nested models with MLR and categorical dependent variables, there is no scaling correction factor in the output (Mplus V 7.2). A "chi-square test of model fit for the binary and ordered categorical outcomes" is provided. Is it allowable to do a traditional chi-square difference test to compare nested models without the scaling correction factor?
Please send the full output to firstname.lastname@example.org along with your license number so we can see your exact situation.
Lois Downey posted on Thursday, September 03, 2015 - 7:41 am
I am using WLSMV and DIFFTEST in an exploratory investigation of whether there are regional differences in various categorical outcomes. Region is an 11-category nominal variable, and each model uses 10 dummy indicators as predictors of one of the outcomes of interest. However, the p-value of the chi-square difference test differs considerably depending upon which region I use as the reference group.
For my final models, I've been using the category with the lowest coefficient as the reference group, thus ensuring that the coefficient estimates are all positive. Is this a reasonable strategy? Or is there a better rule of thumb for selecting the reference group in an exploratory study, given that the result depends on which region is selected?
You want to test that all of them are zero jointly:

MODEL TEST:
0 = b1;
...
0 = b6;

You can do that using a DO loop:

MODEL TEST:
DO(1,6) 0 = b#;
Lois Downey posted on Thursday, September 03, 2015 - 10:29 pm
Oh, I see. Thanks!
Although this method gives p-values for the Wald tests that are similar when the reference category is altered, they don't match exactly. For example, looking at one outcome, I get the following p-values for omnibus tests for 5 sets of dummy indicators, depending upon the reference group selected:
0.6501 vs. 0.6523
0.4873 vs. 0.5016
0.4788 vs. 0.4786
0.3385 vs. 0.3534
0.0446 vs. 0.0447
(I perhaps should have mentioned that these are complex regressions, although I don't know whether that's relevant.)
If I use the MLR estimator rather than WLSMV, and the log likelihood and scaling factor to compute the p-value for the omnibus test, I get the following values for the 5 predictors above (irrespective of which category is used as the reference group): 0.0670 0.5917 0.4712 0.2420 0.0703
The discrepancies between the results with MLR (which is the estimator I've typically used in the past) and WLSMV are of concern, making me think that I should use MLR for my current analyses. Do you agree?
Please send input, output, and data for a relevant WLSMV vs MLR comparison so we can take a look at it. Send as little as possible to pinpoint their differences.
Daniel Lee posted on Friday, April 22, 2016 - 6:58 pm
Hello Dr. Muthen,
I used the modification indices from a categorical EFA (WLSMV) and removed an item that was contributing a lot of model misfit. After removing the item, I would like to conduct a DIFFTEST (as you normally would for two models with categorical indicators), but the deriv.dat file would not save. The error message I get when I specify "SAVEDATA: DIFFTEST IS deriv.dat;" for the baseline EFA model is:
*** WARNING in SAVEDATA command The DIFFTEST option is not available for TYPE=EFA. Note that the DIFFTEST option is available with the use of EFA factors (ESEM). Request for DIFFTEST will be ignored.
I would appreciate your guidance and resources for conducting difftests in categorical EFA models.
I am running a cross-lagged autoregressive model with two main categorical variables and some continuous covariates. To handle missing data, I am using the ML estimator with Monte Carlo integration. I would like to compare nested models; however, DIFFTEST is not allowed with the ML estimator. Should I use a different estimator (WLS?) just for the model comparison?
1) In my cross-lagged autoregressive models all variables are categorical (estimator ML), so I assume I do not need to calculate a correction factor when I calculate chi-square from the loglikelihoods?
2) Could you recommend a reference paper for calculating chi-square using the loglikelihoods of nested models?
3) Also, the output does not give any fit indices. Is there a way to know if my baseline model fits the data well?
Not really, because you no longer have a frequency table to test the model against. Instead, you can think about the restrictions that the model imposes - such as only lag-1 relationships - and free up those restrictions to see if that model has a better logL.
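For the ordinary (unscaled) likelihood-ratio test asked about above, the arithmetic is just -2 times the loglikelihood difference, with degrees of freedom equal to the difference in the number of free parameters. A minimal sketch with hypothetical values:

```python
def lr_test(L0, p0, L1, p1):
    """Ordinary likelihood-ratio test for two nested ML models.

    Model 0 is the nested model (fewer free parameters);
    L = loglikelihood, p = number of free parameters.
    """
    chisq = -2.0 * (L0 - L1)   # equivalently 2 * (L1 - L0)
    df = p1 - p0               # degrees of freedom for the test
    return chisq, df

# hypothetical values: nested model logL -1834.6 (12 parameters),
# comparison model logL -1828.9 (15 parameters)
chisq, df = lr_test(L0=-1834.6, p0=12, L1=-1828.9, p1=15)
# df = 3; the .05 critical value for 3 df is 7.815
```

This is the plain ML case; with MLR the scaled version using the correction factors would apply instead.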
Is there a way to inspect for outliers under the Bayes estimator in mplus? I see in Lee's (2007) text on bayesian structural equation modeling there is a suggestion to inspect the residuals for outliers and a qq-plot for normality to check the fit of the model. Thanks!
I am doing multiple group CFAs to test for configural and metric invariance of a scale across three groups. I want to compare the configural and the metric model using chi-square difference testing. Since we are using the MLR estimator, we have to calculate the Satorra-Bentler scaled chi-square difference test (TRd) as indicated on the website. In order to do so we need to use the scaling correction factor. We are however not sure which scaling correction factor to use, as the output reports several: on the one hand, two scaling correction factors (H0 and H1) under the heading 'Loglikelihood', and on the other hand a scaling correction factor under the heading 'Chi-Square Test of Model Fit'. Which one should we use?
Thank you for your answer. However, as far as I know, no chi-square difference test is printed. I calculated the difference test using the formulas on your website for an MLR estimator: https://www.statmodel.com/chidiff.shtml.
I would like to compare a second-order factor model with a two-factor model. However, I always get the error: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL.
My syntax is:

Run 1:
MODEL:
CI BY G_1 G_3 G_5 G_6;
PE BY G_2 G_4 G_7 G_8;
SAVEDATA: DIFFTEST IS X2.dat;

Run 2:
ANALYSIS: DIFFTEST IS X2.dat;
MODEL:
CI BY G_1 G_3 G_5 G_6;
PE BY G_2 G_4 G_7 G_8;
GR BY CI* PE;
GR@1;
CI PE (1);
I also tried running the second-order factor model first, but got the same error message.
Hi again, I just realized that the second-order factor does not influence the model fit: the two models show exactly the same fit indices. Is there another way to find out which model fits the data better, the second-order factor model or the two-factor model?
A second-order factor model is not testable unless you have at least 4 first-order factors. What the second-order factor model does is to put restrictions on the factor covariance matrix of the first-order factors. With 3 first-order factors, the second-order part has as many parameters as the 3 covariance elements in that matrix, so fit is the same. With only 2 first-order factors as indicators, the model is not identified - one factor covariance cannot identify a loading and a factor variance, nor two loadings as in your case.
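The parameter counting behind this answer can be sketched as follows, assuming one common parameterization in which the second-order factor variance is fixed at 1 and each first-order factor gets a second-order loading plus a disturbance variance (a back-of-the-envelope sketch, not Mplus output):

```python
def second_order_df_gain(k):
    """Degrees of freedom gained by replacing the free covariance
    matrix of k first-order factors with a second-order structure:
    k*(k+1)/2 free variances/covariances versus k second-order
    loadings plus k disturbance variances (second-order factor
    variance fixed at 1)."""
    cov_elements = k * (k + 1) // 2
    structure_params = 2 * k
    return cov_elements - structure_params

for k in (2, 3, 4):
    print(k, second_order_df_gain(k))
# k=2: -1 (not identified), k=3: 0 (just-identified, identical fit),
# k=4: 2 (the second-order restriction becomes testable)
```

This matches the pattern described above: with 2 first-order factors the structure is under-identified, with 3 it is just-identified and fits identically, and only from 4 onward does the second-order model impose testable restrictions.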