Sally Czaja posted on Friday, August 18, 2006 - 11:09 am
I am using the procedures for WLSMV to compare models that I feel sure are nested (exactly the same except that one path is removed in the H0), but Mplus is telling me that the H0 model is not nested in the H1 model. It reports the same degrees of freedom for both models, which I also find puzzling. Any advice? These are the models:
H1 ANALYSIS: TYPE=general missing h1; MODEL: y4 ON y1 y2 y3 x1; y1 ON cont1 cont2 x1; y2 ON cont3 x1; y3 on cont1 cont3 x1; y1 WITH y2; y2 WITH y3; y1 WITH y3;
H0 ANALYSIS: TYPE=general missing h1; MODEL: y4 ON y1 y2 y3 x1@0; y1 ON cont1 cont2 x1; y2 ON cont3 x1; y3 on cont1 cont3 x1; y1 WITH y2; y2 WITH y3; y1 WITH y3;
When using DIFFTEST, are you sure you are not putting the H0 model in the place where the H1 model should be? To check nesting, Mplus simply compares the fitting function values at the optimum (lower is better): the model with a lower value cannot be nested within a model with a higher value. A model with one parameter fixed cannot have a lower (better) fitting function value than the corresponding model with that parameter free. The fitting function values can be seen in TECH5, left column.
Sally Czaja posted on Monday, August 21, 2006 - 6:33 am
Thank you for your response. I feel certain that I am not switching the models but to clarify, I am saving the data file when I run the full model (which should be the better fit) and then running the DIFFTEST on the trimmed model with the fixed parameter. Is this correct?
Sounds right - see ex 12.12 in the User's Guide. Also check TECH1 to see the parameters used. If that doesn't help, you need to send your input, output, data, and license number to email@example.com.
Sally Czaja posted on Monday, August 21, 2006 - 7:18 am
Thank you! Ex 12.12 solved the problem. I was using the FILE IS command for saving the data file (rather than DIFFTEST IS).
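For anyone hitting the same snag: following ex 12.12, the two-run DIFFTEST workflow for the models quoted above would look roughly like this sketch (deriv.dat is an arbitrary file name; the H1 run must come first, and each run is a separate input file):

```
! Run 1: H1 (less restrictive) model - saves derivatives for DIFFTEST
ANALYSIS:  TYPE = GENERAL MISSING H1;  ESTIMATOR = WLSMV;
MODEL:     y4 ON y1 y2 y3 x1;
           y1 ON cont1 cont2 x1;  y2 ON cont3 x1;  y3 ON cont1 cont3 x1;
           y1 WITH y2;  y2 WITH y3;  y1 WITH y3;
SAVEDATA:  DIFFTEST IS deriv.dat;    ! note: DIFFTEST IS, not FILE IS

! Run 2 (separate input file): H0 (more restrictive) model
ANALYSIS:  TYPE = GENERAL MISSING H1;  ESTIMATOR = WLSMV;
           DIFFTEST IS deriv.dat;    ! reads the derivatives saved in Run 1
MODEL:     y4 ON y1 y2 y3 x1@0;      ! the tested path fixed at zero
           y1 ON cont1 cont2 x1;  y2 ON cont3 x1;  y3 ON cont1 cont3 x1;
           y1 WITH y2;  y2 WITH y3;  y1 WITH y3;
```

The second run's output then reports the chi-square difference test for the restriction.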
I am using WLSMV to compare a model with a dichotomous covariate (begenl), including a direct effect (H1), with the same model without the covariate (H0). Mplus does not report the chi-square comparison and says that the H0 model is not nested in the H1 model. Any help on this would be very much appreciated. The models are: H1:
MODEL: ROLEF by fd4 fd7 fd8 fd9; COGNIT by fd11a fd11b fd11c fd11d ; MOBILT by fd13a fd13b fd13c; SLFCARE by fd15a fd15b fd15c; SOCIAL by fd17a fd17b fd17c fd17d fd17e; PARTICI by fd18b fd18c fd18d fd18e fd20 fd21 fd22;
ROLEF COGNIT MOBILT SLFCARE SOCIAL PARTICI ON begenl; fd20 ON begenl;
SAVEDATA: DIFFTEST IS modelh1.dat;
H0: MODEL: ROLEF by fd4 fd7 fd8 fd9; COGNIT by fd11a fd11b fd11c fd11d ; MOBILT by fd13a fd13b fd13c; SLFCARE by fd15a fd15b fd15c; SOCIAL by fd17a fd17b fd17c fd17d fd17e; PARTICI by fd18b fd18c fd18d fd18e fd20 fd21 fd22;
I am using WLSMV to test mediation and I wish to compare models with DIFFTEST. I am comparing a model with two IVs, one mediator, and one (binary) DV:
1) MIN WITH SYM; alc ON MIN; alc ON SYM; alc ON EXP; EXP ON MIN; EXP ON SYM; MODEL INDIRECT: alc IND EXP MIN; alc IND EXP SYM;
with a nested model without the IV MIN: 2) MIN WITH SYM@0; alc ON MIN@0; alc ON SYM; alc ON EXP; EXP ON SYM; EXP ON MIN@0; MODEL INDIRECT: alc IND EXP SYM; alc IND EXP@0 MIN@0;
in order to show that adding MIN makes the model better.
My problem is that the nested model 2) has really bad fit indices compared with the same model calculated without adding MIN at all, rather than constraining its coefficients to 0 (and thus non-nested with 1)):
3) EXP ON SYM; alc ON SYM; alc ON EXP; MODEL INDIRECT: alc IND EXP SYM;
Nevertheless, when I am describing the fit of my models, I suppose I have to take the fit indices from 3), because those in 2) are 'artificially worsened'. But then, why am I allowed to calculate DIFFTEST on the basis of 2), which is of course worse?
Thank you. Now the fit in 2) = the nested one-IV model has become better, but it is still not as good as in 3) = the non-nested one-IV model.
2) alc ON MIN@0; alc ON SYM; alc ON EXP; EXP ON SYM; EXP ON MIN@0; MODEL INDIRECT: alc IND EXP SYM; alc IND EXP@0 MIN@0;
Chi-Square Test of Model Fit: Value = 78.147*, Degrees of Freedom = 32, P-Value = 0.0000; CFI = 0.938, TLI = 0.913
3) EXP ON SYM; alc ON SYM; alc ON EXP; MODEL INDIRECT: alc IND EXP SYM;
Chi-Square Test of Model Fit: Value = 19.108*, Degrees of Freedom = 12, P-Value = 0.0000; CFI = 0.986, TLI = 0.975
So I still have my previous doubt: 1. Is it right to describe the fit of the model with only one IV using the indices from model 3) = non-nested, and not from 2) = nested? 2. If so, then the question arises whether it is right to compute DIFFTEST between model 1) = two IVs and 2), as the latter has worse fit than 3), and thus DIFFTEST is more likely to confirm my hypothesis that 1) is better.
1. The model fit with one IV should have only one IV on the USEV list; otherwise you are also testing the zero restrictions for the other IV.
2. DIFFTEST can only be used when the same USEV variables are used in both models - so model 2) is the correct comparison model to the model with MIN having effects because this tests whether MIN has effects.
I assume that a two factor model with perfect correlation specified between the two factors is then equivalent to a one factor model and the difference between the models' fit can then be tested (using DIFFTEST for WLSMV).
In that case, what is the best way to specify a perfect factor correlation? Would it be, say: f1 ON f2@1; ?
First, you will have to set the metric in the 2-factor model by fixing the factor variances at 1 (@1). Then you say f1 WITH f2@1. See how that works - note that it gives a non-positive definite factor covariance matrix. Note also that you can't have any cross-loadings in the 2-factor model.
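As a sketch of the setup described above (the indicator names y1-y6 are hypothetical), the input would be roughly:

```
! 2-factor model with the metric set by fixing factor variances at 1;
! no cross-loadings allowed
MODEL:  f1 BY y1* y2 y3;
        f2 BY y4* y5 y6;
        f1@1  f2@1;
        f1 WITH f2@1;   ! perfect correlation -> equivalent to a 1-factor model
```

The first loading of each factor is freed (y1*, y4*) because the factor variances, rather than the first loadings, carry the metric here.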
This approach did not seem to work in my case. However, there is some debate among methodologists about whether models with varying numbers of factors are truly nested. Therefore it may be better to compare models using the BIC (i.e., derived using MLR with Monte Carlo integration - the indicators are ordinal).
Is there any way of deriving a significance test for improvement in model fit using the BIC?
Dear Mplus team, I would like to compare the following models containing the same set of observed variables: 1. ERA by inq deg pla col irr peu tri joy des fie sur amu sou int; irr with col; des with tri; sou with pla; joy with fie;
2. POS by pla joy fie amu sou int ; NEG by inq deg col irr peu tri des; irr with col; des with tri; sou with pla; joy with fie; sur with POS; sur with NEG;
Are these models nested? If not, why? In this case, how can I use the BIC to compare the models if there is no significance test for this index? Thank you very much!
Hi, I am using WLSMV to fit a model with a binary dependent variable. Now I am trying to compare another three models with my research model. All models are based on the same indicators; I add some paths in one model, remove some in another, and use a mediator in a third. I have used DIFFTEST, but I got a message saying that the difference cannot be computed because the models are not nested. So how can I compare these non-nested models? Thanks, Mohamed
BIC is for maximum likelihood not weighted least squares. I would think some of your models are nested. Perhaps you are using DIFFTEST incorrectly. You can send the relevant outputs and your license number to firstname.lastname@example.org if you want to check this out. Otherwise, I would see which model seems to have the best overall fit taking all fit indices into account.
Thanks Linda. If I am going to use your last option (seeing which model has the best overall fit taking all fit indices into account), can I have a reference to support this point of view?
Hello, I am having the same problem Gemma detailed above (Gemma vilagut posted on Tuesday, May 08, 2007 - 10:21 am). My models differ by the removal of one path. When I simply remove the path from the input and try to run it, I get a message that the models are not nested. When I constrain the path to 0 as suggested in the response above (Linda K. Muthen posted on Tuesday, May 08, 2007 - 7:52 am), I get the diff test in my output, but my fit indices and parameter estimates are slightly different than they would be if I ran a model that just had the path removed from the input. Could you help me clarify the source of this trouble? Thanks for your time.
Which parameters are the most appropriate to report; the ones that result from removing the path or the ones that result from constraining the path to zero? My interest is in the former, but I'm not sure it's appropriate to report those if the difftest is associated with the latter. Thank you.
ri ri posted on Wednesday, May 06, 2015 - 11:31 am
I am comparing a full vs. partial mediation model. Here are the two syntax forms. In Syntax 1, I added two direct paths to the full mediation model and fixed them at 0 when comparing to the partial model. In Syntax 2, I used the original full mediation model. Is Syntax 1 the right one, since nested models should have the same set of variables and paths? Thanks!
H1 (partial): USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3; CATEGORICAL = TOA Sick3; ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION=THETA; MODEL: SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2; TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW; ! NSW ON SLw;! TOA Sick3 ON NSW; SAVEDATA: DIFFTEST IS deriv3.dat;
H0 (full) Syntax 1: USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3; CATEGORICAL = TOA Sick3; ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION=THETA; MODEL: SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2; TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW; ! NSW ON SLw; TOA ON NSW @0; Sick3 ON NSW @0;
Syntax 2:same as Syntax 1 but without TOA ON NSW @0 and Sick3 ON NSW @0;
USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3; CATEGORICAL = TOA Sick3; ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION=THETA; MODEL: SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2; TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW; ! NSW ON SLw;!
I checked again what Linda posted earlier: if I compare a full vs. partial mediation model, I should ensure the two models have the same set of paths and variables. In this case I should use Syntax 1 instead of Syntax 2, right? I tested my models with both Syntax 1 and 2, and the results are slightly different. I would like a final check with you. Thanks!
You can compare models using WLSMV as long as you have the same IVs and DVs, which your Syntax 2 and H1 models have. So Syntax 1 and 2 are equally good; I don't see offhand why they would give different results.
Dear Mplus team, I have a cross-lagged analysis with 3 variables measured in 2 time points. I want to test the effect of two variables on one another (x1 and x2) and the moderating effect of a third (x3) variable on them. I have a theoretical reason to believe that the interaction between x1 in time 1 and x3 in time 2 influence x2 in time 2.
My question is: Do I need to specify all the possible interactions in the usevariables command in every model that I am comparing, even though these variables don't appear in every model?
I am asking this because when I enter only the variables with theoretical significance in the USEVARIABLES command, I get great model fit indices. When I specify the same model but include all the possible interactions (and there are a lot!) in the USEVARIABLES command, I get very bad model fit indices!
Thank you for your reply. I now realize that I may have had a misunderstanding regarding the meaning of nested models and comparing chi-square fit tests, and I would like to make sure:
In order for a model to be nested within another model, or for two models to be comparable in a chi-square difference test, do the USEVARIABLES commands need to be identical in both models? Or is the only requirement that the nesting model contain all the paths of the nested model plus other paths?
I have a question regarding SEM multigroup analysis with DIFFTEST. I ran a multigroup analysis comparing the chi-square model fit between an unconstrained vs. a constrained model. Is it possible to output a 95% CI for this chi-square difference test statistic?
As I'm using the Mac version of Mplus v. 7, the SAVEDATA command didn't seem to work. So I ran the two models in separate runs and manually computed the chi-square difference test (e.g., the difference in chi-square, p-values, etc.).
However, I'm not sure if Mplus would allow me to output 95% CI corresponding to this DIFFTEST, which I will need to report in my manuscript.
I have a question about comparing two multigroup SEM models. I compare a model that imposes no equality constraints on 3 structural paths with a model that constrains these 3 paths to equality, to determine whether these 3 paths really differ between the 2 groups (all three are significant in one group and non-significant in the other). The constrained model does not decrease model fit much: ΔCFI = .002 and ΔRMSEA = .001. However, when I test for differences in regression slopes individually, using (b1-b2)/sqrt(SEb1^2+SEb2^2), I get a significant difference in 2 of the paths. Is it reasonable to conclude that in complex models (my model has df = 600, N = 1100, and many structural paths), small improvements might not be visible in overall fit indices? (I'm having a hard time finding a reference for this line of thought.)
I don't think the test formula you show is right because if you have equalities across groups you have a violation of independence of parameter estimates. Instead, express the difference in Model Constraint or use Model Test. Or, you can use chi-square difference testing.
But you are probably right that the fit indices may not be able to pick up these differences.
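As a sketch of the Model Test alternative mentioned above (the variable names y and x and the group label g2 are hypothetical), the group-specific slopes can be labeled and then tested with a Wald test:

```
MODEL:      y ON x (b1);   ! slope in the first group
MODEL g2:   y ON x (b2);   ! slope in the second group
MODEL TEST: 0 = b1 - b2;   ! Wald chi-square test of equal slopes
```

With three paths, each pair of slopes gets its own label and a constraint line is added for each difference, giving a joint multi-degree-of-freedom Wald test.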
Hello, there are two ESEM models, both with two factors. The difference is that two of the indicators are left out in Model 2 (as a way of shortening the scale). I read the whole thread and got the impression that the two models are not nested. Do you agree? How can I compare these two models? Given that the observed variables are different in the two models, can I use BIC or AIC? If not, could you suggest any other way to compare them?
Thanks for clarifying. Some books suggest that BIC and AIC can be used to compare non-nested models. However, I read some comments from the Mplus team that with different variables, the metric will be different for BIC and AIC. So can I conclude that with different dependent variables, BIC and AIC cannot be used for model comparison?
Thanks for your quick reply, Bengt. I have some follow-up questions. 1) What exactly would not be chi-square distributed - the logL, the BIC, or the difference of the logLs (I am interested in the latter in order to calculate a test distribution)? 2) What distribution would the logL difference follow, if not chi-square? 3) Is it possible to calculate a corresponding test distribution by bootstrapping? 4) Otherwise, which index would you suggest for deciding between nested Bayes models? Thanks again, Sofie
1) The "Bayesian BIC" that is printed is based on the Bayes estimates, not the ML estimates, so the logL is not an ML-maximized logL but a logL computed at the Bayes estimates. Because of this, taking the approach of a likelihood-ratio chi-square difference test isn't right when based on this Bayesian BIC; it doesn't give a chi-square.
2) This is unknown.
3) Perhaps; that is a research question.
4) I would look at the significance of the extra parameters in the less restrictive model.
I wonder if the DIFFTEST option is a good way to confirm that the H0 model is indeed nested in the H1 model. That is, will DIFFTEST run if and only if the H0 model is nested within the H1 model?
I am comparing two models that I believe are nested, and DIFFTEST is running with no error, but I'm wondering if there is any case where DIFFTEST would run when the H0 model was not actually nested within the H1 model.
I have a more pointed question than the one I asked above. I'm experiencing a problem because I have two models that I believe should be nested, but the results suggest they are not. This is a multigroup CFA model with categorical indicators using WLSMV (theta parameterization). Model H1 has invariant residual variances and loadings, and Model H0 has just invariant residual variances.
In theory these models should be nested, but the model chi-square for H0 is higher, as is the function minimum.
In H1, the loadings are constrained this way: Group 1 Model: f1 BY y1* y2-y4 (L1-L4); Group 2 Model: f1 BY y1* y2-y4 (L1-L4);
In H0, the loadings are constrained this way: Group 1 Model: f1 BY y1* y2-y4; Group 2 Model: f1 BY y1@1 y2-y4;
Do you know why these models do not appear to be nested in the results?
Your last message has me puzzled on two counts. You say "but the model chi-square for H0 is higher" - that's how it should be for H0, because it is the stricter model. Also, your H0 model input does not have its loadings constrained. It is probably better if you send the relevant outputs to Support along with your license number.
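For reference, one conventional DIFFTEST setup for testing loading invariance puts the free-loadings model in the H1 (first-run) position and the labeled, equated loadings in the H0 (second-run) position. This is only a sketch with hypothetical indicators y1-y4 and group label g2; the threshold and scale-factor details that categorical indicators require are omitted:

```
! Run 1: H1 - loadings free across groups; factor variances fixed for identification
MODEL:     f1 BY y1* y2-y4;  f1@1;
MODEL g2:  f1 BY y1* y2-y4;
SAVEDATA:  DIFFTEST IS deriv.dat;

! Run 2: H0 - the same labels in both groups equate the loadings
ANALYSIS:  DIFFTEST IS deriv.dat;
MODEL:     f1 BY y1* y2-y4 (L1-L4);  f1@1;
MODEL g2:  f1 BY y1* y2-y4 (L1-L4);
```

Mentioning the loadings in the group-specific MODEL command frees them from the Mplus default cross-group equality; giving them the same labels in both groups then re-imposes the equality in the H0 run.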
We are running competing CFA models with WLSMV, including correlated-factors, bifactor, and S-1 solutions (S-1 is a modified bifactor model in which one specific factor fewer is specified, so that g is defined by the items of the missing group factor). We understand how to nest the correlated-factors model in the bifactor solution but are having difficulty nesting the correlated model in the S-1. S-1 has more parameters than the correlated model, but its fitting function seems to be worse, so the models cannot be nested with WLSMV. We have tried constraining the H1 model in 2 different ways but had no luck. We also have some residual correlations across all models, and these seem to be causing a singular matrix. 1. Can the correlated model be nested in S-1? 2. How should we interpret the singular matrix warning when the residual correlations are included? 3. Why would the fitting function be worse for a model with more parameters?
thank you very much for this- it was very helpful. Having read the paper and implemented the procedure we wondered if you could just clarify something:
Is the rule about the product of the two larger correlations a theoretical principle that should be followed even if the NET procedure suggests models are nested?
Our correlated model seems to violate this criterion (depending on whether correlated errors are included) so we were expecting that NET would find it to be not nested in all of our bifactor solutions (classical and S-1). This is the case even when we make sure all correlations are positive. Can we check that we should not perform the difftest for any of these models even though NET suggests the correlated model is nested in the classical bifactor?
The rule about the product of the two larger correlations is not a general rule - it only applies when you are looking at a 3x3 matrix equivalence to a one factor analysis model. The rule doesn't apply to other situations. If the NET procedure concludes that the models are nested I would trust that.