
Sally Czaja posted on Friday, August 18, 2006 - 11:09 am



I am using procedures for WLSMV to compare models that I feel sure are nested (exactly the same except one path is removed in the H0), but Mplus is telling me that the H0 model is not nested in the H1 model. It reports the same degrees of freedom for both models, which I also find puzzling. Any advice? These are the models:

H1:
ANALYSIS: TYPE = general missing h1;
MODEL:
  y4 ON y1 y2 y3 x1;
  y1 ON cont1 cont2 x1;
  y2 ON cont3 x1;
  y3 ON cont1 cont3 x1;
  y1 WITH y2; y2 WITH y3; y1 WITH y3;

H0:
ANALYSIS: TYPE = general missing h1;
MODEL:
  y4 ON y1 y2 y3 x1@0;
  y1 ON cont1 cont2 x1;
  y2 ON cont3 x1;
  y3 ON cont1 cont3 x1;
  y1 WITH y2; y2 WITH y3; y1 WITH y3;

Thank you! 


In using DIFFTEST, are you sure you are not putting the H0 model in the place that the H1 model should be? To check nesting, Mplus simply compares the fitting function value at the optimum (lower is better); the model with a lower value cannot be nested within a model with a higher value. A model with one parameter fixed cannot have a lower (better) fitting function value than the corresponding model with that parameter free. The fitting function values can be seen in TECH5, left column. 

Sally Czaja posted on Monday, August 21, 2006 - 6:33 am



Thank you for your response. I feel certain that I am not switching the models, but to clarify: I am saving the data file when I run the full model (which should be the better fit) and then running the DIFFTEST on the trimmed model with the fixed parameter. Is this correct? 


Sounds right; see Example 12.12 in the User's Guide. Also check TECH1 to see the parameters used. If that doesn't help, you need to send your input, output, data, and license number to support@statmodel.com. 

Sally Czaja posted on Monday, August 21, 2006 - 7:18 am



Thank you! Ex 12.12 solved the problem. I was using the FILE IS command for saving the data file (rather than DIFFTEST IS). 


I am using WLSMV to compare a model with a dichotomous covariate (begenl) including a direct effect (H1) with the same model without the covariate (H0). Mplus does not report the chi-square comparison and says that the H0 model is not nested in the H1 model. Any help on this would be very much appreciated. The models are:

H1:
MODEL:
  ROLEF BY fd4 fd7 fd8 fd9;
  COGNIT BY fd11a fd11b fd11c fd11d;
  MOBILT BY fd13a fd13b fd13c;
  SLFCARE BY fd15a fd15b fd15c;
  SOCIAL BY fd17a fd17b fd17c fd17d fd17e;
  PARTICI BY fd18b fd18c fd18d fd18e fd20 fd21 fd22;
  ROLEF COGNIT MOBILT SLFCARE SOCIAL PARTICI ON begenl;
  fd20 ON begenl;
SAVEDATA: DIFFTEST IS modelh1.dat;

H0:
MODEL:
  ROLEF BY fd4 fd7 fd8 fd9;
  COGNIT BY fd11a fd11b fd11c fd11d;
  MOBILT BY fd13a fd13b fd13c;
  SLFCARE BY fd15a fd15b fd15c;
  SOCIAL BY fd17a fd17b fd17c fd17d fd17e;
  PARTICI BY fd18b fd18c fd18d fd18e fd20 fd21 fd22;
ANALYSIS: DIFFTEST IS modelh1.dat; 


Nesting requires the same set of observed variables. You should add the following to the H0 model: ROLEF COGNIT MOBILT SLFCARE SOCIAL PARTICI ON begenl@0; fd20 ON begenl@0; 


Thanks Linda. I have tried your suggestion, but now the estimated parameters are not the same as those for the initial H0 model (without fixing the coefficients of begenl to 0). Thanks very much! Gemma 


You will need to send your inputs, data, outputs, and license number to support@statmodel.com. 


I am using WLSMV to test mediation and I wish to compare models with DIFFTEST. I am comparing a model with two IVs, one mediator, and one (binary) DV:

1)
  MIN WITH SYM;
  alc ON MIN; alc ON SYM; alc ON EXP;
  EXP ON MIN; EXP ON SYM;
  MODEL INDIRECT: alc IND EXP MIN; alc IND EXP SYM;

with a nested model without the IV MIN:

2)
  MIN WITH SYM@0;
  alc ON MIN@0; alc ON SYM; alc ON EXP;
  EXP ON SYM; EXP ON MIN@0;
  MODEL INDIRECT: alc IND EXP SYM; alc IND EXP@0 MIN@0;

in order to show that adding MIN makes the model better. My problem is that the nested model 2) has really bad fit indices compared with the same model calculated without adding MIN and then constraining coefficients to 0 (and thus non-nested with 1)):

3)
  EXP ON SYM;
  alc ON SYM; alc ON EXP;
  MODEL INDIRECT: alc IND EXP SYM;

Nevertheless, when I am describing the fit of my models I suppose I have to take the fit indices from 3) because those in 2) are 'artificially worsened'. But then, why am I allowed to calculate the DIFFTEST on the basis of 2), which is of course worse? Where am I doing something wrong? Thank you. 


You should not have MIN WITH SYM; in model 1) or model 2) because they are exogenous variables and should be correlated as the default. 


Thank you. Now the fit in 2) = nested, one VI, has become better but it is still not as good as in 3) = non-nested, one VI.

2)
  alc ON MIN@0; alc ON SYM; alc ON EXP;
  EXP ON SYM; EXP ON MIN@0;
  MODEL INDIRECT: alc IND EXP SYM; alc IND EXP@0 MIN@0;
Chi-Square Test of Model Fit: Value 78.147*, Degrees of Freedom 32, P-Value 0.0000, CFI 0.938, TLI 0.913

3)
  EXP ON SYM;
  alc ON SYM; alc ON EXP;
  MODEL INDIRECT: alc IND EXP SYM;
Chi-Square Test of Model Fit: Value 19.108*, Degrees of Freedom 12, P-Value 0.0000, CFI 0.986, TLI 0.975

So I still have my previous doubt: 1. Is it right to describe the fit of the model with only one VI using the indices from model 3) = non-nested and not from 2) = nested? 2. If so, then the question arises whether it is right to compute the DIFFTEST between model 1) = two VIs and 2), as this last has worse fit than 3) and thus the DIFFTEST is more likely to confirm my hypothesis that 1) is better. 


When you say "VI", I think you mean "IV". 1. Model fit with one IV should have only one IV on the USEV list; otherwise you are also testing the zero restrictions for the other IV. 2. DIFFTEST can only be used when the same USEV variables are used in both models, so model 2) is the correct comparison model to the model with MIN having effects, because this tests whether MIN has effects. 


Dear Mplus team, Am I right in assuming that a CFA with two postulated factors would not strictly be nested in a model with one factor, even if they had the same indicators? 


A one-factor model can be nested within a two-factor model, not the other way around. 


Thanks Bengt. I assume that a two-factor model with a perfect correlation specified between the two factors is then equivalent to a one-factor model, and the difference between the models' fit can then be tested (using DIFFTEST for WLSMV). In that case, what is the best way to specify a perfect factor correlation? Would it be, say: f1 ON f2@1;? Your help is appreciated. Many thanks, Paul 


First, you will have to set the metric in the two-factor model using factor variances @1. Then you say f1 WITH f2@1. See how that works; it gives a non-positive-definite factor covariance matrix. Note also that you can't have any cross-loadings in the two-factor model. 


Thanks, that makes sense. Best wishes, Paul 


Dear Mplus team, This approach did not seem to work in my case. However, there is some debate amongst methodologists whether models with varying numbers of factors are truly nested. Therefore it may be better to compare models using the BIC (i.e., derived using MLR with Monte Carlo integration; the indicators are ordinal). Is there any way of deriving a significance test for improvement in model fit using the BIC? Many thanks, Paul 


Not that I know of. 


Dear Mplus team, I would like to compare the following models containing the same set of observed variables:

1.
  ERA BY inq deg pla col irr peu tri joy des fie sur amu sou int;
  irr WITH col; des WITH tri; sou WITH pla; joy WITH fie;

2.
  POS BY pla joy fie amu sou int;
  NEG BY inq deg col irr peu tri des;
  irr WITH col; des WITH tri; sou WITH pla; joy WITH fie;
  sur WITH POS; sur WITH NEG;

Are these models nested? If not, why? In this case, how can I use the BIC to compare the models if there is no significance test for this index? Thank you very much! 


We do believe these models are nested. The lower BIC is the better BIC. You can do a statistical test using 2 times the loglikelihood difference, which is distributed as chi-square. 
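The 2-times-the-loglikelihood-difference test can be computed by hand from the loglikelihoods and parameter counts printed in the two outputs. A minimal stdlib-only sketch for models differing by one parameter (the loglikelihood values below are hypothetical, purely for illustration; for df > 1 you would use a general chi-square survival function such as scipy.stats.chi2.sf):

```python
import math

def lr_test_df1(logl_h0, logl_h1):
    """Likelihood-ratio test for two nested ML models differing by one
    free parameter. H0 is the restricted model, so logl_h0 <= logl_h1.
    For df = 1, P(chi2 > x) = erfc(sqrt(x / 2)), which keeps this
    stdlib-only; for general df use scipy.stats.chi2.sf(x, df)."""
    stat = -2.0 * (logl_h0 - logl_h1)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical loglikelihoods for a one-factor (H0) vs. two-factor (H1) model
stat, p = lr_test_df1(-5230.4, -5221.1)
```

A small p-value here would favor the less restrictive two-factor model.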


Hi, I am using WLSMV to fit a model with a binary dependent variable. Now I am trying to compare another three models with my research model. All models are based on the same indicators; I add some paths in one model, remove some in another, and use a mediator in a third. I am trying to compare those three models with my research model. I have used DIFFTEST but I got a message saying that the difference cannot be computed because the models are not nested. So how can I compare these non-nested models? Thanks, Mohamed 


Using BIC may be a good idea. 


Thanks, but BIC does not appear in my output when I use WLSMV. How do I calculate it? Many thanks, 


BIC is for maximum likelihood not weighted least squares. I would think some of your models are nested. Perhaps you are using DIFFTEST incorrectly. You can send the relevant outputs and your license number to support@statmodel.com if you want to check this out. Otherwise, I would see which model seems to have the best overall fit taking all fit indices into account. 


Thanks Linda. If I am going to use that last option of yours (seeing which model has the best overall fit taking all fit indices into account), can I have a reference to support this point of view? Thanks indeed. 


I don't have a reference to support this point of view. It's simply the only alternative I can think of given you don't have BIC with WLSMV. You can probably get more opinions on SEMNET. 


Hi Linda, I am trying to test the following models; they are based on the same indicators:

VARIABLE: NAMES ARE x1-x39 u1; USEVARIABLES ARE x1-x39 u1; CATEGORICAL IS u1;

First model:
  f1 BY x1-x4; f2 BY x5-x9; f3 BY x10-x15; f4 BY x16-x21;
  f5 BY x22-x25; f6 BY x26-x30; f7 BY x31-x36; f8 BY x37-x39;
  f9 BY f1-f3; f10 BY f5-f8;
  f9 ON f4;
  u1 ON f9 f10;

Second model:
  f1 BY x1-x4; f2 BY x5-x9; f3 BY x10-x15; f4 BY x16-x21;
  f5 BY x22-x25; f6 BY x26-x30; f7 BY x31-x36; f8 BY x37-x39;
  f9 BY f1-f3; f10 BY f5-f8;
  f10 ON f4;
  u1 ON f9 f10;

Third model:
  f1 BY x1-x4; f2 BY x5-x9; f3 BY x10-x15; f4 BY x16-x21;
  f5 BY x22-x25; f6 BY x26-x30; f7 BY x31-x36; f8 BY x37-x39;
  f9 BY f1-f3; f10 BY f5-f8;
  f9 f10 ON f4;
  u1 ON f9 f10;

Do you think these are nested models? What is puzzling me is that the chi-square value for all the models is different while the df is the same in all three models. Why? Thanks, Mohamed 


If the degrees of freedom are the same, the models are not nested. Having the same degrees of freedom does not mean that chi-square will be the same. You may be interested in the following article: Bentler, P.M. and Satorra, A. (2010). Testing model nesting and equivalence. Psychological Methods, 15(2), 111-123. 


Hello, I believe that my models are nested but I receive the warning message that they are not.

H1 - MultiGrp (11grp):
ANALYSIS: ESTIMATOR = WLSMV; PARAMETERIZATION = theta;
MODEL:
  F BY w* (L1)
    s* (L2)
    wg* (L3)
    e* (L4);
  [F@0]; F@1; we@1;
MODEL S:
  F BY w* (L1)
    s* (L2)
    wg* (L3)
    e* (L4);
  [F*]; F*; we@1;
SAVEDATA: DIFFTEST IS scal.dat;

H0 - free FL and threshold for S:
ANALYSIS: DIFFTEST = scal.dat;
MODEL:
  F BY w* (L1)
    s* (L2)
    wg* (L3)
    e* (L4);
  [F@0]; F@1; we@1;
MODEL S:
  F BY w* !free
    s* (L2)
    wg* (L3)
    e* (L4);
  [F*]; F*; [w$1*]; !free
  we@1;

Any help would be really appreciated. I was wondering if I was getting a negative difference? Cheers, Bellinda 


I believe you need to remove we@1; from the first model. 


Hello, I am having the same problem Gemma detailed above (Gemma Vilagut posted on Tuesday, May 08, 2007 - 10:21 am). My models differ by the removal of one path. When I simply remove the path from the input and try to run it, I get a message that the models are not nested. When I constrain the path to 0 as suggested in the response above (Linda K. Muthen posted on Tuesday, May 08, 2007 - 7:52 am), I get the diff test in my output, but my fit indices and parameter estimates are slightly different than they would be if I ran a model that just had the path removed from the input. Could you help me clarify the source of this trouble? Thanks for your time. 


The difference is that the set of variables used in the analysis differs when you remove the path rather than fixing it at zero. 


Which parameters are the most appropriate to report: the ones that result from removing the path or the ones that result from constraining the path to zero? My interest is in the former, but I'm not sure it's appropriate to report those if the DIFFTEST is associated with the latter. Thank you. 


You should report the models used in DIFFTEST. 


Dear Dr. Muthen, I am trying to conduct a model comparison between two models with MLR estimation. My questions: Q1. Are these two models nested? Q2. If yes, is the Satorra-Bentler scaled chi-square applicable in this case?

Model 1:
MODEL:
  %WITHIN%
  fw1 BY y1@1 y2 y3;
  fw2 BY y4@1 y5 y6;
  y1-y6; fw1; fw2; fw1 WITH fw2;
  %BETWEEN%
  y1-y6 WITH y1-y6;

Model 2:
MODEL:
  %WITHIN%
  y1-y6 WITH y1-y6@0;
  %BETWEEN%
  y1-y6 WITH y1-y6;

Thanks for your reply in advance. Best, Hsien-Yuan 


Yes and yes. 
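For the MLR case, the Satorra-Bentler scaled chi-square difference can be computed by hand from the loglikelihood, number of free parameters, and scaling correction factor that Mplus prints for each model, following the loglikelihood-based difference-testing formula documented on the Mplus website. A sketch (the values in the example call are hypothetical):

```python
def mlr_scaled_diff(logl0, p0, c0, logl1, p1, c1):
    """Satorra-Bentler scaled chi-square difference test for MLR.

    logl*, p*, c*: loglikelihood, number of free parameters, and scaling
    correction factor for H0 (restricted) and H1 (unrestricted).
    Returns the scaled test statistic and its degrees of freedom.
    """
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)   # scaling correction for the difference
    trd = -2.0 * (logl0 - logl1) / cd      # scaled chi-square statistic
    return trd, p1 - p0

# Hypothetical values read off two Mplus outputs
trd, df = mlr_scaled_diff(logl0=-2606.0, p0=39, c0=1.45,
                          logl1=-2599.0, p1=47, c1=1.54)
```

The statistic is then referred to a chi-square distribution with df degrees of freedom; note that cd can occasionally come out negative in small samples, in which case the strictly positive variant of the test is needed.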

ri ri posted on Wednesday, May 06, 2015 - 11:31 am



I am comparing a full vs. partial mediation model. Here are two syntax forms. In Syntax 1, I added two direct paths into the full mediation model and fixed them at 0 when comparing to the partial model. In Syntax 2, I used the original full mediation model. Is Syntax 1 the right one, since a nested model shall have the same set of variables and paths? Thanks!

H1 (partial):
USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3;
CATEGORICAL = TOA Sick3;
ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION = THETA;
MODEL:
  SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2;
  TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW;
  ! NSW ON SLw;
  ! TOA Sick3 ON NSW;
SAVEDATA: DIFFTEST IS deriv3.dat;

H0 (full), Syntax 1:
USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3;
CATEGORICAL = TOA Sick3;
ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION = THETA;
MODEL:
  SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2;
  TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW;
  ! NSW ON SLw;
  TOA ON NSW@0; Sick3 ON NSW@0;

Syntax 2: same as Syntax 1 but without TOA ON NSW@0; and Sick3 ON NSW@0; 


Isn't your Syntax 2 the same as your H1(partial) model? Syntax 1 looks correct. 

ri ri posted on Wednesday, May 06, 2015 - 3:11 pm



The Syntax 2 is a full mediation model as follows:

USEVARIABLES ARE SL1 SL2 NSC NSA EE1 EE2 TOI1 TOI2 TOA Sick3;
CATEGORICAL = TOA Sick3;
ANALYSIS: DIFFTEST IS deriv3.dat; ESTIMATOR = WLSMV; PARAMETERIZATION = THETA;
MODEL:
  SLw BY SL1 SL2; NSW BY NSC NSA; BOW BY EE1 EE2; TOIW BY TOI1 TOI2;
  TOA ON TOIW; Sick3 ON BOW; TOIW ON NSW; BOW ON NSW;
  ! NSW ON SLw;

I checked again what Linda posted earlier: if I compare a full vs. partial mediation model, I shall ensure the two models have the same set of paths and variables. In this case I shall use Syntax 1 instead of Syntax 2, right? I tested my models with both Syntax 1 and 2; the results are slightly different. Would like to have a final check with you. Thanks! 


You can compare models using WLSMV as long as you have the same IVs and DVs, which your Syntax 2 and H1 models have, right? So Syntax 1 and 2 are equally good; I don't see offhand why they would give different results. 

ri ri posted on Wednesday, May 06, 2015 - 3:40 pm



The result using Syntax 1: Chi-Square Test for Difference Testing: Value 4.172, Degrees of Freedom 2, P-Value 0.1242. The result using Syntax 2: Chi-Square Test for Difference Testing: Value 1.803, Degrees of Freedom 1, P-Value 0.1794. Although neither result is significant (p > .05), the Δχ²(df) values are different. Which one shall I report? 

ri ri posted on Wednesday, May 06, 2015 - 4:06 pm



I analyzed again. The previous results might be wrong. This time, with Syntax 1: Chi-Square Test for Difference Testing: Value 1.777, Degrees of Freedom 2, P-Value 0.4112. With Syntax 2: Chi-Square Test for Difference Testing: Value 1.440, Degrees of Freedom 2, P-Value 0.4868. The difference now is smaller. Which value/df shall I report, 1.78 (2) or 1.44 (2)? 


Please send the two outputs and your license number to support@statmodel.com. 

ri ri posted on Wednesday, May 06, 2015 - 4:28 pm



I probably know what caused the mistake. In the H1, the second-to-last line was TOA Sick3 ON NSW; In the H0, I wrote: TOA ON NSW@0; Sick3 ON NSW@0; After I changed the H1 to: TOA ON NSW; Sick3 ON NSW; the results of the two syntaxes were exactly the same. I suppose there is a difference between writing the two regressions separately and together? 


There is no difference between TOA Sick3 ON NSW; and TOA ON NSW; Sick3 ON NSW; 

ri ri posted on Wednesday, May 06, 2015 - 5:11 pm



If in H0 I wrote TOA ON NSW@0; Sick3 ON NSW@0; shall I keep them written separately in H1, i.e., TOA ON NSW; Sick3 ON NSW; or does it not matter if the form in H1 and H0 differs? 


It does not matter if the way you write it differs. 


Dear Mplus team, I have a cross-lagged analysis with 3 variables measured at 2 time points. I want to test the effect of two variables on one another (x1 and x2) and the moderating effect of a third variable (x3) on them. I have a theoretical reason to believe that the interaction between x1 at time 1 and x3 at time 2 influences x2 at time 2. My question is: do I need to specify all the possible interactions in the USEVARIABLES command in every model that I am comparing, even though these variables don't appear in every model? I am asking because when I enter only the variables with theoretical significance into the USEVARIABLES command, I get great model fit indices. When I ask for the same model but specify all the possible interactions (and there are a lot!) in the USEVARIABLES command, I get very bad model fit indices! Thank you so much for your help in advance 


I wonder if X3 is a dependent variable. Model fit for models with interactions involving DVs can be distorted. See e.g. Model 3 of Preacher et al (2007). For general analysis advice you may want to contact SEMNET. 


Thank you for your reply. I now realize that I may have had a misunderstanding regarding the meaning of nested models and comparing chi-square fit tests, and I would like to make sure: in order for a model to be nested within another model, or in order for two models to be comparable in a chi-square difference test, do the USEVARIABLES commands need to be identical in both models? Or is the only requirement that the nesting model contain all the paths of the nested model plus other paths? Thanks! 


A nested model at a minimum must use the same set of dependent variables. 


Dear Mplus team, I have a question regarding the SEM multiple-group analysis DIFFTEST. I ran a multiple-group analysis comparing the chi-square model fit between the unconstrained vs. constrained model. Is it possible to output a 95% CI for this chi-square difference test statistic? As I'm using a Mac version of Mplus v7, the SAVEDATA command didn't seem to work. So I ran the two models in separate runs and manually computed the chi-square difference test (e.g., the difference in chi-square, p-values, etc.). However, I'm not sure if Mplus would allow me to output the 95% CI corresponding to this DIFFTEST, which I will need to report in my manuscript. Your advice will be much appreciated! Thanks! 


No, this is not available. 


Hello I have a question about comparing two multi group sem models. I compare a model that imposes no equality constraints on 3 structural paths with a model that constrains these 3 paths to equality to determine if these 3 paths really differ between the 2 groups (all three are significant in one group and nonsignificant in the other). The constrained model does not decrease model fit that much <delta> CFI=.002 and <delta> RMSEA=.001. However, when I test for differences in regression slopes (individually using (b1b2)/sqrt(SEb1^2+SEb2^2)) I get a significant difference in 2 of the paths. Is it reasonable to conclude that in complex models (my model has df=600; N=1100 and many structural paths), small improvements might not be visible in overall fit indices? (I'm having a hard time finding some reference for this line of thought) 


I don't think the test formula you show is right, because if you have equalities across groups you have a violation of the independence of the parameter estimates. Instead, express the difference in MODEL CONSTRAINT or use MODEL TEST. Or you can use chi-square difference testing. But you are probably right that the fit indices may not be able to pick up these differences. 
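For reference, the naive slope-difference formula discussed above can be sketched as follows; note the caveat that it assumes the two estimates are independent, which fails under cross-group equality constraints, so in Mplus MODEL TEST or MODEL CONSTRAINT is preferred. The slope and standard error values in the example call are hypothetical:

```python
import math

def slope_diff_z(b1, se1, b2, se2):
    """Naive Wald z for the difference between two slopes estimated in
    independent groups. Assumes zero covariance between the two estimates,
    which does not hold when cross-group equality constraints induce
    dependence; in that case use MODEL TEST / MODEL CONSTRAINT instead."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return z, p

# Hypothetical slopes: significant in group 1, non-significant in group 2
z, p = slope_diff_z(b1=0.40, se1=0.08, b2=0.10, se2=0.09)
```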


Hello, there are two ESEM models, both with two factors. The difference is that two of the indicators are left out in Model 2 (as a way of shortening the scale). I read the whole thread and got the impression that the two models are not nested. Do you agree? How can I compare these two models? Given that the observed variables are different in the two models, can I use BIC or AIC? If not, could you suggest any other way to compare them? Many thanks in advance 


Nested models must at a minimum have the same set of dependent variables. Factor indicators are dependent variables so the models would not be nested. 


Thanks for clarifying. Some books suggest that BIC and AIC can be used to compare non-nested models. However, I read some comments from the Mplus team that with different variables, the metric will be different for BIC and AIC. So, can I conclude that with different dependent variables, BIC and AIC cannot be used for model comparison? Your help is much appreciated. 


BIC and such statistics cannot be used with models that have different sets of dependent variables. 

Rick Borst posted on Tuesday, November 15, 2016 - 7:07 am



Dear professor Muthen, I want to compare the significance of the addition of two latent variable interactions by looking at the loglikelihood difference. Is it then correct to run the basic model:

  A BY A1-A6;
  O BY q54_7 q54_8 q54_9 q54_10 q54_12 q54_13 q54_14;
  I BY q54_3 q54_4 q54_5;
  U BY U1-U6;
  U ON Agecat; U ON Tenure; U ON Educat; U ON Gender;
  U ON I; U ON O; U ON A;

versus the model with interactions:

  I@1; O@1; A@1;
  U ON Agecat; U ON Tenure; U ON Educat; U ON Gender;
  U ON O; U ON I; U ON A;
  A;
  OA | O XWITH A;
  IA | I XWITH A;
  U ON IA; U ON OA;

I ask because the number of free parameters is the same and the R-square decreases. 


Yes, you can use a loglikelihood-ratio chi-square test for this. I don't know why you add I@1; O@1; A@1; when you add the interactions. That throws off the test. 

Rick Borst posted on Tuesday, November 15, 2016 - 11:49 pm



Thank you for the quick response. Is that only necessary if you want to create a loop plot? I looked at the FAQ about latent variable interactions and I saw that it is done there as well. 


You shouldn't set the metric twice (loading and factor variance). 

Rick Borst posted on Wednesday, November 16, 2016 - 11:46 pm



Dear prof. Muthen, OK, so I should use an asterisk for the loading and @1 for the factor variance in the loop plot as well as in the loglikelihood comparison? I.e.:

  A BY A1*-A6;
  O BY q54_7* q54_8 q54_9 q54_10 q54_12 q54_13 q54_14;
  I BY q54_3* q54_4 q54_5;
  U BY U1-U6;
  I@1; O@1; A@1;
  U ON Agecat; U ON Tenure; U ON Educat; U ON Gender;
  U ON O; U ON I; U ON A;
  A;
  OA | O XWITH A;
  IA | I XWITH A;
  U ON IA; U ON OA; 


Try it and see if you get what you expect in the output. 


Hello, I am running several nested Bayes models. Would it be appropriate to calculate the logL from the BIC in order to compare the models with each other using a chi-square test? Thanks in advance, Sofie 


It won't be distributed as chi-square, but perhaps it is a useful descriptive index. 


Thanks for your quick reply, Bengt. I have some follow-up questions. 1) What exactly would not be chi-square distributed: the logL, the BIC, or the difference of the logLs (I am interested in the latter in order to calculate a test distribution)? 2) What distribution would the logL difference follow if not chi-square? 3) Is it possible to calculate a corresponding test distribution by bootstrapping? 4) Otherwise, which index would you suggest for deciding between nested Bayes models? Thanks again, Sofie 


1) The "Bayesian BIC" that is printed is based on the Bayes estimates, not ML estimates, so the logL is not an ML-maximized logL but a logL computed with Bayes estimates. Due to this, taking the approach of a likelihood-ratio chi-square difference test isn't right when based on this Bayesian BIC; it doesn't give a chi-square. 2) This is unknown. 3) Perhaps; that is a research question. 4) I would look at the significance of the extra parameters in the less restrictive model. 


Hello Professors, I wonder if the DIFFTEST option is a good way to confirm that the H0 model is indeed nested in the H1 model. That is, will DIFFTEST run iff the H0 model is nested within the H1 model? I am comparing two models that I believe are nested, and DIFFTEST is running with no error, but I'm wondering if there is any case where DIFFTEST would run when the H0 model was not actually nested within the H1 model. As always, thanks in advance for your help. 


DIFFTEST can't make sure that the models are nested. It checks only 2 necessary things: H0 should have fewer parameters than H1, and H0 should have a higher final F value than H1, where F is printed by TECH5 and refers to the fitting function that is optimized, and where low values mean better fit to the data. 
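These two necessary (but not sufficient) conditions are easy to replicate by hand from the TECH5 fitting function values and the parameter counts of the two outputs. A small sketch with made-up numbers:

```python
def difftest_preconditions(params_h0, fmin_h0, params_h1, fmin_h1):
    """Replicate the two necessary conditions DIFFTEST checks for nesting:
    H0 (the restricted model) must have fewer free parameters than H1, and
    its final fitting function value (TECH5 left column; lower = better fit)
    must not be lower than H1's. Passing both does NOT prove nesting."""
    return params_h0 < params_h1 and fmin_h0 >= fmin_h1

# Made-up values: H0 has fewer parameters and fits worse -> conditions hold
ok = difftest_preconditions(params_h0=40, fmin_h0=0.125,
                            params_h1=43, fmin_h1=0.110)
```

If either condition fails, Mplus reports that the H0 model is not nested in the H1 model, which is often a sign that the two models have been given to DIFFTEST in the wrong order.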


Thanks so much for this quick response. Does Mplus have any way to check whether models are nested? 


Hi again Dr. Muthen, I have a more pointed question than the one I asked above. I'm experiencing a problem because I have two models that I believe should be nested, but the results suggest they are not. This is a multiple-group CFA model with categorical indicators using WLSMV (theta parameterization). Model H1 has invariant residual variances and loadings, and Model H0 just has invariant residual variances. In theory these models should be nested, but the model chi-square for H0 is higher, as is the function minimum.

In H1, the loadings are constrained this way:
Group 1 MODEL: f1 BY y1* y2-y4 (L1-L4);
Group 2 MODEL: f1 BY y1* y2-y4 (L1-L4);

In H0, the loadings are constrained this way:
Group 1 MODEL: f1 BY y1* y2-y4;
Group 2 MODEL: f1 BY y1@1 y2-y4;

Do you know why these models do not appear to be nested in the results? 


No check for nestedness in Mplus (yet). Your last message has me puzzled on 2 accounts. You say "but the model chi-square for H0 is higher"; that's how it should be for H0 because it is a stricter model. Also, your H0 model input does not have its loadings constrained. It is probably better if you send the relevant outputs to Support along with your license number. 


Dear Drs. Muthen, We are running competing CFA models with WLSMV, including correlated-factors, bifactor, and S-1 models (a modified bifactor in which one fewer specific factor is specified, so that g is defined by the items of this missing group factor). We understand how to nest the correlated-factors model in the bifactor solution but are having difficulty nesting the correlated model in the S-1. The S-1 has more parameters than the correlated model, but the fit function seems to be worse, so the models cannot be nested with WLSMV. We have tried constraining the H1 model in 2 different ways but had no luck. We also have some residual correlations across all models, and these seem to be causing a singular matrix. 1. Can the correlated model be nested in the S-1? 2. How should we interpret the singular matrix warning when the residual correlations are included? 3. Why would the fit function be worse for a model with more parameters? Thanks, Louise and Margarita 


Have a look at our new NESTED checking feature described in the paper under SEM: Asparouhov, T. & Muthén, B. (2018). Nesting and equivalence testing in Mplus. Technical Report, May 16, 2018. See especially Section 4.2 on the bifactor model. 


Thank you very much for this; it was very helpful. Having read the paper and implemented the procedure, we wondered if you could clarify something: is the rule about the product of the two larger correlations a theoretical principle that should be followed even if the NET procedure suggests the models are nested? Our correlated model seems to violate this criterion (depending on whether correlated errors are included), so we were expecting that NET would find it to be not nested in all of our bifactor solutions (classical and S-1). This is the case even when we make sure all correlations are positive. Can we check that we should not perform the DIFFTEST for any of these models even though NET suggests the correlated model is nested in the classical bifactor? Many thanks, Louise and Margarita 


The rule about the product of the two larger correlations is not a general rule; it only applies when you are looking at the equivalence of a 3x3 matrix to a one-factor analysis model. The rule doesn't apply to other situations. If the NET procedure concludes that the models are nested, I would trust that. 
