Comparing Non-Nested Models
 Andrea Lawson posted on Saturday, August 30, 2008 - 9:05 am
Hi there. I was told that one way to compare non-nested models is to scrutinize the standardized residual matrix: whichever model has more off-diagonal entries larger than .1 is the poorer model. Well, I ran my models in Mplus and almost ALL the standardized residuals (found under Standardized Residuals (z-scores) for Covariances/Correlations/Residual Corr) for both models were above .1! Does this mean there is something wrong with both my models, or am I interpreting the matrix incorrectly? Or perhaps I've been given bad advice? Thank you very much, Andrea
 Linda K. Muthen posted on Sunday, August 31, 2008 - 9:26 am
I have not heard of doing this. If I were looking at z-scores, I would use a value of about 2.
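A minimal sketch of how these residuals are requested in Mplus; exactly which residuals are printed, including the standardized z-score versions, depends on the model type and estimator:

OUTPUT:
RESIDUAL; ! requests observed, model-estimated, and residual means and
! covariances, including standardized residuals (z-scores) in many
! maximum likelihood analyses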
 Andrea Lawson posted on Monday, September 01, 2008 - 2:43 pm
Thanks so much for getting back to me so quickly - and on a long weekend!
What would you recommend I look at when comparing non-nested models: AIC/BIC and RMSEA? Thanks very much, Andrea
 Linda K. Muthen posted on Monday, September 01, 2008 - 3:51 pm
If the non-nested models have the same set of observed variables, you can look at BIC. If they do not have the same set of observed variables, I know of no way to compare them.
 Andrea Lawson posted on Tuesday, September 02, 2008 - 6:55 am
Yes, the models have the same set of observed variables. Thanks for your help! Andrea
 craig neumann posted on Tuesday, August 04, 2009 - 9:56 am
I have recently encountered a statistically minded person who insists that BIC can be used to compare non-nested models which are NOT based on the same set of observed variables (e.g., M1 has items 1,2,3,4,5 & M2 has items 4,5,6,7,8). Any literature which supports or refutes this assertion would be most helpful.
 craig neumann posted on Tuesday, August 04, 2009 - 10:08 am
Sorry, in my post just above I should have asked: "Can BIC be used to compare non-nested models which are NOT *completely* based on the same set of observed variables but have some overlap (e.g., M1 has items 1,2,3,4,5 & M2 has items 4,5,6,7,8)?"
 Bengt O. Muthen posted on Tuesday, August 04, 2009 - 10:58 am
I can't see how BIC can be used here. The log likelihood that the BIC is based on is in a different metric if the set of DVs is not the same.
 ClaudiaBergomi posted on Wednesday, June 16, 2010 - 2:21 am
Hello. Does this mean that there is no way of comparing two non-nested models with only overlapping observed variables? To give a concrete example: a mediation analysis with one independent variable X1, two mediators M1 and M2, and one outcome variable Y1, where I would like to show that adding M1 to X1-->Y1 and then adding M2 to X1-->M1-->Y1 does not "make the model worse", with the help of some indices.
 Linda K. Muthen posted on Wednesday, June 16, 2010 - 8:04 am
You need to have the same set of observed variables to compare models. Use the full set of observed variables in both analyses and fix to zero the paths you do not want to estimate.
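A minimal sketch of this suggestion for the mediation example above, with the hypothetical variable names x1, m1, m2, and y1; both runs keep the same observed variables, so their BIC values are on the same likelihood metric:

! Run 1: full model with both mediators
MODEL:
y1 ON x1 m1 m2;
m1 ON x1;
m2 ON x1;

! Run 2: same observed variables, mediator paths fixed at zero
MODEL:
y1 ON x1;
y1 ON m1@0 m2@0;
m1 ON x1@0;
m2 ON x1@0;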
 Alcohol Study posted on Friday, December 02, 2011 - 12:02 pm
Hello. I want to compare a 4-factor model with a 3-factor model, where the 3-factor model has one full factor (and its corresponding indicators) removed. It appears that there is no test for this non-nested comparison; is that correct?
Thanks
 Bengt O. Muthen posted on Friday, December 02, 2011 - 2:13 pm
No, there is no test; not even BIC can be used, because the two models have different sets of observed variables.
 Kristin Nichols posted on Thursday, February 09, 2012 - 9:47 am
Regarding the question above, I understand that BIC can only be used to compare non-nested models with the same set of observed variables. Linda mentions the possibility of using the full set of observed variables in both analyses and fixing certain paths to zero.

Could this be used in a CFA framework? If I want to compare two versions of a measure with overlapping items, but not all the same items, these models are clearly not nested.

Could I work around this by fixing certain paths from my latent variable to my indicators to be zero?


For example,

In one model, factor 1 is defined as:

f1 by y1-y5;

in the 2nd model factor 1 is defined as:

f1 by y1-y3; ! items y4 and y5 are not included.

Would it be plausible to use

f1 by y1@1 y2* y3* y4@0 y5@0; in order to compare one model to the other with the BIC?

Of course this is a simplification of my real model. I am just wondering if this could work in theory and if there are any other implications of fixing those paths to zero that I am not considering.
 Linda K. Muthen posted on Friday, February 10, 2012 - 9:59 am
I don't think this will work.
 Dayuma Vargas posted on Thursday, September 13, 2012 - 1:56 pm
Hello. I have two latent growth models, one for changes in sibling conflict and one for changes in friendship conflict across 4 years in adolescence. The same scale was used for both measures (we simply changed the relationship they needed to report on). Both LGM show significant linear decreases in conflict over time. I am trying to find out if there is a way for me to test whether the slopes in these two models are significantly different. That is, is the decrease in sibling conflict steeper than the decrease in friendship conflict, or vice versa.

Would you be able to advise me on this matter?

Thank you,

Dayuma
 Linda K. Muthen posted on Friday, September 14, 2012 - 3:37 pm
You can test this difference using MODEL TEST. See the user's guide.
 Dayuma Vargas posted on Wednesday, September 19, 2012 - 2:59 pm
I ran a parallel process model so that I could use MODEL TEST, but the slopes were too highly correlated, so I constrained all the covariances between the LGMs to zero. Syntax as follows:

MODEL:

!Friend
iF sF | fC_W1@0 fC_W2@1 fC_W3@2 fC_W4@3 fC_W5@4;
fC_W1 fC_W2 fC_W3 fC_W4 fC_W5 (v1);
iF sF;
iF WITH sF;
[fC_W1@0 fC_W2@0 fC_W3@0 fC_W4@0 fC_W5@0];
[iF] (m1);
[sF] (m2);

!Sib
iS sS | sC_W1@0 sC_W2@1 sC_W3@2 sC_W4@3 sC_W5@4;
sC_W1 sC_W2 sC_W3 sC_W4 sC_W5 (v4);
iS sS;
iS WITH sS;
[sC_W1@0 sC_W2@0 sC_W3@0 sC_W4@0 sC_W5@0];
[iS] (m3);
[sS] (m4);

!Covariances
iF WITH iS@0 sS@0;
sF WITH iS@0 sS@0;

MODEL TEST:
m2 = m4;

Is this appropriate? Thank you
 Linda K. Muthen posted on Wednesday, September 19, 2012 - 3:55 pm
I do not think this is a good solution. Sometimes in a parallel process model, when growth factors are highly correlated, there is a need to correlate residuals at each time point across the processes. I would try that.
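With the variable names from the syntax above, that suggestion would amount to freeing the within-wave residual covariances across the two processes (instead of fixing the growth factor covariances at zero), for example:

fC_W1 WITH sC_W1; ! friend and sibling conflict residuals
fC_W2 WITH sC_W2; ! allowed to correlate within each wave
fC_W3 WITH sC_W3;
fC_W4 WITH sC_W4;
fC_W5 WITH sC_W5;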
 Dayuma Vargas posted on Thursday, September 20, 2012 - 1:24 pm
Thank you for the quick response.
 Sara Geven posted on Wednesday, May 15, 2013 - 8:31 am
Hello,

I am trying to compare a model without a mediator to a model in which a mediator is included. In this thread I read that the BIC can only be used when the observed variables are the same across the two models. Hence, Prof. Muthen suggested fixing some paths to zero in the analysis without the mediator. However, when I did so, I saw that the RMSEA and the CFI also went down compared to a model in which I do not include the mediator at all (maybe because the intercepts and variances are now estimated for the mediator, but there are no predictors for the mediator in the model?). Is it ok to use the BIC of the model in which the paths are fixed to zero and to rely on the RMSEA and the CFI of the model in which the mediator and its paths are not included?

Thank you in advance.

Kind regards,

Sara Geven
 Linda K. Muthen posted on Thursday, May 16, 2013 - 8:38 am
I would use only BIC for model comparisons.
 Elli posted on Friday, March 18, 2016 - 11:02 am
Hello,

When comparing these models, I get the same model fit criteria.

A --> B --> C
B --> A --> C

However, I get different model fit criteria if I compare the following

A-->B-->C
A-->C-->B

Does this seem correct?

Thanks
 Bengt O. Muthen posted on Saturday, March 19, 2016 - 4:24 pm
Please send this general analysis question to SEMNET.
 Anton Dominicson  posted on Sunday, November 27, 2016 - 7:37 pm
Hello, I would like to ask about a similar method I was experimenting with. For now, I'm concerned with a simple regression. I also wanted to compare models with different sets of variables, like this...

Model:
x1 ON y1;

Model:
x2 ON y1;

Model:
x3 ON y1;

I tried the suggestion of including all x variables and allowing only one x to regress on y1 at a time, disallowing unwanted covariances, and comparing the BIC. The other method I experimented with consisted of creating duplicates of y with the DEFINE command to run the alternative models at the same time, disallowing all unwanted covariances, and then using the MODEL TEST command to compare the parameters of x ON y.

DEFINE:
y2 = y1 +0;
y3 = y1 +0;

MODEL:
x1 ON y1 (p1);
x2 ON y2 (p2);
x3 ON y3 (p3);
x1 with y2-y3@0 x2-x3@0; etc...

MODEL TEST:
p1=p2;

The estimates, standard errors, and p-values for x ON y that I got with this method were the same as with the previously suggested method. Thus, instead of using BIC, I used the size of the estimates and the p-values of the various Wald tests to determine the best model. My question would be: is there a problem with using this method, or is it ok? I think this might be a similar method to the one in Dayuma Vargas's case, but I'm not familiar enough with growth models to say for sure.

Thanks in advance,

Anton.
 Bengt O. Muthen posted on Monday, November 28, 2016 - 2:13 pm
I trust that when you say

x ON y;

you mean that x is the DV and y is the IV (I ask because the naming convention in Mplus is the reverse).

I don't know why you don't simply say

x1-x3 on y1 (p1-p3);

(It doesn't matter if you add

x1-x3 with x1-x3@0;)

And then do the Model Test you mention.
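Put together, that simplification of Anton's setup would look something like the sketch below; the single equality follows Anton's MODEL TEST, and further equalities (e.g., p2 = p3) could be added to the same Wald test:

MODEL:
x1-x3 ON y1 (p1-p3); ! one labeled slope per x variable
x1-x3 WITH x1-x3@0; ! optional: residual covariances among the x's fixed at zero
MODEL TEST:
p1 = p2; ! Wald test of equal slopes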
 Anton Dominicson  posted on Tuesday, November 29, 2016 - 6:38 am
Yes, I see. Thank you very much!