A colleague and I recently ran the same model on the same data; I used Mplus and he used LISREL. Our models were identical, although the chi-square values differed by 2.0 and some of the estimates differed by less than .01. However, the CFI and TLI were substantially different (Mplus: CFI = .73, TLI = .72; LISREL: CFI = .89, TLI/NNFI = .88). Do you have any idea why the fit indices would be so different?
I suspect that your model has covariates. In this case, the baseline model differs between LISREL and Mplus. The baseline model in LISREL does not contain covariances among the covariates. In Mplus, it does.
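For reference, both indices are computed relative to the baseline model, so a different baseline definition changes them even when the fitted model and its chi-square are essentially the same:

  CFI = 1 - \frac{\max(\chi^2_M - df_M,\ 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)}

  TLI = \frac{\chi^2_B / df_B - \chi^2_M / df_M}{\chi^2_B / df_B - 1}

where M is the fitted model and B is the baseline model. A baseline model that fixes the covariances among the covariates to zero is more restrictive, so its chi-square is larger, and a larger baseline chi-square pushes both indices upward; that would be consistent with the higher LISREL values reported above.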
laura smith posted on Friday, February 01, 2008 - 2:25 pm
I am a beginner to SEM and to Mplus, so thanks in advance for your patience.
My question is: can fit indices be too high? Initially, I ran the measurement portion of my model and obtained a nonsignificant chi-square (good), but a CFI of .88, a TLI of .65, and an RMSEA of .25 (not so good).
Next, since one of the modification indices was theoretically consistent with my model, I added that parameter (it was a WITH path). Consequently, the last three fit indices improved dramatically (CFI = 1, TLI = 1, RMSEA = 0).
The model is not just-identified, but it has only one degree of freedom. Could that be the "problem" that is creating a near-perfect model fit?
Also, as I went on to the structural model, the degrees of freedom increased, but the fit indices remained at those high levels.
I would like to be happy about the seemingly excellent fit of this model, but it seems suspicious to me. Can you suggest some factors I should investigate to see whether they are inflating these indices spuriously?
Thanks so much! This discussion board is a treasure.
It sounds like the correlations among your observed variables are low. This makes it difficult to reject the H0 model. And with only one degree of freedom, the model does not place many restrictions on the H1 model. You may also have a small sample size which results in low power.
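As a rough sketch of the kind of comparison being described (a single factor measured by four indicators; the variable names, file name, and the particular WITH pair below are placeholders), the two runs would look like:

  TITLE:    one-factor CFA, no residual covariance (2 df for this setup)
  DATA:     FILE = mydata.dat;
  VARIABLE: NAMES = y1-y4;
  MODEL:    f BY y1-y4;

  TITLE:    same model with the added WITH path (1 df)
  DATA:     FILE = mydata.dat;
  VARIABLE: NAMES = y1-y4;
  MODEL:    f BY y1-y4;
            y2 WITH y3;    ! residual covariance suggested by the modification index

In a setup like this, adding the one WITH parameter drops the model from 2 df to 1 df, so very few restrictions remain and the fit indices can jump to their ceiling.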
laura smith posted on Saturday, February 02, 2008 - 10:22 am
Thanks very much for those pointers, Linda.
Looking into those possibilities: my sample size is 321, which I think is reasonable.
The correlations among the 4 indicators of the proposed latent variable range from .61 to .34 (three of them are below .40, at .39, .39, and .34).
Do those sound low enough to suggest that I've found where the problem may lie?
Your correlations don't sound that low, and your sample size is not particularly large. But with one degree of freedom, you don't have many restrictions. If you send two outputs (one without the WITH statement and one where adding the WITH statement dramatically changes the fit) along with your license number to email@example.com, I can take a look.
Okay, I'm looking at someone else's model. It's fully saturated (they're looking at mediation), and they say that the fit indices can't be evaluated, I'm assuming because of the saturation. I just wondered whether something could be done to make the fit indices meaningful, rather than just leaving it at that.
That's part of my point/question. If it has perfect fit because it's saturated, does it make sense to just leave it at that? They've tested a mediation model with two control variables, and the exogenous variables, outcomes, and mediators are all controlled. I can't see where a constraint would make sense. But it seems strange to just say "it's saturated, so we can't evaluate model fit."
You can't evaluate model fit but you can evaluate whether the indirect effect is significant. Perhaps that is sufficient.
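For what it's worth, a minimal sketch of that in Mplus (x, m, y, c1, c2, and the file name are placeholders for the predictor, mediator, outcome, and controls) would be:

  DATA:     FILE = mydata.dat;
  VARIABLE: NAMES = y m x c1 c2;
  ANALYSIS: BOOTSTRAP = 1000;    ! optional: bootstrap the indirect effect
  MODEL:    m ON x c1 c2;
            y ON m x c1 c2;
  MODEL INDIRECT: y IND m x;     ! indirect effect of x on y via m
  OUTPUT:   CINTERVAL(BOOTSTRAP);

The model is still saturated, so chi-square, CFI, TLI, and RMSEA remain uninformative, but the indirect effect and its confidence interval can be examined.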
Rob Nobel posted on Tuesday, November 11, 2008 - 1:23 am
I was wondering: is the term saturated a "discrete" term (is a model only saturated with df=0) or can you also say that a model with for example df=1 is "highly saturated" and thus that fit indices are less informative?
With such a small sample, you cannot discount chi-square. I would say the model does not fit.
EFA is a good way to isolate a problematic variable. See the Topic 1 course handout and video on the website for further information.
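As a minimal sketch (placeholder variable names and file name), an exploratory run in Mplus looks like:

  DATA:     FILE = mydata.dat;
  VARIABLE: NAMES = y1-y6;
  ANALYSIS: TYPE = EFA 1 3;    ! request 1- through 3-factor exploratory solutions

Indicators with low loadings on every factor or large cross-loadings are the usual suspects.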
tom norton posted on Thursday, April 17, 2014 - 7:04 am
I'm using Anderson & Gerbing's two-step approach to SEM:
1. CFA: using "latent variable BY indicator" commands
2. SEM: using "latent variable BY indicator" and "latent variable ON latent variable" commands
When I run the SEM, I get the same model fit statistics as I do when running the CFA.
Is there a default in Mplus that generates model fit based only on the CFA part ("latent variable BY indicator") of the syntax?
The structural part of the model must be just-identified if you get the same fit for both models. The structural part does not contribute to fit. There is no option to generate fit for a subset of the model.
tom norton posted on Thursday, April 17, 2014 - 5:09 pm
To be clear, wouldn't a just-identified model have df = 0? Both the measurement model and the structural model have df = 505.
Your model is only just-identified in the structural part, not overall.
I don't know what you mean about generating model fit based on the CFA part - if you have only f BY y statements, that's how you get the CFA fit.
tom norton posted on Thursday, April 17, 2014 - 8:46 pm
When I run the CFA with just the f BY y commands (i.e., the measurement model), I get model fit statistics.
When I run the SEM by adding f ON f commands to the model (i.e., the structural model), the model fit statistics are exactly the same.
I've just tested this with the data provided for ex 5.11 in the user's guide by running the SEM and then removing the f ON f command. The same model fit statistics were produced each time.
Is there something else I should be doing to differentiate the measurement model from the structural model? Or is it the case that I have reduced my df from > 0 in the measurement model to 0 in the structural model by introducing the paths between the latent factors (and thus making my structural model just-identified)?
For model fit to change when you add the structural part of the model, the structural part must have degrees of freedom; it cannot be just-identified. Fit cannot be assessed for a just-identified model or a just-identified part of a model. The change should be to the structural part of the model, not to the measurement part.
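To make that concrete, here is a small sketch with three placeholder factors (names and indicators are illustrative). With all paths among the factors estimated, the structural part is just-identified and the SEM reproduces the CFA fit exactly; omitting one path gives the structural part a degree of freedom, and fit can then differ:

  MODEL:    f1 BY y1-y3;
            f2 BY y4-y6;
            f3 BY y7-y9;
            f2 ON f1;
            f3 ON f1 f2;    ! three paths among three factors: structural part saturated

  MODEL:    f1 BY y1-y3;
            f2 BY y4-y6;
            f3 BY y7-y9;
            f2 ON f1;
            f3 ON f2;       ! f3 ON f1 omitted: one structural df, fit can now change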