Fit indices
 Anonymous posted on Monday, June 25, 2001 - 11:57 am
We are running a multi-level model (Version 2) in which we are testing the stability of a measurement model over two time periods. The model converges, and we have a non-significant chi-square value with 17 degrees of freedom. However, our RMSEA is 0.000, CFI is 1.0, and TLI is 1.021. Can you provide some insight into why we would be getting these results?
 bmuthen posted on Monday, June 25, 2001 - 4:42 pm
Your RMSEA, CFI and TLI values suggest a very good model fit and chi-square does not disagree with that.
 Tom Munk posted on Thursday, November 17, 2005 - 12:19 pm
I am testing a multilevel SEM. MLR provides CFI, TLI, RMSEA, SRMR(b), and SRMR(w). But it also provides a warning against using chi-square difference tests. Can all of these fit indices be used with the same standards as a single-level SEM?

A web search finds class notes from Newsome suggesting:
>.95 for CFI and TLI
<.08 for SRMR
<.06 for RMSEA
 Linda K. Muthen posted on Thursday, November 17, 2005 - 2:18 pm
The studies used to come up with cutoffs for fit measures have not been based on multilevel analysis, so those cutoffs may not be appropriate for these models.
 Pancho Aguirre posted on Wednesday, November 23, 2005 - 10:44 am
Hello Linda and Bengt,


I'm wondering how we determine a good-fitting model in multilevel analysis. Looking at the output from Mplus User's Guide example 9.9, the tests of model fit are given as:

TESTS OF MODEL FIT
Loglikelihood
H0 Value -6752.350
Information Criteria
Number of Free Parameters 23
Akaike (AIC) 13550.700
Bayesian (BIC) 13663.578
Sample-Size Adjusted BIC 13590.529
(n* = (n + 2) / 24)

What are the cutoffs for these values? From what I understand, the more negative the loglikelihood gets, the better the model fits. But is there a statistical test for this value? Can we transform it to a chi-square distribution? If yes, can we conduct a chi-square difference test between an unconditional model (no predictor at level two) and the target model?

thanks in advance for your help,

Pancho
 bmuthen posted on Wednesday, November 23, 2005 - 6:35 pm
For general multilevel models, no overall fit index has been developed. The usual indices are based on covariance matrix fitting, which is not necessarily relevant when, as with random slope models, the variance varies across subjects. This is why you don't see fit indices in multilevel programs. Instead you should do what most statisticians do, namely consider a sequence of nested models and obtain LR chi-square tests as 2 times the loglikelihood difference.
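To make these two points concrete: in a two-level model with a random intercept and a random slope, the variance implied for an observation depends on its covariate value, so there is no single model-implied covariance matrix for the usual indices to compare against. A generic sketch (notation not tied to any particular example in this thread):

\[
y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + e_{ij}, \qquad
\mathrm{Var}(y_{ij} \mid x_{ij}) = \sigma^2_{\beta_0} + 2\,x_{ij}\,\sigma_{\beta_0\beta_1} + x_{ij}^2\,\sigma^2_{\beta_1} + \sigma^2_e,
\]

which changes with \(x_{ij}\). The nested-model comparison works from the loglikelihoods: if the restricted model H0 has \(p_0\) free parameters and loglikelihood \(\ell_0\), and the more general model H1 has \(p_1\) parameters and loglikelihood \(\ell_1\), then

\[
\chi^2_{\mathrm{diff}} = 2(\ell_1 - \ell_0), \qquad df = p_1 - p_0
\]

(with a scaling correction when a robust estimator such as MLR is used; see further down this thread).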
 Kätlin Peets posted on Tuesday, April 24, 2007 - 12:38 pm
Just to make sure. I am being asked to report N for the chi-square (model fit index). Am I correct when I assume that in case of multilevel modeling, it is cluster size*number of individuals (number of observations in the output)?

Thank you!
 Linda K. Muthen posted on Tuesday, April 24, 2007 - 2:42 pm
In multilevel modeling the number of observations reported is the N. N is the number of clusters only if the unit of analysis is the cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 5:34 am
Thank you.
As I am looking at whether individuals differentiate between different conditions, each individual forms a cluster. So, for the chi-square, I should report the number of clusters, and in my case, it is the number of individuals.
Did I understand you correctly?
 Linda K. Muthen posted on Wednesday, April 25, 2007 - 8:21 am
I don't understand where your clustering comes in if you have one individual per cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 9:29 am
We use multilevel modeling so that conditions within individuals form the within level (we are looking at variance between different conditions within an individual) and individuals form the between level (examining variance between individuals across the conditions).
 Linda K. Muthen posted on Wednesday, April 25, 2007 - 9:36 am
N is the number of individuals and you have several members (conditions) per cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 10:47 am
Yes, that is the case. So, I will report the number of individuals (clusters) for the chi-square.
Thank you for your time.
 Hsien-Yuan Hsu posted on Wednesday, June 06, 2007 - 11:32 am
Dr. Muthen,

I got the results after running example 9.6 in the Mplus User's Guide. I got the Chi-Square Test of Model Fit (3.864) and its df is 17.

Q1. How does Mplus calculate the df?

I got CFI, TLI, AIC, BIC, RMSEA, SRMR.

Q2. Are these fit indices for the overall model?
Q3. Why does Mplus provide SRMR for the Between and Within models, respectively? Could I get other fit indices for the Between and Within models?


Many thanks,
Hsien-Yuan Hsu
 Linda K. Muthen posted on Wednesday, June 06, 2007 - 5:20 pm
1. In this example, the sample statistics consist of 4 means for the y variables, 10 variances and covariances for the y variables on the within level, 8 covariances between the x and y variables, 10 variances and covariances for the y variables on the between level, 4 covariances between the w and y variables. This is a total of 36. There are 19 free parameters so there are 17 degrees of freedom.

2. Yes.

3. This is the only fit statistic that is provided for each part of the model.
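For concreteness, the degrees of freedom in point 1 are simply the number of sample statistics minus the number of free parameters:

\[
4 + 10 + 8 + 10 + 4 = 36, \qquad df = 36 - 19 = 17.
\]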
 Sanjoy Bhattacharjee posted on Saturday, September 29, 2007 - 10:47 am
Prof Muthen,

Suppose we have to choose between HLM-2 and HLM-3. Which test procedure should we use? Is there any model selection criterion for the HLM setup? We need to cite something similar to the Hausman test, the test we use to select between fixed-effects and random-effects models (within the panel data framework).

Could we do the test in MPlus?

Thanks and regards
Sanjoy
 Bengt O. Muthen posted on Sunday, September 30, 2007 - 10:35 am
Perhaps you can settle the issue of how important the level 3 clustering is by comparing two runs. First use Type = Complex Twolevel, where Complex deals with clustering on level 3 and Twolevel deals with clustering on level 2. Then compare the SEs you get there with those of Type = Twolevel, which ignores the level 3 clustering.

Mplus does not do Hausman testing.

The choice between fixed and random effects is another, broader matter.
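A minimal sketch of the two runs being compared. The variable names lev3id and lev2id are hypothetical placeholders, and the required ordering of the two cluster variables should be checked against the current User's Guide:

! Run 1: level-3 clustering handled by COMPLEX, level-2 by TWOLEVEL
VARIABLE:  CLUSTER = lev3id lev2id;   ! two cluster variables (check ordering in the User's Guide)
ANALYSIS:  TYPE = COMPLEX TWOLEVEL;

! Run 2: level-3 clustering ignored
VARIABLE:  CLUSTER = lev2id;
ANALYSIS:  TYPE = TWOLEVEL;

The comparison is then simply of the standard errors printed in the two outputs; large differences suggest the level 3 clustering matters.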
 Sanjoy Bhattacharjee posted on Sunday, September 30, 2007 - 2:32 pm
Thank you Professor. I can see the point you made.

Regards
Sanjoy
 Joyce Kwan  posted on Thursday, July 03, 2008 - 1:26 am
Dear Professors,

I would like to ask if the interpretation of fit indices such as CFI, TLI, and RMSEA for a multilevel model is the same as that for a single-level model. I read above that it may not be appropriate to use the cutoffs for fit measures developed for single-level models on multilevel models. So are there other rules of thumb for using these fit indices with multilevel models? How do we use fit indices such as CFI, TLI, and RMSEA to evaluate model fit?

Besides, I have fit a single-level model and a multilevel model to the same data set. The resulting TLI and RMSEA showed a great drop in model fit, but the CFI remained more or less the same. Why would that be?

Thanks
 Linda K. Muthen posted on Thursday, July 03, 2008 - 4:17 pm
I do not know of a study where cutoffs have been studied for multilevel models. I would use those for single level models.

I can't explain your findings in comparing a single level and multilevel model.
 Elif Çoker posted on Wednesday, May 27, 2009 - 8:36 am
Hi,

My first question is: which formula is used to calculate the loglikelihood and the associated covariance matrices for multilevel path models in Mplus? Can you please give an exact reference?

And lastly, is there a new option to save the matrices in their full, correctly dimensioned matrix format, rather than in the mixed format in which they are currently saved out of order?

Thanks so much already,

Elif
 Linda K. Muthen posted on Thursday, May 28, 2009 - 10:34 am
See the following paper for random intercepts:

Muthén, B. (1990). Mean and covariance structure analysis of hierarchical data. Paper presented at the Psychometric Society meeting in Princeton, NJ, June 1990. UCLA Statistics Series 62.

You can download it from the following link where it is paper #32:

http://www.gseis.ucla.edu/faculty/muthen/full_paper_list.htm

See the following paper which is on our website for random slopes:

Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press.
 Sally Czaja posted on Thursday, December 03, 2009 - 11:43 am
Hello. I'm trying to find out whether a whole group model or one with two groups better fits my data.
Nested model syntax:
USEVARIABLES ARE female raceWb ageint1 acrimyn poverty neighpov;
CLASSES= c(2);
KNOWNCLASS = c(grp=0 grp=1);
WITHIN = female raceWb ageint1 poverty;
CLUSTER = census;
BETWEEN = neighpov;
CATEGORICAL = acrimyn;
ANALYSIS: TYPE= TWOLEVEL mixture;
Model: %WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;

In the comparison model, everything is the same as above except for the following model specification.
Model: %WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%c#1%
acrimyn ON female raceWb ageint1 poverty;
%c#2%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;
%c#1%
acrimyn on neighpov;
%c#2%
acrimyn on neighpov;

1) Is my modeling approach correct? 2) I'm using loglikelihood difference testing to compare the fit of the models. Is this correct? Are there any other ways of comparing model fit? 3) If the loglikelihood difference test is not significant, does that indicate that the nested model explains the data better than the comparison model? Thank you.
 Linda K. Muthen posted on Friday, December 04, 2009 - 9:24 am
This sounds correct. If the constrained model does not worsen model fit, then the parameters are equal across groups.
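One detail worth noting: mixture models like these typically use the MLR estimator by default, so the loglikelihood difference should use the scaling correction described on the Mplus website (difference testing using loglikelihoods). A sketch, where \(L\) is the loglikelihood, \(p\) the number of free parameters, and \(c\) the scaling correction factor reported in the output, with subscript 0 for the nested (constrained) model and 1 for the comparison model:

\[
cd = \frac{p_0 c_0 - p_1 c_1}{p_0 - p_1}, \qquad
TR_d = \frac{2\,(L_1 - L_0)}{cd},
\]

and \(TR_d\) is referred to a chi-square distribution with \(p_1 - p_0\) degrees of freedom.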
 Murphy T. posted on Wednesday, October 19, 2011 - 12:15 am
Dear professors,

I estimated a two-level model and get the following fit indices for my model:
RMSEA: 0.058
CFI: 0.967
TLI: 0.845
SRMR (within): 0.010
SRMR (between): 0.194

The RMSEA and CFI seem to look quite good (by conventional cutoff values), but the TLI and SRMR (between) seem to indicate poorer fit. What could be the reason for these discrepancies? Are you aware of cutoff values for these fit indices for multilevel models? Thank you very much!
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 3:02 pm
Lack of model fit can be caused by many problems. I don't know of any cutoffs specific to multilevel models.
 Eva posted on Wednesday, September 26, 2012 - 5:51 am
Would anyone happen to know whether, by now, any guidelines have been established for cutoff values for fit indices in multilevel SEM?
 Linda K. Muthen posted on Wednesday, September 26, 2012 - 1:48 pm
You should post this on SEMNET or Multilevel net. They should know this.
 Karen Kegel posted on Monday, May 06, 2013 - 9:40 am
Dear Drs. Muthen,

I am testing SEM model fit for 4 sequential, multiple mediation models. However, the fit index results I get with Mplus are all the same, which is highly unexpected. One example of a model is:

UDO ON HS;
SC ON HS UDO;
PD ON HS UDO SC;

Another is:

SC ON UDO;
HS ON UDO SC;
PD ON UDO SC HS;

These are very different models, yet I get the same fit index results for both. Is there something I'm missing in my syntax that should be used to indicate the sequence of mediations each model proposes? Thanks!
 Linda K. Muthen posted on Monday, May 06, 2013 - 11:00 am
Please send the two outputs and your license number to support@statmodel.com.
 Aleksandra Holod posted on Thursday, November 14, 2013 - 3:51 pm
I am testing a path model and receiving fit indices that appear unrealistically high (RMSEA=0, TLI/CFI=1) in model output that includes an error message saying,
"THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.213D-16. PROBLEM INVOLVING PARAMETER 226."

I am using MLR estimation with survey weights and clustered standard errors to account for nested sampling design (children within schools).

I believe the error message is due to the fact that I have dichotomous covariates, which I allow to covary for the purposes of the FIML approach to missing data. My sample size is over 16,000 and I have no latent variables, so I believe this is not a model identification problem.

When I remove parameter 226, I get the same error message for another covariance between two dichotomous covariates. I have also experimented with setting numerous other covariate paths to zero, but the fit indices and error message remain the same (except for the parameter number).

So, should I assume that I have a model with excellent fit and ignore the error message? Or is there some other alternative?

Thank you,
Aleksandra
 Linda K. Muthen posted on Thursday, November 14, 2013 - 6:16 pm
Remove the WITH statements involving the dichotomous covariates. If the message disappears, you can put the statements back and ignore the message. It is triggered because the mean and variance of a dichotomous variable are not orthogonal.
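A minimal sketch of the kind of statement involved (y, x1, and x2 are hypothetical variables, not the poster's, with x1 and x2 dichotomous covariates):

MODEL:
  y ON x1 x2;    ! structural part, unchanged
  x1 WITH x2;    ! covariance that brings the covariates into the likelihood
                 ! for FIML; removing this line is the diagnostic step
                 ! suggested above, after which it can be put back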
 Aleksandra Holod posted on Friday, November 15, 2013 - 9:28 am
Hi Linda,

Yes, when I remove the WITH statement for the covariates, the error message goes away. Thank you for that suggestion!

However, when I remove that WITH statement, I still have perfect fit statistics (RMSEA=0, CFI/TLI=1). That seems implausible to me. Is it really possible for an empirical model to have perfect fit?

Could this be caused by shared method variance? The data for the independent and mediator variables were gathered via survey from a single respondent, i.e., the mother of each child. The dependent variables are direct assessments of children's literacy and numeracy skills.

Thank you for any insight you can provide.

Best,
Aleksandra
 Linda K. Muthen posted on Friday, November 15, 2013 - 11:50 am
Your model must have zero degrees of freedom to get those values.
 Aleksandra Holod posted on Friday, November 15, 2013 - 12:04 pm
Actually, the model has 2 degrees of freedom.
 Aleksandra Holod posted on Friday, November 15, 2013 - 12:15 pm
So, to clarify, my question is: How is it possible for a model with 2 degrees of freedom to have perfect fit statistics? Thank you!
 Linda K. Muthen posted on Friday, November 15, 2013 - 1:19 pm
Please send the output and your license number to support@statmodel.com.
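For reference, the conventional definitions show how a model with positive degrees of freedom can still report RMSEA = 0 and CFI = 1: both indices reach their bounds whenever the model chi-square is no larger than its degrees of freedom. A sketch using standard single-group formulas (Mplus's exact computation may differ in small details, e.g. n versus n - 1):

\[
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_M - df_M}{df_M\, n},\, 0\right)}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)},
\]

where M denotes the fitted model, B the baseline model, and n the sample size. With 2 degrees of freedom and a well-fitting model, \(\chi^2_M \le df_M\) is quite possible, so these values can occur without the model being saturated.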
 Yao Wen posted on Thursday, February 13, 2014 - 11:26 am
Hi Linda,

I ran a cross-classified model using Bayesian estimator. I found no model fit indices were reported in the output. I attached part of my syntax below.

ANALYSIS: TYPE = crossclassified random; ESTIMATOR = BAYES;
PROCESSORS = 2; CHAINS=2; BITERATIONS = (20000);
MODEL:
%WITHIN%
s1-s4 | lit_w by y1-y4 ;
lit_w on hisp;
y1-y4;
lit_w;
[lit_w@0];

%BETWEEN cl2%
s1-s4 ;

%BETWEEN tid%
s1-s4 ;

OUTPUT: TECH1 TECH8 TECH4 TECH10 STANDARDIZED SVALUES;

I received warning messages below.

*** WARNING in OUTPUT command
STANDARDIZED (STD, STDY, STDYX) options are not available for TYPE=RANDOM.
Request for STANDARDIZED (STD, STDY, STDYX) is ignored.
*** WARNING in OUTPUT command
TECH4 option is not available for TYPE=RANDOM.
Request for TECH4 is ignored.
*** WARNING in OUTPUT command
TECH10 option is only available with categorical or count outcomes.
Request for TECH10 is ignored.


Is there a way to obtain model fit indices in this case?

Thank you for your time!
 Bengt O. Muthen posted on Friday, February 14, 2014 - 11:19 am
That has not been developed yet.
 Ellen posted on Saturday, June 28, 2014 - 12:49 am
I was running a multilevel path analysis with a binary variable (the mediator), using the MLR estimator.
I also used Type=complex twolevel random. I have some questions about the model.

1. I was not getting the regular fit indices (chi-square, CFI, TLI, RMSEA); only AIC and BIC were reported.
I wonder if I can get chi-square and other fit indices for the fitted model.

2. I'd like to compute the marginal effect of the indirect effect.
The model is as follows.
Y on M X
M on X

M is binary; Y is a continuous variable.
Generally, when computing the marginal effect of a binary variable, we multiply the unstandardized coefficient by (1 - mean of the latent variable).
For the marginal effect of the indirect effect, do we have to use this general method or some other approach?
 Bengt O. Muthen posted on Saturday, June 28, 2014 - 6:29 pm
1. These are not available with Type=Random because a random slope implies that the DV variance changes over observations so that there isn't a single covariance matrix to test.

2. This is a big and complex topic that is complicated by the binary mediator and the two-level model with Type=Random. My mediation papers on our website deal with the first issue, and our Topic 7 handout and video deal with the second issue.

I am not aware of the approach of that multiplication you mention.