Message/Author 

Anonymous posted on Monday, June 25, 2001  11:57 am



We are running a multilevel model (Version 2) in which we are testing the stability of a measurement model over two time periods. The model converges, and we have a nonsignificant chi-square value with 17 degrees of freedom. However, our RMSEA is 0.000, CFI is 1.0, and TLI is 1.021. Can you provide some insight into why we would be getting these results? 

bmuthen posted on Monday, June 25, 2001  4:42 pm



Your RMSEA, CFI, and TLI values suggest a very good model fit, and the chi-square does not disagree with that. 

Tom Munk posted on Thursday, November 17, 2005  12:19 pm



I am testing a multilevel SEM. MLR provides CFI, TLI, RMSEA, SRMR(b), and SRMR(w). But it also provides a warning against using chi-square difference tests. Can all of these fit indices be used with the same standards as a single-level SEM? A web search finds class notes from Newsome suggesting: >.95 for CFI and TLI, <.08 for SRMR, <.06 for RMSEA. 


The studies used to come up with cutoffs for fit measures have not been based on multilevel analysis, so they may not be appropriate for these models. 


Hello Linda and Bengt, I'm wondering how we determine a good-fitting model in multilevel analysis, looking at the output from Mplus User's Guide example 9.9. Tests of model fit are given as:

TESTS OF MODEL FIT
Loglikelihood
    H0 Value                      -6752.350
Information Criteria
    Number of Free Parameters            23
    Akaike (AIC)                  13550.700
    Bayesian (BIC)                13663.578
    Sample-Size Adjusted BIC      13590.529
      (n* = (n + 2) / 24)

What are the cutoffs for these values? From what I understand, the more negative the loglikelihood gets, the better the model fits. But is there a statistical test for this value? Can we transform it to a chi-square distribution? If yes, can we conduct a chi-square difference test between an unconditional model (no predictor at level two) and the target model? Thanks in advance for your help, Pancho 
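[As a side note, those information criteria can be reproduced by hand from the loglikelihood. A minimal Python sketch, assuming the H0 loglikelihood is -6752.350 (the reported value, taken as negative) with 23 free parameters, and a hypothetical sample size of n = 1000, which is not stated in the excerpt but reproduces the reported BIC:]

```python
import math

ll = -6752.350   # H0 loglikelihood from the output above
k = 23           # number of free parameters
n = 1000         # assumed sample size (hypothetical; chosen to match the reported BIC)

aic = 2 * k - 2 * ll                   # Akaike (AIC)
bic = k * math.log(n) - 2 * ll         # Bayesian (BIC)
n_star = (n + 2) / 24                  # sample-size adjustment shown in the output
sabic = k * math.log(n_star) - 2 * ll  # Sample-Size Adjusted BIC

print(round(aic, 3), round(bic, 3), round(sabic, 3))
```

[If the assumed n is right, this reproduces all three reported values to three decimals.]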

bmuthen posted on Wednesday, November 23, 2005  6:35 pm



For general multilevel models, no overall fit index has been developed. The usual indices are based on covariance-matrix fitting, and this is not necessarily relevant when, as with random slope models, the variance varies across subjects. This is why you don't see fit indices in multilevel programs. Instead you should do what most statisticians do, namely consider a sequence of nested models and get LR chi-square tests as 2 times the loglikelihood difference. 
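[The nested-model comparison described above can be sketched numerically. A minimal Python illustration with made-up loglikelihood values (not from any output in this thread); 5.991 is the .95 chi-square quantile for 2 df:]

```python
# Likelihood-ratio chi-square for two nested models:
# LR = 2 * (loglikelihood difference), df = difference in free parameters.
ll_nested = -6760.500   # hypothetical loglikelihood, restricted model (21 parameters)
ll_full   = -6752.350   # hypothetical loglikelihood, larger model (23 parameters)

lr = 2 * (ll_full - ll_nested)   # LR chi-square statistic
df = 23 - 21                     # difference in number of free parameters

# Compare with the chi-square critical value at alpha = .05 for 2 df
reject = lr > 5.991
print(lr, df, reject)
```

[A significant result means the restrictions of the nested model worsen fit; with MLR loglikelihoods, a scaling correction to this difference would also be needed, which this sketch omits.]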


Just to make sure: I am being asked to report N for the chi-square (model fit index). Am I correct in assuming that in the case of multilevel modeling, it is cluster size * number of individuals (the number of observations in the output)? Thank you! 


In multilevel modeling, the number of observations reported is the N. N is the number of clusters only if the unit of analysis is the cluster. 


Thank you. As I am looking at whether individuals differentiate between different conditions, each individual forms a cluster. So, for the chi-square, I should report the number of clusters, which in my case is the number of individuals. Did I understand you correctly? 


I don't understand where your clustering comes in if you have one individual per cluster. 


We use multilevel modeling so that conditions within individuals form the within level (we are looking at variance between different conditions within individuals) and individuals form the between level (examining variance between individuals across the conditions). 


N is the number of individuals and you have several members (conditions) per cluster. 


Yes, that is the case. So, I will report the number of individuals (clusters) for the chi-square. Thank you for your time. 


Dr. Muthen, I got the results after running example 9.6 in the Mplus User's Guide. I got the Chi-Square Test of Model Fit (3.864) with 17 df.
Q1. How does Mplus calculate the df?
I got CFI, TLI, AIC, BIC, RMSEA, SRMR.
Q2. Are these fit indices for the overall model?
Q3. Why does Mplus provide SRMR for the between and within models, respectively? Could I get other fit indices for the between and within models?
Many thanks, Hsien-Yuan Hsu 


1. In this example, the sample statistics consist of 4 means for the y variables, 10 variances and covariances for the y variables on the within level, 8 covariances between the x and y variables, 10 variances and covariances for the y variables on the between level, and 4 covariances between the w and y variables. This is a total of 36. There are 19 free parameters, so there are 17 degrees of freedom.
2. Yes.
3. This is the only fit statistic that is provided for each part of the model. 
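[The bookkeeping in point 1 can be written out explicitly. A small Python sketch of the count; the 2 within-level x variables and 1 between-level w variable are not stated directly above but are implied by the covariance counts (8 = 4*2 and 4 = 4*1):]

```python
p = 4      # number of y variables
n_x = 2    # within-level covariates (implied by 8 x-y covariances)
n_w = 1    # between-level covariate (implied by 4 w-y covariances)

means_y = p                          # 4 means
within_varcov = p * (p + 1) // 2     # 10 within variances/covariances
cov_xy = p * n_x                     # 8 x-y covariances
between_varcov = p * (p + 1) // 2    # 10 between variances/covariances
cov_wy = p * n_w                     # 4 w-y covariances

sample_stats = means_y + within_varcov + cov_xy + between_varcov + cov_wy
free_parameters = 19
df = sample_stats - free_parameters
print(sample_stats, df)
```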


Prof. Muthen, Suppose we have to choose between HLM2 and HLM3. Which test procedure should we use? Is there any model selection criterion for the HLM setup? We need to cite something similar to the Hausman test, the test we use to select between fixed-effects and random-effects models (within the panel data framework). Could we do the test in Mplus? Thanks and regards, Sanjoy 


Perhaps you can settle the issue of how important the level-3 clustering is by comparing two runs. First use Type = Complex Twolevel, where Complex deals with clustering on level 3 and Twolevel deals with clustering on level 2. Compare the SEs you get there with those of Type = Twolevel, ignoring the level-3 clustering. Mplus does not do Hausman testing. The choice between fixed and random effects is another, broader matter. 


Thank you, Professor. I can see the point you made. Regards, Sanjoy 

Joyce Kwan posted on Thursday, July 03, 2008  1:26 am



Dear Professors, I would like to ask whether the interpretation of fit indices such as CFI, TLI, and RMSEA for a multilevel model is the same as that for a single-level model. I read above that it may not be appropriate for us to use the cutoffs for fit measures that are used for single-level models on multilevel models. So are there other rules of thumb for using these fit indices with multilevel models? How do we use fit indices such as CFI, TLI, and RMSEA to evaluate model fit? Besides, I have fit a single-level model and a multilevel model to the same data set. The resulting TLI and RMSEA showed a great drop in model fit, but the CFI remained more or less the same. Why would this be the case? Thanks 


I do not know of a study where cutoffs have been examined for multilevel models. I would use those for single-level models. I can't explain your findings in comparing a single-level and a multilevel model. 

Elif Çoker posted on Wednesday, May 27, 2009  8:36 am



Hi, my first question is: which formula is used to calculate the loglikelihood and the relevant covariance matrices for multilevel path models in Mplus? Can you please give an exact reference? And lastly, is there a new option to save the matrices in the normal, exactly dimensioned matrix format rather than a mixed one saved out of order? Thanks so much already, Elif 


See the following paper for random intercepts:

Muthén, B. (1990). Mean and covariance structure analysis of hierarchical data. Paper presented at the Psychometric Society meeting in Princeton, NJ, June 1990. UCLA Statistics Series 62.

You can download it from the following link, where it is paper #32: http://www.gseis.ucla.edu/faculty/muthen/full_paper_list.htm

See the following paper, which is on our website, for random slopes:

Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press. 

Sally Czaja posted on Thursday, December 03, 2009  11:43 am



Hello. I'm trying to find out whether a whole-group model or one with two groups better fits my data. Nested model syntax:

USEVARIABLES ARE female raceWb ageint1 acrimyn poverty neighpov;
CLASSES = c(2);
KNOWNCLASS = c(grp=0 grp=1);
WITHIN = female raceWb ageint1 poverty;
CLUSTER = census;
BETWEEN = neighpov;
CATEGORICAL = acrimyn;
ANALYSIS: TYPE = TWOLEVEL MIXTURE;
Model:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;

In the comparison model, everything is the same as above except for the following model specification.

Model:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%c#1%
acrimyn ON female raceWb ageint1 poverty;
%c#2%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;
%c#1%
acrimyn on neighpov;
%c#2%
acrimyn on neighpov;

1) Is my modeling approach correct?
2) I'm using loglikelihood difference testing to compare the fit of the models. Is this correct? Are there any other ways of comparing model fit?
3) If the loglikelihood difference test is not significant, does that indicate that the nested model explains the data better than the comparison model? Thank you. 


This sounds correct. If the constrained model does not worsen model fit, then the parameters are equal across groups. 

Murphy T. posted on Wednesday, October 19, 2011  12:15 am



Dear professors, I estimated a two-level model and get the following fit indices for my model:

RMSEA: 0.058
CFI: 0.967
TLI: 0.845
SRMR (within): 0.010
SRMR (between): 0.194

The RMSEA and CFI seem to look quite good (by conventional cutoff values), but the TLI and SRMR (between) seem to indicate poorer fit. What could be the reason for these discrepancies? Are you aware of cutoff values for these fit indices for multilevel models? Thank you very much! 


Lack of model fit can be caused by many problems. I don't know of any cutoffs specific to multilevel models. 

Eva posted on Wednesday, September 26, 2012  5:51 am



Would anyone happen to know whether, by now, any guidelines have been established for cutoff values for fit indices in multilevel SEM? 


You should post this on SEMNET or Multilevel net. They should know this. 


Dear Drs. Muthen, I am testing SEM model fit for 4 sequential, multiple-mediation models. The fit index results I get with Mplus are all the same, however, which is highly unanticipated. One example of a model is:

UDO ON HS;
SC ON HS UDO;
PD ON HS UDO SC;

Another is:

SC ON UDO;
HS ON UDO SC;
PD ON UDO SC HS;

These are very different models, yet I get the same fit index results for both. Is there something I'm missing in my syntax that should be used to indicate the sequence of mediations each model proposes? Thanks! 


Please send the two outputs and your license number to support@statmodel.com. 


I am testing a path model and receiving fit indices that appear unrealistically high (RMSEA = 0, TLI/CFI = 1) in model output that includes an error message saying, "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.213D-16. PROBLEM INVOLVING PARAMETER 226." I am using MLR estimation with survey weights and clustered standard errors to account for the nested sampling design (children within schools). I believe the error message is due to the fact that I have dichotomous covariates, which I allow to covary for the purposes of the FIML approach to missing data. My sample size is over 16,000 and I have no latent variables, so I believe this is not a model identification problem. When I remove parameter 226, I get the same error message for another covariance between two dichotomous covariates. I have also experimented with setting numerous other covariate paths to zero, but the fit indices and error message remain the same (except for the parameter number). So, should I assume that I have a model with excellent fit and ignore the error message? Or is there some other alternative? Thank you, Aleksandra 


Remove the WITH statements involving the dichotomous covariates. If the message disappears, you can put the statements back and ignore the message. It is triggered because the mean and variance of a dichotomous variable are not orthogonal. 
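[The non-orthogonality point can be seen directly: for a 0/1 variable, the ML variance estimate is an exact function of the mean, so the two moments carry no independent information. A quick Python check with made-up binary data:]

```python
x = [1, 0, 0, 1, 1, 1, 0, 1, 0, 1]   # hypothetical 0/1 data
n = len(x)
m = sum(x) / n                                 # sample mean (here 0.6)
var_ml = sum((xi - m) ** 2 for xi in x) / n    # ML variance (divide by n)

# For dichotomous data the ML variance is exactly m * (1 - m), so the
# mean and variance estimates are functionally dependent, not orthogonal.
print(m, var_ml, m * (1 - m))
```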


Hi Linda, yes, when I remove the WITH statement for the covariates, the error message goes away. Thank you for that suggestion! However, when I remove that WITH statement, I still have perfect fit statistics (RMSEA = 0, CFI/TLI = 1). That seems implausible to me. Is it really possible for an empirical model to have perfect fit? Could this be caused by shared method variance? The data for the independent and mediator variables were gathered via survey from a single respondent, i.e., the mother of each child. The dependent variables are direct assessments of children's literacy and numeracy skills. Thank you for any insight you can provide. Best, Aleksandra 


Your model must have zero degrees of freedom to get those values. 


Actually, the model has 2 degrees of freedom. 


So, to clarify, my question is: How is it possible for a model with 2 degrees of freedom to have perfect fit statistics? Thank you! 


Please send the output and your license number to support@statmodel.com. 

Yao Wen posted on Thursday, February 13, 2014  11:26 am



Hi Linda, I ran a cross-classified model using the Bayes estimator. I found that no model fit indices were reported in the output. I attached part of my syntax below.

ANALYSIS:
TYPE = CROSSCLASSIFIED RANDOM;
ESTIMATOR = BAYES;
PROCESSORS = 2;
CHAINS = 2;
BITERATIONS = (20000);
MODEL:
%WITHIN%
s1-s4 | lit_w by y1-y4;
lit_w on hisp;
y1-y4;
lit_w;
[lit_w@0];
%BETWEEN cl2%
s1-s4;
%BETWEEN tid%
s1-s4;
OUTPUT: TECH1 TECH8 TECH4 TECH10 STANDARDIZED SVALUES;

I received the warning messages below.

*** WARNING in OUTPUT command
STANDARDIZED (STD, STDY, STDYX) options are not available for TYPE=RANDOM. Request for STANDARDIZED (STD, STDY, STDYX) is ignored.
*** WARNING in OUTPUT command
TECH4 option is not available for TYPE=RANDOM. Request for TECH4 is ignored.
*** WARNING in OUTPUT command
TECH10 option is only available with categorical or count outcomes. Request for TECH10 is ignored.

Is there a way to obtain model fit indices in this case? Thank you for your time! 


That has not been developed yet. 

Ellen posted on Saturday, June 28, 2014  12:49 am



I was running a multilevel path analysis with a binary variable (the mediator), using the MLR estimator. I also used Type = complex twolevel random. I have some questions about the model.
1. I was not getting the regular fit indices (chi-square, CFI, TLI, RMSEA); only AIC and BIC were reported. I wonder if I can get chi-square and other fit indices for the fitted model.
2. I'd like to compute marginal effects for the indirect effect. The model is as follows:

Y on M X
M on X

M is binary, Y is a continuous variable. Generally, when computing the marginal effect of a binary variable, we multiply the unstandardized coefficient by (1 - mean of latent variable). For the marginal effects of the indirect effect, do we have to use the general method or other ways? 


1. These are not available with Type = Random because a random slope implies that the DV variance changes over observations, so there isn't a single covariance matrix to test.
2. This is a big and complex topic that is complicated by the binary mediator and the two-level model with Type = Random. My mediation papers on our website deal with the first issue, and our Topic 7 handout and video deal with the second issue. I am not aware of the multiplication approach you mention. 
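[The reason behind point 1 can be written out. For a generic two-level random-slope model (standard HLM notation; the symbols are generic, not the poster's variables):]

```latex
y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + r_{ij}, \qquad
\beta_{0j} = \gamma_{00} + u_{0j}, \qquad
\beta_{1j} = \gamma_{10} + u_{1j},
```

[with $\mathrm{Var}(u_{0j}) = \tau_{00}$, $\mathrm{Var}(u_{1j}) = \tau_{11}$, $\mathrm{Cov}(u_{0j}, u_{1j}) = \tau_{01}$, and $\mathrm{Var}(r_{ij}) = \sigma^2$. The conditional variance of the outcome is then]

```latex
\mathrm{Var}(y_{ij} \mid x_{ij})
  = \tau_{00} + 2\,\tau_{01}\,x_{ij} + \tau_{11}\,x_{ij}^{2} + \sigma^{2},
```

[which depends on each observation's $x_{ij}$, so there is no single model-implied covariance matrix to compare against a sample covariance matrix.]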


Hi Profs. Muthen, the fit indices for one of my models are as follows (multilevel, with only a moderating variable at level 2 and the interacting variable at level 1):

RMSEA (Root Mean Square Error Of Approximation)
    RMSEA                0.125
CFI                      0.825
TLI                      0.703
Chi-Square Test of Model Fit for the Baseline Model
    Value                1036.788
    Degrees of Freedom   78
    P-Value              0.0000
SRMR (Standardized Root Mean Square Residual)
    Value for Within     0.196
    Value for Between    0.000

1. Is there any empirical reference you can provide with respect to assessing the fit of a multilevel model?
2. Is there anything I can do to improve this fit? 


You may want to ask this general analysis question on SEMNET. You need to show the full input for the model. Also include the chi-square fit for the model. 


OK, Dr. Muthen. Certainly, thank you. Regards 

Qiao Hu posted on Wednesday, November 25, 2015  6:51 am



Are there any cutoffs for the PPP value (posterior predictive p-value) of model fit in BSEM? 


Not really, but see the papers on our website:

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4.

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3. 
