Anonymous posted on Monday, June 25, 2001 - 11:57 am
We are running a multi-level model (Version 2) in which we are testing the stability of a measurement model over two time periods. The model converges, and we have a non-significant chi-square value with 17 degrees of freedom. However, our RMSEA is 0.000, CFI is 1.0, and TLI is 1.021. Can you provide some insight into why we would be getting these results?
Your RMSEA, CFI and TLI values suggest a very good model fit and chi-square does not disagree with that.
Tom Munk posted on Thursday, November 17, 2005 - 12:19 pm
I am testing a multilevel SEM. MLR provides CFI, TLI, RMSEA, SRMR(b), and SRMR(w). But it also provides a warning against using chi-square difference tests. Can all of these fit indices be used with the same standards as a single-level SEM?
A web search finds class notes from Newsome suggesting: CFI and TLI > .95; SRMR < .08; RMSEA < .06.
I'm wondering how we determine a good-fitting model in multilevel analysis. Looking at the output from Mplus User's Guide example 9.9, the tests of model fit are given as:
TESTS OF MODEL FIT

Loglikelihood
    H0 Value                       -6752.350

Information Criteria
    Number of Free Parameters             23
    Akaike (AIC)                   13550.700
    Bayesian (BIC)                 13663.578
    Sample-Size Adjusted BIC       13590.529
      (n* = (n + 2) / 24)
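As a sanity check, information criteria like those above can be reproduced from the printed loglikelihood and the number of free parameters. A minimal Python sketch; the sample size n = 1000 is an assumption here (it is the value that reproduces the printed BIC figures):

```python
import math

# Values from the TESTS OF MODEL FIT output above; n = 1000 is an assumption
# (it is the sample size that reproduces the printed BIC values).
ll, p, n = -6752.350, 23, 1000   # loglikelihood, free parameters, sample size

aic   = -2 * ll + 2 * p
bic   = -2 * ll + p * math.log(n)
nstar = (n + 2) / 24                    # sample-size adjustment shown above
abic  = -2 * ll + p * math.log(nstar)

print(round(aic, 3), round(bic, 3), round(abic, 3))
# → 13550.7 13663.578 13590.529
```

If these match the printed values, the loglikelihood, parameter count, and sample size are being read correctly.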
What are the cutoffs for these values? From what I understand, the higher (less negative) the loglikelihood, the better the model fits. But is there a statistical test for this value? Can we transform it to a chi-square distribution? If yes, can we conduct a chi-square difference test between an unconditional model (no predictor at level two) and the target model?
thanks in advance for your help,
bmuthen posted on Wednesday, November 23, 2005 - 6:35 pm
For general multilevel models, no overall fit index has been developed. The usual indices are based on covariance matrix fitting, and this is not necessarily relevant when, as with random slope models, the variance varies across subjects. This is why you don't see fit indices in multilevel programs. Instead you should do what most statisticians do, namely consider a sequence of nested models and get LR chi-square tests as 2 times the loglikelihood difference.
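To make that concrete, here is a minimal sketch in Python; the two loglikelihoods and the parameter difference are hypothetical values, not taken from any run in this thread:

```python
# Hypothetical loglikelihoods from two nested runs
ll_h0 = -1030.20                  # restricted (H0) model
ll_h1 = -1018.95                  # less restricted (H1) model
df    = 4                         # difference in number of free parameters

lr = 2 * (ll_h1 - ll_h0)          # LR chi-square statistic
critical_05 = 9.488               # chi-square .05 critical value for 4 df
print(round(lr, 2), lr > critical_05)   # → 22.5 True
```

Note that with the MLR estimator this raw difference is not chi-square distributed and must additionally be adjusted using the scaling correction factors that Mplus reports.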
Just to make sure: I am being asked to report N for the chi-square (model fit index). Am I correct in assuming that, in the case of multilevel modeling, it is cluster size * number of individuals (the number of observations in the output)?
Thank you. As I am looking at whether individuals differentiate between different conditions, each individual forms a cluster. So, for the chi-square, I should report the number of clusters, and in my case, it is the number of individuals. Did I understand you correctly?
We use multilevel modeling so that conditions within individuals form the within level (we are looking at variance between different conditions within individual) and individuals form the between level (examining variance between individuals across the conditions).
1. In this example, the sample statistics consist of 4 means for the y variables, 10 variances and covariances for the y variables on the within level, 8 covariances between the x and y variables, 10 variances and covariances for the y variables on the between level, 4 covariances between the w and y variables. This is a total of 36. There are 19 free parameters so there are 17 degrees of freedom.
3. This is the only fit statistic that is provided for each part of the model.
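The bookkeeping in point 1 can be tallied directly; this sketch only reproduces the counts stated above:

```python
# Sample statistics for the two-level model described in point 1
within  = 4 + 10 + 8   # y means, within y (co)variances, x-with-y covariances
between = 10 + 4       # between y (co)variances, w-with-y covariances
total_statistics = within + between
free_parameters  = 19
df = total_statistics - free_parameters
print(total_statistics, df)   # → 36 17
```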
Suppose we have to choose between HLM-2 and HLM-3. Which test procedure should we use? Is there any model selection criterion for the HLM setup? We need to cite something similar to the Hausman test, which we use to select between fixed-effect and random-effect models (within the panel data framework).
Perhaps you can settle the issue of how important the level 3 clustering is by comparing two runs. First use Type = Complex Twolevel, where Complex handles the clustering at level 3 and Twolevel handles the clustering at level 2. Compare the SEs you get there with those from Type = Twolevel, which ignores the level 3 clustering.
Mplus does not do Hausman testing.
The choice between fixed and random effects is another, broader matter.
I would like to ask if the interpretation of fit indices such as CFI, TLI, and RMSEA for a multilevel model is the same as that for a single-level model. I read above that it may not be appropriate to use the cutoffs from single-level models for multilevel models. So are there other rules of thumb for these fit indices in multilevel models? How do we use fit indices such as CFI, TLI, and RMSEA to evaluate model fit?
Besides, I have fit a single-level model and a multilevel model to the same data set. The resulting TLI and RMSEA showed a great drop in model fit, but the CFI remained more or less the same. Why would that be?
See the following paper which is on our website for random slopes:
Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press.
Sally Czaja posted on Thursday, December 03, 2009 - 11:43 am
Hello. I'm trying to find out whether a whole-group model or one with two groups better fits my data. Nested model syntax:

USEVARIABLES ARE female raceWb ageint1 acrimyn poverty neighpov;
CLASSES = c(2);
KNOWNCLASS = c(grp=0 grp=1);
WITHIN = female raceWb ageint1 poverty;
CLUSTER = census;
BETWEEN = neighpov;
CATEGORICAL = acrimyn;
ANALYSIS: TYPE = TWOLEVEL MIXTURE;
MODEL:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn ON neighpov;
In the comparison model, everything is the same as above except for the following model specification.

MODEL:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%c#1%
acrimyn ON female raceWb ageint1 poverty;
%c#2%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn ON neighpov;
%c#1%
acrimyn ON neighpov;
%c#2%
acrimyn ON neighpov;
1) Is my modeling approach correct? 2) I'm using loglikelihood difference testing to compare the fit of the models. Is this correct? Are there other ways of comparing model fit? 3) If the loglikelihood difference test is not significant, does that indicate that the nested model explains the data better than the comparison model? Thank you.
This sounds correct. If the constrained model does not worsen model fit, then the parameters are equal across groups.
Murphy T. posted on Wednesday, October 19, 2011 - 12:15 am
I estimated a two-level model and get the following fit indices for my model:

RMSEA: 0.058
CFI: 0.967
TLI: 0.845
SRMR (within): 0.010
SRMR (between): 0.194
The RMSEA and CFI seem to look quite good (by conventional cutoff values), but the TLI and SRMR (between) seem to indicate poorer fit. What could be the reason for these discrepancies? Are you aware of cutoff values for these fit indices for multilevel models? Thank you very much!
I am testing SEM model fit for 4 sequential, multiple mediation models. The fit index results I get with Mplus are all the same, however, which is highly unexpected. One example of a model is:
UDO ON HS;
SC ON HS UDO;
PD ON HS UDO SC;

and another is:

SC ON UDO;
HS ON UDO SC;
PD ON UDO SC HS;
These are very different models, yet I get the same fit index results for both. Is there something I'm missing in my syntax that should be used to indicate the sequence of mediations each model proposes? Thanks!
I am testing a path model and receiving fit indices that appear unrealistically high (RMSEA=0, TLI/CFI=1) in model output that includes an error message saying, "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.213D-16. PROBLEM INVOLVING PARAMETER 226."
I am using MLR estimation with survey weights and clustered standard errors to account for nested sampling design (children within schools).
I believe the error message is due to the fact that I have dichotomous covariates, which I allow to covary for the purposes of the FIML approach to missing data. My sample size is over 16,000 and I have no latent variables, so I believe this is not a model identification problem.
When I remove parameter 226, I get the same error message for another covariance between two dichotomous covariates. I have also experimented with setting numerous other covariate paths to zero, but the fit indices and error message remain the same (except for the parameter number).
So, should I assume that I have a model with excellent fit and ignore the error message? Or is there some other alternative?
Remove the WITH statements involving the dichotomous covariates. If the message disappears, you can put the statements back and ignore the message. It is triggered because the mean and variance of a dichotomous variable are not orthogonal.
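The non-orthogonality is easy to see: for a 0/1 variable the maximum-likelihood variance is p(1 − p), a deterministic function of the mean, so mean and variance carry no independent information. A small illustration (the simulated covariate and its success probability are made up):

```python
import random

random.seed(0)
# Simulate a hypothetical dichotomous covariate with success probability 0.3
x = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]

n = len(x)
mean = sum(x) / n
var  = sum((xi - mean) ** 2 for xi in x) / n   # ML (divide-by-n) variance

# For 0/1 data the ML variance equals mean * (1 - mean) exactly
print(abs(var - mean * (1 - mean)) < 1e-9)     # → True
```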
Yes, when I remove the WITH statement for the covariates, the error message goes away. Thank you for that suggestion!
However, when I remove that WITH statement, I still have perfect fit statistics (RMSEA = 0, CFI/TLI = 1). That seems implausible to me. Is it really possible for an empirical model to have perfect fit?
Could this be caused by shared method variance? The data for the independent and mediator variables were gathered via survey from a single respondent, i.e., the mother of each child. The dependent variables are direct assessments of children's literacy and numeracy skills.
*** WARNING in OUTPUT command
  STANDARDIZED (STD, STDY, STDYX) options are not available for TYPE=RANDOM.
  Request for STANDARDIZED (STD, STDY, STDYX) is ignored.
*** WARNING in OUTPUT command
  TECH4 option is not available for TYPE=RANDOM.
  Request for TECH4 is ignored.
*** WARNING in OUTPUT command
  TECH10 option is only available with categorical or count outcomes.
  Request for TECH10 is ignored.
Is there a way to obtain model fit indices in this case?
Ellen posted on Saturday, June 28, 2014 - 12:49 am
I was running a multi-level path analysis with a binary mediator variable, using the MLR estimator. I also used Type = Complex Twolevel Random. I have some questions about the model.
1. I was not getting the regular fit indices (chi-square, CFI, TLI, RMSEA); only AIC and BIC were reported. I wonder if I can get chi-square and the other fit indices for the fitted model.
2. I'd like to compute the marginal effect of the indirect effect. The model is as follows:

Y ON M X;
M ON X;

M is binary and Y is a continuous variable. Generally, when computing the marginal effect of a binary variable, we multiply the unstandardized coefficient by (1 − mean of the latent variable). For the marginal effect of the indirect effect, do we have to use the general method or some other way?
1. These are not available with Type=Random because a random slope implies that the DV variance changes over observations so that there isn't a single covariance matrix to test.
2. This is a big and complex topic that is complicated by the binary mediator and the two-level model with Type = Random. My mediation papers on our website deal with the first issue, and our Topic 7 handout and video deal with the second issue.
I am not aware of the approach of that multiplication you mention.
Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4.
Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3.
May Lee posted on Tuesday, November 15, 2016 - 10:55 am
I was running a level 1 model with nested data using Type = Twolevel analysis (level 2 has only 21 clusters). The MODEL FIT INFORMATION is below:
Number of Free Parameters 22
Loglikelihood
    H0 Value                        -269.424
    H0 Scaling Correction Factor      1.5769
      for MLR
    H1 Value                        -269.432
    H1 Scaling Correction Factor      1.5769
      for MLR
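When H0 and H1 scaling correction factors like these are available, nested MLR models can be compared with the scaled (Satorra-Bentler-type) chi-square difference rather than the raw loglikelihood difference. A sketch of the standard scaled-difference correction, with entirely hypothetical numbers:

```python
# Loglikelihood, scaling correction factor, and free parameters for a
# hypothetical nested (0) and comparison (1) pair of MLR runs
l0, c0, p0 = -2610.4, 1.45, 22
l1, c1, p1 = -2598.8, 1.52, 27

cd  = (p0 * c0 - p1 * c1) / (p0 - p1)   # difference-test scaling correction
trd = 2 * (l1 - l0) / cd                # scaled chi-square difference
df  = p1 - p0                           # refer trd to chi-square with df
print(round(cd, 3), round(trd, 2), df)  # → 1.828 12.69 5
```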
Hello, I am testing an SEM with at least one categorical dependent variable. I have used the WLSMV estimator and my results are as follows: chi-square(376, N = 865) = 987.996, p < .01, CFI = 0.828, RMSEA = .043. The CFI value indicates that my model does not fit the data well, but the RMSEA seems to indicate that it does. My model is complex (one latent variable and 23 observed variables), and I am wondering if the CFI is not the best indicator of model fit to use in this context. Also, my data are non-normal, and I am wondering if this could affect the fit statistics.
Look at modification indices to see if the model can be improved.
Sophie Dan posted on Wednesday, April 26, 2017 - 7:24 am
Dear Dr. Muthen,
If I do just a between-level EFA, the model fit cannot be accepted, but when I do the two-level EFA together with the within level, the model fit is acceptable. Can I just use the two-level EFA directly? Could the poor model fit when doing the between-level EFA separately be due to a limited number of clusters (for example, with 13 variables but only 45 clusters)? If the number of clusters is limited, is even the two-level (within + between) result untrustworthy?
1. I wonder why my CFI is so low. I understand that CFI compares the proposed model against the null model, and that a low CFI may indicate high correlations between variables.
I used the MODINDICES option, but none of the suggested modifications fits my theory. Could you please suggest how I can improve this model?
2. I did not use latent factor modeling. This is merely a path analysis with ordinal categorical variables. Should I even be worried about the model fit? I think a DIFFTEST may be more reasonable for indicating the explanatory power of specific variables.