Hi. I am running an EFA with 12 categorical variables, and all load strongly on a single factor (loadings range from .771 to .953). However, while my chi-square is significant and the CFI = .989 and TLI = .987, the RMSEA is .184 and the SRMR is .061. Can you suggest any reason for the high RMSEA? Are there any steps I should take to try to lower it? Is it possible that the scale is still valid, even with the RMSEA outside the "acceptable" range? Thank you.
|
|
The chi-square p-value should be greater than .05 for good fit. It sounds like you need to modify your model.
|
|
Linda, thank you for your quick response. The chi-square p-value being less than .05 can be attributed to the large sample size (n = 1285). Is there any reason that the CFI and TLI are good indicators of fit while the RMSEA is not? Thanks again.
|
|
If chi-square is not good, RMSEA will not be either. A low p-value for chi-square cannot always be attributed to sample size; it is often truly a sign of poor fit. You might instead ask why CFI and TLI are good. This could be because of low correlations among your variables.
|
|
Chi-square is not a reliable "fit index" since it is affected by sample size (it is almost always significant when N > 200). It is also affected by the complexity of the model (too many variables on one factor, as in your case). Check the normality of the data, since highly skewed and kurtotic variables also inflate chi-square values.
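The sample-size point can be made concrete with a small sketch. The numbers below are hypothetical (not from any post in this thread): if a model has a fixed amount of population misfit F0 (the population value of the ML discrepancy function), the expected chi-square is roughly df + (N - 1) * F0, so the statistic grows without bound as N grows, while RMSEA stays constant.

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the chi-square statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model: df = 54, population misfit F0 = 0.5.
# Expected chi-square is about df + (N - 1) * F0, so it grows
# linearly with N; RMSEA = sqrt(F0 / df) does not move at all.
df, f0 = 54, 0.5
for n in (200, 1000, 5000):
    chi2 = df + (n - 1) * f0
    print(n, round(chi2, 1), round(rmsea(chi2, df, n), 3))
```

With these (made-up) numbers the chi-square goes from about 154 to about 2554 as N goes from 200 to 5000, yet RMSEA stays at roughly 0.096 throughout, which is why chi-square alone is a poor guide in large samples.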
|
|
Hi, we are running a CFA with two factors in Mplus. We found the following fit indices: chi-square value: 6959.053, p-value: 0.0000, RMSEA: 0.130, CFI: 0.955, TLI: 0.944. All factor loadings were high, ranging between 0.603 and 0.882, and correlations among variables were high. Can you suggest any reason why the RMSEA is so high? The high chi-square can be attributed to the large sample size (n = 7661). Thank you!
|
|
I don't think these fit statistics show good fit. I would look at modification indices to see what is causing the misfit in the model. You might also consider an EFA to see if your CFA is correct for the data.
|
|
Dear Ms. Muthen, I'm running a CFA with three factors in Mplus (n > 1000). I got the following fit indices: chi-square: 1285, p: .00000, CFI: 0.982, TLI: 0.979, RMSEA: 0.093. Factor loadings are very high (.80 to .90), and factor correlations are high as well (.80). CFI/TLI are very good, while RMSEA is not, and chi-square is (unfortunately) significant. What do you think? Is the model fit OK? Can you recommend any papers accepting an RMSEA < .10 as still-appropriate model fit, or discussing the problem (RMSEA bad, CFI/TLI very good)? Thank you very much for your help in advance!
|
|
I would explore the fit of this model further. You don't say what your sample size is, but chi-square and RMSEA both show poor fit, and CFI and TLI are similar in that they compare to a baseline model.
|
|
My sample size is almost 1,200 persons.
|
|
This is not overly large.
|
Xu, Man posted on Tuesday, March 26, 2013 - 11:49 am
|
|
|
The previous posts didn't show their df. I ran into a similar situation, but came across a paper suggesting that RMSEA can be artificially high for models with small degrees of freedom: Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2011). The performance of RMSEA in models with small degrees of freedom. Unpublished paper, University of Connecticut.
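The small-df effect the paper describes is visible directly in the RMSEA formula: for a fixed excess of chi-square over df, dividing by df makes RMSEA large when df is small. The numbers below are hypothetical, chosen only to illustrate the mechanism.

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA: sqrt((chi2 - df) / (df * (N - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical comparison: same sample (N = 1000) and the same raw excess
# of chi-square over df (chi2 - df = 20); only df differs.
n, excess = 1000, 20.0
for df in (1, 5, 50):
    print(df, round(rmsea(df + excess, df, n), 3))
```

With df = 1 the same misfit yields an RMSEA of about 0.141; with df = 50 it drops to about 0.020. So a "high" RMSEA in a near-saturated model (df = 1 or 2) is not directly comparable to the same value in a model with many df.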
|
|
I'm working with PISA data, and I'm using repweights, so I don't get a chi-square or CFI/NFI estimates, but I get this: RMSEA estimate 0.056, 90 percent C.I. 0.053 to 0.059, probability RMSEA <= .05: 0.000; SRMR value 0.026. Can the unacceptable RMSEA probability be attributed to my large sample (N = 4686), or does the model really have unacceptable fit? My Mplus course teacher said that RMSEA and SRMR both under 0.06 show good fit, but I'm unsure because of the probability.
|
|
Please send the full output and your license number to support@statmodel.com.
|
|
I'm having a similar problem. I'm running a CFA with one factor with four indicators, and I'm including one covariance between two indicator error variances, as modification indices of the model without the covariance indicated it would dramatically improve model fit. I have a sample size of N = 1700. Chi-square is significant but has a very low value. My RMSEA is high, but CFI/TLI and SRMR are in a very good range. Should I be concerned about chi-square and RMSEA?

Chi-Square Test of Model Fit: Value 21.455, Degrees of Freedom 1, P-Value 0.0000
RMSEA: Estimate 0.109, 90 Percent C.I. 0.072 0.151, Probability RMSEA <= .05: 0.005
CFI/TLI: CFI 0.995, TLI 0.972
Chi-Square Test of Model Fit for the Baseline Model: Value 4429.274, Degrees of Freedom 6, P-Value 0.0000
SRMR: Value 0.007
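The pattern in this output (good CFI/TLI, poor RMSEA) can be checked by hand from the standard textbook formulas. The sketch below recomputes the indices from the chi-square values pasted in the post; small discrepancies from the printed Mplus values can arise from N versus N - 1 conventions in the RMSEA denominator.

```python
import math

# Values pasted in the post above.
chi2, df = 21.455, 1          # model chi-square
chi2_b, df_b = 4429.274, 6    # baseline-model chi-square
n = 1700                      # reported sample size

# Standard formulas (one common convention; Mplus may round differently).
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
cfi = 1 - max(chi2 - df, 0.0) / max(chi2_b - df_b, chi2 - df, 0.0)
tli = ((chi2_b / df_b) - (chi2 / df)) / ((chi2_b / df_b) - 1)

print(round(rmsea, 3), round(cfi, 3), round(tli, 3))
```

This reproduces CFI ≈ 0.995 and TLI ≈ 0.972 and an RMSEA of about 0.11: the baseline chi-square is enormous (4429 on 6 df), so the incremental indices look excellent, while the single-df chi-square excess drives the RMSEA up, exactly the small-df situation discussed elsewhere in this thread.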
|
|
Also, here are the correlations between the indicator variables; some are high, some moderate. Is this adding to the strange fit indices I'm getting?

Correlations:
         DEL     AGG     DIS     FIGHT
DEL    1.000
AGG    0.775   1.000
DIS    0.485   0.599   1.000
FIGHT  0.541   0.614   0.820   1.000
|
|
I just realized now that this is under an EFA discussion and not CFA... oops! Sorry, maybe someone can still answer, though. Thanks!
|
|
I would be concerned about the poor chi-square and RMSEA fit. It is quite possible this is not the best model (e.g., why not 2 factors, each with 2 indicators?), but with only 1 df and 4 indicators you have put yourself in a situation where it is hard to know.
|
|
Thanks. I tried 2 factors with 2 indicators each, and the fit was terrible, much worse than one factor. All of my indicator measures are very related and could be argued to have significant item overlap. I realize that one factor with four indicators and more than one residual covariance is not identified. My modification indices suggested more than one residual covariance that would make a big difference in improving model fit; however, I can't add them and still have an over-identified model. This CFA was the first step in a larger SEM with this one factor and multiple observed exogenous variables. When I add my first exogenous variable predicting the factor, I can then add those other residual covariances on the measurement side and remain over-identified. When I do this, my fit is excellent. Is this considered bad practice? Everyone says to do your CFA first, make that fit well, and then begin to add structural components (my observed exogenous variables). However, I can't make my CFA look great on its own (with the residual covariances that I want to add) because I quickly become under-identified. Is it OK practice to add exogenous variables and then do more work on the measurement side? THANKS!
|
|
I think it is bad practice to make a model partly identified by borrowing information from other parts. That makes the modeling more susceptible to misspecification in one part influencing many parts. If you can't get the CFA to fit well, perhaps you want to either try BSEM or simply give up on the latent variable representation and sum your variables into a single score. Measurement modeling should really be done on carefully constructed items that have been pilot tested.
|
|
Thanks, this has been very helpful. A few follow-up questions. Originally, when I ran my CFA with one factor and four indicators, I had a couple of Heywood cases in my modindices: a few StdYX E.P.C.s over 1.0, and interestingly those cases had the largest M.I.s. Any thoughts on what may be happening here? Another question: it may be that one of my indicators doesn't belong in this CFA, leaving me with one factor and three indicators. Is adding an equality constraint between two factor loadings the only way to make that model over-identified? If that three-indicator model looks good, can I remove those equality constraints when I add exogenous predictors?
|
|
These data are also highly positively skewed.
|
|
You may want to ask these questions on a general discussion list like SEMNET.
|
DavidBoyda posted on Saturday, February 20, 2016 - 1:13 pm
|
|
|
Dear Dr. Muthen, I would just like to check the fit of this model, since it has quite a large sample size (n = 5000). I have 8 indicators loading significantly onto 4 latent variables. Fit is: χ2 = 38.371, df = 14, p < 0.001, CFI = .99, TLI = .99, RMSEA = 0.017. The chi-square p-value is significant, but I can't decipher from this thread whether that is a result of the large sample size. Also, my RMSEA is quite small.
|
|
Probably a result of a large sample. Check by freeing large modindices and see if key parameters change in important ways or not.
|
João Maroco posted on Tuesday, September 05, 2017 - 7:49 am
|
|
|
Dear all, with very low df, RMSEA is overestimated. In that case, rely on the chi-square and its associated p-value. Best, Jº
|
Rhyan posted on Wednesday, May 02, 2018 - 7:23 pm
|
|
|
Hi. We have a sample of about 1,300. Our model has 4 individual observed variables and 2 latent constructs formed from 3 and 4 additional observed items. We have 46 parameters and 50 df. Our model fit indices are below: RMSEA = 0.108 (CI: 0.102, 0.114), CFI = 0.931, TLI = 0.909, chi-square p-value = 0.0000. Are there any recommendations for ways to decrease the RMSEA and increase the CFI?
|
|
Look at modindices.
|
Ali posted on Wednesday, May 16, 2018 - 2:20 am
|
|
|
I was running a CFA model with two factors, each of which has 4 items. Items are measured on a 4-point Likert scale, so I used WLSMV to estimate the model. I obtained CFI = 0.98 and TLI = 0.972, but RMSEA is 0.246 and the chi-square test is rejected. The rejection of chi-square could be attributed to the large sample size (4978). The loadings are above .7, and most are between 0.85 and 0.9. Also, I conducted an EFA, and it showed that a two-factor model fits my data well. So I am a bit concerned about whether my data fit the two-factor CFA model, given the high RMSEA.
|
|
Perhaps you have significant cross-loadings. You can find out in your EFA.
|
Daniel Lee posted on Monday, November 18, 2019 - 12:22 pm
|
|
|
Hi Dr. Muthen, I noticed that the RMSEA has a corresponding probability value (Probability RMSEA <= .05). Is this a p-value for the RMSEA? If it is not significant, could one argue that the RMSEA is not significantly different from zero? Thank you!
|
|
The RMSEA p-value is the probability that RMSEA <= .05. If that p-value is greater than 5%, you can argue that the RMSEA value does not indicate model rejection (the RMSEA does not reject the model when it is between 0 and 0.05). Usually this is useful when the RMSEA estimate is near the cutoff value of 0.05. If it is not near 0.05, one can typically ignore the confidence limits and the p-value and simply use the RMSEA point estimate.
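This close-fit probability can be computed from the noncentral chi-square distribution: under H0 (RMSEA = 0.05), the chi-square statistic has noncentrality lambda = (N - 1) * df * 0.05**2, and the reported probability is the upper tail of that distribution at the observed statistic. For a df = 1 model the noncentral chi-square is just (Z + sqrt(lambda))**2, so the tail reduces to two normal tails and the stdlib suffices. The sketch below applies this to the df = 1 example posted earlier in the thread (chi2 = 21.455, N = 1700).

```python
import math

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_close_df1(chi2, n, eps0=0.05):
    """Test of close fit, P(RMSEA <= eps0), for a df = 1 model.

    Under H0: RMSEA = eps0, chi-square ~ noncentral chi-square with
    df = 1 and lam = (n - 1) * 1 * eps0**2, i.e. (Z + sqrt(lam))**2,
    so P(chi2_nc >= chi2) splits into two normal tail probabilities.
    """
    lam = (n - 1) * 1 * eps0 ** 2
    d = math.sqrt(lam)
    t = math.sqrt(chi2)
    return norm_sf(t - d) + norm_sf(t + d)

# The df = 1 example from earlier in the thread: chi2 = 21.455, N = 1700.
# The result should land close to the 0.005 reported in that output.
print(round(p_close_df1(21.455, 1700), 3))
```

A result near 0.005 means the data give only about a 0.5% probability that the population RMSEA is .05 or below, which is why that model's RMSEA was flagged despite its good CFI/TLI.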
|