RMSEA is currently provided when all outcomes are continuous. This includes the missing data situation.
Anonymous posted on Thursday, May 16, 2002 - 8:31 am
When running separate analyses on two nested models using FIML to handle missing data, is it still legitimate to do a chi-square difference test?
bmuthen posted on Thursday, May 16, 2002 - 9:39 am
Yes, but if you use MLR you need to use the chi-square difference testing procedure described in Chi-Square Difference Test for MLM under Special Analyses with Mplus.
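The scaled difference procedure referenced above can be sketched numerically. Below is a hedged Python illustration of the Satorra-Bentler scaled chi-square difference formula (the function name and the inputs are hypothetical; T0/c0/d0 are the chi-square value, scaling correction factor, and degrees of freedom of the more restricted nested model, and T1/c1/d1 those of the comparison model):

```python
from scipy.stats import chi2

def scaled_chi2_diff(T0, c0, d0, T1, c1, d1):
    """Satorra-Bentler scaled chi-square difference test.
    Model 0 is the more restricted (nested) model."""
    # Scaling correction for the difference test
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)
    # T*c recovers the unscaled ML chi-square, so the scaled difference is:
    TRd = (T0 * c0 - T1 * c1) / cd
    df = d0 - d1
    p = chi2.sf(TRd, df)  # upper-tail p-value
    return TRd, df, p
```

Note that simply subtracting the two MLR chi-square values is not chi-square distributed, which is why the correction factors are needed.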
Yifu Chen posted on Monday, December 06, 2004 - 11:53 am
Hi, Dr. Muthen,
I was running a growth model analysis with three time-varying variables. I also used TYPE=MISSING to handle the missing data. I found that Mplus 3.11 only reported limited model fit indices (AIC, BIC...). I wonder if I can get chi-square and other fit indices in the output for the fitted model.
bmuthen posted on Monday, December 06, 2004 - 3:04 pm
You get fit indices if you add H1, saying
Type = Missing H1;
Anonymous posted on Sunday, January 23, 2005 - 9:58 am
Do I understand from the comments above that it is reasonable to do a Chi-Square difference test with Chi-Square values that result from MLR estimation if the correction procedure for MLM estimation is used?
bmuthen posted on Sunday, January 23, 2005 - 3:32 pm
I am running a 1-factor CFA model with the following features:
TYPE IS MISSING H1 COMPLEX;
ESTIMATOR IS MLR;
My 3 dependent variables (indicators) are count variables (I have specified this, as well as my missing specification, weight strata and cluster variables in the "VARIABLE" section).
My problem is that I am only getting limited fit indices (AIC, BIC...). Is there a way for me to get more fit indices (RMSEA, CFI...)? If not, which specification above is preventing me from getting these indices?
I am using Mplus to estimate a multiple-group path model using FIML. Because it is a path model, the chi-square of the baseline (freely estimated) model is 0. So when I constrain a parameter to be equal across groups, how do I test the impact the constraint has on fit? A chi-square difference test between the baseline and a constrained model means comparing the constrained model to 0. Is this appropriate? Do you know of any references on testing for group differences in path models?
USEVAR ARE sex zacap zaca zach zscho zstu;
MISSING IS ALL (-99.00);
CLUSTER = scho;
DEFINE: int1 = zach*zscho;
int2 = zach*zstu;
ANALYSIS: TYPE = COMPLEX;
TYPE = MISSING H1;
ESTIMATOR = MLR;
MODEL: zaca ON sex zach zscho zstu;
zacap ON sex zaca zach zscho zstu;
MODEL INDIRECT: zacap IND zach;
OUTPUT: STANDARDIZED;
I already added the text "H1" to my syntax to get information about the overall fit of my model, but I only got this:
Chi-Square Test of Model Fit
  Value                              0.000*
  Degrees of Freedom                 0
  P-Value                            0.0000
  Scaling Correction Factor for MLR  1.000
Is this because I have a recursive/saturated model? And how do I report the overall model fit? Can I do this through chi-square difference testing, i.e., comparing with a model with no predictors? Thank you for your time!
I had a similar problem as Sofie Wouters (April 30, 2009) with my freely estimated (full/saturated) path model:
USEVARIABLES ARE alc2 cn0 gp1 cn2 gp2;
CLUSTER = IDYRFAM;
ANALYSIS: TYPE = COMPLEX;
MODEL: gp1 gp2 cn2 alc2 ON cn0;
gp2 cn2 alc2 ON gp1;
alc2 ON gp2;
alc2 ON cn2;
gp2 WITH cn2;
OUTPUT: SAMPSTAT STANDARDIZED MOD(3.84);
Chi-Square Test of Model Fit
  Value                              0.000*
  Degrees of Freedom                 0
  P-Value                            0.0000
  Scaling Correction Factor for MLR  1.000
Using your suggestion to add @0, I was able to get a Chi-Square Test of Model Fit with actual values, but I am confused as to why I would want to do that. The UCLA Academic Technology Services explanation suggests that @0 sets the structural paths to 0. If I am using this as my full, comparison model with all paths being freely estimated and then trying to compare it to nested models with more constraints, how would I do that if the paths have been set to 0?
When you fix the parameters to zero, the chi-square you obtain is not a test of the saturated model. It is a chi-square difference test between the two models. There is no way to assess the fit of a saturated model.
autonomy BY finemp deciexpl deciexps decihealth visitfam;
mch ON autonomy;
OUTPUT: STDYX ;
I am getting only loglikelihood, AIC, BIC and Chi-Square Test of Model Fit for the Binary and Ordered Categorical (Ordinal) Outcomes. My N=46304, and degrees of freedom reported under the chi-square tests is 489.
Is there any way for me to get CFI, TLI and RMSEA?
1. If I run WLS or WLSMV as the estimator in the same model I noted above, then the model drops all cases where there is any missing data on x-variables. Is this normal? Is there a way to avoid this?
2. The standardized parameters across WLS and MLR results are extremely similar. With WLS, I even get CFI, TLI and RMSEA and they show very good model fit. Now if there were some way for WLS to run with all of my cases, and not do listwise deletion on my cases with missing data, then would I be better off just using WLS rather than MLR?
The chi-square you get with maximum likelihood and categorical outcomes is not the chi-square for the H0 model. It is the chi-square that compares observed and expected frequencies of the categorical outcomes. Chi-square and related fit statistics are not available in this situation.
Yes, cases with missing on covariates are dropped with all estimators as the model is estimated conditioned on x.
I would suggest imputing data sets using multiple imputation and then using WLSMV.
I conducted a path analysis with weighted least squares means and variance adjusted (WLSMV) estimation in Mplus, version 5.1. The WLSMV estimator was chosen automatically, because the variable "processing depth" was indicated as categorical.
The information from 20 data sets (obtained via multiple imputation) was included to estimate the model.
For each descriptive fit criterion, the average fit indices and standard deviations over the 20 data sets were computed.
In the Mplus output, the model fit indices for the descriptive fit criteria are reported as means and standard deviations. Do I understand correctly that I should report these means of the fit indices as the model fit? (The fit indices of my path model are (standard deviations in parentheses): CFI = 1.00 (.00), TLI = 1.39 (.02), RMSEA = .00 (.00), and WRMR = .26 (.02).)
I also do not know whether I should report the chi-square, because no mean of the chi-square is given.
Furthermore, I read that TLI and CFI have rather low power to reject a model with binary outcomes, while WRMR works well. And I read that "recent studies indicate that a value less than .90 indicates good fit for WRMR" (Linda K. Muthen posted on Thursday, March 08, 2001 - 3:14 pm). Can you please give me the references to these studies?
It sounds like you are using TYPE=IMPUTATION for your analysis. We provide means of the fit indices over the imputed data sets. How fit statistics should be pooled with multiple imputation is still a research topic, so it is not known how these means should be interpreted.
Eric Teman posted on Sunday, June 24, 2012 - 11:45 am
When using WLSMV with multiple imputation in Mplus, is the model fit chi-square valid?
wei w posted on Monday, January 27, 2014 - 12:37 pm
I am wondering how the chi-square test statistic is computed with FIML. Is it computed from the loglikelihood difference between the tested and the saturated model, or from one of the test statistics proposed in Yuan and Bentler (2000)?
If it is based on Yuan and Bentler (2000), which formula is used: equation (18) or equation (20)?
Yuan, K.-H., & Bentler, P. M. (2000). Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data. Sociological Methodology, 30(1), 165-200.
Hello, when computing SRMR using FIML (with missing data), the model-implied covariance matrix can be obtained using the parameter estimates from the model output. I was wondering how the unrestricted (sample) covariance matrix is estimated. Is each variance or pairwise covariance estimated separately, using all observations for which both variables have valid values? Could I get the estimated covariance matrix by specifying "savedata sample=..."? Thanks for your help.
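Once both matrices are in hand, SRMR itself is a simple summary of the standardized residuals. The sketch below uses the standard covariance-residual formula and is only illustrative; Mplus's FIML SRMR uses the H1 (EM-estimated) covariance matrix and its exact definition (e.g., handling of mean residuals) may differ. The function name and example matrices are mine:

```python
import numpy as np

def srmr(S, Sigma):
    """SRMR from a sample covariance matrix S and a model-implied
    covariance matrix Sigma: standardize each matrix by its own
    diagonal, then average the squared residuals over the
    p(p+1)/2 unique elements."""
    p = S.shape[0]
    d_s = np.sqrt(np.diag(S))
    d_m = np.sqrt(np.diag(Sigma))
    R_s = S / np.outer(d_s, d_s)          # sample correlations
    R_m = Sigma / np.outer(d_m, d_m)      # model-implied correlations
    idx = np.tril_indices(p)              # lower triangle incl. diagonal
    resid = R_s[idx] - R_m[idx]
    return np.sqrt(np.mean(resid ** 2))
```

With this standardization the diagonal residuals are exactly zero, so only the correlation residuals contribute.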
I need help, as I am not getting any fit indices (e.g., RMSEA, CFI, etc.) for my model when I run it. I only get the following:
MODEL FIT INFORMATION
Number of Free Parameters          40
Loglikelihood
  H0 Value                 -14893.531
Information Criteria
  Akaike (AIC)              29867.061
  Bayesian (BIC)            30093.522
  Sample-Size Adjusted BIC  29966.438
How can I get the other fit indices? Here is my input syntax:
ALGORITHM = INTEGRATION;
ESTIMATOR = ML;
MODEL: f1 BY LS1-LS4;
f2 BY FS1-FS4;
f3 BY Health1-Health4;
f3 ON f1 (b1)
f2 (b2);
f1xf2 | f1 XWITH f2;
f3 ON f1xf2 (b3);
OUTPUT: sampstat residual standardized TECH1 TECH8;
You cannot get an absolute fit index. You can compare nested models using -2 times the loglikelihood difference which is distributed as chi-square or non-nested models with the same set of dependent variables using BIC.
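The nested-model comparison described here is the standard likelihood ratio test. A minimal numeric sketch (the function name and the loglikelihood values in the test are hypothetical; with ML and numerical integration the plain difference applies, whereas MLR would require the scaling correction discussed earlier in this thread):

```python
from scipy.stats import chi2

def lr_test(ll_restricted, ll_full, df_diff):
    """Likelihood ratio test for nested models:
    -2 * (logL_restricted - logL_full) ~ chi2(df_diff),
    where df_diff is the difference in free parameters."""
    stat = -2.0 * (ll_restricted - ll_full)
    p = chi2.sf(stat, df_diff)
    return stat, p
```

For non-nested models with the same dependent variables, the lower BIC is preferred instead; no p-value is involved.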
I am running an SEM with one predictor, three mediators, and four latent constructs as outcomes. The model estimation terminated normally, but I'm getting very little fit information: only the loglikelihood, AIC, BIC, and SSABIC. I'm wondering why it's not giving me chi-square, RMSEA, and SRMR?
I am running three versions of the same two-level path model - the only difference between the three models is the academic domain being assessed (math, reading, or language). All model specifications and covariates are parallel. All fit indices are provided for the math and language models, but only the Loglikelihood, AIC, BIC, and SSABIC are provided for the reading model. Any ideas on why this would happen?
I was able to figure this out by running the model on one dataset rather than on all imputed datasets. The former approach gave me a message that had previously not been visible. The message stated that
"THE H1 MODEL ESTIMATION DID NOT CONVERGE. CHI-SQUARE TEST AND SAMPLE STATISTICS COULD NOT BE COMPUTED. INCREASE NUMBER OF H1ITERATIONS."
I'm a beginner with SEM/Mplus and I'm trying to figure out how to deal with the following:
I'm currently estimating a structural model using the WLSMV estimator. Since my data are not MCAR, I have conducted multiple imputation and created 5 datasets. When I run the full structural model, the fit indices suggest that there is room for improvement. However, since I do not get any modification indices, I'm not sure how to motivate the respecifications (even though they make a lot of sense in theory). I'm guessing that theory alone might not be enough? Hence, I wonder what would be the best way to localize areas of misfit in this situation. I tried running the model with FIML (without imputation) and the results were very similar. Looking at the modification indices from this model, some changes were suggested that make sense (and improve model fit) also with the imputed data. Is this a reasonable approach to finding, and motivating, localized areas of strain?
OK, thanks. I'm not sure that FIML is appropriate, though, given that most of my variables are ordinal and/or heavily skewed? Is there any other way to localize areas of strain with multiple imputation and WLSMV?
FIML does not mean that the variables have to be continuous-normal; they can be ordinal and skewed. FIML simply means ML under the MAR missing data assumption. ML can be used also for ordinal and other non-normal outcomes.