Message/Author 

Anonymous posted on Wednesday, November 13, 2002  11:37 am



I have a few questions about the output provided in the Mplus parameter arrays. 1. When I ask Mplus for the model residual matrices using OUTPUT: RESIDUAL, among the matrices Mplus provides is a matrix of slope residuals. I noticed that this matrix contains slopes for relationships I don't explicitly include in the model. For example, I regress Y1, Y2, and Y3 on X1, X2, and X3 and on a latent variable U1 measured by I1, I2, and I3. Mplus provides the slope for Y1 on I1 even though that relationship is not explicitly modeled. How do I interpret this slope? 2. How can I ask Mplus to provide the correlations between several latent variables in a model, i.e., U1, U2, and U3? 3. Mplus provides a residual matrix for model-estimated intercepts and thresholds. How useful are these in assessing model fit? That is, do you recommend including these residuals in plots of residuals, and should they be as much a matter of concern as the residuals for the variance/covariance matrix?


1. With categorical outcomes, when covariates are included in the model, the sample statistics are no longer the correlations but the probit thresholds, regression coefficients, and residual correlations. These are estimated for all of the observed variables in the analysis, just as correlations are estimated for all variables in the analysis when there are no covariates in the model. 2. You can say:

v1 WITH v2 v3;
v2 WITH v3;

If these variables are exogenous, the covariances will be included in the model as the default. 3. The residuals of the intercepts/thresholds are usually not structured, so they are not important for model fit.

Anonymous posted on Monday, July 28, 2003  7:14 pm



Hi: I have a question. The model fit statistics for my model are not consistent enough for me to reach a conclusion. How should I interpret this result? The results are below:

TESTS OF MODEL FIT
Chi-Square Test of Model Fit: P-Value 0.0683
Chi-Square Test of Model Fit for the Baseline Model: P-Value 0.0000
CFI 0.946; TLI 0.847
RMSEA 0.037; Probability RMSEA <= .05: 0.702
SRMR 0.022

bmuthen posted on Monday, July 28, 2003  9:09 pm



The CFI is a little low, indicating a rather poor fit, while the other fit indices are good. I wonder if your sample size is perhaps small, or your sample correlations low; that might account for this discrepancy.

Anonymous posted on Tuesday, August 12, 2003  12:36 pm



Can someone recommend a resource specifying how to translate standard model fit indices into evaluative terms such as "unacceptable," "acceptable," "good," or "excellent"? A simple chart is the kind of thing I want.


I think this is a good question for SEMNET. You may also want to look at the Hu and Bentler article that you can find under References at www.statmodel.com. 

Anonymous posted on Sunday, September 07, 2003  6:16 pm



Is there currently a convenient way to do nested chi-square difference tests for the Mplus WLSMV estimator? Is this at all related to the procedure for doing similar tests with the Mplus MLM estimator?


There is currently no way to do difference testing for WLSMV. We recommend using WLS for difference testing and WLSMV for the final model. Difference testing for WLSMV is likely to be available in Version 3.

Anonymous posted on Tuesday, September 09, 2003  10:42 am



A few follow-up questions to the reply from Sept. 8 re: WLSMV difference tests: 1. Given that there's no way of performing nested tests of fit using WLSMV, isn't it possible that using nested tests of fit with WLS one could obtain a final model that would not have been obtained using WLSMV nested tests of fit (provided they were available)? In other words, if one is testing a relatively complex model using the Mplus WLS estimator, wouldn't one be better off using the same WLS estimator in fitting the model and then in reporting/interpreting the final results? 2. Aside from the scaled chi-square statistics, what specifically is "lost" in opting for WLS over WLSMV? 3. In estimating the coefficients and SEs for a rather complex SEM using Mplus, I notice a radical difference in the SEs obtained using WLS versus WLSMV (in some cases, the SEs almost double). Is this to be expected? 4. Is Muthen's CFA for ordered categorical indicators still valid using the Mplus WLS as opposed to the WLSMV estimator? 5. Is there a situation in which you would *not* recommend using the WLSMV estimator, e.g., a large number of desired parameters relative to the sample size, or a model consisting of mostly categorical exogenous variables? Thank you.

bmuthen posted on Tuesday, September 09, 2003  5:00 pm



1. Yes on the first part. For the final model it still seems worthwhile to make sure that the WLS results are good by comparing parameter estimates and SEs to those of WLSMV, and perhaps report the latter. 2. The WLS SEs may not be as good as those of WLSMV, and in some cases of smaller samples and more skewed items, the parameter estimates may not be as good. 3. Not unless you work with smaller samples and more skewed items. 4. Yes, but see 2. 5. I am not yet aware of any such situation.

Anonymous posted on Wednesday, September 10, 2003  11:37 am



At the risk of becoming a nuisance, I'd like to pose a final follow-up question (or two). Regarding your response #1: If the WLS and WLSMV parameter estimates do not agree in a fairly sizeable SEM (i.e., a SEM with many parameters relative to sample n), how would one be able to tell whether the disagreement is due to WLS producing a model that does not fit by WLSMV standards (so to speak), versus the WLSMV estimates being superior to the WLS estimates for the model in question (due to sample size, skew of continuous variables in the model, etc.)? Also, purely out of curiosity: in reading the Mplus v2.0 manual technical appendices, it appears that the WLSMV estimator "uses the information available in the data twice" to produce the relevant W matrix, whereas WLS only relies on the data once (if that makes sense) to produce W. Thus, hypothetically, wouldn't one want to stay with the WLS estimator if one had less confidence in one's sample (for whatever reason)? That is, wouldn't WLSMV compound errors of estimation if one was working with a sample one had some, but limited, confidence in? This is all very helpful. Thanks very much for your input.


I think it would be a good idea for you to send the data and the two outputs (WLS and WLSMV) to support@statmodel.com. Then we can give you a more informed answer, given that what you are seeing is most likely data dependent.

bmuthen posted on Thursday, September 11, 2003  5:52 pm



Regarding the first question, if a model doesn't fit well, the quality of the WLSMV estimates is not guaranteed. Regarding the second question, WLS and WLSMV both draw on "the full weight matrix, W" computed from the data. In WLS this matrix is used both in parameter estimation and in SE and chi-square computations. In WLSMV, only the diagonal is used for parameter estimation, and the full W is used only for SEs and chi-square. So if you have limited confidence in your data, or in your W computed from the data, you are perhaps better off using WLSMV, because the parameter estimates do not depend on the whole W.
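To make the distinction concrete, here is a small numpy sketch (illustrative only, not Mplus internals; the toy values for s, sigma, and W are made up) contrasting a fit function that weights residuals by the inverse of the full weight matrix with one that uses only its diagonal:

```python
import numpy as np

def wls_fit(s, sigma, W):
    """F = (s - sigma)' W^{-1} (s - sigma): uses the full weight matrix."""
    d = s - sigma
    return float(d @ np.linalg.inv(W) @ d)

def dwls_fit(s, sigma, W):
    """F = (s - sigma)' diag(W)^{-1} (s - sigma): uses only the diagonal,
    as in WLSMV-style (diagonally weighted) estimation."""
    d = s - sigma
    return float(d @ (d / np.diag(W)))

s = np.array([0.50, 0.30, 0.20])      # toy sample statistics
sigma = np.array([0.45, 0.33, 0.22])  # toy model-implied values
W = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.05, 0.01],
              [0.00, 0.01, 0.06]])    # toy weight matrix

print(round(wls_fit(s, sigma, W), 4))
print(round(dwls_fit(s, sigma, W), 4))
```

Note that when the off-diagonal elements of W are poorly estimated, only `wls_fit` is affected at the estimation stage; that is the sense in which the diagonally weighted estimates "do not depend on the whole W."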

Anonymous posted on Sunday, October 26, 2003  4:26 am



Hi, I am using Mplus to test a model that was previously specified in a well-corroborated study. The dependent variable is ordered categorical (4 categories). I use the default estimator. The fit statistics are as follows: CFI=0.963, TLI=0.960, RMSEA=0.041, WRMR=1.066. According to the User's Guide, WRMR should be under 0.9 to indicate good fit. Why do the former three indices indicate good fit while WRMR shows bad fit? By the way, I have another model in which the dependent variable is binary; its fit statistics are: CFI=0.958, TLI=0.821, RMSEA=0.037, WRMR=0.586. Again, why in this case is TLI particularly bad while CFI, RMSEA, and WRMR seem good? Thanks, Neds


If you send the two outputs to support@statmodel.com, I can comment on them. I need to see the full model, chi-square, estimator used, etc. to discuss.


There have been few studies of the behavior of fit statistics for categorical outcomes. In your case, you have a combination of one categorical and several continuous outcomes. I know of no studies of the behavior of fit statistics in this situation. The following dissertation studied fit statistics for categorical outcomes; it can be downloaded from the homepage of our website: Yu, C.Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Doctoral dissertation, University of California, Los Angeles. You may have to do a simulation study to see which fit statistic behaves best for your situation.


Greetings, I noticed that Mplus produces matrices of residual values for first- and second-order moments when the RESIDUAL option is invoked on the OUTPUT line. Is there a corresponding option that enables output or saving of individual record-level residuals for model equations? Or does the end user need to compute those manually from the raw input data and the parameter estimates generated by Mplus? With many thanks for your reply, Tor Neilands

bmuthen posted on Tuesday, December 07, 2004  4:48 pm



Saving individual-level residuals is currently not available.

gju posted on Monday, January 10, 2005  4:32 pm



I have a question about the fit statistics of an SEM/MIMIC model that I am running. This is a two-dimensional model of the SF-12, with both dimensions regressed on various demographic variables. Both the latent dimension indicators and the x variables are dichotomous/polytomous. The problem I am having is that I don't understand the fit indices well enough to know why most indicate close-to-good fit while the WRMR indicates poor fit. I am using the WLSMV estimator.

TESTS OF MODEL FIT
Chi-Square Test of Model Fit: Value 16631.262*, Degrees of Freedom 104**, P-Value 0.0000
Chi-Square Test of Model Fit for the Baseline Model: Value 281158.355, Degrees of Freedom 61, P-Value 0.0000
CFI 0.941, TLI 0.966
RMSEA 0.054
WRMR 8.622

Any help/hints will be appreciated.


See the Yu dissertation that is listed under Mplus Papers on our homepage. If WRMR is discrepant, I would ignore it.

Anonymous posted on Monday, January 17, 2005  12:05 pm



Hello: I was wondering if Mplus produces standardized residual scatterplots for various multivariate techniques (for evaluating normality, outliers, linearity, and heteroscedasticity)? Thank you.


No, Mplus does not do that yet. 

kgreen posted on Wednesday, March 02, 2005  8:43 am



My question pertains to the model fit statistics for my SEM model with all continuous variables. The model has two latent variables with two indicators each and eight additional measured variables. I'm confused by the RMSEA of 0. I assume it is because my df are larger than my chi-square. Does that indicate adequate fit or a problem with the model?

Chi-Square Test of Model Fit = 34.887, DF = 36, P-Value = 0.5214
Chi-Square Test of Model Fit for the Baseline Model = 1036.027, DF = 65, P-Value = 0.0000
CFI = 1.000, TLI = 1.002
RMSEA = 0.000, 90 Percent C.I. (0.000, 0.027), Probability RMSEA <= .05: 1.000
SRMR = 0.030

Thanks!


Yes, RMSEA of zero comes about because your degrees of freedom are larger than your chisquare value. It indicates good fit. 
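Mechanically, the RMSEA point estimate truncates the chi-square minus df difference at zero, so any chi-square below its df yields exactly zero. A small sketch (the sample size of 300 is an assumed value for illustration; the poster's n is not given):

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate; a negative (chi2 - df) is truncated to zero."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Using the chi-square and df from the post above:
print(rmsea(34.887, 36, 300))  # chi2 < df, so the estimate is exactly 0.0
```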

Anonymous posted on Wednesday, May 25, 2005  3:39 pm



Hi, I interpreted the p-value (= 0) associated with the chi-square of the fitted model as a sign of poor fit of my model. But my colleague here thinks that it has nothing to do with goodness of fit but rather concerns the joint significance of the exogenous variables. Can you please help? Thanks!

bmuthen posted on Wednesday, May 25, 2005  3:41 pm



This p-value concerns the probability of the model having generated the data, so you are right.
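In other words, the p-value is the probability of observing a chi-square at least this large if the model were true. A sketch of that computation (using the closed-form survival function, which holds only for even degrees of freedom; for odd df, or in practice, one would use a statistics library):

```python
import math

def chi2_sf(x, df):
    """P(X > x) for a chi-square variate with EVEN df (closed-form series)."""
    assert df % 2 == 0, "the closed form shown here requires even df"
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(df // 2))

# A model chi-square far above its df gives a tiny p-value: the model is
# then unlikely to have generated the data.
print(chi2_sf(100.0, 10) < 0.001)  # True
```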

Anonymous posted on Friday, September 02, 2005  12:02 pm



Dear Professor Muthen, Can I interpret the fit statistic (chi-square) in the Mplus results as a test of the null hypothesis that the model fits the data? For example, if p=0.4, can I say that the null hypothesis stating that the SEM model fits the data cannot be rejected? Thanks!


Yes. 


Is there a way to get a matrix of standardized residuals (S - Sigma) in Mplus?


No, Mplus does not provide standardized residuals. I will put that on our list of things to add. 

anonymous posted on Wednesday, November 02, 2005  2:42 am



Hello Linda, I'm sorry, but I'm not that familiar with the names of the different matrices in SEM. Is it possible to obtain standardized residuals for the difference between the observed and predicted relationships among the indicators (= correlation residuals) in Mplus? Thanks!

bmuthen posted on Wednesday, November 02, 2005  5:15 am



Not currently. That is what Linda said she put on her list of things to add in her message above. Note, however, that often the modification indices are more informative than the residuals in terms of which model modifications are needed to fit the model better to the data (see the Sorbom articles).

robertav posted on Monday, September 03, 2007  9:25 am



Dear Authors, I'm working with an SEM with both continuous and categorical (ordinal) indicators. I'm using the WLSMV estimator. Most fit indices show a good fit; only the WRMR is much larger than the suggested value of 0.9.

TESTS OF MODEL FIT
Chi-Square Test of Model Fit: Value 6592.727*, Degrees of Freedom 182**, P-Value 0.0000
Chi-Square Test of Model Fit for the Baseline Model: Value 49532.611, Degrees of Freedom 84, P-Value 0.0000
CFI 0.870, TLI 0.940
Number of Free Parameters 115
RMSEA Estimate 0.051
WRMR Value 4.646

I saw the dissertation of Yu, C.Y. (2002), but it treats only the case of binary outcomes. What do you suggest? Should I ignore the WRMR index? Thanks, roberta


You can ignore WRMR, but I don't think you can ignore CFI, TLI, and chi-square, depending on your sample size.

David Lin posted on Thursday, July 10, 2008  7:44 pm



I have done a CFA with continuous factor indicators, estimated by MLM (not ML). According to Hu and Bentler (1999), with the ML estimator the CFI and TLI should be bigger than .95 and RMSEA < .06. Is this also the case for MLM? Secondly, for my CFA model with S-B chi-square = 363.38, df = 245 (p < .001), CFI = .91, TLI = .89, RMSEA = .043, SRMR = .06, I am not sure whether it is OK. Could you give me some advice? Thanks in advance. Chung-Tsen. I cite it below: Hu, L.T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55. "The results suggest that, for the ML method, a cutoff value close to .95 for TLI ... CFI ...; a cutoff value close to .08 for SRMR; and a cutoff value close to .06 for RMSEA are needed before we can conclude that there is a relatively good fit between the hypothesized model and the observed data." (p. 1)


I would use those cutoffs for MLM also. It sounds like your model does not fit the data well. You could look at modification indices to see where the misfit lies. You could also go back to an EFA to see if the items behave as expected.


Hello!! It has been suggested to me that I correlate indicators (using WITH) by looking at those with high modification indices, which could help improve model fit. I have been trying this, and the chi-square does in fact decrease, but the p-value is still .0000. CFI and RMSEA also improve, but they are still inadequate. My question is: to what extent can I rely on this strategy to improve model fit? Or does this mean the model is not strong enough? I would deeply appreciate any suggestions/insights. Thanks, Laura. This is the model I am using. For example, CD4 is theorised as part of anx, but I included it in the other latent variables after looking at modification indices:

Model:
delinq by CD11 CD19 CD21;
aggr by CD1 CD2 CD4 CD5 CD6 CD7 CD8 CD9 CD12 CD18;
anx by CD3 CD4 CD13 CD14 CD15 CD16 CD17 CD22 CD23 CD25 CD20;
soma by CD4 CD24 CD26;
CD1 with CD4;
CD7 with CD8;
CD18 with CD19;
CD22 with CD23;

Chi-square = 1234.69, df = 185, p = 0.000; CFI = .872, TLI = .941, RMSEA = .051


Follow-up to my last post. I forgot to mention that: 1. all of my variables have been defined as categorical; 2. the correlations between CD1-CD4, CD7-CD8, CD18-CD19, and CD22-CD23 could arguably be supported by theory. For example, CD7 refers to "destroys personal belongings" and CD8 refers to "destroys others' belongings." The four pairings showed high modification indices. Thanks again, Laura


You should have some justification, such as theory, for adding residual correlations. If you need so many, perhaps your CFA model is not correct. I would suggest doing an EFA to see whether the factors you have specified in the CFA are in fact measured by the variables you are using as factor indicators.


Dear Linda, Thanks a lot for your message; this is very helpful. I had conducted an EFA in SPSS, using oblimin rotation and principal component analysis, which gave me 6 factors. I had changed them into 4 by allocating some indicators to the factor on which they had the second-highest loading. These seemed to make sense with the original scale (it is the Achenbach Child Behaviour Checklist). After doing the EFA in Mplus using WLSMV or WLSM instead, I need to go to 10 factors before the chi-square becomes nonsignificant at .01, and 11 factors for it to be nonsignificant at .05 (CFI = .997/.998; TLI = .997/.998; RMSEA = .012/.010). However, 10 or 11 factors seem too many for a CFA given that I have 26 indicators, don't they? Additional info: N = 2152; there are no outliers and no missing cases. The response scale is 1 = yes, 2 = sometimes, 3 = no. Any insights? Thanks a lot; your expertise is invaluable. Have a wonderful day! Laura


I would focus on a good CFI and the number of factors supported by theory. 


Hi, I am comparing three alternative models to assess which mediator is superior: model 1 with mediator A, model 2 with mediator B, and model 3 with mediators A and B. As models A and B are not nested, I had to use the AIC, but the AIC results point in the opposite direction from the other fit indices. Does anyone know what this means? Thanks in advance.

Model: chisq/df, CFI, TLI, RMSEA, AIC
A: 5.30, 0.76, 0.40, 0.20, AIC = 1473.82
B: 2.05, 0.92, 0.81, 0.10, AIC = 2066.73
A&B: 2.54, 0.91, 0.73, 0.12, AIC = 2288.47


The metric of the log-likelihood is different for the three models. I can't think of any measure that can be used to compare them.
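The point about metrics follows from the AIC formula itself: AIC = 2k - 2logL depends directly on the log-likelihood, and a likelihood computed over a different set of observed variables (mediator A vs. mediator B) is not on a comparable scale. A minimal sketch with made-up numbers:

```python
def aic(log_likelihood, n_free_parameters):
    """Akaike Information Criterion: lower is better, but only when the
    competing models are fit to the same data and the same variables."""
    return 2.0 * n_free_parameters - 2.0 * log_likelihood

# Two models with the same log-likelihood: the more parsimonious one wins.
print(aic(-720.0, 16))  # 1472.0
print(aic(-720.0, 20))  # 1480.0
```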


Dear M&M, I am using SEM to analyze a pathway with a final binary outcome, from a matched case-control study. Because of this I am using stratification for the matching variable. So I have only two categorical variables, one of which is the case-control status. I would like to know how I can get goodness-of-fit indices from this analysis (ANALYSIS: TYPE = COMPLEX; ESTIMATOR = MLR). I would also like to know if it is possible to estimate p-values for the indirect effects. Many thanks, Valéria


With MLR and categorical outcomes, chi-square and related fit statistics are not defined. You can use the WLSMV estimator instead. However, it is not clear whether your model has any degrees of freedom.


Thanks Linda. My equations are:

sta ON sex X1 X2 X3 X4 X5 X6 X7 X8;
X6 ON sex X1 X2 X7 X8;
X3 ON sex X1 X2 X4 X6 X9 X10;
X4 ON sex X1 X2 X11 X12;
X9 ON sex X1 X2 X3 X11;
X3 ON sex X1 X2 X6 X9 X11;
X5 ON sex X1 X2 X6 X13;

sta and sex are categorical, and I get different results with MLR and WLSMV estimation. I have 8 dependent variables and 7 independent, all observed and 2 binary. Do you think it is better if I use WLSMV? Many thanks,


The results are in different metrics. Maximum likelihood results are logistic regression coefficients, while weighted least squares results are probit regression coefficients.


ok. thanks Linda. 


Dear all, I estimated several models based on my theoretical expectations. All models provide good fit, with one exception. That model looks like this:

m1 BY f1 f2;
m1 ON x1-x12;
m2 ON x1-x12;
y1 ON m1 m2;
y1 ON x2-x12;
y2 ON m1 m2;
y2 ON x2-x12;
y1 with y2;
MODEL INDIRECT:
y1 IND x1;
y2 IND x1;

In this model, y1 and y2 are both categorical and N = 339; I used TYPE = COMPLEX to control for clustering. The fit indices I get are: chi-square = 46.351 (45), p = .095, CFI = .844, TLI = .804, RMSEA = .031, WRMR = .762. I now doubt what to do. What can be the reasons for this poor fit? Because I want to test the theoretical expectations, I prefer not to change a lot. Can I, based on these fit indices, argue that the theoretical model just does not fit the data? Or am I making mistakes, and are these fit indices the result of wrong model specifications?


CFI and TLI can be poor when the correlations among the variables are low. The H0 model is not much better in this case than the baseline model. 


Thank you for your response. In the meantime, I also tried to estimate the models without declaring y1 and y2 as categorical. In that case, CFI = .986 and TLI = .973. The results remain highly similar. y1 and y2 are both measured on a 3-point scale. The mean (SD) of y1 is 2.11 (.045) and of y2 is 2.33 (.039), so they are skewed. The N = 339. Can I treat y1 and y2 with this N as continuous? Or do you advise me to keep them categorical? Thank you very much again!


If threecategory variables have floor or ceiling effects (are skewed in your words), they should be treated as categorical. Not doing so results in an attenuation of their correlations. 

Peter posted on Saturday, December 11, 2010  6:44 pm



Hi guys, I'm trying to do an EFA with 3 factors. Here are my results:

Chi-Square Test of Model Fit for the Baseline Model: Value 895.734, Degrees of Freedom 630, P-Value 0.0000
CFI 0.897, TLI 0.876
Number of Free Parameters 125
RMSEA (Root Mean Square Error Of Approximation) Estimate 0.039

The requirements were not fulfilled, and there is no need to look at the factor structure. My question is this: how do you report the results from Mplus according to APA 6.0? All help appreciated.


I am not familiar with APA 6.0 guidelines. You should check these to see how to report the results. 

naT posted on Thursday, March 03, 2011  1:28 pm



How can I obtain a single indicator that shows the discrepancy between the sample observed and model-implied variance/covariance matrices (i.e., S - Sigma, or the chi-square value of goodness of fit)? Is this the Chi-Square Test of Model Fit in the output? If not, is there a way I could compute this?


You would want to look at chi-square. That is what it tests.

naT posted on Thursday, March 03, 2011  2:49 pm



Thank you very much for always responding promptly. Please help me clarify further the statistics used in the output. Now I understand that the H1 log-likelihood value corresponds to the observed sample variance/covariance matrix and H0 corresponds to the variance/covariance matrix implied by the proposed model. And the chi-square of the baseline model tests the null hypothesis that all regression coefficients in the proposed model are zero. Therefore, we want to fail to reject the Chi-Square Test of Model Fit, but we want to reject the Chi-Square Test of Model Fit for the Baseline Model. Please correct me if these are wrong. Thank you.


The chi-square for the baseline model is used to compute CFI and TLI. See an SEM book like the Bollen book for a discussion of fit statistics and their interpretation.
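For reference, here is how CFI and TLI are computed from the model and baseline chi-squares (a standard textbook sketch, checked against the values posted on March 02, 2005 above):

```python
def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index; noncentrality (chi2 - df) is truncated at zero."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    return 1.0 - d_m / max(d_m, d_b) if max(d_m, d_b) > 0 else 1.0

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index; unlike CFI it is not bounded by 1."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# Values from the March 02, 2005 post: model chi2 = 34.887 (df 36),
# baseline chi2 = 1036.027 (df 65).
print(round(cfi(34.887, 36, 1036.027, 65), 3))  # 1.0
print(round(tli(34.887, 36, 1036.027, 65), 3))  # 1.002
```

Note how a small baseline chi-square (low correlations among the variables) shrinks the denominator, which is why CFI and TLI can look poor even when the model chi-square is acceptable.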


Dear Drs. Muthén & Muthén: I fit a model using Mplus 4.2 with these analysis and output options:

ANALYSIS:
TYPE = MEANSTRUCTURE MISSING H1;
ESTIMATOR = WLSMV;
PARAMETERIZATION IS THETA;
ITERATIONS = 1000;
CONVERGENCE = 0.00005;
COVERAGE = 0.10;
OUTPUT: SAMPSTAT MODINDICES(0) STANDARDIZED H1SE;

And I got the following output:

TESTS OF MODEL FIT
Chi-Square Test of Model Fit: Value 0.000*, Degrees of Freedom 0**, P-Value 0.0000
* The chi-square value for MLM, MLMV, MLR, ULS, WLSM and WLSMV cannot be used for chi-square difference tests.
** The degrees of freedom for MLMV, ULS and WLSMV are estimated according to a formula given in the Mplus Tech...

What are the value and df of the chi-square? How can I find the information about model fit? Thank you. Andrés Fandiño.


If you have zero degrees of freedom, the model is justidentified and model fit cannot be assessed. 


Dear Drs. M. & M. If my model is just-identified, can I rely on the estimated parameter coefficients and their standard errors from such a model? Thank you. Andres Fandino-Losada.


Yes as long as there are no messages to the contrary. 

yan liu posted on Wednesday, March 07, 2012  11:42 am



Greetings! I did some analyses based on multilevel SEM mediation modeling in Mplus (Preacher, Zyphur and Zhang, 2010). This method uses a latent variable decomposition approach, so it allows us to examine the mediation effect at both levels. However, I found the model fit is almost perfect (RMSEA, CFI, SRMR) and the chi-square test has zero df, so it is a just-identified model. I tried deleting one path and adding another, and found the model fit became poor. In this case, I am not sure whether this is caused by poor fit of my final model or by the misspecified model with one path deleted. Could you please give me some advice about how to provide evidence about model fit for this just-identified model? My model is specified as follows (predictor: teach; mediator: PNS; outcome: movat). Thanks!

MODEL:
%WITHIN%
PNS ON teach (aw);
movat ON PNS (bw);
movat ON teach;
%BETWEEN%
movat PNS teach;
PNS ON teach (ab);
movat ON PNS (bb);
movat ON teach;
MODEL CONSTRAINT:
NEW(indb indw);
indw = aw*bw;
indb = ab*bb;

Thanks a lot! Yan


Model fit cannot be assessed for a just-identified model.


Hi, I have run a model for which the SRMR is substantially larger than the RMSEA. My fit indices are as follows:

Chi-Square Test of Model Fit: Value 367.148, df 192, P-Value 0.0000
RMSEA 0.059, 90 Percent C.I. (0.050, 0.068), Probability RMSEA <= .05: 0.055
CFI 0.922, TLI 0.914
Chi-Square Test of Model Fit for the Baseline Model: Value 2443.761, df 210, P-Value 0.0000
SRMR 0.125

It was my understanding that the SRMR is typically just slightly larger than the RMSEA. Could you explain why the SRMR is so poor while the RMSEA is not? If it helps, I am running a single-indicator SEM in which I have 7 waves of data. At each wave, I have a predictor, a mediator, and an outcome variable defined by a total score for which I have specified the residual variance. The model includes: a) correlations between the 3 latents at each wave, b) autoregressive paths between the same latents at subsequent waves, c) paths from the predictor variable to the outcome variable 2 waves later, d) paths from the predictor variable to the mediator at each subsequent wave, and e) paths from the mediator to the outcome variable at each subsequent wave. I have constrained paths that I have not included to be zero (e.g., MODEL = NOCOVARIANCES). Any help you could offer in understanding this would be much appreciated!


Please send the full output including SAMPSTAT in the OUTPUT command and your license number to support@statmodel.com. 


Linda, Thanks very much for replying to my student, Alison Alden, in the post above and in a subsequent email correspondence. There is another aspect of this analysis that puzzles us, having to do with the matrix of residuals. Specifically, there are several places in the Residual Output in which values of 999.000 appear, and we don't know what these signify. As one example, there is one in the matrix of the Standardized Residuals (z-scores) for the covariances/correlations/residual correlations (but a 999.000 does NOT appear in the same place in the matrix of Residuals for Covariances/Correlations/Residual Correlations). So, my first question is what a 999.000 means in the Residual Output. My second question is whether there is a way to get Mplus to output the residual correlation matrix as opposed to the residual covariance matrix. Thanks!


999 means the value could not be computed for some mathematical reason, for example, divide by zero. Residuals are given for the matrix that is analyzed. 


Thanks Linda. I have a few follow-up questions. First, how are the Standardized Residuals (z-scores) computed? I think that would help me understand why some of the values in a Standardized Residuals (z-scores) matrix can be computed and others not. Second, for the standardized residuals that can't be computed, what impact do they have on SRMR? Finally, to get the residual correlation matrix, we tried adding the statement MATRIX = CORRELATION to the ANALYSIS command, but we get an error message saying that listwise deletion must be on, and that would leave us with fewer subjects than parameters. We are wondering if we could compute the correlation matrix outside of Mplus and then use that as our data file rather than the raw data, but we are not sure what to use as the value for NOBSERVATIONS if we did so (the total number of subjects, including those with some missing data?).


See the following Technical Appendix on the website: Standardized and Normalized Residuals. SRMR is not computed using standardized residuals; see formula 128 in Technical Appendix 5. MATRIX = CORRELATION cannot be used in conjunction with the RESIDUAL option.
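For readers following along, SRMR can be sketched as the root mean square of the correlation-metric residuals. This is a common textbook form; whether it matches formula 128 in every detail is an assumption, so treat it as illustrative:

```python
import numpy as np

def srmr(S, Sigma):
    """Root mean square of the standardized (correlation-metric) residuals
    over the unique elements of the sample (S) and model-implied (Sigma)
    covariance matrices. A textbook sketch, not Mplus formula 128 verbatim."""
    p = S.shape[0]
    total, count = 0.0, 0
    for i in range(p):
        for j in range(i + 1):  # lower triangle including the diagonal
            denom = np.sqrt(S[i, i] * S[j, j])
            total += ((S[i, j] - Sigma[i, j]) / denom) ** 2
            count += 1
    return float(np.sqrt(total / count))

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])       # toy sample correlation matrix
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])   # toy model-implied matrix
print(round(srmr(S, Sigma), 4))  # a single residual of 0.2 over 3 elements
```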


Thanks, as always, for the speedy reply Linda! We now understand we cannot use the MATRIX = CORRELATION option in the ANALYSIS command. However, we assume from your earlier statement that residuals are given for the matrix that is analyzed, so that if we use the TYPE = CORRELATION option in the DATA command (using the correlation matrix as the data file rather than the raw data we typically use), then we can use the RESIDUAL option. Is that correct? If so, we believe we also need to use the NOBSERVATIONS option, and we are not sure what to put there. To elaborate: if we were using raw data, listwise deletion would leave us with fewer subjects than parameters, so we are hesitant to compute the correlation matrix using listwise deletion and use the n of subjects with complete data as our NOBSERVATIONS. On the other hand, Mplus doesn't seem to have an option for entering the number of observations each correlation is based on, so if we don't use listwise deletion we aren't sure what value to enter for NOBSERVATIONS. Or would we just enter the total number of subjects, including those who have some missing data (which doesn't seem quite right to us)? Any thoughts/guidance will be most appreciated.


P.S. Is there any consideration of adding an option to Mplus so that one can request residuals in a correlation metric even if one has analyzed the covariance matrix? It seems to us that this would be both useful (as in the present case, helping us figure out why our model is so bad and where the greatest strain is coming from, in a metric we can easily understand) and feasible (given that you already compute all of the individual elements of that matrix in order to calculate SRMR). Thanks!


Standardized residuals give much more information than residuals in the correlation metric. An even better tool for examining model misfit is modification indices. You can ask for these in the OUTPUT command by specifying: MODINDICES (ALL); 


Thanks Linda; we routinely ask for modification indices, and in this particular case they are all small. Any thoughts/guidance about our NOBSERVATIONS question would also be most appreciated.


It is not possible to give more than one sample size for a correlation matrix in Mplus. 

sojung park posted on Monday, September 23, 2013  3:27 pm



Dear Dr. Muthen, I have a question on modification indices. I don't quite understand how to use them. For example, the output says:

M.I. E.P.C.
ON Statements
vr1 ON vr1 999.000 0.000

What am I supposed to make of "vr1 ON vr1"?


999 means that it couldn't be computed, so ignore it.

Soyoung Kim posted on Friday, December 20, 2013  1:11 am



hello: I am a beginner to SEM and to Mplus, so thanks in advance for your patience. I ran a model with n = 44 and 15 variables, and I got df = 198, model chi-square = 146.5, CFI = 1, and RMSEA = 0. I wonder the following: First, is running an SEM appropriate despite the small sample size of n = 44? Second, is it possible to get df = 198, model chi-square = 146.5, CFI = 1, and RMSEA = 0 from the specified model? I am very happy to have this discussion board.


There are several issues here. Your model should not have more free parameters than the number of observations in your data set. Chi-square cannot be trusted at your sample size. You have little power at your sample size.
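The low-power point can be illustrated with a MacCallum-style RMSEA-based power calculation (this approach and the RMSEA = 0.05 "misfit" value are illustrative assumptions, not something stated in the reply): with n = 44 and df = 198, the chi-square test has little chance of rejecting even a clearly misspecified model.

```python
from scipy.stats import chi2, ncx2

# How likely is chi-square to reject a model whose true misfit is RMSEA = 0.05,
# given the n = 44 and df = 198 from the post above?
n, df, rmsea = 44, 198, 0.05
crit = chi2.ppf(0.95, df)           # rejection threshold at alpha = .05
ncp = (n - 1) * df * rmsea**2       # noncentrality implied by that RMSEA
power = ncx2.sf(crit, df, ncp)      # P(reject | model misfits at RMSEA = .05)
print(round(power, 2))              # well below the conventional 0.8 target
```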

Jihyun Yoon posted on Friday, December 27, 2013  12:24 am



Do you mean that df should not exceed the number of observations? So, in this case, CFI and RMSEA cannot be trusted either?


No, the degrees of freedom is the number of free parameters in the H1 model minus the number of free parameters in the H0 model. The number of free parameters in the H0 model should not exceed the number of observations in your data set. No fit statistics based on chi-square can be trusted; CFI and RMSEA are based on chi-square.

Soyoung Kim posted on Monday, December 30, 2013  4:44 am



Dear Dr. Muthen, I analyzed a model with n = 44 and 15 variables in Mplus 6.11, and I got df = 198, model chi-square = 146.5, CFI = 1, and RMSEA = 0. I wonder how df = 198 was calculated in this model. Thank you in advance for your kind advice.


It is the number of free parameters in the H1 model minus the number of free parameters in the H0 model. 
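A quick sketch of that arithmetic for continuous outcomes (the parameter counts below are illustrative, not taken from the poster's model): with p observed variables, the unrestricted H1 model has p(p+1)/2 variances/covariances plus p means. Note that with categorical outcomes the H1 statistics differ (thresholds and correlations rather than means and covariances), so the count, and hence the df, can come out quite differently.

```python
# df = (H1 free parameters) - (H0 free parameters), for a continuous-outcome
# mean-and-covariance structure with p observed variables.
p = 15                              # observed variables, as in the post above
h1_params = p * (p + 1) // 2 + p    # 120 covariances/variances + 15 means = 135
h0_params = 45                      # hypothetical free-parameter count for H0
df = h1_params - h0_params
print(h1_params, df)
```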

adwin posted on Wednesday, February 18, 2015  9:49 pm



Dear Dr. Muthen, I ran an SEM using the WLSMV estimator (my model consists of a latent variable with 4 categorical indicators, a continuous dependent variable, and 11 independent variables). I have 483 observations. I tried to test the chi-square difference by typing DIFFTEST in the ANALYSIS command, but got the following message: "THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE FILE CONTAINING INFORMATION ABOUT THE H1 MODEL HAS INSUFFICIENT DATA." Would you please explain what this message means? How can I solve the problem, or how do I do the test appropriately?


It sounds like you changed the model that generated the difftest file by more than just placing restrictions. For further help, send your files and license number to support@statmodel.com.


Hello, I am running a latent difference score mediation model using syntax provided by Preacher and colleagues. My latent variables are made up of categorical indicators (they were originally continuous, but I am treating them as categorical because several of them were skewed), and I am generating bootstrapped confidence intervals for the indirect effects. While I was able to obtain fit statistics when treating my indicators as continuous (and using the ML estimator), fit statistics are no longer produced when using the WLSMV estimator (and ordinal data). I am not receiving any sort of error message. I am a beginner to SEM, so this may be a dumb question, but any idea why? Thanks!


Please send the output and your license number to support@statmodel.com. 

Anne Berg posted on Sunday, September 20, 2015  2:52 am



When I run my model I receive the following model fit: CFI = 1.000, TLI = 1.014, RMSEA = .000, chi-square (171) = 165.990. Is it possible to receive such a model fit, with CFI and TLI at or above 1.000 and RMSEA of exactly .000? My model is not saturated. However, I have quite a lot of parameters (115 parameters with 170 participants). I thought that this could be the problem, but when I ran my model with some different relations (144 parameters) the model fit seemed to be OK: CFI = .994, TLI = .993, RMSEA = .013, chi-square (178) = 180.51. What can you tell me about this 'strange' model fit? Thank you very much in advance!


Sounds like a perfect fit, which can happen with a small sample, low correlations, or both, so that you don't have much power to reject the model.
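The exact zero is not strange at all once you look at one common RMSEA definition (assumed here): RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))). Whenever the chi-square falls below its degrees of freedom, as in the post above, the numerator is clamped to zero and RMSEA is exactly 0.

```python
import math

# RMSEA from the first model reported above: chi-square below df gives exactly 0.
chi2_val, df, n = 165.990, 171, 170
rmsea = math.sqrt(max(chi2_val - df, 0) / (df * (n - 1)))
print(rmsea)  # 0.0
```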

gloria posted on Wednesday, July 06, 2016  10:20 pm



Hello, I was running some growth models using PROC MIXED in SAS: very simple models, an unconditional growth model (model 1), then adding a quadratic term (model 2), and then adding some predictors (model 3). I was using AIC to gauge model fit, and it was (as expected) decreasing in magnitude as I went from model 1 to model 2 to model 3. All good. In that set of models, all predictors were single observed indicators. Then, I defined one of my predictors as a latent variable, since I had multiple measures of the same construct available. So, in Mplus, going from model 1 (linear time) to model 2 (linear + quadratic time), AIC looks okay in the sense that it is going in the direction I expect. However, once I add the latent predictor (model 3), AIC goes bonkers and almost doubles in magnitude (which, based on the simpler models I ran in SAS, shouldn't be the case). Other model fit indices (RMSEA, CFI, SRMR) behave as expected from model 1 to model 2 to model 3. In terms of AIC, do you know of a reason why model 3 might not really be comparable to models 1 & 2? Thanks!


When you add the latent predictor, its indicators are DVs and therefore the number of DVs changes, and this directly affects the logL and AIC.
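A toy numeric sketch of why this happens (the data and independence assumption are purely illustrative): AIC = -2 logL + 2k, and once the indicators become dependent variables their likelihood contributions enter logL, so the AIC scale jumps even if the structural part of the model barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = rng.normal(size=n)                   # the original outcome
indicators = rng.normal(size=(n, 3))     # hypothetical latent-variable indicators

def gaussian_loglik(x):
    """Log-likelihood of x under a normal with its own MLE mean and variance."""
    var = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)

# Model without the latent predictor: only y contributes to logL (2 params).
aic_y = -2 * gaussian_loglik(y) + 2 * 2

# With the latent predictor, the indicators are DVs too, so their likelihood
# enters logL (treated as independent normals here for brevity; 8 params).
logl_joint = gaussian_loglik(y) + sum(gaussian_loglik(indicators[:, j]) for j in range(3))
aic_joint = -2 * logl_joint + 2 * 8

print(aic_y, aic_joint)  # aic_joint is far larger: different DV set, not comparable
```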

BAI yun posted on Thursday, April 27, 2017  7:03 am



Hi, I'm confused that I always get a great model fit for my path model: RMSEA = 0.000; CFI and TLI = 1.000; SRMR = 0.000 (chi-square = 0.000; df = 0; free estimates = 15). My sample size is 165. Will every path model get a model fit like the above? (I tested a mediation model, serial mediation model, moderated mediation model, and moderated serial mediation model.) Great thanks in advance!


Your model must not have any left-out paths, so it is just-identified. This gives df = 0, and no overall test of model fit is possible.
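The parameter counting behind "just-identified" can be sketched in a couple of lines (the 5-variable covariance-only setup below is a hypothetical illustration, not a reconstruction of the poster's model): when the free parameters use up every sample statistic, df = 0 and the model reproduces the data perfectly by construction.

```python
# Just-identified sketch: free parameters exhaust the sample statistics.
p = 5                              # hypothetical: 5 observed variables
sample_stats = p * (p + 1) // 2    # 15 variances/covariances (no mean structure)
free_params = 15                   # a model spending all 15 statistics on parameters
df = sample_stats - free_params
print(df)  # 0 -> chi-square = 0, RMSEA = 0, CFI = 1, regardless of the data
```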


I am modeling 3 time points with a sample size of 78, using a latent growth curve model with a random slope time-varying covariate. The STVC has a significant mean, which I understand to indicate that there are significant effects of the time-varying covariate on the outcome. My problem is that the chi-square is good (9.5, 5 df, p = .087) but the RMSEA is high (0.109) and the CFI is horrible (.65). Modification indices show nothing > 10. Do you have any suggestions?


I would not place trust in chisquare with the low N=78. You may want to ask on SEMNET. 


I have a chi-square test which is statistically significant (p = 0.044), yet none of the standardised residuals are greater than 1.96. Can this be possible?


The chi-square is an overall fit measure and probably has more power to reject the model than the individual residual tests. A modification index is usually a better way to catch sources of model misfit than residual tests, and it is directly geared toward chi-square.
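A hypothetical illustration of how this can happen (the residual values and the independence assumption are made up for the example): many individually non-significant residuals can still aggregate into a significant omnibus statistic, because the pooled test accumulates evidence across all of them.

```python
from scipy.stats import chi2

# 20 independent standardized residuals, each a non-significant z = 1.5,
# still aggregate into a significant omnibus chi-square.
residuals = [1.5] * 20
assert all(abs(z) < 1.96 for z in residuals)   # no single residual rejects
omnibus = sum(z**2 for z in residuals)          # sums to 45, chi-square with df = 20
p_value = chi2.sf(omnibus, df=20)
print(round(p_value, 4))                        # roughly 0.001: the pooled test rejects
```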
