Tests of model fit
 Michael Bohlig posted on Friday, March 03, 2000 - 12:37 pm
I am running a CFA using ML estimation. The default output provided Chi-square, Loglikelihood, Information Criteria, and RMSEA. I am interested in other fit indices such as SRMR and the Satorra-Bentler SCALED chi-square. I assume that I can use the formula in the Categorical analysis section of this list to calculate TLI and CFI.

I am particularly interested in the SRMR because Hu and Bentler (Dec. 1998, Psychological Methods) recommend it as being sensitive to model misspecification and less sensitive to distribution and sample size. Is it possible to instruct Mplus to provide this as well as the SCALED chi-square?

Thanks.
 Linda K. Muthen posted on Thursday, March 09, 2000 - 10:52 am
You can obtain the Satorra-Bentler SCALED chi-square by asking for ESTIMATOR=MLM in the ANALYSIS command. You can use the formula in the Categorical analysis section of this list to calculate TLI and CFI. You would need to calculate SRMR using the formula in the Hu and Bentler article. This measure is not yet available in Mplus.
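
For example, a minimal sketch of the relevant command (the rest of the input is your usual CFA setup):

ANALYSIS:
ESTIMATOR = MLM;

The chi-square reported in the output is then the Satorra-Bentler scaled chi-square.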
 De Beuckelaer Alain posted on Thursday, March 16, 2000 - 1:32 am
I have a very practical question: I use datafiles with a unique respondent identifier. Is there a way to save this variable as well (i.e., together with calculated factor scores) to the output file (type= FSCORES) without having to insert it in the list of analysis variables?
 Linda K. Muthen posted on Thursday, March 16, 2000 - 9:57 am
There is no way to do this at the present time. And it should not be inserted in the list of analysis variables or it will be considered as part of the analysis. We will be adding this feature in Version 2.
 Tor Neilands posted on Friday, May 26, 2000 - 8:22 am
Hello,

I appreciated Linda Muthen's note of Thursday, March 9, 2000 indicating that the Satorra-Bentler Scaled T model fit statistic is equivalent to the MLM chi-square option in Mplus.

I have a similar question: Is the MLMV chi-square fit option in Mplus equivalent to the Satorra-Bentler Adjusted T statistic reported in
Bentler & Dudgeon's 1996 Annual Review of Psychology article? A simulation study conducted by Rachel Fouladi at UT Austin, reported at AERA a few years ago, noted that this test statistic performed especially well in small samples (bootstrapping also performed well; it would be nice to see this added to Mplus in future releases).

Thanks a lot,

Tor Neilands
UT Austin
 Linda K. Muthen posted on Friday, May 26, 2000 - 6:33 pm
No, MLMV is not the same as the Satorra-Bentler Adjusted T statistic reported in Bentler & Dudgeon's 1996 Annual Review of Psychology article.
 Maria Orlando posted on Monday, October 09, 2000 - 11:16 am
Is there a way to calculate TLI and CFI for a multigroup analysis using output from Mplus?
 Linda K. Muthen posted on Monday, October 09, 2000 - 2:19 pm
You can find the formulas under the Mplus Discussion topic Categorical Data Modeling, Fit Measures for Categorical Outcomes. The formulas are the same for categorical or continuous outcomes.
 Anonymous posted on Thursday, June 14, 2001 - 2:35 pm
Why is the CFI index that I calculated from LISREL and Mplus on the same data set different? I am perplexed at the difference.
 Anonymous posted on Thursday, June 14, 2001 - 2:37 pm
Are there other fit indices (like GFI, NFI, ...) available in Mplus other than CFI?
 Linda K. Muthen posted on Thursday, June 14, 2001 - 2:59 pm
If you have a model without a mean structure, they should be the same. Do you obtain the same chi-square and degrees of freedom? Mplus uses the formula used in Hu and Bentler (1999). If this is still unclear, send both outputs to support@statmodel.com and I will look at them.
 Linda K. Muthen posted on Thursday, June 14, 2001 - 3:00 pm
TLI, RMSEA, SRMR, and WRMR are also available in Version 2.
 Renita Glaser posted on Thursday, June 28, 2001 - 10:04 am
Would it be possible to get a copy of Yu and Muthen's (2001) Technical report on model fit indices?
 Linda K. Muthen posted on Thursday, June 28, 2001 - 11:42 am
Please email bmuthen@ucla.edu to request the paper.
 Chuck Green posted on Wednesday, August 27, 2003 - 11:43 am
Hello,

I have run a three factor confirmatory model. When I used the MLR estimator to correct for violations of normality etc., I found that the significance test and confidence interval associated with the RMSEA fit index were not given in the output, as they are under ML estimation of continuous data. How might I go about calculating this?

Chuck Green
University of Houston
 Linda K. Muthen posted on Wednesday, August 27, 2003 - 2:46 pm
These values have not been developed yet. That is why they are not there.
 Chuck Green posted on Sunday, August 31, 2003 - 6:13 pm
In reading your manual, I noted that the MLR estimator was listed as only being available for mixture modeling. As I understand it, MLR produces the Yuan-Bentler T2* statistic (Yuan & Bentler, 2000; 1999). I have implemented it with non-normal data with missing values for a confirmatory factor analysis. In examining nested comparisons, I have used your manual's equations for producing the chi-square distributed values. I admit, however, to being somewhat disconcerted by the manual only mentioning the use of MLR for mixture analyses. Have I erred in using this estimator for a straightforward CFA with TYPE = MISSING?

Chuck Green
University of Houston
 Linda K. Muthen posted on Sunday, August 31, 2003 - 8:12 pm
There is an Addendum to the Mplus User's Guide which is available at www.statmodel.com under Product Support. If you look at the table on page 35, I think this will answer your question. This table has changed from the Version 2 user's guide.
 Chuck Green posted on Sunday, August 31, 2003 - 11:23 pm
Excellent. Many Thanks.
 Anonymous posted on Thursday, November 27, 2003 - 8:31 am
Suppose you compare two models: the first has six latent factors that are allowed to covary; in the second, the same six factors all load on a general latent factor. Is it possible that the general-factor model has a better fit than the covariance model? The indicators are categorical; the estimator is WLSMV.
Thanks!
 bmuthen posted on Friday, November 28, 2003 - 10:23 am
The second model, the general factor model, imposes restrictions on the factor covariance matrix of the first model and therefore should fit worse, although perhaps not significantly so. If the p-value of the WLSMV chi-square test is better for the second model, it could indicate that the second model is well fitting, so that the fewer parameters make up for the worse fit.
 Anonymous posted on Monday, January 26, 2004 - 12:20 am
I am running a CFA on a six-factor model consisting of 67 dichotomous items (WLSMV). The CFI (.776) and the TLI (.938) differ much from each other. I'm not sure about any reasons for this result.
 Linda K. Muthen posted on Monday, January 26, 2004 - 7:13 am
I'd need to know more to comment on this. Why don't you send the full output to support@statmodel.com.
 Bill Shipley posted on Thursday, April 22, 2004 - 12:08 pm
I am a new user of Mplus, having moved from EQS. Although I am familiar with the Satorra-Bentler robust chi-square statistic (equivalent to your MLM estimator), I am not familiar with your MLMV estimator. Under what conditions is the second better than the first? Are these equivalent to WLSM vs. WLSMV in the case of categorical dependent variables?
 bmuthen posted on Thursday, April 22, 2004 - 12:31 pm
Welcome over to Mplus. The MLMV estimator adjusts not only the mean but also the variance to better approximate a chi-square distribution for the test statistic. This is written about by Satorra in his series of articles. We have found in simulations that MLMV tends to overadjust a bit with non-normal continuous outcomes and that therefore MLM is better. My paper with du Toit and Spisic on categorical outcomes and WLSM and WLSMV (analogous to MLM and MLMV but for weighted least squares) shows through simulations that, in contrast to the continuous outcome case, WLSMV works better than WLSM. You can do your own simulations in Mplus to see if you are convinced.
 Barry Sexton posted on Monday, May 17, 2004 - 7:35 am
I'm now using Mplus V3, where the main attraction (over V2) is the ability to model Poisson dependent variables. I have traffic accident count data which I'm modelling in a path analysis; previously I declared them as categorical (values limited to 0, 1, or 2). In order to compare the count output with the categorical output, I have specified MLR estimation for both. However, I do not get a test statistic (the Yuan-Bentler T2, as stated in the manual); I get AIC, BIC, etc., but I am unsure how to judge and compare the two fits and the adequacy of each fit.
 Linda K. Muthen posted on Monday, May 17, 2004 - 8:55 am
You cannot get chi-square test statistics with Poisson because a mean and covariance structure does not capture the full model. Means and covariances are not sufficient statistics with Poisson variables. Raw data are needed because higher-order moment information is needed for estimation.
 bmuthen posted on Tuesday, May 18, 2004 - 11:47 am
Just to add to the previous reply - a general approach to getting a chi-square test of a path model is to use 2 times the difference of the log likelihoods, comparing the path model to a just-identified path model (all paths included). This comparison is essentially what is done by the weighted least squares approach, although not using likelihoods. Using ML in version 3, the log likelihood difference approach can be used both with categorical and count outcomes and therefore chi-squares can be compared when treating the outcomes differently. Note, however, that this tests only the restrictions imposed by the path model and doesn't test the model against the data - and the latter fit may differ when treating the outcomes differently. The dilemma of model testing against the data is discussed for categorical outcomes in my 1993 Bollen-Long chapter on Goodness of fit (see the reference section on the Mplus web site).
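
As an illustration with made-up numbers: if the just-identified model gives logL = -2340.5 with 30 free parameters and the restricted path model gives logL = -2345.8 with 27 free parameters, the test statistic is 2*(-2340.5 - (-2345.8)) = 10.6, referred to a chi-square distribution with 30 - 27 = 3 degrees of freedom.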
 Anonymous posted on Monday, May 24, 2004 - 7:01 pm
Hi, Dr. Muthen,

If RMSEA is 0.084 and GFI is 0.90 in my MIMIC model, can I continue with my analysis or do I need to do something to improve the model fit first before going on?

Thanks a lot.
 Linda K. Muthen posted on Monday, May 24, 2004 - 7:10 pm
This does not sound like good model fit. Following are some suggestions I posted earlier:

A MIMIC model is a CFA model with covariates. You want to investigate your measurement model to be sure it is well fitting before adding covariates. EFA is a good way to start looking at any factor model. You can see whether your factor indicators behave the way you think they should or that you have unexpected cross loadings. An EFA can be followed by an EFA in a CFA framework to investigate significance of factor loadings. The Day 1 handout from our short courses goes through a series of steps from EFA to a final well-fitting simple structure CFA before turning to MIMIC and multiple group analysis. You might find this handout useful. See our website for details about obtaining course handouts.
 Anonymous posted on Tuesday, May 25, 2004 - 9:35 am
Thank you.

But if my scale is unidimensional, does EFA help to investigate cross loadings? By the way, does 'cross loading' mean that one item in the scale has high loadings on more than one factor?

Thanks.
 Linda K. Muthen posted on Tuesday, May 25, 2004 - 9:52 am
You may think that your scale is unidimensional. EFA can confirm that. You may find through EFA that your items do not behave as you believe they will. Yes, that is what the cross loading means.
 Anonymous posted on Tuesday, May 25, 2004 - 11:10 pm
Linda, you are right.

I did find two latent factors and 5 items with cross loadings in my scale, which was supposed to be unidimensional.

Then I guess I might need to use a two-latent-variable model instead of a single-latent-variable model. For items with cross loadings, how should I handle and interpret them?

Thanks a lot.
 Linda K. Muthen posted on Wednesday, May 26, 2004 - 6:19 am
You can handle them by allowing them to be factor indicators for both factors. The interpretation would depend on the meaning of the items. Were they designed to load on both factors? If not, why do they? If there is not a good reason, perhaps they should be eliminated.
 Anonymous posted on Wednesday, May 26, 2004 - 10:14 am
Thank you, Linda. Your comments are very helpful.

I don't think they were designed to load on two latent factors, but one actually.

From the item wording, redundant information can be observed in the scale. But because I want to examine DIF for each item, I don't know whether I can eliminate them or not. If the items with cross loadings were eliminated, I guess my research topic would be a little different.

By the way, is there any criterion or rule of thumb for deciding on cross loadings? Both loadings higher than 0.4 or 0.5?

Many thanks.
 Linda K. Muthen posted on Wednesday, May 26, 2004 - 10:51 am
You can do an EFA in a CFA framework. Then you get standard errors and can assess significance.
 Anonymous posted on Thursday, May 27, 2004 - 12:53 pm
Hi, Linda, I am not sure whether I understand "do an EFA in a CFA framework" correctly. Does it mean doing an EFA on the items of each factor specified by the CFA?

Thank you.
 bmuthen posted on Thursday, May 27, 2004 - 1:01 pm
No, it means doing a CFA where you set up the same model as used in the EFA - the advantage being that you get SEs and MIs. The handout for Day 1 of the Mplus Short Courses shows how.
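
As a rough sketch for two factors and six items (all names hypothetical), one common setup frees all loadings, fixes the factor variances to one, and adds one zero loading per factor for an anchor item, which gives the same number of restrictions as the EFA while producing SEs for the loadings:

MODEL:
f1 BY y1* y2* y3* y4@0 y5* y6*;
f2 BY y1@0 y2* y3* y4* y5* y6*;
f1-f2@1;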
 Anonymous posted on Monday, December 13, 2004 - 12:11 pm
I am looking for references for interpreting first-order derivatives of parameters (TECH2) output.
Thank you.
 Linda K. Muthen posted on Tuesday, December 14, 2004 - 9:04 am
I don't know of any references for interpreting first-order derivatives. Perhaps an SEM textbook would address this. If you are using them for model modification, I would suggest using modification indices instead, as they have a simple interpretation: the drop in chi-square if that parameter is freed.
 MichaelCheng posted on Wednesday, March 02, 2005 - 12:21 pm
What a wonderful forum! I feel very fortunate to have such a renowned expert available to answer my questions!

I'm running a CFA with 24 binary outcomes (true/false responses) and one latent factor using WLS estimation.

Am I correct in my understanding that the chi-square test of model fit probably isn't the best one to use because of problems with non-normal data and that chi-square df with WLS does not represent interpretable information?

Also, I'm not sure how to interpret SRMR with tetrachoric correlations. Is the value of .234 reliable? If so, do you believe it represents a better indication of fit than RMSEA (.028) for this analysis?

Finally, what do you think would be the best way to compare nested models? Are chi-square difference comparisons appropriate with tetrachoric correlations, or should I use CFI?

Thank you very much!
 Linda K. Muthen posted on Wednesday, March 02, 2005 - 2:39 pm
You might find the following publication helpful:

Yu, C.Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Doctoral dissertation, University of California, Los Angeles.

It can be downloaded from the Mplus website from Mplus Papers. This dissertation examines the behavior of the fit measures you are asking about for categorical outcomes.

I believe your reference to degrees of freedom and weighted least squares estimation refers to the fact that for the WLSMV estimator, the degrees of freedom are not computed in the regular way. This does not make the chi-square untrustworthy. In fact, WLSMV is the Mplus default. I recommend that you use that not WLS. The degrees of freedom for WLS and WLSM are computed in the regular way.

I would compare nested models using chi-square difference testing. I'm not sure how two CFI values can be compared.
 shawna anderson posted on Thursday, March 03, 2005 - 3:24 pm
Hi,
I'm running a CFA, and I'm not getting an overall chi-square value (it just has a zero there) or RMSEA (missing altogether). I'm wondering what the cause of this would be.
Thanks!
 Linda K. Muthen posted on Thursday, March 03, 2005 - 3:31 pm
I would need to see the output to know for sure. You should send it to support@statmodel.com along with your license number. Did your model converge? What are your degrees of freedom?
 Anonymous posted on Thursday, March 24, 2005 - 9:19 am
I am running what I think is a simple regression analysis. The output shows a good model fit but I am not getting critical values.

I am using
Analysis:
type=general missing h1;
!iterations=2000;
PARAMETERIZATION=THETA;
estimator= WLSMV;
Model:
PERSISTE on support social academic
sregl extra sregef expect
seffic assert placemt faminc72;

persistence is a categorical outcome.

What am I doing incorrectly?
 Linda K. Muthen posted on Thursday, March 24, 2005 - 9:55 am
You should get a p-value for your chi-square. I assume you are, given that you say you get good model fit. What do you mean by critical values?
 Anonymous posted on Wednesday, April 06, 2005 - 10:00 am
I'm running a CFA model with the factors I got from EFA. Three factors from 14 binary variables, obtained from EFA. But the test of model fit for CFA shows that the model is not a good fit.

Chi-Square of model fit:
Value 165.523*
Degrees of freedom 51**
P-value 0.00000

RMSEA is 0.035, which is OK according to the 0.06 criterion.

May I ask what should I do next?

Thank you very much!
 Linda K. Muthen posted on Friday, April 08, 2005 - 2:38 am
It sounds like there may have been important cross-loadings on some of your items that you have fixed to zero in your simple structure CFA. Or perhaps you have a very large sample which makes the chi-square test of fit overly sensitive. I would look at modification indices and also the other fit measures, for example, CFI.
 Anonymous posted on Friday, May 27, 2005 - 9:56 am
Dear Dr. Muthen,

I am running CFA and MIMIC models and checking the fit of the models: while the CFI/TLI are around 0.93 and RMSEA is close to 0.06, the p-value for the Chi-square Test of Model Fit is 0.0000.

It seems there is a paradox: while the CFI/TLI and RMSEA indicate the models fit the data well, the chi-square test does not. Is that right? How should I interpret these indices?

Thank you very much.
 Linda K. Muthen posted on Friday, May 27, 2005 - 12:39 pm
The issue here is that chi-square is a test of exact fit. This makes it sensitive to sample size. With large samples, there is a lot of power to reject the null hypothesis. You can do a sensitivity test by freeing parameters until you obtain an acceptable chi-square. Then compare the original parameter estimates etc. to the new ones. If they are close to the same, you can conclude that chi-square was too sensitive. If they are not close, you can conclude that chi-square was correct about model fit.
 Mark Torrance posted on Monday, August 08, 2005 - 10:24 am
I have a set of CFA models that are all giving RMSEA of <.05 but TLI and CFI well below .95 (they range between .84 and .87).

Beyond the fact that my models don't fit as well as they might, is there anything else I can conclude from the discrepancy between these different fit statistics?
 bmuthen posted on Monday, August 08, 2005 - 1:18 pm
I don't think so. The level of agreement between fit indices seems to depend very much on the data, the model, and the kind and degree of model misspecification. I think Yu's dissertation illustrates that (see a pdf at this web site), for example when studying the Holzinger-Swineford data.
 Jeremy Miles posted on Tuesday, August 09, 2005 - 5:13 am
In my experience this (Mark Torrance's problem) can happen because of the null model fit. The incremental fit indices (CFI, IFI, etc) compare the model with the null model.

If the correlations between the variables are low, then the null model is not very bad, and that means that the incremental fit indices don't show much improvement.

To get some idea, run a model in which the model: statements are all empty. This will estimate the null model - have a look at the RMSEA. Ideally, this should indicate dreadful fit. If it's not appalling, the null model isn't so bad.
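
In Mplus, one way to set that up (variable names hypothetical) is a model that mentions only the variances of the observed variables, leaving all covariances at their default of zero:

MODEL:
y1-y10;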

Sometimes this can happen with questionnaire data where the questionnaire items are poor. There is so much error in each measure that the correlations between them are low.

This has come up occasionally on semnet, so it might be worth searching the archives there. Most recently there was a posting by Stan Mulaik. (An example of the opposite problem is discussed in Browne MW, MacCallum RC, Kim CT, Andersen BL, Glaser R: When fit indices and residuals are incompatible. Psychological Methods 2002, 7:403-421.)

JM
 Joan Gerring, M.D. posted on Thursday, August 11, 2005 - 1:46 pm
Dear Dr. Muthen,

I am reviewing an article that uses Mplus v3.11. Could you please help me with these abbreviations of model fit:

CFI
TLI
RMSEA
WRMR

What do these abbreviations stand for? It would be a great help to me and to the author of the article for me to know. I was delighted to find this website. Thank you very much,

Joan Gerring, M.D.
Johns Hopkins School of Medicine
 bmuthen posted on Thursday, August 11, 2005 - 2:01 pm
See Technical Appendices in the left margin of the Mplus home page - Appendix 5 discusses these indices (comparative fit index, Tucker-Lewis fit index, Root Mean Square Error of Approximation, and Weighted Root Mean Square Residual).
 Ching-Lin posted on Tuesday, September 20, 2005 - 9:44 am
Hi,
I am trying to use MIMIC for DIF detection. I get some results from the outputs of Mplus. There are degrees of freedom (df) for the DIF model and the baseline model, respectively, but I have no idea how the df were calculated. Are there any papers that discuss the df of the MIMIC model? Thanks for any comment.
 Linda K. Muthen posted on Wednesday, September 21, 2005 - 7:27 am
It would depend on which estimator you are using. I think you can find this information in the Technical Appendices on the website.
 Christine McWayne posted on Thursday, October 13, 2005 - 12:04 pm
Hi, I ran across this site while looking for information. I am running a CFA using Mplus based on results from an EFA. My SRMR and RMSEA values are within the acceptable range (according to Hu & Bentler), but my CFI is low (.80). The modification indices do not seem to indicate anything troublesome. My sample is 394 for a 40-item scale, where only 36 items loaded above .40 on one factor (and were therefore included in the CFA analysis). It's a 3-factor model based on several EFA rules and parenting theory. I'm wondering how to interpret the low CFI value against the other cut-offs. The scale is a 4-point Likert scale. Any recommendations?
 Linda K. Muthen posted on Thursday, October 13, 2005 - 1:31 pm
CFI is usually a pretty reliable fit index. I assume that your chi-square value also indicates poor model fit. I am surprised that you see no large modification indices. You can send your input, data, output, and Mplus license number to support@statmodel.com if you would like me to look at it.
 Anonymous posted on Friday, October 28, 2005 - 6:43 am
Hello. I am trying to compare Model A with 5 correlated first-order factors with a Model B, a 4-factor model that has 4 of the same correlated first-order factors from Model A but excludes the items that load on the 5th factor in Model A (because these items are potentially problematic in certain ways).

Chi-square difference is inappropriate here because the models are not nested, correct?

I know that information theory-based indices like AIC are appropriate for comparing models, regardless of whether the models are nested. However, is it meaningful to use AIC or similar indices when one model includes some indicators that are not in the other model? I can't seem to find the answer to this anywhere.

If yes, then I am all set. If no, then is there any meaningful way to compare Model A and Model B?

Thanks for your time.
 Linda K. Muthen posted on Friday, October 28, 2005 - 7:39 am
I think one generally wants to have the same set of observed variables when comparing two models in order to see which model fits better. In your case, you have different sets of observed variables. What would be the question you are trying to answer by comparing the two models?
 Anonymous posted on Friday, October 28, 2005 - 8:28 am
The instrument was designed to tap 5 distinct constructs - 4 of them behavior difficulties, 1 of them prosocial behavior. There is some concern that the items for the prosocial factor represent more of a method factor than a symptom dimension (they are reverse-scored; almost all the other items are not). Because of this, and because this particular factor is conceptually distinct from the other 4 factors (the "total" score for the scale is based solely on the items from the 4 behavior difficulties factors), I want to compare the 5-factor model to a 4-factor model that includes just the behavior difficulties factors. Can this be done in a meaningful way?
 Linda K. Muthen posted on Friday, October 28, 2005 - 8:47 am
What would the question be that you are asking? If I had the same set of observed variables, I would ask which model fits the data better? The four factor model or the five factor model? I think it will help to formulate the question that you are asking. Then you can decide if it can be answered in a meaningful way.
 Anonymous posted on Friday, October 28, 2005 - 12:09 pm
The question is which model fits better, the 5-factor model for the 25-item data (5 indicators per factor) or the 4-factor model for the 20-item data (also 5 indicators per factor). The latter model excludes those last 5 indicators (the 5th factor) because they seem to represent a method factor (all 5 items are reverse-scored) rather than a symptom factor.

I understand that typically we compare model fit for different models using the same observed data. Is there a meaningful way to compare these models given that they are based on data sets with different numbers of variables?

Does it make sense to use all 25 items for the 4-factor model but just not let the last 5 items load on any factor? If so, what about other parameters associated with these items besides factor loadings?

Or, would it be better to test a 6-factor model, where the 6th factor is a method factor, and have reverse-scored items load on the appropriate symptom factor and on the method factor? If so, would it be more appropriate to estimate the correlations between the method factor and each of the 5 symptom factors, or would it be better to constrain the correlation between the 6th factor and each of the 5 symptoms factors to be zero?

Thanks again.
 Linda K. Muthen posted on Friday, October 28, 2005 - 1:15 pm
I don't know of a meaningful way although there certainly may be one. There's a lot I don't know.

If I were in this situation, I would probably go back to an EFA to help understand whether my variables are behaving the way they were intended to behave. But maybe you have already done that. For example, see if they load on the factors they were developed to load on, or if there are unintended cross loadings, or if there is a methods factor.
 Mark Torrance posted on Friday, November 25, 2005 - 9:27 am
Thanks for previous help. I'm stuck again...

I'm performing a series of CFAs in v3.12 with type=complex to make the analysis cluster-robust. This requires MLR. I want to perform chi-square difference tests and have looked at the method for doing this for MLM and MLR that's outlined on Mplus web pages. I understand how this works with MLM, because this gives me Satorra-Bentler Chi-Square. However, MLR gives Yuan-Bentler T2* rather than SB. Do I just treat this as if it were the SB chi-square, and if not, how do I set about doing my difference test?
 Linda K. Muthen posted on Saturday, November 26, 2005 - 6:26 am
You can follow the same steps as with MLM.
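
In brief, the procedure on that page (with T0 and T1 the printed chi-square values for the nested and comparison models, c0 and c1 their scaling correction factors, and d0 and d1 their degrees of freedom) is:

cd = (d0*c0 - d1*c1) / (d0 - d1)
TRd = (T0*c0 - T1*c1) / cd

TRd is then referred to a chi-square distribution with d0 - d1 degrees of freedom.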
 jim posted on Wednesday, February 22, 2006 - 1:04 pm
Hello. I need to run a multiple-group CFA and my continuous data are nonnormal. I was planning to use my LISREL software, but found out that LISREL does not compute the Satorra-Bentler adjusted fit indices for multiple groups (it does for a single group). Since my data are nonnormal, I want to use this adjustment when running my multiple-group analysis.

From what I have read about MPLUS, you have the Satorra-Bentler estimator (MLM) and MPLUS produces fit indices (CFI, RMSEA). Now I just need to know if these fit indices are adjusted for nonnormality if MLM is used as the estimator in a multiple-group model.

Can you let me know if MPLUS is capable of this so I can get it ordered ASAP if it is.
 Linda K. Muthen posted on Thursday, February 23, 2006 - 10:36 am
Mplus does compute the Satorra-Bentler chi-square for multiple group analysis. A better choice might be MLR.
 jim posted on Thursday, February 23, 2006 - 10:45 am
Thanks, Linda, for the feedback.
Does MPLUS adjust the fit indices (CFI, RMSEA) along with producing the S-B chi-square for multiple-group procedures?

Also, why do you say that MLR might be a better choice?
 Linda K. Muthen posted on Friday, February 24, 2006 - 10:40 am
Yes, all fit indices are available for multiple group.

In some situations we have seen MLR behave better than MLM, for example, with complex survey data. In Version 4, MLM will not be available for these situations.
 Lois Downey posted on Friday, May 19, 2006 - 10:40 am
I've been using clustered CFA with FIML missing-data handling to evaluate a conceptually-derived model of patients' evaluations of physicians' end-of-life care. The dataset includes 801 patients clustered under 92 physicians. Indicators are dichotomous, and I used the default WLSMV estimator and the default parameter constraints. To assess fit, I used the following criteria:
normed chi-square (chi-square/df) <3.00
CFI and TLI >0.95
RMSEA <0.06
WRMR <1.00.

The original conceptual model included 29 indicators, 5 1st-order latent variables, and 1 2nd-order latent variable. All fit statistics except WRMR (1.017) met the fit criteria. Modification indices were all <8.00. Elimination of 3 indicators with low coverage (0.233, 0.263, 0.317) and two additional indicators that contributed to correlated residuals produced a 24-indicator model that met the fit criteria.

I have two questions:
(1) Do you think the normed chi-square is a reasonable "substitute" for the actual chi-square test of model fit? (The models I've produced with our datasets typically do not produce chi-square tests that come even close to non-significance.)
(2) I've read some comments about the WRMR that suggest that under some circumstances, it has performed less well than hoped. Does this suggest that I should perhaps disregard the WRMR and accept my original 29-indicator model as "good enough"? (chi-square/df = 93.155/44 = 2.12; CFI = 0.995; TLI = 0.998; RMSEA = 0.037; WRMR = 1.017)

Thanks.
 Linda K. Muthen posted on Friday, May 19, 2006 - 1:36 pm
1. I would not use the normed chi-square. I would do a sensitivity analysis by freeing parameters until I get good fit. I would then compare my original estimates to their counterparts in the new analysis and see if they remain the same. If so, I would assume that chi-square was too sensitive and go with my original model. If the original parameter estimates changed dramatically, I would assume my original model does not fit.
2. I wouldn't worry about WRMR if all else is okay.
 Espen Røysamb posted on Monday, June 12, 2006 - 11:39 pm
Chi-square diff testing with WLSM

I'm doing CFAs with categorical data and the WLSM estimator.
How should a chi-square difference test be performed, taking the scaling correction factor into account? There are refs to the web-pages but I can't find any specific procedures.
Is the procedure similar to that described for MLM/MLR ( http://www.statmodel.com/chidiff.shtml )?

Thanks
 Linda K. Muthen posted on Tuesday, June 13, 2006 - 9:02 am
You would use the same procedure as for MLM and MLR.
 Monal Shroff posted on Friday, August 04, 2006 - 9:25 am
Hi

I am trying to do CFA and EFA with 60 observed variables. I have a lot of missing values within my categorical observed variables. I am using TYPE=MISSING in the ANALYSIS command and F1 BY (my observed variables); but the output doesn't show the correct number of observations representing all the cases in my dataset (i.e., only 226 is shown as opposed to 452). I am not sure if this is OK, and whether it means the program is not doing listwise deletion while at the same time giving me FIML output.

Your response will be highly appreciated.

Thank you.
 Linda K. Muthen posted on Friday, August 04, 2006 - 9:53 am
If you are using TYPE=MISSING; and you don't see the correct number of observations, it is likely that you are reading your data incorrectly. You either have more variable names in the NAMES statement or you are reading your data free format and you have blanks in your data. If this information is not sufficient to help you, please send your input, data, output, and license number to support@statmodel.com.
 Monal Shroff posted on Friday, August 04, 2006 - 10:05 am
Thank you so much. I found the mistake!
 Annie Desrosiers posted on Monday, October 30, 2006 - 6:56 am
Hi

I'm running an overall model and I need this information: RMSEA, CFI, and chi-square.
Is it possible?
Here is my input:

variable: names are id sexe age1-age3 p y1-y3 x1-x3 n1-n3 v1-v3;
usevariables are y1-y3 n1-n3;
useobservations are sexe eq 2;
classes = c(3);
missing = . ;

analysis: type = mixture missing;
starts = 500 10;

model: %overall%
i1 s1 | y1@0 y2@1 y3@2;
i2 s2 | n1@0 n2@1 n3@2;

Thank you so much!
Annie
 Bengt O. Muthen posted on Monday, October 30, 2006 - 7:09 am
With mixtures, it is not relevant to test model fit using the mean and covariance structure usually considered in SEM and on which conventional SEM fit indices are based. This is because mean vectors and covariance matrices are not sufficient statistics - the model implies restrictions beyond the second-order moments and needs raw data to be estimated. Instead, fit is judged by "neighboring models". For example, first do a 1-class conventional growth model and then do a 2-class model - then compare loglikelihoods using Chi-square. See Muthen (2004) in the Kaplan handbook on our web site for further info on model choice.
 Annie Desrosiers posted on Thursday, November 09, 2006 - 5:56 am
Hi,

I have this model :

VARIABLE: NAMES ARE u1 x1 x3;
NOMINAL IS u1;
MODEL: u1#1 u1#2 ON x1 x3;

Is it possible to get fit statistics (chi-square, RMSEA, ...) with this kind of model?

Thank you
Annie
 Linda K. Muthen posted on Thursday, November 09, 2006 - 10:00 am
The model is just identified so you will not get fit statistics. With nominal outcomes, you will not get chi-square even if the model has degrees of freedom because means, variances, and covariances are not sufficient for model estimation.
 Annie Desrosiers posted on Thursday, November 09, 2006 - 10:08 am
Thank you,

My problem is that I want to compare two classifications.

I did an analysis to know how many classes are in my data. And I'm not sure about 4 or 5 classes.
I did a logistic regression on the two classifications, but I need fit statistics to choose the best one!

Do you have any suggestion?

Annie
 Linda K. Muthen posted on Thursday, November 09, 2006 - 6:08 pm
If you are trying to determine the number of classes, you should be looking at BIC, loglikelihoods, and other measures. Under recent papers, you will find a paper by Bengt Muthen in a book edited by David Kaplan. This outlines how to determine the number of classes.
 Matt Moehr posted on Friday, November 10, 2006 - 8:40 am
I'm stuck between using an age covariate (MIMIC) and a multiple group analysis of factor invariance based on age groups. The crux of the problem is that I would like to have age (in months) actually be a continuous variable in the model. See the explanation below, but here's my current strategy:

Model 1.
VARIABLE: NAMES ARE x1-x9;
ANALYSIS: TYPE=missing;
MODEL: F1 BY x1-x9;

Then I covary by age, Model 2.
VARIABLE: NAMES ARE age x1-x9;
MODEL: F1 BY x1-x9;
F1 ON age;

Using the estimates from Model 1, I fix the loadings and residuals in Model 2. (Is this analogous to invariance?)
Model 3.
MODEL: F1 BY x1@1 x2@.798 ... x9@.774;
x1@.415 x2@.798 ... x9@.774;
F1 ON age;

I apparently needed to fix the loadings to recover the model fit from Model 1.

Model 1: chi^2 = 29.9 (27 df), p = .33
Model 2: chi^2 = 53.2 (35 df), p = .03
Model 3: chi^2 = 58.7 (52 df), p = .23

Is this a valid approach? Is there a way to treat age as a "continuous group" variable to test invariance in the more traditional way?
 Matt Moehr posted on Friday, November 10, 2006 - 8:42 am
Background: This is a study of 3-6 year old children examining the development of certain cognitive attributes. The use of a single factor or multiple factors is a hotly debated subject, but my colleagues have a paper in press where the 1-factor model above seems to be well supported. Now we would like to show that this factor has some sort of predictive capabilities. But in order to relate the cognitive factor to educational or behavioral outcomes we need to "control" for age. I realize this could/should be done as a multi-group analysis of measurement invariance, but there are two problems with that: 1) the younger the kids the more missing data there is, and 2) I don't really want to impose an arbitrary developmental cut-off point (say 3&4 year olds vs. 5&6).

Both technical and theoretical comments are welcomed.

Thanks,
matt
 Linda K. Muthen posted on Saturday, November 11, 2006 - 11:28 am
You can approach measurement invariance in two ways -- a CFA with a covariate (MIMIC model) or multiple group analysis. If you use a CFA with a covariate, you can assess only intercept invariance using direct effects. Factor loading invariance cannot be studied. In our experience, factor loading invariance is most often not a problem. It is intercept invariance. You can use multiple group analysis to study both intercept and factor loading invariance. However, you will have to make some decision about age groups.
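
As a sketch of the MIMIC approach with a direct effect (names hypothetical), where a significant direct effect of age on an indicator signals intercept noninvariance for that item:

MODEL:
f1 BY x1-x9;
f1 ON age;
x3 ON age; ! direct effect: tests intercept noninvariance for x3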

The approach you suggest above is not how measurement invariance is generally looked at. See the discussion of testing for measurement invariance in Chapter 13 in the Mplus User's Guide. It comes at the end of the discussion of multiple group analysis.
 yshing posted on Wednesday, December 13, 2006 - 2:25 am
Is a negative AIC value produced in Mplus plausible? I understand that in Mplus, AIC is computed from the loglikelihood as -2*logL + 2*(number of free parameters). My loglikelihood turned out to be positive, hence leading to a negative AIC value. What does this indicate? The other fit indices look reasonable.
 Linda K. Muthen posted on Wednesday, December 13, 2006 - 7:49 am
This is unusual but possible. Sometimes the loglikelihood can be positive resulting in a negative AIC. If you want me to look at this further, send your input, data, output, and license number to support@statmodel.com.
 stephane vautier posted on Wednesday, December 13, 2006 - 8:47 am
Hello,
Using the maximum likelihood robust estimator, is it possible to get a confidence interval for the RMSEA?
Thanks in advance for your answer.
 Linda K. Muthen posted on Wednesday, December 13, 2006 - 9:09 am
Not at this time.
 Ralf Wierich posted on Tuesday, January 09, 2007 - 6:45 am
Hello,
when using MLR, Mplus gives me the YB-\chi^2, right?
Is the calculation of the fit indices like TLI and CFI based on the YB-\chi^2?

Thanks in advance!
 Linda K. Muthen posted on Tuesday, January 09, 2007 - 9:34 am
The chi-square for MLR is asymptotically equivalent to the Yuan-Bentler T2* test statistic. CFI and TLI are based on whichever type of chi-square is given.
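
For reference, with T and d the chi-square and degrees of freedom of the analysis model and TB and dB those of the baseline model, the usual formulas are:

CFI = 1 - max(T - d, 0) / max(T - d, TB - dB, 0)
TLI = (TB/dB - T/d) / (TB/dB - 1)

so whichever chi-square the estimator produces enters these formulas directly.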
 Julia Diemer posted on Tuesday, April 24, 2007 - 5:51 am
Hello,

I am using Mplus for a CFA with ordinal data (4-point Likert scale) for my dissertation. The distributions are also rather skewed. I am looking for ways to assess model fit.
I understand that the cutoff criteria for various fit indices suggested by Hu and Bentler (1999) refer to normal data, and that Yu (2002) extends their findings for non-normal and categorical (binary) data, the latter using WLSMV as the estimator.
I was wondering whether this research generalizes to WLSMV estimation based on ordinal data, and if it would make sense to work with the cutoff criteria Yu (2002) suggests for binary (unequal proportions) data.
Has anything been published about the performance of fit indexes and cutoff critera with WLSMV estimation and ordinal data?

Thank you very much,
Julia
 Linda K. Muthen posted on Tuesday, April 24, 2007 - 8:20 am
I don't know of any study of fit statistics for ordinal dependent variables. The cutoffs for binary dependent variables are very similar to those for continuous dependent variables. I would think they are similar for ordinal. You would have to do a Monte Carlo simulation study to answer this question.
 Alex posted on Monday, June 04, 2007 - 1:22 pm
Greetings,

Is it possible to obtain the 95% confidence interval for the RMSEA using MLR and/or WLSMV? If it is, how?

Thank you
 Linda K. Muthen posted on Monday, June 04, 2007 - 3:48 pm
Not at this time. If this has been defined in the literature, we are not aware of it.
 David Bard posted on Thursday, August 02, 2007 - 4:40 pm
In the June 13th response to Espen Røysamb, I see that a WLSM difference test is performed in like manner to that of MLR & MLM. I have two related questions:

1. I've noticed the scaling parameter in WLSM does not [always?] equal the ratio of the WLS chi-square and the WLSM chi-square (as would be true for MLR & MLM). How is this scaling parameter calculated? Is it possible to calculate this parameter by hand using Mplus output (I'm trying to use it in a simulation study where the Difftest calculations are too difficult to capture)?

2. For judging significance of any of these difference tests (MLR,MLM,WLSM), do you use the difference in the adjusted degrees of freedom or the unadjusted degrees of freedom?
 Joanna Harma posted on Thursday, August 02, 2007 - 10:44 pm
Dear Dr. Muthen
I just ran a CFA with 24 variables and got a three-factor solution (an EFA on the same variables suggested around a 6-factor solution, but those factors make no sense for interpretation). Now my model meets all the goodness-of-fit criteria but one, the chi-square test of fit.
1. Is there any way you can suggest which could improve my model (I've taken into account modification indices)?
2. Will my model be considered bad if it does not fit according to chi-square? And how does not fitting chi-square affect the model?
3. The correlation between my factors in EFA was quite low (0.3-0.5), but it is quite high when I do CFA (0.7-0.8). Why is that? Also, do you think I could use factors with such high correlations as independent variables when doing regression analysis?
Many thanks,
Joanna
 Linda K. Muthen posted on Friday, August 03, 2007 - 10:21 am
Harma: One suggestion is to do a sensitivity analysis by freeing parameters until chi-square shows good fit and seeing if this changes the original results. If it does, I would worry about model fit.

The correlations go up because you go from EFA to simple structure CFA. I wouldn't worry too much about the size of the correlations.
 Linda K. Muthen posted on Friday, August 03, 2007 - 1:51 pm
Bard: That's correct -- WLS is not the same estimator as WLSM in terms of point estimates, and thus there is no direct relation between these chi-square statistics (beyond the same asymptotic properties). The scaling parameter for WLSM is calculated according to formula (106) in the Technical Appendices: http://statmodel.com/download/techappen.pdf

The output gives you the scaling correction factor to use in the difference testing. Are you not getting this?

There are no adjusted degrees of freedom for WLSM. You can either use the difference in the degrees of freedom or the difference in the number of free parameters.
 David Bard posted on Friday, August 03, 2007 - 4:21 pm
Thank you.

I do get the scaling parameter in the individual output files with WLSM. I was hoping to use the Mplus Montecarlo output, though, for speed considerations. I do not get the scaling parameter in that output, right? If true, is there a way to add these scaling parameters to the results file in Monte Carlo?
 Linda K. Muthen posted on Friday, August 03, 2007 - 4:45 pm
The scaling factor is not saved for Monte Carlo simulations.
 Christian Geiser posted on Monday, October 08, 2007 - 1:28 pm
Is there any known reason why Mplus does not produce exactly the same ML chi-square as other SEM programs such as EQS? I read exactly the same covariance matrix into Mplus 4.2 and EQS 6.1 and obtained slightly different chi-squares. The issue is that the editor of a journal wants me to reproduce exactly (in the revision of a paper) the same findings he gets, obviously with a different program than Mplus. I get the same chi-square he gets in EQS but not in Mplus. So I am just wondering what the reason might be... Thanks.
 Linda K. Muthen posted on Monday, October 08, 2007 - 2:05 pm
The reason is likely that we use n and the other program uses n-1.
 Jessica Schumacher posted on Friday, October 26, 2007 - 12:00 pm
I am running a CFA with ordinal variables. The CFI is greater than 0.95, suggesting good model fit -- but the RMSEA is high (0.3) and the chi-square is high also (likely due to sample size -- n=9,000). I have not found a paper that has suggested cut-offs for ordinal data. Is it appropriate to rely solely on the CFI/TLI criteria? Also -- I used a * after the first variable in my model statement to free it for estimation -- but it appears that doing that creates a situation in which the standard errors are not able to be calculated (identification problem). Is there a way to get around this in MPLUS so that I can report factor loadings for all of the variables? Thank you.
 Linda K. Muthen posted on Friday, October 26, 2007 - 12:37 pm
If you free the first factor loading, you need to either fix another one or fix the factor variance to one. This is described in Chapter 16 of the user's guide under the BY option.
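
For example (names hypothetical):

f1 BY y1* y2-y5;
f1@1;

This frees the first loading and fixes the factor variance to one for identification.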

See the Yu dissertation on the website for cutoffs for categorical outcomes.
 Mi-young Lee Webb posted on Thursday, March 13, 2008 - 12:27 am
I ran a simple CFA model via Mplus and LISREL but got substantially different model fit indices. The chi-square statistics were similar, but CFI was .732 in Mplus (.895 in LISREL), TLI = .708 in Mplus (.886 in LISREL), and RMSEA = .099 in Mplus (.107 in LISREL).

Any suggestions?

Thanks,
 Linda K. Muthen posted on Thursday, March 13, 2008 - 6:09 am
The difference you see in chi-square and RMSEA is likely due to the fact that Mplus uses n and LISREL uses n-1. The differences in CFI and TLI are due to the different baseline models used to compute them. LISREL uses a baseline model that includes covariates with zero covariances among the covariates. This causes the baseline model to fit poorly and makes the H0 model fit look better. We do not believe in this baseline model.
 Alexandre Morin posted on Friday, March 14, 2008 - 5:32 am
Greetings Linda,

Following up on this one, I saw this in the Tech appendices (p. 23):
"the baseline model has uncorrelated outcomes with unrestricted variances unrestricted means and/or thresholds. With two-levels models, the baseline model sets both the between and within covariances to zero. With categorical outcomes, the baseline model does not set to zero the covariances among the covariates of X because the x variables are not part of the model".
From what you replied to the previous question, I am now led to believe that the baseline model for any kind of outcome includes covariances among covariates? Is that it?
 Linda K. Muthen posted on Friday, March 14, 2008 - 8:22 am
We never fix the covariances of covariates to zero.
 meng-li yang posted on Thursday, March 20, 2008 - 11:10 pm
I am doing multiple group factor analysis to test the invariance of parameters. There are more than 13000 cases in my dataset.
In doing a difference test, I found that the chi-square value (108) is highly significant with 10 degrees of freedom, but the other fit indices (TLI, CFI, and RMSEA) are satisfactory in both models, H1 and H0, although the more relaxed model is a little bit better.

I guess this is because of the huge sample size. Maybe I can just ignore the chi-square test for now?
However, when the model gets more restricted, the other fit indices might fall just a little bit below the acceptable criteria, and I will not be able to use chi-square to test if this is really serious.

So, my question is: is there any way to test the invariance other than the chi-square test when the sample size is very large? Thank you.
 Linda K. Muthen posted on Friday, March 21, 2008 - 7:25 am
Even though chi-square may be sensitive to large samples, I think this sensitivity is less when chi-square is used for difference testing. I don't know of any other option for difference testing.
 meng-li yang posted on Saturday, March 22, 2008 - 4:59 pm
Thank you, Linda.

But then, do you think I should stick to the chi-square test result or refer to the other fit indices when chi-square says the models are highly significantly different but the other fit indices say both are acceptable?

Thank you again.
 Linda K. Muthen posted on Saturday, March 22, 2008 - 5:45 pm
I have not seen other fit statistics used for difference testing. Another option is randomly splitting your sample to reduce the sample size.
 meng-li yang posted on Monday, March 24, 2008 - 4:12 am
Hello, Linda, it's me again.

The indicators in my dataset are categorical variables (4 levels). n>=13000.

Three questions:

1. I found from watching an instructional movie on the web that the chi-square test for categorical indicators is not good. But is it still OK to do the difference test by comparing chi-square values using DIFFTEST?

2. Yu's dissertation seemed to suggest that WRMR is a good fit index, while SRMR is not recommended. However, as with my dataset and the example shown in the movie, WRMR seems large when the other indices seem reasonably good. Should I care about WRMR in my case?

3. I found that the fit indices look better when I constrained loadings and thresholds across groups
(model: f1 by y15-y23;
f2 by y24-y28;
y22 with y23;
y17 with y18;

model female: [f1-f2];)
than when I set them free
( model: f1 by y15-y23;
f2 by y24-y28;
y22 with y23;
y17 with y18;
[f1-f2@0];
{y15-y28@1};
model female: f1 by y16-y23;
f2 by y25-y28;
y22 with y23;
y17 with y18;).
Is this possible? We usually want to free more parameters when the model does not fit well. Thank you.
 Linda K. Muthen posted on Monday, March 24, 2008 - 9:18 am
1. I think what you mean is that for WLSMV, the chi-square value and degrees of freedom cannot be interpreted in the regular way. The proper way to do difference testing with WLSMV is to use the DIFFTEST option (see the sketch at the end of this post).

2. I think Yu concluded that CFI does well. I would pay more attention to CFI than to WRMR.

3. I'm not sure what you mean. The MODEL commands you show are for the unrestricted not the restricted model. If you have further questions on this, please send your input, data, output, and license number to support@statmodel.com.
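
For reference, the two-step DIFFTEST setup looks roughly like this (the derivative file name is arbitrary). First estimate the less restrictive (H1) model and save the derivatives:

SAVEDATA:
DIFFTEST = deriv.dat;

Then estimate the nested (H0) model, pointing to that file:

ANALYSIS:
ESTIMATOR = WLSMV;
DIFFTEST = deriv.dat;

The output of the second run reports the chi-square difference test.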
 meng-li yang posted on Sunday, March 30, 2008 - 8:08 pm
I use WLSMV for categorical data. I read in the user's guide that when doing chi-square tests for nested models with WLSMV as the estimator, DIFFTEST should be used.

However, in the appendices on the web, it seems to me that DIFFTEST is suitable for continuous but non-normal data, not for categorical data. So I cannot use DIFFTEST for my categorical data?

Am I correct?

Thank you.
 Linda K. Muthen posted on Monday, March 31, 2008 - 6:13 am
DIFFTEST is used with WLSMV and MLMV for chi-square difference testing. It is appropriate for categorical data.
 meng-li yang posted on Monday, March 31, 2008 - 11:37 pm
Linda,

One more question:

when doing the DIFFTEST:
The chi-square turned out to be several hundred and significant, but the CFI dropped only slightly and is still within the acceptable range, going from CFI = .968 to CFI = .955.

In this case, should I make the model selection decision based on the difftest or on CFI?

That is, should I accept that there is a group difference in some parameter (factor mean) based on the difftest?
Or, since the CFI looks OK, should I say they are acceptably invariant?

Thank you.
 Linda K. Muthen posted on Tuesday, April 01, 2008 - 8:58 am
The chi-square difference test answers the question of whether the model restrictions significantly worsen the fit of the model. You can't answer that question with CFI. With CFI, you are answering the question about the fit of two different models. I think a CFI of .955 is marginal. The chi-square difference test is a more stringent test.
 meng-li yang posted on Friday, April 11, 2008 - 2:59 am
Dear Linda,

Regarding the contradictory suggestions of the DIFFTEST value and the CFI value with categorical outcome variables and continuous latent variables, I got an even more extreme case:

The DIFFTEST result:
chi-square= 42.01, df=1
whereas CFI jumps from .974 to .983.

This happens when I am testing if the factor variances are the same across groups. The sample size is about 13000.

Should I accept the advice of DIFFTEST and say that the two variances are different, or take the advice of CFI and regard them as equal?

Thank you for your help.
 Linda K. Muthen posted on Friday, April 11, 2008 - 8:29 am
I would use the DIFFTEST results. CFI does not answer the question about the variances being different.
 meng-li yang posted on Sunday, April 13, 2008 - 9:13 pm
Thank you, Linda.

I have four more questions regarding multipe group confirmatory factor analysis:

1. The Mplus short course handout suggests freeing factor loadings (and other things) and fixing factor means in the second group. The third step does mostly the reverse.

The two models from these two steps do not seem to be nested. So do we use only the model fit indices as the criteria for them?

2. Is residual covariance also part of the measurement model?

3. Do I have to do DIFFTEST for it?

4. If the model fit is not better (or DIFFTEST looks bad) when I constrain the residual covariance to be equal across groups, do I stop here and not test for population heterogeneity?
 Linda K. Muthen posted on Monday, April 14, 2008 - 7:56 am
The models shown are nested. Note that Topic 1 covers continuous variables. For the models to test with categorical outcomes, see the section on measurement invariance in Chapter 13. With WLSMV, all nested model comparisons must be done using DIFFTEST.

Although residual variances and covariances are measurement parameters, many disciplines do not require measurement invariance of these parameters. Once you establish measurement invariance, it is appropriate to test the structural parameters.
 meng-li yang posted on Tuesday, April 15, 2008 - 12:03 am
Dear Linda,

Thanks for the information regarding the covariance of residuals.

However, I am still confused about testing measurement invariance:

On page 399 of Users' Guide version 5, there are 2 steps under the title of 'WEIGHTED LEAST SQUARES ESTIMATOR USING THE DELTA PARAMETERIZATION.' The first one frees thresholds and factor loadings, and fixes scale factors and factor means.
The second step constrains thresholds and factor loadings, and frees scale factors and factor means for one group.

These do not seem to be nested models to me. When I request DIFFTEST, the program says that they are not nested models either.

So, how do I make them become nested?
Thank you.
 Linda K. Muthen posted on Tuesday, April 15, 2008 - 6:24 am
The models are nested. You must not be setting up the model correctly. Please send the relevant files and your license number to support@statmodel.com.
 meng-li yang posted on Wednesday, April 16, 2008 - 12:57 am
Dear Linda,

Thank you for the confirmation. I got the difftest done with your assurance of the models being nested.

I have one more question. It is about CFA rather than multiple group CFA.

When I test whether a parameter should be included, do I also have to check with DIFFTEST, or do I just look at the CFI or the size of the parameter?

In my case, one item's cross loading on a second factor has a small value, such as .3 or .4. Dropping it lowers the CFI. But when I include another parameter (a residual covariance between two other items), the CFI jumps higher than before the cross loading was dropped.
Obviously, including the latter is more useful than including the former. But should I keep the former?

Are there any criteria for this?
Thank you.
 Linda K. Muthen posted on Wednesday, April 16, 2008 - 6:17 am
DIFFTEST is used to compare nested models. Whether a cross loading should be included should be driven by whether it is significant both in the statistical and practical sense and whether it is substantively supported. Residual covariances should also be substantively driven. Parameters should not be added and removed solely to affect model fit.
 Sophie van der SLuis posted on Tuesday, June 10, 2008 - 11:50 am
Dear Linda,
If I request residuals [i.e., the differences between observed and model-implied values] using the RESIDUAL option in the OUTPUT command, does Mplus then print standardized or unstandardized residuals?
Thank you
Sophie
 Sophie van der SLuis posted on Tuesday, June 10, 2008 - 11:53 am
Sorry, and related question:
can I write these residuals to a file to plot them?
[Eyeballing all residuals for all my groups doesn't seem a trustworthy option...]
thanks!
sophie
 Linda K. Muthen posted on Tuesday, June 10, 2008 - 3:04 pm
The RESIDUAL option in the OUTPUT command provides raw, normalized, and standardized residuals for continuous outcomes. Residuals cannot be saved. These are not individual residuals but residuals of the sample statistics.
 Sophie van der SLuis posted on Wednesday, June 11, 2008 - 1:40 am
Dear Linda,
You seem to suggest that I get (or can choose between?) three types of residuals (raw, normalized, standardized), but I only seem to get one type, and I can't infer from the output what kind of residuals are printed.

My output statement equals:

Output:
standardized residual;

and then I get the following output [as in the manual, page 506]:

ESTIMATED MODEL AND RESIDUALS (OBSERVED - ESTIMATED) FOR GROUP1

Model Estimated Covariances/Correlations/Residual Correlations
ZPCR ZSIR
________ ________
ZPCR 1.211
ZSIR 0.895 1.294


Residuals for Covariances/Correlations/Residual Correlations
ZPCR ZSIR
________ ________
ZPCR 0.029
ZSIR -0.047 -0.115

Are these residuals standardized, raw, or normalized? [NB: the residuals do not change when I delete the 'standardized' option from the OUTPUT command line.]

If I can choose between different kinds of residuals, how do I do that? I don't see that option described in the manual.

Thanks
Sophie
 Linda K. Muthen posted on Wednesday, June 11, 2008 - 6:41 am
It sounds like you are not using Version 5 or 5.1. The residuals you are getting are raw.
 Sophie van der SLuis posted on Thursday, June 12, 2008 - 6:13 am
Hi Linda,
You're right: I updated to version 5 and get all the residuals; great improvement!
Thanks again
Sophie
 Sophie van der SLuis posted on Thursday, June 12, 2008 - 12:19 pm
Hi Linda,
I now get the standardized residuals in Mplus Version 5, but sometimes the standardized residuals for (co)variances and intercepts are printed as 999.000, while the raw and normalized residuals [which may be positive or negative] seem OK [see below for an example]. Do you maybe have an explanation?
Best
Sophie

Residuals for Covariances/Correlations/Residual Correlations
ZRESPC 0.017
ZRESBD -0.009 -0.036

Standardized Residuals (z-scores)
ZRESPC 0.441
ZRESBD 999.000 999.000

Normalized Residuals for Covariances/Correlations/Residual Correlations
ZRESPC 0.217
ZRESBD -0.207 -0.679
 Linda K. Muthen posted on Thursday, June 12, 2008 - 12:36 pm
We have a technical appendix on standardized residuals on the website. If the variance in formula 16 is negative, 999 is printed.
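Roughly, as described there, a standardized residual is the raw residual divided by its estimated standard error:

standardized residual = (observed - estimated) / sqrt( estimated variance of (observed - estimated) )

When that variance estimate turns out negative, the square root is undefined, so 999.000 is printed instead.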
 Derek Kosty posted on Monday, July 07, 2008 - 11:27 am
Hello,

It has occurred to me that the BIC can be computed using the chi-square value.

BIC = chi-square - df * ln(N).

However, I believe that one assumption is that the chi-square value has to be based on the likelihood values of the null model and the model of interest.

1) What is the formula for the chi-square test of model fit when using the WLSMV estimator? (I cannot find this in the technical appendices.)

2) If the chi-square value is not a function of the likelihood, is it defensible to compute the BIC from it?
 Linda K. Muthen posted on Monday, July 07, 2008 - 3:02 pm
1. Technical Appendix 4, formula 108.
2. No.
 Erika Wolf posted on Friday, July 11, 2008 - 10:47 am
I'm using the MLR estimator for clustered categorical data (a CFA with 8 categorical indicators) for a nested and a comparison model. My output gives me:
1. The loglikelihood and scaling correction factor
2. AIC
3. BIC
4. Chi-square test of model fit for the binary and ordered categorical outcomes (for which a large number of cells with presumably low frequencies were deleted)
5. The likelihood ratio chi square
6. Pearson Chi square for MCAR
7. Likelihood ratio Chi Square for MCAR

I've computed the chi-square difference test using the -2 loglikelihood formula on the website, but my question is, are there any statistics here that can help me interpret absolute (not relative) model fit? I read that 4 and 5 (above) are not appropriate to evaluate with 8 or more variables in the model.

Thanks for your help.
 Linda K. Muthen posted on Saturday, July 12, 2008 - 10:53 am
I would use 5 and 6 if they agree. Ignore them if they don't. I would also look at the standardized bivariate residuals from TECH10.
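For reference, the loglikelihood-based difference test for MLM/MLR described on the website is, in outline:

cd = (p0*c0 - p1*c1) / (p0 - p1)
TRd = -2*(L0 - L1) / cd

where L0 and L1 are the H0 (nested) and H1 (comparison) loglikelihoods, c0 and c1 their scaling correction factors, and p0 and p1 their numbers of free parameters.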
 Erika Wolf posted on Tuesday, July 15, 2008 - 7:36 am
Thanks for your help. Are 5 and 6 (above) interpreted in the same way as the traditional model chi-square? In my case, both values are large (likelihood ratio chi-square = 1483, df = 6511; Pearson chi-square = 1407, df = 13038), and the p-value is 1.0 for both statistics. How is this interpreted?
Thanks again.
 Linda K. Muthen posted on Tuesday, July 15, 2008 - 11:17 am
The chi-squares in 5 and 6 are not testing the full model. They test the observed versus the estimated entries in the multiway contingency table for the categorical latent class indicators. When they both have probabilities of one, they should be ignored.
 Sanjoy Bhattacharjee posted on Tuesday, July 15, 2008 - 12:32 pm
Prof. Muthen,

1. What should be the exact citation for your 1997 paper (WLSMV estimator)?


2. Apart from citing Yu's dissertation, is there any other way of citing Yu and Muthén's (2001) work on cut-off criteria? I mean a journal article.

3. Did (Ching-Yun) Yu publish her work anywhere? I constantly refer back to her dissertation while working on latent variable models. But for some journals, a dissertation cannot be cited in the references. They just don't accept it. I don't know why.

Thanks and regards
 Linda K. Muthen posted on Wednesday, July 16, 2008 - 3:10 pm
1. Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Accepted for publication in Psychometrika.

Or you could call it a technical report. It was accepted but never revised and at this point won't be.

2. No.

3. No.
 Sanjoy Bhattacharjee posted on Thursday, July 17, 2008 - 9:45 am
Thank you Madam.

Regards
 Alexey Milstein posted on Sunday, December 21, 2008 - 5:41 am
Dear Prof./s,
it seems to me that I don't understand well what you mean by "residual correlations" in the section "Residuals for Covariances/Correlations/Residual Correlations" of the residuals output (I have a multi-group CFA with categorical data and use PARAMETERIZATION = THETA and ESTIMATOR = WLSMV).

You say the "residuals for <...>" are the (observed - estimated) values, but I am interested in how I can get the residual correlations. Is that the parameter that is addressed with the WITH operator?

If I see the term "residual correlation matrix", should I understand it as the matrix of residuals for correlations or as the matrix of correlations of residuals?

Thank you.

! I hope I could explain all this clearly; I'm not very good in English.
 Linda K. Muthen posted on Sunday, December 21, 2008 - 6:36 am
In the residual output, the residual is the difference between the observed correlation and the model estimated correlation. This is a way of assessing model fit.

If you want to include a residual correlation in your model, you do this using the WITH option.
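For example, with hypothetical item names u1 and u2, the statement

u1 WITH u2;

adds a residual correlation between u1 and u2 to the model.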
 Leslie Rutkowski posted on Thursday, January 08, 2009 - 2:25 am
Dear Linda & Bengt,

I am fitting a CFA where country is the grouping variable. I notice that the chi-square is broken down by group contribution, but there are no other group-specific fit statistics. Is there a way to request these? Or is it necessary to fit separate models for each country?

Thanks for your response and happy New Year!
 Linda K. Muthen posted on Thursday, January 08, 2009 - 6:09 am
There is no way to request this. It is always a good idea to fit each group separately as a first step to be sure that the model is correct for each group. I would do an EFA in each country as a first step to be sure that each country has at a minimum the same number of factors.
 ehsan malek posted on Wednesday, January 14, 2009 - 2:37 am
Dear Dr. Muthen,

I am running a CFA model with 4 latent variables (I have around 130 cases). The chi-square of the model is around 200 with 49 degrees of freedom using Pearson correlations (a poor fit). I used Kendall correlations and got a strange result: the chi-square was around 30 (a very good fit), CFI = 1, and RMSR = 0.0. What is your interpretation? Can I use Kendall correlations and say that I have a very good fit?

thank you in advance.
 Linda K. Muthen posted on Wednesday, January 14, 2009 - 8:06 am
The program does not know that you are using Kendall correlations rather than Pearson correlations. I would say the results using the Kendall correlations are not meaningful.
 ehsan malek posted on Thursday, January 15, 2009 - 9:37 am
What about Spearman's correlation?
Since Spearman's or Kendall's correlations can capture relations other than linear ones, can't we take this (much better fit with Kendall's or Spearman's correlations) as evidence of nonlinear relations among the variables?
 Linda K. Muthen posted on Thursday, January 15, 2009 - 9:54 am
Whatever type of correlations you use will be interpreted as though they are Pearson correlations. It would be incorrect to use other types of correlations.
 Derek Kosty posted on Wednesday, February 04, 2009 - 5:17 pm
Dear Mplus Team,

This is a follow-up to a previous post I made on July 7th, 2008. I asked, “If the chi-square value is not a function of the likelihood, is it defensible to compute the BIC from it?” Linda simply responded, “No”.

Now, after further consideration of our current research, it has been decided that the AIC would be more appropriate (I don’t expect this to change Linda’s response). Is this application of AIC/BIC not defensible because the computation of chi-square under WLSMV “essentially involves the usual chi-square statistic multiplied by an adjustment akin to the Satorra and Bentler (1986, 1988) robust chi-square test statistic…” (Flora & Curran, 2004, p. 470)? Or is the reason more fundamental than this? Any feedback would truly be appreciated as we have been wrestling with the issue of comparing the fit of non-nested models when using the WLSMV estimator.

Thank you for your support,

Derek
 Elisabet Solheim posted on Thursday, February 05, 2009 - 3:49 am
Hi,

I ran a CFA on data obtained from a questionnaire with 3 latent variables and 28 observed (Likert scale) variables. I used WLSMV as the estimator and did not get a very good fit. I therefore went back and did an EFA with 1-4 factors. I did this both with WLSM, which gave very high chi-square values, and then with ESTIMATOR = WLSMV, and the chi-square values went down; the other indicators stayed more or less the same. Now I am wondering if there is a way to do chi-square difference testing when WLSMV is used as the estimator. If not, how would one go about assessing whether one model fits the data better than another?

Best regards,
Elisabet
 Linda K. Muthen posted on Thursday, February 05, 2009 - 10:03 am
Derek: The chi-square for weighted least squares estimation is not based on a loglikelihood. It is a Wald chi-square. See Muthen (1984).

Both AIC and BIC are based on the loglikelihood. If you want further information about this, see

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461-464.
 Linda K. Muthen posted on Thursday, February 05, 2009 - 10:05 am
Elisabet: With WLSMV, only the p-value should be interpreted not the chi-square value or the degrees of freedom. You can test nested models with WLSMV using the DIFFTEST option.
 Elisabet Solheim posted on Friday, February 06, 2009 - 4:45 am
Thank you very much for your answer!

Elisabet
 craig neumann posted on Tuesday, February 10, 2009 - 6:11 am
Is the SRMR index available for CFA with ordinal data (WLSMV estimator) in Mplus version 5?
 sheretta teksie barnes posted on Tuesday, February 10, 2009 - 9:07 am
Hello,
I am having some trouble with how to interpret this. I know this means that the model I specified is different from the baseline.

Is this something I can report as significant? Can I report the chi-square for the baseline model?

Chi-Square Test of Model Fit
Value 0.000
Degrees of Freedom 0
P-Value 0.0000

Chi-Square Test of Model Fit for the Baseline Model

Value 5.314
Degrees of Freedom 5
P-Value 0.3784

CFI/TLI
CFI 1.000
TLI 1.000

Loglikelihood
H0 Value -1017.132
H1 Value -1017.132

Information Criteria
Number of Free Parameters 6
Akaike (AIC) 2046.264
Bayesian (BIC) 2066.289
Sample-Size Adjusted BIC 2047.279
(n* = (n + 2) / 24)

RMSEA (Root Mean Square Error Of Approximation)
Estimate 0.000
90 Percent C.I. 0.000 0.000
Probability RMSEA <= .05 0.000

SRMR (Standardized Root Mean Square Residual)
Value 0.000
 Linda K. Muthen posted on Wednesday, February 11, 2009 - 10:10 am
Craig: SRMR has been available for categorical outcomes since Version 1 when all outcomes are categorical and there are no thresholds or covariates in the model. With Version 5, thresholds are included in the model as the default. To remove them from the model, request MODEL=NOMEANSTRUCTURE in the ANALYSIS command.
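A minimal sketch of that request (assuming the default WLSMV estimator for categorical outcomes):

ANALYSIS:
ESTIMATOR = WLSMV;
MODEL = NOMEANSTRUCTURE;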
 Linda K. Muthen posted on Wednesday, February 11, 2009 - 10:11 am
Sheretta: Model fit cannot be assessed for a model with zero degrees of freedom.
 Bjorn Roelstraete posted on Thursday, February 19, 2009 - 3:21 am
Dear Linda,

Am I correct that the Bayes factor can be calculated in Mplus by the following formula: Bayes factor = exp((BIC_model1 - BIC_model2)/-2)?

Thank you in advance,
Bjorn Roelstraete
 Linda K. Muthen posted on Thursday, February 19, 2009 - 11:17 am
I am not sure, but you should be able to find the answer in:

Kass and Raftery (1995). Bayes Factors. Journal of the American Statistical Association, 90, 430, 773-795.
 Bjorn Roelstraete posted on Friday, February 20, 2009 - 1:13 am
Sorry for not being clear. My actual question was how the BIC in Mplus is calculated. Is it based on the loglikelihood or on -2 * loglikelihood? If it's the former, BF = exp(BIC_model1 - BIC_model2), but if it's the latter, I should divide the difference by -2 first.

Thank you,
Bjorn
 Bengt O. Muthen posted on Friday, February 20, 2009 - 4:39 am
Mplus BIC = -2*LL + #par.'s*log(n), so just like in the Kass & Raftery article.
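For example, with hypothetical BIC values BIC_model1 = 2040 and BIC_model2 = 2044, the approximate Bayes factor in favor of model 1 is exp((2044 - 2040)/2) = exp(2), or about 7.4.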
 Bjorn Roelstraete posted on Friday, February 20, 2009 - 7:01 am
Thank you very much.
 Eric Chen posted on Tuesday, March 10, 2009 - 2:26 am
Dear professor:

I ran a 2PL IRT analysis like Example 5.5 in Chapter 5 of the manual.
And I have a question about the test of model fit. The fit indices in the Mplus output were the H0 loglikelihood value, AIC, BIC, and adjusted BIC. These indices seem to be used to compare 2 or more models, but I only specified 1 model. How do I interpret these indices and assess my model fit?

Thank you!


Eric Chen
 Linda K. Muthen posted on Tuesday, March 10, 2009 - 8:42 am
The values given for the fit statistics are for the H0 model that is estimated in the analysis. These values do not compare two models.
 Joykrishna Sarkar posted on Sunday, May 03, 2009 - 12:23 pm
Dear Prof.,
I am new to Mplus. I am trying to conduct an MCFA with simulated complex data. The data were generated in SAS. The program attached below gave me errors. Could you please look at my Mplus syntax to see if there is an error? If not, what could the errors be? Thanks a lot in advance.
Program :Configural Invariance
title: MCFA with complex survey data
DATA: FILE = data_MCreplist.dat;
type=montecarlo;
VARIABLE: NAMES = y1-y6 strata cluster weight_f;
USEVARIABLES = y1-y6;
CLUSTER = cluster;
weight=weight_f;
grouping is strata (1=g1 2=g2);
ANALYSIS: TYPE = COMPLEX;

model: f1 by y1-y3;
f2 by y4-y6;
model g1: f1 by y2-y3;
f2 by y5-y6;
model g1: [y1-y6];

output: tech9;
Errors:
THE MODEL ESTIMATION TERMINATED NORMALLY

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE
COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL.
PROBLEM INVOLVING PARAMETER 36.

THE CONDITION NUMBER IS -0.510D-08.

THE ROBUST CHI-SQUARE COULD NOT BE COMPUTED.
 Joykrishna Sarkar posted on Sunday, May 03, 2009 - 12:37 pm
Dear Prof.,
Again I am posting another program, this one for scalar invariance. I ran the following syntax in the currently available Mplus demo version. Do you think there is an error in the syntax? Your help is appreciated. The same data sets were used for configural invariance, but that program gave the different errors that I posted in my previous posting.
Program: Scalar Invariance
title: Scalar Invariance MCFA with complex survey data
DATA: FILE = data_MCreplist.dat;
TYPE = MONTECARLO;
VARIABLE: NAMES = y1-y6 strata cluster weight_f;
USEVARIABLES = y1-y6;
CLUSTER = cluster;
weight=weight_f;
grouping is strata (1=g1 2=g2);
ANALYSIS: TYPE = COMPLEX;
model: f1 by y1-y3;
f2 by y4-y6;
output: tech9;
The program gives errors for the replications with data files data_MCrep_1.dat to data_MCrep_10.dat, as there are 10 data files in data_MCreplist.dat:
 Linda K. Muthen posted on Sunday, May 03, 2009 - 3:26 pm
For your first question, when you free the factor loadings and intercepts, the factor means must be fixed to zero in all groups.

For the second question, please send your output and license number to support@statmodel.com.
 Joykrishna Sarkar posted on Monday, May 04, 2009 - 11:15 pm
Dear Dr. Muthen,
Thanks for your quick reply. Could you please tell me how I can fix the factor means to zero in all groups when I free the factor loadings and intercepts?

Another question: Is it possible to generate complex sample data with strata, cluster, and weight in Mplus?
 Linda K. Muthen posted on Tuesday, May 05, 2009 - 6:43 am
If a factor is named f, you say,

[f@0];

See Chapter 16 of the user's guide for a full description of the MODEL command.

You can generate clustered data but not weighted data. See Examples 11.4 and 11.6, Step 1. See also Chapter 18 of the user's guide for related options.
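Using the variable and group names from the program above, a minimal sketch of freeing loadings and intercepts while fixing factor means to zero in all groups might look like this:

MODEL: f1 BY y1-y3;
f2 BY y4-y6;
MODEL g1: [f1@0 f2@0]; ! fix factor means to zero in group 1
MODEL g2: f1 BY y2-y3; ! free the loadings in group 2 (first loadings stay fixed at 1)
f2 BY y5-y6;
[y1-y6]; ! free the intercepts in group 2
[f1@0 f2@0]; ! fix factor means to zero in group 2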
 Wen-Hung Chen posted on Tuesday, August 11, 2009 - 12:56 pm
Dear Dr. Muthen,

I took your Mplus short course in 2008 at Johns Hopkins University. In the classes and in your handouts, you stated that RMSR and RMSEA less than 0.05 were recommended. I constantly have people ask me about the rationale or reference for the 0.05 value. Do you have a reference that I can cite for the recommended 0.05 value?

Thanks.
Wen-Hung Chen
 Linda K. Muthen posted on Tuesday, August 11, 2009 - 1:26 pm
See the following:

Yu, C.Y. (2002). Evaluating cutoff criteria of model fit indices for latent
variable models with binary and continuous outcomes. Doctoral dissertation, University of California, Los Angeles. On the website.

Hu, L. & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.
 Christine Maier posted on Tuesday, August 25, 2009 - 2:15 am
Dear Mplus Team,

I ran a confirmatory factor analysis (four highly correlated items loading on one factor, N = 420). While CFI = .99, TLI = .96, and SRMR = .02, the RMSEA = .129 (chi-square = 12.39, df = 2). I am not sure how to interpret these results - is the model fit acceptable? I have the same problem with another CFA using these data (six highly correlated items loading on one factor): chi-square = 157.27, df = 20, CFI = .92, TLI = .89, RMSEA = .147, SRMR = .04.

I would appreciate your help very much.

Christine
 Linda K. Muthen posted on Tuesday, August 25, 2009 - 11:49 am
I think with highly correlated data, the CFI and TLI fit measures may be too high.
 kanreutai klangphahol posted on Tuesday, September 15, 2009 - 5:24 am
Dear Mplus Team,


Can the TLI value be over 1.00?

thank you so much
tam
 Bengt O. Muthen posted on Tuesday, September 15, 2009 - 9:02 am
When the value is larger than 1, just round it to 1.
 Dvora Shmulewitz posted on Sunday, January 24, 2010 - 3:06 am
I have 10 items that fit well on a single factor, based on CFI or TLI greater than or equal to 0.95 and RMSEA less than 0.06. I now have three alternate versions of another item, and I want to know which version fits best with the other 10 items.

When I do a single factor EFA using ESEM, version 3 of the item shows the largest CFI (0.973), TLI (0.977) and the lowest RMSEA (0.035) of all three versions and a single-factor model is supported.

But when I do an IRT analysis with the MLR estimator, the AIC/BIC/SS-BIC model fit indices are lowest for version 1 of the item, indicating that version 1 fits best with the other 10 items.

Why would this happen - that one version of the item appears to fit better based on factor analysis and a different version based on IRT? Which set of model fit indices is more reliable?
 Linda K. Muthen posted on Sunday, January 24, 2010 - 11:14 am
I think when you say you do IRT you mean that you treat the factor indicators as categorical. Is this the case?
 Dvora Shmulewitz posted on Sunday, January 24, 2010 - 11:35 am
Yes, I should have mentioned that all the items are categorical, actually dichotomous.
 Linda K. Muthen posted on Sunday, January 24, 2010 - 1:19 pm
You cannot compare BIC when the observed variables are not the same. I think this is the problem.
 Camille Ferdenzi posted on Thursday, March 11, 2010 - 7:21 am
Dear Linda and Bengt,

I am running a CFA with Mplus 2.0 (TYPE IS COMPLEX) on a matrix of 2448 observations: each line of the matrix corresponds to several ratings of 1 stimulus by 1 subject (in total, 56 stimuli are rated by 351 subjects, but there are several missing lines, i.e., 2248 instead of the 2457 expected). There are some missing data (blanks) within the 2248 lines. We set CLUSTER IS subject.

My problem is that the output indicates that the analyses are performed on 2358 observations, and I don't understand why it doesn't use the initial 2448 (I tried to remove all lines with missing data from the matrix, just in case Mplus automatically removes these lines, but the number of remaining observations does not correspond).

Can you please help me with this? Thanks a lot!
 Linda K. Muthen posted on Thursday, March 11, 2010 - 10:46 am
It sounds like you are reading the data in free format. Blanks are not allowed in free format. You either need to change the blanks to a different missing value flag or read the data using a FORMAT statement.
 Zoe Chan posted on Sunday, April 25, 2010 - 2:23 pm
Hello,

I am new to CFA. I am trying to test whether my data fit the model, and the analyses show that they don't. I have tried to modify the model so that it fits and I can proceed to the next level to test for measurement invariance. However, regardless of how I try to modify it, the data still don't fit. What can I do? Thanks!
 Hidde Bekhuis posted on Monday, April 26, 2010 - 5:17 am
Hello,

I'm testing measurement invariance for 33 countries using multiple-group CFA, based on imputed data (so I'm using the IMPUTATION option). I wonder if it is possible to obtain fit statistics for the separate countries besides the overall fit statistics, as is possible in LISREL.

Thank you for your response.
 Linda K. Muthen posted on Monday, April 26, 2010 - 8:07 am
Zoe: I would do an EFA to see what is happening with the data.
 Linda K. Muthen posted on Monday, April 26, 2010 - 8:08 am
Hidde: With IMPUTATION Mplus does not give the chi-square for each group. However, you should run each country separately as a first step before doing multiple group analysis to determine whether the same factor model fits in each country. If it does not, multiple group analysis should not be done.
 ehsan malek posted on Tuesday, April 27, 2010 - 12:20 pm
Hello

Is there a way to calculate RMSEA, NFI, GFI, CR, and AVE values for a CFA model using Mplus?
 Linda K. Muthen posted on Tuesday, April 27, 2010 - 12:40 pm
Of those fit statistics, Mplus gives RMSEA.
 ehsan malek posted on Tuesday, April 27, 2010 - 9:26 pm
Could I calculate the other statistics myself using the Mplus output? If yes, could you please suggest a reference?
 Linda K. Muthen posted on Wednesday, April 28, 2010 - 5:09 am
You would need to obtain the formulas for the other fit statistics and see if the information is available in the Mplus output.
 Albert E. Mannes posted on Wednesday, August 04, 2010 - 11:00 am
Hi,

Hopefully a quick question. I'm fitting a two-level CFA with continuous indicators. I want to compare two models. In the first, the indicators load on their respective traits and also on a method factor; in the second, I constrain the loadings on the method factor to zero using Model Constraint. The difference in df between the models is 6.

When I use a Wald test (MODEL TEST) in the first model to test whether the factor loadings are collectively zero, it is clearly rejected (p < .001). This suggests the method factor is meaningful.

When I use a chi-square difference test (following the procedure for MLR), the difference is 9.87 with 6 df, which is not significant (p = .13). This tells me the method factor is not meaningful.

Any advice on reconciling or interpreting this difference is appreciated. Thanks,

Al
 Mario Mueller posted on Thursday, August 05, 2010 - 5:35 am
Hello,

I was running a very simple CFA with 4 indicators and 1 factor. Chi-square and other indices indicated a very good fit! However, one item had a very low loading on the factor, which was expected given the preceding EFA analyses.
When I exclude this item from the model, I get the following fit statistics:

Chi-Square Test of Model Fit

Value 0.000
Degrees of Freedom 0
P-Value 0.0000

Chi-Square Test of Model Fit for the Baseline Model

Value 136.263
Degrees of Freedom 3
P-Value 0.0000

CFI/TLI

CFI 1.000
TLI 1.000

RMSEA (Root Mean Square Error Of Approximation)

Estimate 0.000
90 Percent C.I. 0.000 0.000
Probability RMSEA <= .05 0.000

SRMR (Standardized Root Mean Square Residual)

Value 0.000



Does that mean this is a very good model? It looks a bit weird...
 Linda K. Muthen posted on Thursday, August 05, 2010 - 10:04 am
Albert: The two tests are asymptotically equivalent. If you have a small sample, this could cause the discrepancy. Or perhaps one test was not done correctly. If you want further help on this, send the information along with your license number to support@statmodel.com.
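For reference, the scaled chi-square difference computation for MLM/MLR described on the website is, in outline:

cd = (d0*c0 - d1*c1) / (d0 - d1)
TRd = (T0*c0 - T1*c1) / cd

where T0, c0, and d0 are the chi-square value, scaling correction factor, and degrees of freedom of the nested model, and T1, c1, and d1 are those of the comparison model.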
 Linda K. Muthen posted on Thursday, August 05, 2010 - 10:04 am
Mario: With three factor indicators, the model is just-identified so model fit cannot be assessed.
 Michelle Hill posted on Friday, September 24, 2010 - 11:33 am
I seem to be a little confused about the significance of the chi-square test of model fit. I was wondering, what does it tell you about your data, and what is the difference between the test of model fit and the test of model fit for the baseline model?

Thank you
 Linda K. Muthen posted on Friday, September 24, 2010 - 4:52 pm
The baseline model is the model used along with the H0 model in the computation of CFI and TLI. Chi-square tests the fit of H0 model against the unrestricted H1 model.
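In outline, with T and d denoting the chi-square values and degrees of freedom of the H0 and baseline (B) models:

CFI = 1 - max(T_H0 - d_H0, 0) / max(T_H0 - d_H0, T_B - d_B, 0)
TLI = (T_B/d_B - T_H0/d_H0) / (T_B/d_B - 1)

so both indices compare the misfit of the H0 model to that of the baseline model.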
 Michelle Hill posted on Friday, September 24, 2010 - 8:22 pm
Dr. Muthen,
Thank you. So are there standards for what the number should be (i.e., cutoffs or standards, as with the CFI or RMSEA)? What do the value and degrees of freedom of the chi-square test indicate?
 Linda K. Muthen posted on Saturday, September 25, 2010 - 10:56 am
See an SEM book like the Bollen book, where you can find a full discussion of various fit statistics. Or listen to our Topic 1 course video, where fit statistics and cutoffs are discussed along with difference testing.
 Michelle Hill posted on Saturday, September 25, 2010 - 9:25 pm
Thank you very much. I will look for that information.
 Leslie Rutkowski posted on Thursday, November 11, 2010 - 6:12 am
Hello Linda and Bengt,

I'm having trouble finding what the default unrestricted model is in Mplus. That is, what is the model that generates the H1 log likelihood value?

Thanks,
Leslie
 Linda K. Muthen posted on Thursday, November 11, 2010 - 6:43 am
This is the model of means, variances, and covariances.
 Abdel posted on Thursday, April 28, 2011 - 2:06 am
Dear Linda and Bengt,

I am running a multigroup CFA on categorical data (10 items underlying 1 latent factor) with the WLSMV estimator and PARAMETERIZATION = THETA (because I put constraints on the residual item variances), and the COMPLEX option. I would like to get a BIC value out of this analysis, and I was wondering whether it is possible to calculate that from the available output? I know that the chi-square is not based on the loglikelihood, so that can't be used. If I try to get the BIC by using MLR instead of WLSMV, I get the warnings:

*** WARNING in ANALYSIS command
PARAMETERIZATION=THETA is not allowed for TYPE=MIXTURE or
ALGORITHM=INTEGRATION. Setting is ignored.
*** ERROR in ANALYSIS command
ALGORITHM=INTEGRATION is not available for multiple group analysis.
Try using the KNOWNCLASS option for TYPE=MIXTURE.

And I'm not even using TYPE=MIXTURE or ALGORITHM=INTEGRATION. My ANALYSIS command looks like this:

ANALYSIS:
TYPE = mgroup COMPLEX MISSING h1 ;
ESTIMATOR = MLR ; !MLR used to be WLSMV
PARAMETERIZATION = THETA ;
ITERATIONS = 1000 ;

Is there any way to get the BIC out in this model? Many thanks in advance!
 Linda K. Muthen posted on Thursday, April 28, 2011 - 7:40 am
BIC is not available with weighted least squares estimation, only with maximum likelihood estimation.

Remove PARAMETERIZATION=THETA; that option is only for weighted least squares estimation.
 Abdel posted on Thursday, April 28, 2011 - 8:36 am
Thanks! Is it possible to calculate the BIC manually using the output from a weighted least squares estimation? Or do you perhaps have other recommendations for a fit statistic that can be used to compare different models and can be calculated from the output of a weighted least squares estimation?
 Linda K. Muthen posted on Thursday, April 28, 2011 - 10:22 am
I know of no way to calculate BIC for weighted least squares estimation. With WLSMV, you obtain chi-square and other related fit statistics.
 Patchara Popaitoon posted on Saturday, October 29, 2011 - 8:43 am
Dear Linda,

To my understanding, Mplus does not provide fit statistics other than the standard report obtained from the analysis. I normally get RMSEA, CFI, TLI, and SRMR from the analysis, but could you please let me know how to obtain the IFI?

Many thanks.
Pat
 Patchara Popaitoon posted on Sunday, October 30, 2011 - 5:20 am
Dear Linda,

Regarding the question about IFI that I posted earlier, I already got the formula from a book. Thanks.

Pat
 Ellinor Owe posted on Monday, November 07, 2011 - 7:15 am
Hi,

I want to compare three non-nested multilevel CFA models and was thinking of using the AIC for this. I know that a smaller AIC indicates better fit, but is there a way of knowing which magnitude of difference can be considered meaningful and which can be considered trivial?

Thank you very much

Ellinor
 Linda K. Muthen posted on Monday, November 07, 2011 - 1:36 pm
I am not aware that a way to do this exists. See the following FAQ on the website, which discusses this issue for BIC:

# BIC citations of interest - how big a difference
 Wen-Hsu Lin posted on Thursday, November 24, 2011 - 7:47 pm
Hi,
I have 5 imputed datasets. I ran a CFA on each set using the same model, and each individual analysis showed acceptable fit (CFI = .94-.96; TLI = .95-.96; RMSEA = .053-.061). However, when I use TYPE = IMPUTATION, the result is strange (CFI = 0; TLI = -4.3; RMSEA = .53). Any suggestions? Thank you
 Linda K. Muthen posted on Friday, November 25, 2011 - 7:26 am
If you are not using Version 6.12, please do so. If you are, please send the relevant files and your license number to support@statmodel.com.
 Jiyeon So posted on Monday, December 05, 2011 - 9:06 pm
Hi Prof. Muthen,

I was wondering if Mplus gives "gamma hat" and "McDonald's NCI" as fit indices in the CFA output. I can't find them in my output.

Is there syntax that asks Mplus to provide them?

Thank you in advance!
 Linda K. Muthen posted on Monday, December 05, 2011 - 9:18 pm
There is no option to request additional fit indices. All that are available are given.
 Rebecca Fortgang posted on Tuesday, December 06, 2011 - 3:51 pm
Linda,

Given that it is impossible to request additional fit indices (bummer!), what would you suggest when using categorical (ordinal) data? We would prefer two absolute and two incremental fit indices. We would have preferred to have the GFI estimated, as it is said to be analogous to R-squared, and we would like some "variance explained" index. We would also have wanted an index akin to the PCFI (to compare similar models) or the AIC (which includes a "penalty" for additional parameters when comparing similar models).

We would appreciate any advice you could give, including a way to make the best use of the existing mplus output.

Thanks very much,
Becky
 Linda K. Muthen posted on Tuesday, December 06, 2011 - 5:19 pm
If you want variance explained, ask for STANDARDIZED in the OUTPUT command and you will get R-square. For maximum likelihood, AIC and BIC are given. These indices are not appropriate for weighted least squares which is the default for categorical outcomes. If you want other fit indices, you should be able to find the information to compute them.
 Rebecca Fortgang posted on Thursday, December 08, 2011 - 12:28 pm
Thank you!
One follow-up question: R-square seems only to account for the variance explained in each item. Is there a way to find the variance explained by the model as a whole?
 Linda K. Muthen posted on Thursday, December 08, 2011 - 1:27 pm
We provide R-square for each dependent variable not for the model.
 Michelle Jongenelis posted on Monday, March 12, 2012 - 11:23 pm
I am running a CFA on a nine-item measure. Each item is ordered-categorical in nature. I have 3% missing cases. I used multiple imputation in Mplus to generate 25 datasets. However, when I ran the CFA with the WLSMV estimator on the imputed datasets, I did not get a pooled chi-square; I only got pooled parameter estimates and SEs. I've had a quick look at the literature - am I correct in saying that a pooled chi-square is only given in Mplus for the ML estimator? If that's the case, is there any other way to calculate the pooled chi-square when the WLSMV estimator has been used? I have read somewhere that once you obtain parameter estimates from the WLSMV run you can change the estimator to ML to get the pooled chi-square in a second run, but this doesn't seem right given the ordinal nature of the variables.

Any suggestions on how else I am able to correctly analyse my data?
Thanks!
 Linda K. Muthen posted on Tuesday, March 13, 2012 - 6:29 am
It is true that the pooled chi-square is given for only the ML estimator with multiple imputation. Research on how to correctly pool in other cases does not exist. It sounds like you have one factor. In this case, you can use maximum likelihood with the CATEGORICAL option. The default is logistic regression. You can use PARAMETERIZATION=PROBIT if you want probit regression.
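A minimal sketch of that setup, with a hypothetical factor f and hypothetical item names u1-u9:

VARIABLE: CATEGORICAL = u1-u9;
ANALYSIS: ESTIMATOR = MLR; ! maximum likelihood with categorical outcomes uses numerical integration
MODEL: f BY u1-u9;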
 Michelle Jongenelis posted on Tuesday, March 13, 2012 - 8:52 pm
Thanks for the prompt reply Linda. There are two factors actually. Will this change the suggestions you have made?
 Linda K. Muthen posted on Wednesday, March 14, 2012 - 6:51 am
No.
 Michelle Jongenelis posted on Monday, March 19, 2012 - 12:51 am
Hi again,

I have run the analysis as per your suggestions, but my chi-square p-value comes up as 1.00, and I get the following message underneath: "Of the 82944 cells in the latent class indicator table, 51 were deleted in the calculation of chi-square due to extreme values."
I also don't get any of the usual fit indices. I am a little confused.
 Linda K. Muthen posted on Monday, March 19, 2012 - 6:20 am
With maximum likelihood and categorical outcomes, means, variances, and covariances are not sufficient statistics for model estimation. As a result, a chi-square comparing sample and model-estimated covariance matrices and related fit statistics are not available. The chi-square values you are looking at compare observed versus estimated multiway frequency tables of the categorical items. With more than eight categorical items, these tables become very large, and empty cells are a problem. These chi-squares should not be used with more than about eight items or if they do not agree with each other.
 Michelle Jongenelis posted on Monday, March 19, 2012 - 8:03 pm
Thanks Linda.
 Julia Lee posted on Saturday, March 24, 2012 - 4:59 pm
I am conducting:
1) CFA to determine whether 5 indicators in the fall & spring of first grade, respectively, form a unitary factor (literacy). (n = 521)

Question:
a) If there are floor effects and outliers, is MLR robust enough to handle the issue? Should the floor effects and outliers be deleted? I am retaining them; I used MLR because my main research question concerns the latent profiles and latent transitions of this sample. My CFA fit indices are mixed: I have p < .001 and a high RMSEA, but great fits for SRMR and CFI/TLI. I do not know why this is happening, because these indicators are theoretically driven. However, nobody has tested these constructs in tandem.

b) Would an ill-scaled covariance matrix result in this kind of fit indices? My covariance matrix has a combination of variances as high as 1074.168 and as low as 16.
 Bengt O. Muthen posted on Saturday, March 24, 2012 - 5:52 pm
a) Theoretically-driven indicators very often give poor model fit when the indicators have not also been subjected to a previous series of pilot studies using EFA to refine them.

MLR is not enough if you have strong floor effects because in that case the linear model is wrong. You can instead, for instance, treat the indicators as censored-normal.

b) No, the scales of the variances don't affect fit. But you do want to make the variances more similar for purposes of easier convergence.
 Julia Lee posted on Saturday, March 24, 2012 - 7:28 pm
Thank you for your reply, Dr. Muthen!

1) I found something on the CENSORED option on UG p. 487. Do you have any recommendations of good articles on censored-normal modeling? What other alternatives would you recommend for floor effects apart from censored-normal? I was thinking of transformation, which would affect the interpretation of the results...

3) Are the suggested cutpoints for skewness and univariate kurtosis values of 2 and 7 (Finney & DiStefano, 2006), respectively, conservative enough for MLR? One spring indicator has a skewness of .471 and kurtosis of -.250; another has skewness = .026 and kurtosis = -.823; and one has skewness = .316 and kurtosis = .280. I think the challenge is not being able to visualise what the multivariate non-normality looks like based on the bivariate plots.

The fall indicators are more nonnormal. I have indicators with skewness = 1.625 and kurtosis = 3.026, skewness = .948 and kurtosis = 3.157, and skewness = 1.010 and kurtosis = .267.
 Bengt O. Muthen posted on Sunday, March 25, 2012 - 10:23 am
1) Google "tobit regression".

You can also categorize (discretize) the variables and put them on the CATEGORICAL list. This may be the simplest approach. A more advanced approach is "two-part (semi-continuous)" modeling, as in

Kim, Y.K. & Muthén, B. (2009). Two-part factor mixture modeling: Application to an aggressive behavior measurement instrument. Structural Equation Modeling, 16, 602-624.

I would not transform variables. It would not avoid the main problem of the floor effect.

3) Skewness and kurtosis are not a problem - that's what the "R" in MLR takes care of. The problem is the floor effect.

But perhaps the most likely reason for the poor fit by chi-square is that the model needs adjustment, such as using cross-loadings or more factors.
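A minimal sketch of the censored-normal specification (hypothetical indicator names; (b) marks censoring from below):

VARIABLE: CENSORED = y1-y5 (b);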
 Jean-Samuel Cloutier posted on Tuesday, April 17, 2012 - 10:31 am
Hi,
Would anyone have an explanation? The p-values of my chi-squares are 1.000.
Seems too nice to be true.
Thanks
 Linda K. Muthen posted on Tuesday, April 17, 2012 - 12:36 pm
The degrees of freedom are probably zero in which case model fit cannot be assessed.
 Lisa Aschan posted on Saturday, June 30, 2012 - 10:00 am
Hi,

I have a question about the fit indices of a CFA which I am planning to use in a larger SEM model. I am using version 6.
I have 5 observed variables and 1 factor. The observed variables are categorical or binary, and the analysis is complex with clusters and weights. My sample size is 1700 but I have some missing data.

My model is:
F1 by y1* y2 y3 y4 y5 ;
F1@1 ;

My fit indices are:

Chi-square(5) = 51.67 (p<.001)
RMSEA: 0.074 (90% CI: 0.057 - 0.093)
CFI: 0.972
TLI: 0.943

So my model fit is not great. However, all of my factor loadings are highly significant. Why might my model fit be inadequate? How can I improve my model fit?

I have thought that maybe I am violating the assumption of independent errors. How can I test this assumption?

Many thanks for your help.
 Linda K. Muthen posted on Sunday, July 01, 2012 - 10:41 am
Significance is not fit. It tests whether a parameter is significantly different from zero. Fit compares the model estimated and sample covariance matrices. Ask for MODINDICES (ALL) in the OUTPUT command to see if you are violating assumptions of independent error.
 Mészáros Veronika posted on Friday, August 10, 2012 - 12:37 am
Dear Dr. Muthén,

I have tested the factor structure of the Maslach Burnout Inventory. I made a correlated three-factor model (with emotional exhaustion, depersonalization, and personal accomplishment) and a second-order factor model (burnout on EE, DP, PA).
The fit indices are:
correlated three factor model: chi-square = 844, df = 206, p < 0.001, RMSEA = 0.069 (0.064- 0.074), CFI = 0.85, TLI = 0.84, SRMR = 0.07
second order factor model: chi-square = 878, df = 206, p < 0.001, RMSEA = 0.072 (0.067- 0.077), CFI = 0.84, TLI = 0.82, SRMR = 0.07

My question is, how is it possible that the two models' degrees of freedom are the same, but the other fit indices are different?

Thank you for your answer:

Veronika
 Linda K. Muthen posted on Saturday, August 11, 2012 - 10:29 am
You should get the same fit because the second-order factor model is just-identified. Try using STARTS = 10; in the ANALYSIS command for the second-order model. You must be hitting a local solution.
 Hass posted on Thursday, November 29, 2012 - 2:49 pm
Hello,

I tried to run this model, but I got the following error: "Unexpected end of file reached in data file."

I know the file is big (10 variables, 41 items), but that shouldn't be a problem?

Your advice is much appreciated.
G
 Linda K. Muthen posted on Thursday, November 29, 2012 - 2:57 pm
It sounds like you either have blanks in your data set which is not allowed with free format data or the number of names in the NAMES statement is not the same as the number of columns in the data set.
 Kofan Lee posted on Thursday, December 06, 2012 - 7:24 am
Hi,

I was running a 5-factor CFA using MLM estimation. The final model has one factor removed because the items failed to support this factor. Also, several correlated measurement errors were added.

To assess the improvement of the new model, I am trying to use the strictly positive approach, but how should I use the start values obtained from Model 0 in Model 10, since that particular factor is removed? Should I delete those values or just set them to 0?

Another question is why Mplus uses ML estimation even though I specify MLM. This happens when I try to estimate Model 10.

Thanks for your time

k
 Bengt O. Muthen posted on Thursday, December 06, 2012 - 8:59 am
Are you trying to compare a 4-factor to a 5-factor CFA?

Regarding getting ML when requesting MLM, please send output and license number to Support.
 Christoph Weber posted on Tuesday, April 30, 2013 - 3:51 pm
Dear Dr. Muthén!

Do you know any references regarding the performance of the MLR estimator under non-normal data conditions?

I only found studies investigating the performance of the SB correction (MLM).

Thanks
Christoph Weber
 Linda K. Muthen posted on Wednesday, May 01, 2013 - 9:23 am
See Web Note 2 on the website.
 Christoph Weber posted on Thursday, May 02, 2013 - 12:20 am
Thanks a lot for the hint!
At the end of the web note it is noted:

"To study the generalizability of these findings, it may be of interest
to study variations on the Monte Carlo setup, varying the sample size and the degree of missingness."

Are you aware of such extensions?

Thanks
Christoph Weber
 Linda K. Muthen posted on Thursday, May 02, 2013 - 6:09 am
No, I am not. You could do this yourself.
 Christoph Weber posted on Thursday, May 02, 2013 - 9:29 am
Thanks, it's on my to do list!
 Jenny L.  posted on Tuesday, June 11, 2013 - 8:29 am
Dear Drs. Muthen,

I was doing a path analysis with imputed data. One dependent variable was a count, so Poisson regression and MLR were used, but I'm not familiar with interpreting the output.

Under Model Fit Information, I see mean values of the loglikelihood, AIC, BIC, and sample-size adjusted BIC. I thought AIC and BIC were most useful when comparing different models. Could you tell me how I can assess the fit of this particular model by looking at these indices?

Thank you in advance for your help.
 Bengt O. Muthen posted on Tuesday, June 11, 2013 - 10:22 am
Unfortunately, there don't seem to be statistics developed for this combination. You can analyze each imputed data set separately and use what's available in that case - comparing competing models by BIC and by likelihood-ratio chi-square testing (and I think also TECH10).
 Jenny L.  posted on Tuesday, June 11, 2013 - 1:35 pm
I see. Thank you for your advice!
 Tracy Witte posted on Wednesday, August 28, 2013 - 8:15 am
I am attempting to replicate an article that, among other methods, compared two non-nested models by subtracting their model-implied correlation matrices from one another to identify differences in predictions across the models. The authors state that they used Mplus v. 6.1 for their analyses. However, I'm unsure how to get model-implied correlation (not covariance) matrices with Mplus. (Note: the authors used the ML estimator with continuous indicators.)

Can this information be obtained with the TECH1 and TECH3 output? Specifically, if I look at the TECH1 output to get the parameter numbers for the NU matrix and then look at the estimated correlation matrix in TECH3 for the corresponding parameter numbers, do those values represent the model-implied correlation matrix?
 Linda K. Muthen posted on Wednesday, August 28, 2013 - 8:49 am
TECH4 gives this for latent variables and RESIDUAL gives it for observed variables.
 Tracy Witte posted on Wednesday, August 28, 2013 - 9:32 am
It looks like the RESIDUAL output gives only the standardized and normalized values (i.e., z-scores) and the covariance matrix. Perhaps I'm missing something?
 Linda K. Muthen posted on Wednesday, August 28, 2013 - 10:17 am
Please send the output and your license number to support@statmodel.com.
 Nancy Lewis posted on Tuesday, September 17, 2013 - 11:17 am
I have run a 4-factor CFA on four independent samples of respondents to a 31-item Likert-scale measure (1 to 7 range of answer choices). I ran the models using the MLM estimator. The N for each model was around 300.

In all of the models, the CFI indicates marginal fit (.90-.92), but the RMSEA indicates good fit (.06-.07), and the SRMR also indicates good fit (.06-.08).

I am struggling to understand why the CFI doesn't agree with the RMSEA and SRMR. The factor loadings for the items are all moderate or better (.60 and above, with most .70 and above).
 Bengt O. Muthen posted on Tuesday, September 17, 2013 - 12:29 pm
I think it sounds like the model can be improved, even in terms of RMSEA. Check the modification indices. You can also try multiple-group ESEM (see our website for ESEM papers), which is less restrictive than multiple-group CFA.
 Nancy Lewis posted on Tuesday, September 17, 2013 - 12:53 pm
Dr. Muthen,

Thank you for responding to my question. I should have added that the mod indices don't indicate any means of improving the model. The values are all quite low. I have tried several of the highest ones and they make no difference in the fit.
 Bengt O. Muthen posted on Tuesday, September 17, 2013 - 1:10 pm
A low CFI is sometimes seen with variables that have low correlations, so that the independence model doesn't fit too badly.

Otherwise, many small misspecifications can be the cause; that would suggest that it's worth trying ESEM.
 Nancy Lewis posted on Tuesday, September 17, 2013 - 1:13 pm
Also, I should add that the model structure was built based on EFA results.
 Nancy Lewis posted on Tuesday, September 17, 2013 - 1:17 pm
Yes, I considered that the problem could be low correlations. This doesn't seem to be the case.

Within each factor, the item inter-correlations are medium to large. The factors are moderately correlated with each other.

I have also run a 3-factor model on the data using a factor structure proposed by the scale's authors. It had poor fit on the CFI, SRMR and RMSEA.

Is there anything else that could cause CFI to be low?

Thank you.
 Bengt O. Muthen posted on Tuesday, September 17, 2013 - 1:21 pm
Some of the small EFA cross-loadings may be significant and can produce CFA misfit.
 Xuecheng Liu posted on Monday, September 30, 2013 - 11:55 am
We are conducting a CFA (ESTIMATOR = ML). All indicators are continuous. The following are some of the fit indices:

RMSEA = 0.064 (CI = 0.062, 0.066)
CFI = 0.743
TLI = 0.710
SRMR = 0.066

Since CFI and TLI are low, we turned to Hu and Bentler's two-index presentation strategy, focusing on RMSEA and SRMR: RMSEA < 0.06 and SRMR < 0.09. Our question is that our RMSEA = 0.064 is slightly larger than the cutoff of 0.06. Is it still possible to say that our model fits the data sufficiently well?

---
Further, we dichotomized two indicators (of one factor), so two categorical variables are in the CFA. The following are some of the fit indices:

(Estimator=WLSMV)
RMSEA = 0.061 (CI = 0.059, 0.064)
CFI = 0.685
TLI = 0.643
WRMR = 2.780

We found that the SRMR is not provided in this case but is replaced by WRMR. How do we evaluate fit now?

Many thanks,

Xuecheng
 Linda K. Muthen posted on Monday, September 30, 2013 - 1:43 pm
I would not dichotomize the two indicators. Try using ESTIMATOR = MLR, which is robust to non-normality. Perhaps that is one problem. You may also want to start with an EFA to see if your CFA is viable for the data.
 RuoShui posted on Friday, February 07, 2014 - 1:22 pm
Dear Dr. Muthen,

I am using the Satorra-Bentler chi-square test (using MLR as the estimator) to test the difference between my SEM mediation model and the direct-path model without mediators.

I know that the chi-square test is sensitive to sample size. But is the chi-square difference test also sensitive to sample size? Do I need to consult other fit indices, such as delta CFI, as suggested by Cheung & Rensvold (2002) and Chen (2007)?

Thank you very much!
 Linda K. Muthen posted on Friday, February 07, 2014 - 6:25 pm
This is a good question for a general discussion forum like SEMNET.
 Natalie Bohlmann posted on Wednesday, March 26, 2014 - 11:46 am
Were changes made between Version 6.1 and 7.11 to how fit statistics are calculated? We have a model that was originally run in 6.1; having updated to 7.11, I ran the model again. The coefficients and SEs change a very small amount (.001-.003 on average), but the values for model fit have changed substantially. Chi-square is 69.67, df = 25, in Version 6.1 and 144.56, df = 41, in 7.11. Relatedly, the Version 6.1 output CFI, TLI, RMSEA, and SRMR values are .94, .83, .10, and .04, respectively. In Version 7.11 they are .87, .76, .12, and .09. Yet I have confirmed that it is the exact same model. I also had a friend who still has 6.1 rerun the model, with consistent findings.
 Linda K. Muthen posted on Wednesday, March 26, 2014 - 11:53 am
Please send the two outputs and your license number to support@statmodel.com so I can see your model and estimator.
 Chris Cambron posted on Wednesday, April 09, 2014 - 6:46 pm
Dr. Muthen,
I've read through the forum and manual and can't find a reason for my issue. Using the MLR estimator, Mplus doesn't provide an RMSEA. Using the MLM estimator on the same model, it does.

Any possible reason for this that I can look into would be much appreciated.
Thank you!
 Linda K. Muthen posted on Thursday, April 10, 2014 - 5:59 am
Please send the output and your license number to support@statmodel.com.
 Kaleena Burkes posted on Monday, April 21, 2014 - 8:44 pm
Good evening Drs. Muthen,

I'm a newbie at Mplus and structural/simultaneous equation modeling, and I am currently trying to run a CFA with count variables as my indicators; however, I am not receiving CFI, TLI, and RMSEA. Obviously I am doing something wrong. Could you provide your insight?

below is my code:

Data:
File is

!LISTWISE=ON;

Variable:
NAMES ARE male crimeser hispanic black white curoffm curoffs curoffr
curoffov curoffb curoffp curoffd curoffw curoffo prprison prsupvio
reconv1 reconv2 reconv3 ageberec timeserv reimpr1 reimpr2 reimpr3
reimpr1n reimpr2n reimpr3n married emplstud female poliforc recon123
housserv educserv emplserv healserv housreco educreco emplreco STUDENT2
employ1 emplstat povrate unemploy unemrate passrate;

USEVARIABLES ARE housreco educreco emplreco;
COUNT IS housreco educreco emplreco;
Missing are all .;


Analysis:
Type = GENERAL;

Model:
reenserv BY housreco educreco emplreco;

OUTPUT:
TECH1 TECH8;
!standardized sampstat;
 Linda K. Muthen posted on Tuesday, April 22, 2014 - 6:19 am
Chi-square and related fit statistics are not available when means, variances, and covariances are not sufficient statistics for model estimation. This is the case with count variables. Nested models can be compared because -2 times the loglikelihood difference is distributed as chi-square.
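For example, with hypothetical values: if the more restrictive model has a loglikelihood of -1520.4 with 10 free parameters and the less restrictive model has -1515.2 with 12, then -2*(-1520.4 - (-1515.2)) = 10.4 on 12 - 10 = 2 degrees of freedom, giving p of about .006.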
 SY Khan posted on Tuesday, April 22, 2014 - 10:42 am
Hi Dr. Muthen,

I am running a CFA of my independent variables with binary indicators using the WLSMV estimator in Mplus Version 7.11.

Theory supports four correlated factors. All indicators load heavily onto their respective constructs (0.60-0.9). Their respective R-squared values are also above 0.38, and all constructs have high discriminant validity (AVE), meaning that all constructs are different from each other. However, the overall fit for the CFA remains low:

Chi-square = 4710.814 with df = 318
RMSEA=0.072
CFI=0.878
TLI=0.865

One of the factors has low correlations (0.142, 0.109, 0.005) with the other 3 factors, while those three factors have inter-construct correlations of 0.668, 0.694, and 0.797.

I have also done an EFA, and the practices do not load in a meaningful way according to theory. Hence, I have grouped the practices based on theory. I have done individual CFAs of the four constructs, of which three give good model fit (CFI/TLI above 0.915) based on the items grouped according to theory.

Can you guide me on why the overall CFA model fit may be low when each construct has high factor loadings and AVEs, and how to improve it? Could the low correlation of one factor be the reason for the not-so-good overall model fit?

Is it appropriate to proceed with SEM based on these fit statistics?

Many thanks for your help and guidance.
 Linda K. Muthen posted on Tuesday, April 22, 2014 - 2:02 pm
The model fit is not good. The EFA results tell you that perhaps the items are not valid measures of the construct they were developed to measure. See the Topic 1 course handout and video where we go through an EFA in detail.
 j guo posted on Tuesday, May 27, 2014 - 5:56 pm
Hi Dr. Muthen,

I ran a CFA model using robust maximum likelihood (MLR) as the estimator in Mplus. I was wondering if I can calculate the RMSEA by hand based on the chi-square, df, and scaling correction factor.

Thank you very much.
 Bengt O. Muthen posted on Tuesday, May 27, 2014 - 6:00 pm
Don't you have RMSEA in the output? Which version are you using?
 j guo posted on Tuesday, May 27, 2014 - 8:57 pm
I do have the RMSEA. I just wonder how to calculate it by hand under MLR.

Thank you.
 Kathrin Dehmel posted on Thursday, May 29, 2014 - 2:57 am
Dear Dr. Muthen,
I was running a very simple CFA with 3 indicators and 1 factor. Is it right that the model is just-identified, so global model fit cannot be assessed? Or is it possible to modify one indicator so that I get global model fit indices? And how should I proceed if I use only 2 indicators? I know that's not normally possible, but I read that it can work in a CFA with more than one factor. So is it right that in this case I only report the global fit indices of the CFA with more factors?
Thank you so much,
K. Dehmel
 Linda K. Muthen posted on Thursday, May 29, 2014 - 6:21 am
A model with three indicators is just-identified so model fit cannot be assessed. A model with two indicators is not identified. If it is in a model with another factor, it can be identified by borrowing information from other parts of the model. I would recommend having at least four indicators.
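A quick degrees-of-freedom count shows why. With p indicators there are p(p+1)/2 sample variances and covariances; for p = 3 that is 6, and a one-factor model uses exactly 6 parameters (2 free loadings, 1 factor variance, 3 residual variances), so df = 0 and the model reproduces the data perfectly by construction. For p = 4 there are 10 sample statistics and 8 parameters, leaving df = 2 and a testable model.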
 Tihomir Asparouhov posted on Thursday, May 29, 2014 - 10:16 am
J Guo

The RMSEA with estimator=MLR is already adjusted. What you get in the output is based on the MLR chi-square value.
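For reference, the common point-estimate formula, applied directly to the chi-square printed in the output, is

RMSEA = \sqrt{ \max( (\chi^2 - df) / (df \cdot n), 0 ) }

where n is the sample size; take the exact denominator as an assumption here, since programs differ between n and n - 1, and for multiple groups the value is multiplied by \sqrt{G}. With MLR you plug in the robust chi-square as printed; no further rescaling by the correction factor is needed.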
 Chie Kotake posted on Thursday, May 29, 2014 - 2:06 pm
Hi,

I am running a multiple group analysis using latent variable A predicting an observed continuous variable b. I have 3 continuous control variables (all observed) included in the model as well.

First, I ran the CFA with this model, testing for measurement invariance, and the model fit was pretty good (CFI = .94, RMSEA = .045, etc.) at each step. The chi-square testing also confirmed each step of measurement invariance.

The general input I used for CFA was:
A BY v1 v2 v3 v4;
A@1.0;
v1 WITH v3;
v3 WITH v4;

b WITH A;

However, when I moved to the SEM model (I switched WITH to ON), where latent construct A predicts manifest variable b, the model fit suddenly became poor, and the chi-square also became significantly larger.

Here are my questions:
1) Why is this happening?
2) I've learned to compare my SEM model to the strong invariance model to find out if my models fit well -- given the significant change in model fit/chi-square, does this mean I can no longer do this?
3) Is there a way to fix this?
4) Is chi-square testing still appropriate using the model fit chi-square? If not, what are the steps for comparing models now?

Thank you again for all your assistance!
 Linda K. Muthen posted on Thursday, May 29, 2014 - 2:40 pm
Please send the two outputs and your license number to support@statmodel.com.
 Kathrin Dehmel posted on Friday, May 30, 2014 - 12:57 am
Dear Dr. Muthen,

Thank you so much for your fast answer. I have now specified a model and the model fit is good. Could you please check whether it is right? I am not sure such a model fit is possible because of my small sample size. Could I send you my output?

Thank you again!
 Linda K. Muthen posted on Friday, May 30, 2014 - 11:25 am
If you are a registered user with a current upgrade and support contract, send the output to support@statmodel.com. Although interpretation of results is not part of Mplus support, I can check for any glaring errors.
 Eulalia Puig posted on Thursday, February 12, 2015 - 8:50 am
Hello,

I am trying to run a second-order CFA with continuous variables (scale is 0-5 for all variables).

The model is:

ANALYSIS:
TYPE = GENERAL;
ESTIMATOR = ML;

MODEL:
A by a1* a2 a3 a4;

B by b1* b2 b3 b4;

C by c1* c2 c3 c4 c5 c6 c7 c8 c9;

D by A* B C;

A@1 B@1 C@1 D@1;

And the model fit output I get, which is what I am interested in, is:

MODEL FIT INFORMATION

Number of Free Parameters 60

Loglikelihood

H0 Value -4993.516

Information Criteria

Akaike (AIC) 10107.032
Bayesian (BIC) 10271.204
Sample-Size Adjusted BIC 10081.564
(n* = (n + 2) / 24)

Degrees of Freedom 149

Where are the model fit indices? I thought that X2 & RMSEA would be given. I understand CFI & TLI have to be calculated, but where are the X2 and RMSEA?

Thanks!
 Bengt O. Muthen posted on Thursday, February 12, 2015 - 12:15 pm
Did you declare your variables as categorical? Did the H1 model not converge? If that doesn't explain it, please send your output to support along with your license number.
 Heather Gilmartin posted on Friday, March 06, 2015 - 8:14 am
Dear Linda,

I have run a CFA using WLSMV on a small model - 4 categorical indicators, one latent variable that was developed through an EFA. The fit statistics are good, but I'm wondering if they are too good. I've never had a model with these results:

Chi-square 1.967
df = 2
p = .374

RMSEA = 0.000
CFI = 1.000

The factor loadings are strong (.78-.99) and error terms are small (.40-.01).

Could the model fit the data this well? Or could this indicate the model may be better represented as a composite model?

Thanks,
 Bengt O. Muthen posted on Friday, March 06, 2015 - 8:40 am
This can happen with small sample size and low correlations, so that you have low power to reject the model.
 Heather Gilmartin posted on Friday, March 06, 2015 - 8:56 am
Thank you. The sample size was 307, correlations .45-.71. Should I keep this model, or add in indicators that had previously worked in the model but made the model fit less perfect?

I have just run the model in the full sample of 614, but the outcome was the same.
 Bengt O. Muthen posted on Friday, March 06, 2015 - 10:57 am
Getting rid of misfitting indicators can certainly cause such a good fit. I wouldn't do this kind of item trimming unless items have a truly bad fit.
 Kira McCabe posted on Sunday, May 10, 2015 - 7:36 pm
I have a question regarding model fit with different estimators: how do I explain the model fit discrepancy when I use ML vs. MLR?

I ran an LGM with time invariant covariates. Model 1 was my basic model, and Model 2 added one additional (and theoretically relevant) time invariant covariate.

With the ML estimator, my chi-square difference test showed that Model 2 had significantly worse fit than Model 1. However, with the MLR estimator, my -2LL test showed that Model 2 had significantly better fit than Model 1.

Question: Is the difference in these findings a reflection of skewed variables in my model? I'm just curious how different estimators can yield opposite results.

Thank you for your help!
 Bengt O. Muthen posted on Sunday, May 10, 2015 - 8:09 pm
If you did the MLR-specific -2LL difference test shown on our website, then the difference between the ML and MLR testing is due to the skewness.
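In outline, that test works as follows. With loglikelihood L_0, scaling correction factor c_0, and p_0 parameters for the nested model, and L_1, c_1, p_1 for the comparison model, first compute the difference-test scaling correction

cd = (p_0 c_0 - p_1 c_1) / (p_0 - p_1)

and then the scaled statistic

TRd = -2(L_0 - L_1) / cd,

referred to a chi-square distribution with p_1 - p_0 degrees of freedom. An ordinary ML difference test skips cd, which is why the two approaches can disagree when variables are skewed.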
 Dharmi Kapadia posted on Friday, May 29, 2015 - 8:17 am
Hello,

I am fitting a CFA model using the WLSMV estimator, using version 7.11.

For some reason, when I use the command

OUTPUT: Residual;

I only get raw residual correlations; standardized and normalized values are not printed in the output. Can you think why this is happening?

My MODEL command is:
positive by clpersa3 clpersb3 clpersc3 clpersd3 clpersg3 clpersh3 clpersm3 clperso3;
lacksupp by clperse3 clpersi3 clpersj3 clpersn3;

And the factor items clpersa3 - clperso3 are categorical (4 categories).

Thanks,
Dharmi
 Bengt O. Muthen posted on Friday, May 29, 2015 - 10:19 am
Those are available only with continuous outcomes.
 Laura Patricia Villa Torres posted on Monday, June 08, 2015 - 6:49 pm
Good night,
I am working on a CFA for a scale. The data come from 14 different schools, so I decided to use type=complex.
When I treat the data as clustered, the RMSEA improves substantially, but the CFI and TLI worsen substantially as well.

So, my questions are: is there something about clustering that helps the RMSEA but harms the CFI/TLI? Should I look at other indices?

My last question: apart from the modification indices, what else can I do to improve the model?

Thanks!
Laura


Here you have the results:

Non-clustered:

RMSEA (Root Mean Square Error Of Approximation)
Estimate 0.066
90 Percent C.I. 0.065 0.068
Probability RMSEA <= .05 0.000
CFI/TLI
CFI 0.940
TLI 0.930

Clustered:

RMSEA (Root Mean Square Error Of Approximation)
Estimate 0.029
90 Percent C.I. 0.027 0.031
Probability RMSEA <= .05 1.000
CFI/TLI
CFI 0.871
TLI 0.845
 Linda K. Muthen posted on Tuesday, June 09, 2015 - 10:20 am
There is no reason clustering would affect one fit statistic and not another.

With only 14 clusters, I would recommend using 13 dummy variables as covariates in the model to control for non-independence of observations. The recommendation for clustered data is a minimum of 30-50 clusters.
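A minimal sketch of that dummy-variable approach, assuming indicators y1-y10 and 13 school dummies s1-s13 already in the data file (all of these names are placeholders, not variables from your data):

VARIABLE:
NAMES ARE id school y1-y10 s1-s13;
USEVARIABLES ARE y1-y10 s1-s13;
ANALYSIS:
TYPE = GENERAL; ! no CLUSTER/TYPE=COMPLEX needed
MODEL:
f BY y1-y10;
f ON s1-s13; ! school dummies absorb between-school mean differences

Regressing the factor on the dummies controls for school-level mean differences; if school effects are thought to be item-specific, the indicators rather than the factor can be regressed on the dummies.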
 Kyle Thomas posted on Friday, June 12, 2015 - 2:23 pm
Hi there,

I am looking at the dimensionality of an attitude construct in two data sets, both of which contain ordinal measures. In the first data set I ran an EFA and found considerable evidence that there are two factors and after rotation it was clear that the items coalesced in a meaningful and substantively predictable way.


I then attempted to confirm this two-factor structure in a separate data set using CFA. The CFI (.975) and TLI (.968) both indicate good fit. But the chi-square is significant (p < .001, n = 1,700) and the RMSEA also suggests poor fit (.105). It is worth noting that when I run a one-factor model the fit is substantially worse (CFI = .891; TLI = .91; RMSEA = .184).

Because of this I followed some of your previous suggestions of going back and running an EFA on this data set and, again, there is a clear indication that there are two factors and that the items come together in a similar way to the first data set. What could explain why the models are fitting the data poorly in the CFA? Do you have any suggestions on how to proceed?
 Bengt O. Muthen posted on Friday, June 12, 2015 - 6:00 pm
I would go the route of the paper on our website:

Muthén, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335.

See also

Asparouhov, T., Muthén, B. & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. Accepted for publication in Journal of Management.
 Sebastian Köhler posted on Thursday, September 03, 2015 - 1:33 am
Dear Bengt and Linda,

I am doing a CFA with 10 categorical (Likert-type) indicators (WLSMV estimator), 4 first-order factors and 2 second-order factors in a sample of N=13,500, as shown below:

f1 BY u1 u2;
f2 BY u3 u4;
f3 BY u5 u6;
f4 BY u7-u10;

ff1 BY f1 f2;
ff2 BY f3 f4;

The overall model is identified and the estimation terminates normally. The model has 55 free parameters (chi-square = 587.55, df=30, p<.001). In contrast, the model with only 4 first-order factors and no second order factors has 56 free parameters (chi-square = 593.960, df=29, p<.001). RMSEA, CFI and TLI indicate close fit for both models.

1. Do I understand correctly, that the second-order factors are locally under-identified because I'd need >2 first-order factors loading on each?

2. Elsewhere in this forum I read that such models are only identified since they "use information from other parts of the model". I wonder what that piece of information is and whether you have a reference for this? I fitted an alternative solution with only 2 first-order factors that seems to do the trick without these problems, but I need to explain why the second-order model is problematic because it has been reported in the literature before as the correct measurement model for this particular scale.

Best wishes, Seb
 Linda K. Muthen posted on Thursday, September 03, 2015 - 5:24 am
All first- and second-order factors with two indicators are identified only because they borrow from other parts of the model. They have negative degrees of freedom and are not identified when they stand alone. See an SEM book like the Bollen book for a discussion of identification.
 Martijn Van Heel posted on Thursday, October 22, 2015 - 12:01 pm
Hello,

I'm new to Mplus and I'm running a CFA with MLR as the estimator. However, I don't get any model fit information except for the loglikelihood of H1. The rest of the output is there as it should be.

I have the feeling that I'm overlooking something very obvious but I can't figure out what.
Thanks in advance.
 Linda K. Muthen posted on Thursday, October 22, 2015 - 1:26 pm
Please send the output and your license number to support@statmodel.com.
 Ejlis posted on Monday, October 26, 2015 - 10:40 am
Hi,
I have two questions:
1) Is it OK to have two indicators on the same latent variable that are coded in opposite directions (so that one of the indicators will be negatively related to the latent factor)?
2) It seems to me that it is most usual to constrain factor loadings to be equal over time when running a longitudinal model (thus intercepts do not need to be equal). Is this right? And if you run a model with only sum scores (not latent variables), should these be constrained, and how would I do that?

Thank you,
 Bengt O. Muthen posted on Monday, October 26, 2015 - 3:45 pm
You may want to ask these general SEM questions on SEMNET.
 Nicole Tirado Strayer posted on Monday, February 08, 2016 - 12:51 pm
Can you help me understand how to compare latent variable models that use the same manifest variables but that specify different factor structures?

Specifically, my simplest model has 13 manifest variables and a single factor; my second model has the same 13 manifest variables but has 2 factors, where factor 1 is estimated with vars 1-6 and factor 2 is estimated with vars 7-13. My third model again uses the same 13 manifest variables but has a bifactor structure, where there is one general factor (all 13 vars load onto it) and one specific factor measured by vars 7-13.
 Bengt O. Muthen posted on Monday, February 08, 2016 - 1:29 pm
I would simply use BIC.
 Nicole Tirado Strayer posted on Monday, February 08, 2016 - 1:50 pm
Is it a problem that the bifactor model isn't nested in model 2? I am using the same 13 manifest vars across all models, but models 2 and 3 are not nested. When comparing model fit, do I simply select the model with the lowest BIC?
 Linda K. Muthen posted on Monday, February 08, 2016 - 4:30 pm
Models that are not nested but have the same set of dependent variables can be compared using BIC. The lower BIC is the better BIC.
 Nicole Tirado Strayer posted on Wednesday, February 17, 2016 - 4:43 pm
In order to compare nested models in MPLUS, do I have to constrain the covariance between latent variables to 1, or can I let them covary?
 Bengt O. Muthen posted on Wednesday, February 17, 2016 - 4:56 pm
I don't know why you ponder constraining covariances to 1.
 Nicole Tirado Strayer posted on Wednesday, February 17, 2016 - 9:55 pm
My apologies. To be more clear, I am using a chi-square difference test to examine the relative fit of two models: (1) a one-factor CFA model with 11 manifest variables; and (2) a two-factor CFA model with the same 11 manifest variables. In order to make the one-factor model fully nested within the two-factor solution, do I need to constrain the covariance between the two latent factors in the two-factor model to equal 1?
 Bengt O. Muthen posted on Thursday, February 18, 2016 - 4:57 pm
The correlation should be 1. But there is a technical question of whether the chi-square diff test is ok because your nested model is then on the border of the admissible parameter space.
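For concreteness, a sketch of the constrained (nested) run with placeholder indicators v1-v11: free all loadings, standardize both factors, and fix their correlation at 1, which makes the two-factor model equivalent to a one-factor model:

MODEL:
f1 BY v1* v2-v6; ! * frees the first loading
f2 BY v7* v8-v11;
f1@1 f2@1; ! factor variances fixed at 1
f1 WITH f2@1; ! correlation fixed at 1

The chi-square difference between this run and the freely correlated two-factor run gives the test, subject to the boundary caveat above.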
 htm posted on Saturday, February 27, 2016 - 7:15 pm
I have a three factor solution such that:

F1 BY A1 A2 A3 A4;
F2 BY B1 B2 B3 B4;
F3 BY C1 C2 C3 C4;

F1 WITH F2 F3;
F2 WITH F3;

Is it valid to explore alternate models (in this case, let's say a two-factor model where F1 remains the same and a new F2 comprises the indicators of F2 and F3) using the following code:

F1 BY A1 A2 A3 A4;
F2 BY B1 B2 B3 B4 C1 C2 C3 C4;

F1 WITH F2;

Obviously, in the above case, I'd compare the models descriptively using BIC.

If the above solution is not tenable, should I instead use the model test approach (in other words, test if the correlation between F2 and F3 is equal to 1)?

Thank you in advance.
 Linda K. Muthen posted on Sunday, February 28, 2016 - 5:57 am
If it is unclear how many factors are represented by your data, I would suggest an EFA.
 htm posted on Sunday, February 28, 2016 - 6:23 am
Hello Linda,

Thank you for your response. I suppose I should elaborate -- the study I am working on is a multi-sample study where I conducted an EFA and then tested the results of the EFA using a CFA with an independent sample. I am confident that the identified factor structure is the "correct" one.

However, a reviewer suggested that I could strengthen my manuscript by empirically showing that the proposed factor structure (as specified in the CFA) fits the data better than possible alternatives (i.e., collapsing two factors into one and so on).

I guess I'm just wondering if that is most validly accomplished by actually collapsing the indicators into combined factors or placing constraints on the correlations between the factors using nested model tests.

Thanks again.
 Linda K. Muthen posted on Sunday, February 28, 2016 - 4:01 pm
You should ask this on a general discussion forum like SEMNET.
 Erin Albrecht posted on Friday, March 18, 2016 - 9:56 am
Hello,

I am performing a CFA, estimating three manifest variables as indicators of one latent construct. My model is just identified, so I am unable to get model fit information. Is it still permissible to use this latent factor in a larger model that includes path analyses? Or, is there a way to tailor my CFA so that I can get an overidentified model and receive model fit estimates? I realize this would have to be based on theory, but I'm not sure how to proceed.

Thank you for your time,
Erin
 Linda K. Muthen posted on Friday, March 18, 2016 - 10:48 am
This question is more appropriate for a general discussion forum like SEMNET.
 Dex  posted on Tuesday, May 10, 2016 - 10:11 pm
Hello,

I was wondering: in the Monte Carlo output, how does Mplus calculate the expected percentiles for fit indices other than chi-square, such as CFI (since they do not have a theoretical statistical distribution like chi-square)?

Thanks
 Linda K. Muthen posted on Wednesday, May 11, 2016 - 1:42 pm
For all fit statistics except chi-square, the normal distribution is used to obtain the critical values of the test statistic. See pages 412-13 of the user's guide on the website.
 Ashley posted on Thursday, July 28, 2016 - 3:44 pm
How do I get the RMSEA for the null model in my output? I'm doing a CFA on 20 imputed datasets.
 Linda K. Muthen posted on Friday, July 29, 2016 - 2:05 pm
With imputed data, only the chi-square for continuous outcomes has been developed. No other fit statistics have been, so you cannot get RMSEA in this case.
 Scott J Peters posted on Thursday, November 10, 2016 - 2:00 pm
A while back I ran data collected on three different instruments through CFAs using their respective models. These models vary greatly, since one instrument has about 80 items while another has only 11. When I first ran the analyses on version 5, I got very different results from when I ran them on 6.12. The SRMRs were much better (smaller), and the CFI/TLI of the two longer instruments were terrible (now .60s - .80s, compared to upper .80s before).

Two questions:
1. Were there major changes to how fit indices were computed in versions 5 vs. 6?
2. With instruments that vary so much in length / # of parameters, which fit indices work best? I'm concerned about indices that penalize for model complexity.

thanks
 Tihomir Asparouhov posted on Friday, November 11, 2016 - 11:15 am
There was a bug related to this which was corrected in Version 6. The bug concerned the fit indices you mention when the Bollen-Stine bootstrap chi-square is computed, i.e., BOOTSTRAP = (RESIDUAL).

Incidentally, none of the quantities you point out are bootstrap related; you can compute them using regular ML (in either Mplus version) without the bootstrap command. The two quantities that the bootstrap affects are the "Bootstrap p-value" and the standard errors.
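For orientation, the option referred to goes in the ANALYSIS command (the number of draws below is illustrative):

ANALYSIS:
ESTIMATOR = ML;
BOOTSTRAP = 500 (RESIDUAL); ! Bollen-Stine residual bootstrap

Chi-square, CFI/TLI, and SRMR come from the regular ML solution; only the bootstrap p-value and the standard errors depend on this option.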
 Scott J Peters posted on Friday, November 11, 2016 - 12:53 pm
That's what I was worried about - that there was some change or bug between the two versions. I'll plan on ditching the version 5 results.

I was aware of the bootstrap effect. We did it originally because of the standard errors, but now that I've run the analyses in Version 6 w/o bootstrapping, those errors seem to be gone.
 Scott J Peters posted on Friday, November 11, 2016 - 12:59 pm
The other lingering question I still have is which indices to compare across instruments of very different size (items / DF). I'm looking at multi-group CFA across instruments that vary quite a bit. In such a case is SRMR the focus since it doesn't include a penalty for model complexity?
 Bengt O. Muthen posted on Friday, November 11, 2016 - 4:58 pm
I think that is a research question. Perhaps someone on SEMNET has studied this.
 jintana jankhotkaew posted on Tuesday, July 04, 2017 - 9:28 am
Hi,
I ran a CFA model using three observed categorical variables with one factor, allowing one residual covariance. The model did not report the goodness-of-fit measures CFI and TLI. However, when I deleted the covariance from the model, Mplus gave me the goodness-of-fit values. The problem with the first model is reported below.
THE DEGREES OF FREEDOM FOR THIS MODEL ARE NEGATIVE. THE MODEL IS NOT
IDENTIFIED. NO CHI-SQUARE TEST IS AVAILABLE. CHECK YOUR MODEL.

THE MODEL ESTIMATION TERMINATED NORMALLY

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE
COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL.
PROBLEM INVOLVING THE FOLLOWING PARAMETER:
Parameter 10, PHYSICAL

Note that physical is a latent variable that I created.
Best regards,
Jintana
 Linda K. Muthen posted on Tuesday, July 04, 2017 - 11:24 am
A factor with three indicators is just-identified. Fit cannot be assessed nor can a covariance between two factor indicators be identified.
 jintana jankhotkaew posted on Wednesday, July 05, 2017 - 6:30 am
Dear Linda,
Thank you very much for the reply. What is the minimum number of observed variables needed to estimate goodness of fit and allow a covariance?

Best regards,
Jintana
 Linda K. Muthen posted on Wednesday, July 05, 2017 - 6:42 am
Four.
 Ana Canario posted on Tuesday, August 01, 2017 - 6:43 am
Dear Dr. Muthen,
We are performing a confirmatory factor analysis with Mplus to validate an international questionnaire for the Portuguese population, using a sample of about 500 participants. Following the authors' original paper, we ran a one-factor CFA for each subscale and two first-order CFAs combining groups of factors. In the individual factor CFAs, however, we often had poor RMSEA values (some of them higher than .10) and low TLI values (such as .81). We are guessing that this may be related to the high chi-square statistics and low df (5), as suggested by Kenny, Kaniskan, and McCoach (2015). Can you help us with this? Any suggestions?
Thank you in advance.
 Bengt O. Muthen posted on Tuesday, August 01, 2017 - 5:49 pm
This is a typical problem when the original work is based on EFA.

Apart from checking modification indices for improving the model, you can try less restrictive models than CFA, such as ESEM or BSEM. See the left margin Special Mplus Topics pages on our website.
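For orientation, an ESEM specification looks like this (placeholder item names, and a two-factor solution assumed):

ANALYSIS:
ROTATION = GEOMIN;
MODEL:
f1-f2 BY y1-y12 (*1); ! EFA block: all items load on all factors

Because cross-loadings are estimated subject to rotation rather than fixed at zero, fit is typically better than in a comparable CFA.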

You may also want to confer with SEMNET on these general analysis strategy matters.
 virna gutierrez posted on Friday, September 01, 2017 - 12:56 pm
Dear Dr. Muthen,

I have been struggling to get good fit for an instrument that we want to validate for a Chilean population. The original paper (Davis 1993, empathy) only used EFA. Two other papers validated the instrument for Spain and Chile but had poor model fit and also significant correlations between factors. Would it be better to move to ESEM instead? Beyond what I have tried to estimate, the goal is just to report that the instrument is valid for use in subsequent analyses in our paper. Thanks.
 Bengt O. Muthen posted on Friday, September 01, 2017 - 1:10 pm
ESEM is the same as EFA unless you have other parts in the model such as covariates, other outcomes, or multiple groups. Perhaps you are thinking of this latter aspect - in that case I think ESEM would be good.
 Martijn Van Heel posted on Wednesday, November 08, 2017 - 6:52 am
Dear Dr. Muthen

I am running a CFA, which gives the fit indices RMSEA = .043 and SRMR = .067.
However, when I run this model separately for gender (N male = 173, N female = 320), these fit indices increase to RMSEA = .050 and SRMR = .085.

I am having difficulty understanding why these fit indices change, since I didn't change the model specification per gender.
What factors influence the value of RMSEA and SRMR?
Many thanks in advance!
 Linda K. Muthen posted on Thursday, November 09, 2017 - 5:53 am
The fit of the model in the overall sample will not be the same for different groups unless the groups and overall sample are the same. When you change the data being analyzed, the fit statistics will change.
 Martijn Van Heel posted on Thursday, November 09, 2017 - 7:04 am
Thank you for the response.
Yes, in this case, the data is the same.
The overall data set is split up into groups. The only difference is about 10 cases for which the gender variable is missing. It seems a rather large difference in fit statistics for (about) 10 cases, but maybe that is indeed the cause.
Thanks again.
 Linda K. Muthen posted on Thursday, November 09, 2017 - 10:43 am
The data for each group are not the same. There is no relationship between the fit to the full sample and the fit in each group. This would only be the case if the samples of males and females were random samples from the total sample.
 arif özer posted on Tuesday, April 17, 2018 - 8:12 am
Can I calculate Fmin and AIC under WLSMV estimation with categorical variables?

Chi-square - 2df, or + 2(free parameters), isn't applicable.

Is there anything I can add to the Mplus syntax?
 Bengt O. Muthen posted on Tuesday, April 17, 2018 - 4:43 pm
I don't think you can get AIC from WLSMV.
 Javed Ashraf posted on Sunday, May 06, 2018 - 2:10 am
Can you please recommend a good non technical (in statistical terms) book on Bayesian SEM using examples from Mplus?
 Bengt O. Muthen posted on Sunday, May 06, 2018 - 3:33 pm
I recommend our book - see

http://www.statmodel.com/Mplus_Book.shtml
 Hebah Almulla posted on Wednesday, May 30, 2018 - 4:32 am
Hi,

I read through the above posts and I'm still not sure I understand nested models.

I ran three CFA models using the same 18 items:
1-factor model (all 18 items load onto one factor)
3-factor model (the same 18 items load onto three correlated factors: factor 1 = 7 items, factor 2 = 7 items, factor 3 = 4 items)
5-factor model (the same 18 items load onto five correlated factors: factor 1 = 4 items, factor 2 = 3 items, factor 3 = 3 items, factor 4 = 4 items, factor 5 = 4 items)

Are those models nested? Can I use the Chi-square difference test to compare them?

Thanks,

Hebah
 Tihomir Asparouhov posted on Wednesday, May 30, 2018 - 3:02 pm
1. The first and the second models are nested. The second and the third models are nested if factors 1 and 2 in model 3 use the same 7 items as factor 1 in model 2 and if factors 3 and 4 in model 3 use the same 7 items as factor 2 in model 2.

2. You can use the chi-square difference test, but beware that it can extract more factors than needed due to boundary conditions. See

Hayashi, K., Bentler, P. M., & Yuan, K. H. (2007). On the likelihood ratio test for the number of factors in exploratory factor analysis. Structural Equation Modeling, 14, 505-526.

I would recommend using a lower p-value as the cutoff for significance, such as 0.01 instead of 0.05. Alternatively, use BIC or conduct your own simulations to evaluate the LRT in your situation, as is done in the above article.
 Jenny posted on Wednesday, August 08, 2018 - 11:11 am
Hi,

I'm running CFA models using MLR and assessing fit using CFI, RMSEA, NNFI, and SRMR. A reviewer commented saying that "A maximum likelihood with ROBUST estimation method was adopted for examining fit indices for the proposed models. However, normal rather robust fit estimates were reported for each model".

What are the robust fit estimates that they are referring to? How do I find this in the output?

Thanks very much,
Jenny
 Bengt O. Muthen posted on Wednesday, August 08, 2018 - 5:48 pm
The reviewer is referring to the test of fit section of the output - MLR gives a robust chi-square test.
 Jenny posted on Thursday, August 09, 2018 - 2:16 am
Thanks Bengt. In addition to the robust chi-square test, can I report on CFI, RMSEA, NNFI, and SRMR given in the output? The reviewer implied these were normal fit estimates?
 Bengt O. Muthen posted on Thursday, August 09, 2018 - 2:10 pm
The CFI and RMSEA that Mplus prints both build on the chi-square test, so they are also robust in the sense that the chi-square they build on is robust due to MLR.
 Jenny posted on Wednesday, August 15, 2018 - 2:38 am
Thanks Bengt! Another comment was to consider reporting coefficient omegas as an alternative to Cronbach's alphas for scale/subscale item scores. Can Mplus compute coefficient omegas? Thanks in advance.
 Bengt O. Muthen posted on Wednesday, August 15, 2018 - 3:36 pm
It doesn't compute omega automatically but you can do it using the Model Constraint command - see the FAQ on our website:

Reliability - Omega coefficient in Mplus
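A minimal sketch of that approach with placeholder items y1-y4: label the loadings and residual variances in the MODEL command, then define omega in MODEL CONSTRAINT:

MODEL:
f BY y1* (l1)
y2 (l2)
y3 (l3)
y4 (l4);
f@1; ! standardize the factor
y1 (e1);
y2 (e2);
y3 (e3);
y4 (e4);
MODEL CONSTRAINT:
NEW(omega);
omega = (l1+l2+l3+l4)**2 /
((l1+l2+l3+l4)**2 + e1+e2+e3+e4);

Omega then appears in the output with a standard error among the new/additional parameters; see the website FAQ for the exact setup used there.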
 Mira Patel posted on Thursday, August 23, 2018 - 6:53 pm
Hi,

I ran a one-factor CFA with 8 categorical variables.

My output is as follows:

Chi-Square Test of Model Fit

Value 312.689*
Degrees of Freedom 18
P-Value 0.0000

RMSEA (Root Mean Square Error Of Approximation)

Estimate 0.253
90 Percent C.I. 0.232 0.295
Probability RMSEA <= .05 0.000

CFI 0.982 TLI 0.971

Looking at this, it seems that the CFI/TLI are good, but my chi-square doesn't indicate good fit. Is this true? What's the best way to interpret this finding?

I added a few residual correlations (>0.80), which did end up increasing the p-value as well as the CFI/TLI... I don't know if that is something I should do.

Also is there a good reference that shows how to interpret the outputs from Mplus for CFAs (with categorical variables)?
 Bengt O. Muthen posted on Friday, August 24, 2018 - 5:53 pm
This model fits very poorly. See our Topic 2 Short Course video and handout on our website describing input and output.
 Elke Sekeris posted on Thursday, February 21, 2019 - 11:29 pm
Hi,

I am using CFA to test four different models:
- 1 factor model with all 32 indicators on one factor
- 2 factor model with 8 indicators on one factor (Exact) and the other 24 indicators on a second factor (approximate)
- 2 factor model with 16 indicators on one factor (arithmetic) and 18 indicators on a second factor (approximation)
- 3 factor model with 8 indicators on EA, 8 indicators on CE, and 18 indicators on AA.

These models are non-nested, I assume, so in order to compare them I need AIC/BIC values, which I can only get using ML. However, when I use this estimator the chi-square cannot be computed because the frequency table is too large. When I request TECH10 in the output, the message is the same: TECH10 cannot be computed because the frequency table is too large.

Is there any way to solve this?

Kind regards,
Elke
 Bengt O. Muthen posted on Friday, February 22, 2019 - 2:13 pm
Why not use BIC? If you like, please send your Tech10 output to Support along with your license number.
 Martin Volker posted on Monday, April 08, 2019 - 1:54 pm
I am looking for strategies to compare six different CFA models for the same instrument in the same sample. The instrument contains 58 items, each rated on a four-point scale. The sample is 250 students with Autism Spectrum Disorder who were rated by the special education staff who work with them. Because of the nature of the sample and the emphasis of the items on pathology, the item rating distributions are non-normal, and they are all largely non-normal in the same way (i.e., more scores toward the bottom and fewer toward the top of the rating scale). Given the ordinal nature of the item ratings and the non-normal item distributions, I used the WLSMV robust CFA estimation procedure.

Several of the factor models I want to compare are not nested within one another. I understand from prior posts that the BIC reported using the MLR estimator is appropriate for comparing non-nested models.
My question:
Is it reasonable to report WLSMV fit indices and WLSMV model parameter estimates, but also supplement them with the BIC values from MLR?
The robust nature of WLSMV was the reason for using it over MLR in the first place. The RMSEA, CFI, and TLI in the WLSMV runs indicate reasonable fit for several models. However, none of the models fit according to the much poorer fit estimates from MLR.
 Bengt O. Muthen posted on Monday, April 08, 2019 - 4:51 pm
If you don't have too many factors (say < 4), you can use MLR while specifying the variables as Categorical and thereby get BIC.
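A minimal sketch, with placeholder item names u1-u58:

VARIABLE:
CATEGORICAL ARE u1-u58;
ANALYSIS:
ESTIMATOR = MLR;

This switches from WLSMV to maximum likelihood with numerical integration, which requires one integration dimension per factor (hence the caveat about the number of factors); the output then includes the loglikelihood, AIC, and BIC.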
 Martin Volker posted on Monday, April 08, 2019 - 8:48 pm
The models consist of:

4 factors
5 factors
Two different 6-factor models
7 Factors
9 Factors

These models came from the research literature on this instrument.

The 5-factor model, one of the 6-factor models, and the 7-factor model can be tested against each other using the WLSMV DIFFTEST procedure because they can be specified as nested models.

However, this will not work for the 4-factor model or the 9-factor model.

When I run each of the models through WLSMV the 9-factor model has the lowest RMSEA (i.e., .062) and SRMR (.082), and highest CFI = .94 and TLI = .938. When I run each of the models through MLR, then RMSEA, SRMR, CFI, and TLI are terrible for all models (e.g., CFI in the .70s or lower .80s). BUT the AIC and BIC are lowest for the 9-factor model.

I worry that the non-normal ordered categorical data are seriously throwing the MLR fit indices off. Would these data issues impact absolute AIC and BIC values? Could it render the AIC and BIC less useful for model comparison? Or would AIC and BIC still maintain at least the same rank order (with such non-normal data vs relatively normally distributed data) across the six CFA models being compared?

Or am I out of options for cross-model comparisons?

Thank you so much for considering these questions.
 Bengt O. Muthen posted on Tuesday, April 09, 2019 - 5:39 pm
Perhaps you are not using Categorical= in your MLR run - that would distort the results if the non-normality is due to floor or ceiling effects.

You can try Bayes which works with many factors and gives a PPP fit index also with categorical outcomes.
 Ann Neely posted on Tuesday, June 23, 2020 - 11:44 am
Is there a way to receive the RMSEA confidence intervals for a two-level CFA?
 Tihomir Asparouhov posted on Tuesday, June 23, 2020 - 4:00 pm
It is not available in the current version.
 Claire Smith posted on Monday, August 17, 2020 - 11:47 am
Hello,

I am running a CFA and using modification indices (WITH statements) to determine optimal fit. It has been working thus far, but when I add any additional WITH statements to the ones I already have, model fit information is no longer present and the following message appears. Is there a limit to how many WITH statements can be included?

" THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE
COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL.
PROBLEM INVOLVING PARAMETER 146."
 Bengt O. Muthen posted on Monday, August 17, 2020 - 4:49 pm
Yes, there is a limit - after which the model is no longer identified.
 Luisa Solms posted on Wednesday, October 21, 2020 - 1:45 am
Hi everybody,

I am running a CFA, and my RMSEA indicates bad fit while the CFI and SRMR are good. If I allow the residuals of indicator variables on the same factor to covary, I can improve the RMSEA value easily. However, I am wondering if this is bad practice?

Also, I am not sure why the RMSEA value is high. What are the most common reasons?

Many thanks,
Luisa
 Bengt O. Muthen posted on Wednesday, October 21, 2020 - 4:42 pm
This is a good question for SEMNET.