WLSMV estimator chi-square difference...
Mplus Discussion > Confirmatory Factor Analysis >
 Sheung-Tak Cheng posted on Tuesday, December 18, 2018 - 6:03 am
I would really appreciate help with a paper in which alternative measurement models were compared using the modified chi-square difference test for the WLSMV estimator. The reviewer argued that the chi-square difference test is biased by the large sample size (N=600) and asked how large a chi-square difference is meaningful. How may this question be answered? And are there alternatives to the chi-square difference test for the WLSMV estimator? Just for information, all chi-square difference tests had p < .001.
 Bengt O. Muthen posted on Tuesday, December 18, 2018 - 5:23 pm
This is a general chi-2 test issue, not specific to WLSMV. The chi-2 test is sensitive to small deviations from the model when the sample size is not small. You may want to discuss this on SEMNET. My take is to do an approximate check of how sensitive the test is for your data-model situation. You can use Modindices to see which parameters need to be freed to get a reasonable chi-2 fit and then you see how much your key parameters have changed. If they have changed only in substantively ignorable ways, you could surmise that your original model was fine for practical purposes and that chi-2 was oversensitive. Not everyone would agree with this approach, however.
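[For context, here is a minimal stdlib-only Python sketch of what the ordinary (non-robust) chi-square difference test computes. Note that this naive differencing is not valid for WLSMV, where chi-square values cannot simply be subtracted; Mplus's DIFFTEST option performs the corrected test. The numbers, and the restriction to an even df difference (which gives a closed-form tail probability), are illustrative assumptions.]

```python
import math

def chi2_sf(x, df):
    """Upper-tail probability P(X > x) for a chi-square variate.

    Uses the closed form valid only for positive even df:
    P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
    """
    if df <= 0 or df % 2 != 0:
        raise ValueError("closed form requires positive even df")
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Illustrative numbers: nested models whose chi-squares differ by 13.8
# with a 2-df difference.  At N = 600 even substantively trivial misfit
# can yield differences this large, which is the reviewer's point.
p_diff = chi2_sf(13.8, 2)   # about .001
```

This makes the sensitivity issue concrete: the difference statistic grows roughly in proportion to N for a fixed amount of misfit, so the p-value alone says little about practical importance.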
 Sheung-Tak Cheng posted on Thursday, December 20, 2018 - 7:36 pm
Dear Linda, Thank you very much for your response. I dealt with the issue by randomly selecting one-third of the sample so that the N became ~200 and re-ran the analysis - the results were the same and all chi-square difference ps were .0000 on the output. I submitted the manuscript and the reviewer came back with the following comment:

"I am afraid that the authors are not understanding what I am asking. Forget about the statistical tests. Statistical significance is not the issue. I am asking how much better 3 factors are versus 2 factors, and 2 factors versus 1 factor, in terms of variance accounted for or sensitivity or specificity?"

What might sensitivity or specificity mean when evaluating the relative fit of competing models? As for the point on variance explained, do I add up the column labeled "R-square" in the output and divide the sum by the total number of indicators (assuming each indicator's variance is 1)? Many thanks in advance for your reply.
 Bengt O. Muthen posted on Friday, December 21, 2018 - 3:37 pm
This is a good general analysis question for SEMNET.
 Sheung-Tak Cheng posted on Saturday, December 22, 2018 - 7:04 am
I have sent the message to the SEMNET listserv and am waiting for replies. For now, may I get a quick opinion about the sum of R-squares? In the EFA situation, my understanding is that the R-squares of items cannot be summed to obtain total variance explained if the factors are correlated. Of course in my model the factors were free to correlate and are in fact strongly correlated. Does that mean I cannot add up the R-square values to obtain total variance explained?
 Bengt O. Muthen posted on Saturday, December 22, 2018 - 7:28 am
The reviewer's opinions sound a bit out of date/off target. Variance accounted for is a concept suitable for principal component analysis (PCA) where this is the primary goal and the uncorrelated components make it easy to add up component contributions. For EFA it is not the primary goal - explaining correlations is. Nevertheless, with orthogonal factors you can mention how the factors contribute explained variance - it is a descriptive of the factor model even though not the goal. Maybe that's your way to appease the reviewer.

Sensitivity and specificity are defined with respect to a classification, but I don't know that you have that situation.
 Sheung-Tak Cheng posted on Sunday, December 23, 2018 - 10:06 pm
Dear Dr. Muthen,

First of all, Merry X'mas to you.

Cameron McIntosh responded to my SEMNET message and gave me some formulas for calculating AIC and BIC for WLSMV. To do so, I would need the "sum of squared residuals" and the "minimum value of the fitting function" (the discrepancy between the observed and model-implied moments). Where may I find these values in the Mplus output? Many thanks for your help.
 Bengt O. Muthen posted on Monday, December 24, 2018 - 9:38 am
The minimum function value is printed in the latest versions of Mplus. The residuals are obtained by the Output option Residual.
 Sheung-Tak Cheng posted on Monday, December 24, 2018 - 12:22 pm
I have version 8. But I cannot find the words "minimum" or "minimum function value" anywhere in the output. Can you please tell me where to locate this value?

For the sum of squared residuals, do I take the residual variance of each item, square it, and then add them up? That's it?
 Bengt O. Muthen posted on Monday, December 24, 2018 - 12:57 pm
Version 8.2 says:

Optimum Function Value for Weighted Least-Squares Estimator

Value 0.24587142D-02

But you can use 8.0 that you have if you request Tech5 and then look at the left column's function value for the last iteration.

The residuals themselves have to be squared and summed up.
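[To make the squaring-and-summing concrete, here is a Python sketch. The residual matrix below is invented for illustration, and summing over the unique off-diagonal elements is an assumption; exactly which residuals enter the AIC/BIC formulas should be checked against Cam's SEMNET post.]

```python
# Hypothetical residual correlation matrix for 4 indicators:
# observed minus model-implied correlations, as printed by
# OUTPUT: RESIDUAL.  Values are made up for illustration.
residuals = [
    [ 0.000,  0.012, -0.031,  0.008],
    [ 0.012,  0.000,  0.024, -0.015],
    [-0.031,  0.024,  0.000,  0.019],
    [ 0.008, -0.015,  0.019,  0.000],
]

# Sum of squared residuals over the unique (lower-triangular)
# off-diagonal elements, counting each residual once.
ssr = sum(residuals[i][j] ** 2
          for i in range(len(residuals))
          for j in range(i))
# 0.002331 for these made-up values
```

Note that this sums the *residuals* (observed-minus-fitted moments), not the residual variances of the items, which is a different quantity.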

Note however that as Cam said on SEMNET, this fit measure is not well investigated (or universally accepted; at least not yet).
 Sheung-Tak Cheng posted on Monday, December 24, 2018 - 3:00 pm
Many thanks indeed for the guidance. I intend to use the AIC as a last resort, and even if I do, I do not intend to report it in the paper itself for the reason you mentioned. It will only be calculated as another reference for the reviewer. It is comforting to find that so far, among all of the SEMNET commentaries, not one has come up with an established method to address the reviewer's comment. Under the circumstances, we have to go for the most plausible conclusion, which is clear to me. Merry X'mas to you again!
 Sheung-Tak Cheng posted on Thursday, December 27, 2018 - 9:53 pm
Dear Dr. Muthen, sorry to bother you again. I found that the sum of the R-squares across items plus the sum of the residual variances equals the total number of items. So does that mean the sum of R-squares = total variance explained? But I thought the R-squares cannot be added up in models with correlated factors. Another way of putting my question is: Is the sum of the residual variances really the total residual?
 Bengt O. Muthen posted on Sunday, December 30, 2018 - 1:07 pm
Q1 and Q2: Yes. This overall total variance explained is the same for uncorrelated and correlated factors - they are just two different rotations of the same "explained" Lambda*Psi*Lambda' part of the estimated covariance matrix.

What you can't do is to apportion the variance due to each correlated factor.
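[Bengt's point can be checked numerically. The loading matrix Lambda and factor covariance matrix Psi below are invented for illustration: the explained part Lambda*Psi*Lambda' (whose diagonal holds the item R-squares for standardized items) is identical before and after rotation, so the total variance explained is rotation-invariant, even though the per-factor split is not.]

```python
import math

def matmul(a, b):
    """Plain-Python matrix product of nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

# Hypothetical standardized loadings: 4 items, 2 correlated factors
Lambda = [[0.8, 0.0],
          [0.7, 0.0],
          [0.0, 0.6],
          [0.0, 0.9]]
Psi = [[1.0, 0.5],      # unit factor variances, correlation .5
       [0.5, 1.0]]

explained = matmul(matmul(Lambda, Psi), transpose(Lambda))
r_squares = [explained[i][i] for i in range(len(Lambda))]
total_explained = trace(explained)          # 2.30 for these values

# With standardized items, residual variances are 1 - R-square, so
# R-squares plus residual variances add up to the number of items.
assert abs(total_explained + sum(1 - r for r in r_squares)
           - len(Lambda)) < 1e-12

# Rotate: Lambda* = Lambda T, Psi* = T' Psi T for orthogonal T.
# Then Lambda* Psi* Lambda*' = Lambda Psi Lambda' exactly.
t = math.pi / 6
T = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
Lambda_rot = matmul(Lambda, T)
Psi_rot = matmul(matmul(transpose(T), Psi), T)
explained_rot = matmul(matmul(Lambda_rot, Psi_rot),
                       transpose(Lambda_rot))
assert abs(trace(explained_rot) - total_explained) < 1e-9
```

The rotated loadings and factor covariances look quite different, but the explained covariance matrix, and hence the summed R-squares, do not change; what is lost with correlated factors is only the ability to attribute a share of that total to each factor separately.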