I would really appreciate help with a paper in which alternative measurement models were compared using the modified chi-square difference test for the WLSMV estimator. The reviewer argued that the chi-square difference test is biased by the large sample size (N = 600) and asked how large a chi-square difference is meaningful. How may this question be answered? And are there alternatives to the chi-square difference test for the WLSMV estimator? Just for information, all chi-square difference tests had p < .001.
This is a general chi-square test issue, not one specific to WLSMV. The chi-square test is sensitive to small deviations from the model when the sample size is not small. You may want to discuss this on SEMNET. My take is to do an approximate check of how sensitive the test is in your data-model situation. You can use modification indices (MODINDICES) to see which parameters need to be freed to get a reasonable chi-square fit, and then see how much your key parameters have changed. If they have changed only in substantively ignorable ways, you could surmise that your original model was fine for practical purposes and that the chi-square test was oversensitive. Not everyone would agree with this approach, however.
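The sample-size sensitivity can be seen directly from the fact that the chi-square test statistic is roughly (N - 1) times the minimized fit-function value. A minimal sketch (with an invented fit-function value and degrees of freedom, not taken from the poster's Mplus output) shows how the same small misfit produces very different p-values at different N:

```python
# Illustration of chi-square sample-size sensitivity:
# the test statistic is approximately (N - 1) * F_min, so a fixed
# small misfit F_min becomes "significant" once N is large enough.
from scipy.stats import chi2

F_min = 0.05   # hypothetical minimized fit-function value (small misfit)
df = 10        # hypothetical model degrees of freedom

for N in (100, 200, 600):
    stat = (N - 1) * F_min
    p = chi2.sf(stat, df)
    print(f"N={N:4d}  chi2={stat:6.2f}  p={p:.4f}")
```

With these invented numbers the misfit is comfortably nonsignificant at N = 100 but highly significant at N = 600, which is the oversensitivity the reply describes.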
Dear Linda, Thank you very much for your response. I dealt with the issue by randomly selecting one-third of the sample so that N became ~200 and re-ran the analysis - the results were the same and all chi-square difference ps were .0000 on the output. I submitted the manuscript and the reviewer came back with the following comment:
"I am afraid that the authors are not understanding what I am asking. Forget about the statistical tests. Statistical significance is not the issue. I am asking [how] much better is 3 factors versus 2 factors, and 2 factors versus 1 factor in terms of variance accounted for or sensitivity or specificity?"
What might sensitivity or specificity mean when evaluating the relative fit of competing models? As for the point on variance explained, do I add up the column labeled "R-square" in the output and divide the sum by the total number of indicators (assuming each indicator's variance is 1)? Many thanks in advance for your reply.
I have sent the message to the SEMNET listserv and am waiting for replies. For now, may I get a quick opinion about the sum of R-squares? In the EFA situation, my understanding is that the R-squares of items cannot be summed to obtain total variance explained if the factors are correlated. Of course, in my model the factors were free to correlate and are in fact strongly correlated. Hence I cannot add up the R-square values to obtain total variance explained?
The reviewer's opinions sound a bit out of date/off target. Variance accounted for is a concept suited to principal component analysis (PCA), where it is the primary goal and the uncorrelated components make it easy to add up component contributions. For EFA it is not the primary goal - explaining the correlations is. Nevertheless, with orthogonal factors you can mention how the factors contribute explained variance - it is a descriptive feature of the factor model even though not its goal. Maybe that's your way to appease the reviewer.
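To make the orthogonal-factors case concrete: with uncorrelated, standardized factors and standardized indicators, each factor's contribution to explained variance is its column sum of squared loadings, and these contributions add up. A small sketch with an invented two-factor loading matrix:

```python
# Hypothetical standardized loading matrix for an orthogonal two-factor
# model (numbers invented for illustration only). With uncorrelated
# factors, per-factor contributions are column sums of squared loadings
# and they sum to the total proportion of variance explained.
import numpy as np

Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.6, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.7],
                   [0.0, 0.6]])
p = Lambda.shape[0]                           # number of indicators

per_factor = (Lambda ** 2).sum(axis=0) / p    # each factor's share of total variance
total = per_factor.sum()                      # total proportion explained
print(per_factor, total)
```

This additive decomposition is exactly what breaks down once the factors are allowed to correlate, which is the issue raised in the next posts.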
Sensitivity and specificity are defined with respect to a classification, but I don't know that you have that situation.
Cameron McIntosh responded to my SEMNET message and gave me some formulas for calculating AIC and BIC for WLSMV. To do so, I would need the "sum of squared residuals" and the "minimum value of the fitting function" (the discrepancy between the observed and model-implied moments). Where may I find these values in the Mplus output? Many thanks for your help.
Many thanks indeed for the guide. I intend to use the AIC as a last resort, and even if I do, I do not intend to report it in the paper itself, for the reason you mentioned. It will only be calculated as another reference for the reviewer. It is comforting to find that so far, among all the SEMNET commentaries, not one has come up with an established method to address the reviewer's comment. Under the circumstances, we have to go for the most plausible conclusion, which is clear to me. Merry X'mas to you again!
Dear Dr. Muthen, sorry to bother you again. I found that the sum of the R-squares across items plus the sum of the residual variances equals the total number of items. So does that mean the sum of the R-squares is the total variance explained? But I thought the R-squares could not be added up in models with correlated factors. Another way of putting my question is: Is the sum of the residual variances really the total residual variance?
Q1 and Q2: Yes. This overall total variance explained is the same for uncorrelated and correlated factors - they are just two different rotations of the same "explained" Lambda*Psi*Lambda' part of the estimated covariance matrix.
What you can't do is apportion the explained variance to each of the correlated factors.
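The rotation argument above can be checked numerically. The explained part of the covariance matrix, Lambda*Psi*Lambda', is unchanged by any invertible transformation of the factor solution, so its trace (the sum of the item R-squares for standardized indicators) is the same total for correlated and uncorrelated factors, even though column-wise "per-factor" sums change. A sketch with invented numbers:

```python
# Numerical check (invented loadings and factor correlation): an oblique
# transformation T leaves Lambda @ Psi @ Lambda.T, and hence the total
# explained variance, exactly unchanged, so the total is well defined
# even though a per-factor apportionment is not.
import numpy as np

Lambda = np.array([[0.8, 0.1],
                   [0.7, 0.2],
                   [0.1, 0.8],
                   [0.2, 0.7]])
Psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])       # correlated factors

T = np.array([[1.0, 0.3],          # any invertible transformation
              [0.0, 1.0]])
Lambda_rot = Lambda @ np.linalg.inv(T)
Psi_rot = T @ Psi @ T.T

explained = np.trace(Lambda @ Psi @ Lambda.T)
explained_rot = np.trace(Lambda_rot @ Psi_rot @ Lambda_rot.T)
print(explained, explained_rot)    # identical totals
```

Because Lambda_rot @ Psi_rot @ Lambda_rot.T algebraically reduces to Lambda @ Psi @ Lambda.T, the two totals agree to machine precision, which is the point of the reply: the overall total is rotation-invariant, the per-factor split is not.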