I have been running a series of CFAs on a behavioral assessment instrument that uses a Likert-type rating scale (0-3). The instrument is designed to screen for ADHD, oppositional defiant disorder, and conduct disorder. I have a sample of 2,996. There are 29 items, four first-order factors, and two second-order factors. The polychoric correlation matrix printed in the output shows that almost all of the correlations fall within the .50 to .92 range. The number above .85 is approximately 35 (quick count).
I have been running these models using the default WLSMV estimator. However, since I cannot use the robust chi-square for a chi-square difference test, I decided to try the models using the WLS estimator.
When I ran the first WLS model, the results were disturbing. All of the cells in the "Residuals for Covariances/Correlations/Residual Correlations" table are negative. One of the residual variances (from the R-square table) is negative.
Is the difference between the WLSMV and WLS results due to differences in the way the asymptotic covariance matrix is used in the model estimation? If so, can you provide a brief description (probably wishful thinking on my part) of what causes this? Could you also provide a reference or two? Thanks for your help.
WLSMV uses only the diagonal of the weight matrix in the estimation, whereas WLS uses the full weight matrix. Both WLS and WLSMV use the full weight matrix to compute standard errors and the chi-square. Neither estimator uses a fitting function that attempts to minimize the residuals. But because WLSMV uses the diagonal weight matrix to obtain the estimates, the residuals tend to be closer to zero than with the WLS estimator. A reference is Muthén, du Toit, and Spisic. See the Mplus Discussion references.
Anonymous posted on Wednesday, March 21, 2001 - 4:51 am
For ordinal data, in Muthén's three-step estimation, are the weights (W) of the fit function the same for WLS and WLSMV? I know that the formulas for the asymptotic covariance of the estimator and for the chi-square are not the same in WLSMV and WLS, but are their weights the same? In WLSM and WLSMV the estimates are the same, but are the estimates the same in WLS and WLSMV?
The weight matrix itself is the same for WLS, WLSM, and WLSMV. It is how it is used that differs. For parameter estimates, WLS uses the entire weight matrix, while WLSM and WLSMV use only the diagonal of the weight matrix. For standard errors and tests of fit, WLS, WLSM, and WLSMV use the entire weight matrix, but they use it in different ways.
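As a rough sketch of that contrast (in Python, with made-up numbers; the function names and toy values are mine, not Mplus's): WLS minimizes a quadratic form in the full weight matrix, while a WLSMV-style estimator uses only its diagonal.

```python
import numpy as np

# Hypothetical sketch: s = sample (e.g. polychoric) correlations stacked in
# a vector, sigma = model-implied correlations, W = asymptotic covariance
# matrix of s (the "weight matrix").

def f_wls(s, sigma, W):
    """Full-weight-matrix WLS fit function: (s - sigma)' W^{-1} (s - sigma)."""
    r = s - sigma
    return float(r @ np.linalg.solve(W, r))

def f_dwls(s, sigma, W):
    """WLSMV-style fit: only the diagonal of W is used for estimation."""
    r = s - sigma
    return float(np.sum(r**2 / np.diag(W)))

# Toy example with three correlations
s = np.array([0.62, 0.55, 0.48])
sigma = np.array([0.60, 0.57, 0.50])
W = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.012, 0.003],
              [0.001, 0.003, 0.011]])
print(f_wls(s, sigma, W), f_dwls(s, sigma, W))
```

The off-diagonal elements of W are what make the full-matrix fit pull estimates in directions that can leave larger (here, systematically signed) residuals; with a diagonal W the two fit functions coincide.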
Scott Grey posted on Tuesday, August 21, 2001 - 8:55 pm
I have not been able to locate the reference given for this topic (Muthén, du Toit, and Spisic). Has it been published somewhere other than Psychometrika, or is it still in press? If so, where can we obtain a copy?
Anonymous posted on Thursday, August 14, 2003 - 6:52 pm
As part of this discussion, you noted a reference to a paper by Muthen, du Toit and Spisic. This is listed as a 1997 paper that has been "accepted for publication" at Psychometrika. However, after a careful search, I've never been able to find this in print.
Could you please tell us the current status of this paper? Thank you very much.
You can get a copy of this paper by emailing firstname.lastname@example.org. It was never revised and resubmitted.
Anonymous posted on Thursday, March 24, 2005 - 8:05 am
One question, does WLSMV use the polychoric correlation matrix if there are categorical variables assigned in the program?
bmuthen posted on Thursday, March 24, 2005 - 1:40 pm
Yes, unless you have covariates in the model in which case the better approach of probit regression-based model fitting is used instead of the correlation-based approach (see e.g. the Mplus web site for Muthen, 1984 in Psychometrika).
On page 498 of your 1995 Psychometrika article, equation 35 is followed by a line saying "that is the asymptotic variance matrix of (22) of M". I can understand up to this point, that equation 35 of your 1995 article corresponds to equation 22 of the 1984 Psychometrika article. But what I cannot understand is the following:
1. Don't we start the third stage of your proposed three-step estimation from equation 16 of the 1984 article, rather than equation 23?
2. Your 1995 article says equation 22 of your 1984 article is the asymptotic variance; then what does equation 23 stand for (assuming a single group)?
3. Now coming to your 1997 article exclusively ... please correct me if I have misunderstood you:
I. "sigma hat" is actually equation 48. II. We use this value of "sigma hat" in equation 10. III. The W's of equation 10 are the diagonal elements of "sigma hat"; this is what you have newly proposed. IV. The standard errors of the entire parameter vector will thus be taken from "A inverse" multiplied by what is written on the RHS of equation 10.
Oops ... I guess now I get it: we need to recover the structural equation parameters from the reduced form (which effectively finishes with equation 22 of your 1984 paper) ... am I right?
It looks like the way it works is quite similar to the 3SLS that we usually use in econometrics. Of course, the key difference, that your scenario deals with the measurement error portion unlike the regular econometrics setup, makes your work more realistic, and hence more difficult to solve.
Thank you, Professor ... for those papers, your software, and more importantly for your online help, which is simply stupendous.
bmuthen posted on Monday, April 25, 2005 - 4:06 pm
Sounds like you got it. Amemiya's Advanced Econometrics book discusses related estimation approaches. See also page 318, referring to Lung-Fei Lee's work.
Sanjoy posted on Tuesday, April 26, 2005 - 2:50 pm
After searching for Dr. Lung-Fei Lee's work, I found this helpful document available online.
I'm going to look at Amemiya's book for a better understanding ... thanks for your advice once again.
bmuthen posted on Tuesday, April 26, 2005 - 3:06 pm
Oh yes, I forgot that book, which is on my shelf - that is a good reference.
jad posted on Wednesday, January 18, 2006 - 12:40 pm
I want to know what Muthén's three-step estimation for ordinal data is, as mentioned on this page in the message posted by Anonymous on Wednesday, March 21, 2001 - 4:51 am. Can you give me a reference? I want to do a CFA for ordinal data. Thanks.
Excuse me, my question wasn't clear. My question is about the first and second stages (ML), and about sigma1 and sigma3, and how I can analyze the polychoric correlations. Can I obtain them in my output? This is the first time I have used Mplus, and I want to follow the same methodology used in your article Muthén, B. (1984).
You don't have to do anything to use this methodology. Mplus automatically uses it. If you want the polychoric correlations, ask for SAMPSTAT in the OUTPUT command. You can also save them. See the SAVEDATA command.
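For intuition only, the first two estimation stages behind those sample polychoric correlations can be sketched in Python for a single pair of ordinal items: stage 1 estimates thresholds from the marginal proportions, and stage 2 estimates the polychoric correlation by maximizing the bivariate normal likelihood for the two-way table. This is a rough sketch with a made-up 2x2 table, not the actual Mplus implementation.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

BIG = 8.0  # effective infinity on the standard normal scale

def thresholds(counts):
    """Stage 1: thresholds from cumulative marginal proportions."""
    p = np.cumsum(counts) / np.sum(counts)
    return norm.ppf(p[:-1])  # drop the final cumulative proportion of 1.0

def cell_prob(rho, a_lo, a_hi, b_lo, b_hi):
    """P(a_lo < y1* <= a_hi, b_lo < y2* <= b_hi) under a bivariate normal."""
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return (bvn.cdf([a_hi, b_hi]) - bvn.cdf([a_lo, b_hi])
            - bvn.cdf([a_hi, b_lo]) + bvn.cdf([a_lo, b_lo]))

def polychoric(table):
    """Stage 2: maximize the bivariate normal likelihood over rho."""
    a = np.concatenate([[-BIG], thresholds(table.sum(axis=1)), [BIG]])
    b = np.concatenate([[-BIG], thresholds(table.sum(axis=0)), [BIG]])

    def neg_loglik(rho):
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                if table[i, j] > 0:
                    ll += table[i, j] * np.log(
                        cell_prob(rho, a[i], a[i + 1], b[j], b[j + 1]))
        return -ll

    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99),
                           method="bounded").x

# Hypothetical 2x2 table with a clear positive association
table = np.array([[40, 10],
                  [10, 40]])
print(round(polychoric(table), 2))
```

With 50/50 marginals, both thresholds land at zero and the strong diagonal concentration pushes the estimated correlation toward the high positive range.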
JAD posted on Tuesday, January 24, 2006 - 11:06 am
Thanks. Another question about the same analysis (CFA for categorical indicators): if I have low frequencies in some categories of the factor indicators, can this influence my output? Should I collapse them? I have seen in some messages that I must have at least 5% frequencies in each category. Does this apply to my case?
I hope this is not too basic a question, but what would be the advantage of using WLS over WLSMV or vice versa when my DV is ordinal categorical and I have many factors with ordinal categorical indicators? In other words, why should I use one or the other? Or any other estimators for that matter?
In the following paper, WLS was found to be inferior to WLSMV which is why we use WLSMV as the Mplus default for categorical outcomes:
Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes.
anonymous posted on Wednesday, November 04, 2009 - 8:17 am
I am regressing a latent factor on several continuous covariates. The latent factor is measured by three categorical indicators. Thus, I am using WLSMV. Am I in a position to interpret the unstandardized coefficients as indicating "...a standard deviation unit change in y* per a one unit change in x", since the dependent variable is the continuous latent variable?
Moreover, when I ask Mplus for the standardized coefficients, it reports these without standard errors. I've tried to obtain these standard errors by putting "STDYX" in the OUTPUT command line, but the standard errors still don't appear. I have downloaded the latest version of Mplus, but it makes no difference.
We do not give standard errors for standardized estimates for the weighted least squares estimator when there are covariates in the model. You can use maximum likelihood or you can use MODEL CONSTRAINT to define the standardized coefficients and you will then obtain standard errors. See Example 5.20.
anonymous posted on Thursday, November 05, 2009 - 7:27 am
Thank you, Linda. In the set up I outlined in my previous post, when f1 is regressed on the covariates, is f1 treated as a continuous latent variable? I ask in order to report the appropriate interpretation of the raw coefficients. For instance, the regression coefficient for "cons" is 1.14. Does this mean that as "cons" increases by one unit, there is a 1.14 standard deviation increase in f1?
Factors are continuous variables. When they are dependent variables, simple linear regressions are estimated. The raw linear regression coefficient is the change in the factor for a one unit change in the covariate. The standardized regression coefficient, StdYX, is the change in the factor in factor standard deviation units for a standard deviation change in x.
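Numerically, using the 1.14 raw coefficient from the post above together with made-up standard deviations (the SDs here are purely illustrative), the StdYX conversion is:

```python
# Illustration of the raw-to-StdYX conversion described above.
# The raw coefficient is taken from the post; the SDs are hypothetical.
b = 1.14       # raw coefficient: change in f1 per one unit change in "cons"
sd_x = 0.50    # sample SD of the covariate "cons" (hypothetical)
sd_f1 = 2.00   # model-estimated SD of the factor f1 (hypothetical)

# StdYX: change in f1, in factor SD units, per SD change in "cons"
std_yx = b * sd_x / sd_f1
print(std_yx)  # 0.285
```

So the raw 1.14 is in the factor's own (model-given) metric, not in standard deviation units; only after dividing by the factor SD and scaling by the covariate SD does it become the StdYX value.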
anonymous posted on Thursday, November 05, 2009 - 8:21 am
Thank you, Linda. I still wish to make sense of the change in the factor that occurs for a one unit change in a covariate (using the raw coefficients). Does that change in the factor occur in standard deviation units? Or, is it determined by the scale of the indicator that is fixed to 1.0 in order to give the factor its metric?
I have a question about the differences between using WLSMV and ML with a binary dependent variable. I used WLSMV first, but then used ML to get the standardized coefficients. However, some variables that were not significant in the WLSMV were significant in the ML model. Do you know why this might happen?