I am running a CFA with categorical observed variables. However, one of the residual variances for a categorical observed indicator is negative. Could you please let me know what might have gone wrong?
The fit indices are:

Chi-Square Test of Model Fit
  Value                0.511*
  Degrees of Freedom   2**
  P-Value              0.7744
CFI/TLI
You need to make some adjustment to your model to avoid the negative residual variance. Perhaps it is overfitted in some way.
Daniel posted on Tuesday, March 01, 2005 - 5:23 pm
Linda or Bengt, I am running a multiple-population LGM, divided by genotype (two groups). When I include the covariates, I get negative residual variances for two of my observed ordered categorical variables (four categories) in one of my groups. I ran the same model with theta parameterization, and neither of the negative residual variances was significant. I tried to lower the factor loadings in the Delta parameterization model, to get rid of the negative residual variances, but so far it hasn't worked. Is there a problem publishing papers using Theta instead of Delta parameterization? What is the big difference between the two parameterizations except for the ability to get residuals with the Theta parameterization?
bmuthen posted on Tuesday, March 01, 2005 - 11:43 pm
I think you are saying that you get negative (albeit not significant) residuals with the Theta parameterization. Then the solution would be to fix those at zero. You don't have that flexibility in the Delta parameterization. The delta parameterization can be advantageous in some settings mentioned in conjunction with the examples in web note #4, but you shouldn't hesitate to use the Theta parameterization.
Daniel posted on Wednesday, March 02, 2005 - 11:45 am
Yes Bengt, that is exactly what I was asking. Thank you very much.
I am doing a CFA with categorical observed variables. Some of the residual variances were negative when I used the default parameterization, so I tried the THETA parameterization. However, I do not see the residual variances and their standard errors in the output, so I don't know how to tell whether they are significant. Also, in this parameterization I am getting a few threshold values (including for the variable that had a negative residual variance in the Delta parameterization) that are more than 10 times higher than the others, and the variance of one of my latent variables is 100 times higher than the others. My questions: 1) How do I get the residual variances in the output to see if they are significant? 2) Are the high thresholds and the high latent-variable variance a concern?
In a single group analysis with the Theta parameterization, the residual variances are fixed to one. I think you should go back to the Delta parameterization and change your model so that you do not obtain negative residual variances. If your model is not stable, changing the parameterization is not the solution.
By changing the model, do you mean dropping items or changing the factor loadings, as you have mentioned in other posts? I have items in my model with very low endorsement; does this cause instability? Otherwise, what could cause a model to be unstable?
Sanjoy posted on Saturday, June 11, 2005 - 3:15 am
This is our model (the D3i's and D43, D53, D63 are 5-point ordinal variables, BID1A is 0/1, and the Xi's share some common elements), and we are doing a single-group analysis using the DELTA parameterization. (Using the THETA parameterization, we were getting "*****" for some of the SE values; shifting to DELTA avoids that, though the results do not otherwise improve.)
RI by D31-D33;
BEN by D43 D53 D63;
BID1A on RI BEN X1;
RI on BEN X2;
BEN on RI X1;
Now under DELTA parameterization I got this....
Residual Variances    Estimate    S.E.    Est./S.E.
  RI                  0.285       0.204   1.397
  BEN                 1.132       1.592   0.711
Q1. Under the Delta parameterization we calculate theta (the residual variance) as a remainder: theta = scale factor − (factor loading)^2 × (variance of the latent variable), where the scale factor is assumed equal to 1. Now, since BID1A is a single-indicator variable, its factor variance is set to 1, isn't it? And so will be its factor loading. In that case the residual variance should be zero; however, I am getting it as negative. Can you explain this, please?
Q2. Could you please suggest why we are getting a negative residual variance, and some possible remedies? I don't want to change the model, because the same model fits a different data set very nicely, and for both data sets we use exactly the same survey format.
Q3. What do "Undefined" and the value following it indicate? I got the same "Undefined" answer under the THETA parameterization.
Thanks and regards
bmuthen posted on Saturday, June 11, 2005 - 12:10 pm
See the section "The Scaling Parameters of Delta" in Appendix 2 of the technical appendices on the Mplus web site for a thorough exposition of Delta calculations. A negative residual variance suggests model misspecification. Undefined R2 means that a residual variance is negative.
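The arithmetic behind Q1 and the "Undefined" R-square can be sketched in a few lines. This is a hedged illustration only: the loadings and factor variances below are made-up numbers, not output from anyone's model, and the point is simply that with the scale factor fixed at 1, theta is the remainder 1 − loading² × psi, so it goes negative exactly when the explained variance exceeds the total y* variance (equivalently, R² exceeds 1, which is what "Undefined" flags).

```python
# Hedged sketch of the Delta-parameterization remainder arithmetic described
# above. All numeric values are illustrative assumptions.

def residual_variance_delta(loading, factor_var, scale=1.0):
    """theta as a remainder: Var(y*) = 1/scale^2 = loading^2 * factor_var + theta."""
    total_var = 1.0 / scale**2
    return total_var - loading**2 * factor_var

def r_squared(loading, factor_var, scale=1.0):
    """R^2 for y*: explained variance over total variance."""
    total_var = 1.0 / scale**2
    return loading**2 * factor_var / total_var

# Single indicator with loading and factor variance both fixed at 1: theta = 0
print(residual_variance_delta(1.0, 1.0))            # 0.0

# If loading^2 * psi exceeds Var(y*), theta goes negative and R^2 exceeds 1
print(round(residual_variance_delta(1.1, 1.0), 2))  # -0.21
print(round(r_squared(1.1, 1.0), 2))                # 1.21
```

In an estimated model the "loading" here is a free parameter, which is why sampling variability or misspecification can push the implied remainder below zero even though a population value cannot be negative.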
Thank you, professor. I read the relevant portion of the Mplus Technical Appendix, and things are becoming much clearer now. I have three very quick questions and a query.
Q1. "delta" the scaling matrix is a "Diagonal Matrix" ... isn't it? ...then why do we need "diagonal of delta" ...it makes thing slightly confusing
Q2. The product delta * Var[Y* | X] * delta gives us the correlation matrix (in the sense that the off-diagonal elements then represent correlations). But your equation 39 states it as the covariance matrix of the scaled Y*. Are they the same?
Q3. In equation 45 you mention Var[Y*], i.e., the unconditional variance of Y*. Are we using the unconditional variance in any part of the analysis, given the premise that Mplus works with conditional normality?
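Q1 and Q2 above can be answered with a small numeric sketch (the covariance matrix here is an illustrative assumption, not taken from the appendix): delta is indeed diagonal, the "diagonal of delta" is just the vector of scale factors that sits on its diagonal, and when those scale factors are the reciprocal y* standard deviations, delta * Var * delta has unit diagonal, so it is simultaneously the covariance matrix of the scaled Y* and a correlation matrix.

```python
import numpy as np

# Made-up conditional covariance matrix Var(y*|x) for illustration only
cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])

# The "diagonal of delta" is a vector of scale factors; delta itself is the
# diagonal matrix built from that vector (here, 1 / sd of each y*)
delta_diag = 1.0 / np.sqrt(np.diag(cov))
delta = np.diag(delta_diag)

# delta * Var * delta has ones on the diagonal: a covariance matrix of the
# scaled y* that is also a correlation matrix
corr = delta @ cov @ delta
print(corr[0, 1])  # 0.6, i.e. 1.2 / (2.0 * 1.0)
```

So there is no conflict between the two descriptions: scaling a covariance matrix to unit variances yields a matrix that is both.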
Queries regarding assumption about error distribution!
1."THETA" our (p*p matrix) associated with measurement error (epsilon) AND "PSI" our (m*m matrix) associated with Structural equation error (zeta)- aren't both of them diagonal matrix?
2. Can we accommodate heteroscedasticity (coming from individual heterogeneity) in Mplus?
Thanks and regards
bmuthen posted on Wednesday, June 15, 2005 - 1:56 pm
I will answer questions 1-3 when I return. Neither Theta nor Psi are diagonal matrices. Yes, heteroscedasticity can be handled using random slopes.
Sanjoy posted on Wednesday, June 15, 2005 - 8:14 pm
Thank you, Prof. The reason I asked Q2 is that some authors have raised concerns about the use of correlations instead of covariances. It is true that the correlation matrix circumvents problems with the interpretation of units of measurement, etc. However, quoting Prof. Joreskog: "analysis of a correlation matrix is problematic in several ways; this may (a) modify the model being analyzed, (b) produce incorrect chi-square and other goodness-of-fit values, and (c) give incorrect SEs. ... To obtain correct asymptotic SEs in LISREL for a correlation structure when the correlation matrix is analyzed, the WLS matrix must be used" (Joreskog, pp. 400-401, Quality and Quantity, 1990(24)). I'm sure these issues have been dealt with thoroughly in Mplus, but I'm afraid I may have missed that.
Besides, from your June 8th explanation (to Jaume Aguado Carné's question): "both WLSMV and LISREL's DWLS have similar philosophies, but use different asymptotic approximations in estimating the asymptotic covariance matrix of the estimated sample statistics used to fit the model (i.e., the weight matrix)." I was wondering about the reasons behind these differences; I suppose one is that Mplus is based on conditional normality given X, while LISREL works with both Y* and X.
Thanks and regards
BMuthen posted on Sunday, June 19, 2005 - 11:36 am
Analyses of correlation matrices can be problematic, both because they may distort parameter estimates and because special standard error computations are necessary. Mplus has several features for doing these analyses correctly. With categorical outcomes and either multiple groups or multiple time points, the problems of analyzing correlation matrices are circumvented by allowing different groups or different time points to differ in their y* variances through the use of the Delta matrix. For an early statement of this, see my 1981 Psychometrika article with Christoffersson. See also my 1984 Psychometrika article, which describes the chi-square and standard error computations for WLS.
I cannot comment on the differences between Mplus and LISREL in terms of asymptotics because I have not delved into how LISREL does this.
For modeling differences between Mplus and Lisrel for categorical outcomes, see Web Note 4.
Sanjoy posted on Tuesday, June 21, 2005 - 10:59 pm