I have a structural model where y = beta*x + lambda*u and z = delta*u + error, where y is an observed outcome, the x's are observed sociodemographic characteristics, and the u's are latent variables measured by the z's. I have 4 latent variables u, and the correlations among the u's range from .02 to .70. If I free the correlations among the u's that are higher than 0.60 and fix the remaining ones to zero, can I rely on the estimates? Do I have collinearity problems?
You don't necessarily have collinearity problems, but it is an empirical matter to check, perhaps by adding one factor (and its indicators) at a time into the y equation. You could also try to lower the factor correlations, perhaps by allowing a less restrictive loading matrix and residual covariance structure. You may also want to consult SEMNET with this type of question.
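As a rough illustration of the "one factor at a time" approach (the variable names u1, z1-z3, x1, and x2 are placeholders, not taken from the original question), the MODEL command might start as:

```
MODEL:
  u1 BY z1-z3;       ! first latent variable and its indicators
  y ON x1 x2 u1;     ! y regressed on covariates and u1 only
! at the next step, add the second factor and its indicators:
!   u2 BY z4-z6;
!   y ON u2;
```

Comparing the estimates and standard errors across steps shows whether adding a highly correlated factor destabilizes the solution.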
Anonymous posted on Thursday, June 27, 2002 - 12:02 pm
I've run an SEM with categorical mediating variables, which I've had Mplus treat as latent continuous variables. The correlations between several of the error terms are small and insignificant, and I'm considering fixing them to zero.
How strong an assumption is it that the correlations between error terms for latent variables are zero? Should the error terms always be allowed to correlate freely if the variables have been modeled with the same x variables?
bmuthen posted on Thursday, June 27, 2002 - 12:22 pm
Unless theory postulates otherwise, my personal feeling is that residual correlations among dependent variables should be included. If they are insignificant, fine; let's keep them anyway and report them as insignificant, meaning that any left-out predictors are uncorrelated.
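In Mplus syntax, such a residual correlation between two dependent variables is freed with a WITH statement (f1, f2, x1, and x2 are illustrative names, not from the original post):

```
  f1 ON x1 x2;
  f2 ON x1 x2;
  f1 WITH f2;     ! residual covariance between the two DVs
```

If the WITH parameter is estimated as nonsignificant, it can simply be reported as such rather than fixed to zero.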
Mary Campa posted on Wednesday, March 23, 2005 - 11:51 am
Is there any paper that discusses including residual correlations among DVs, covering both the justification and exactly what is happening?
I have a situation where I have included the residual correlations among DVs, and even though they are not significant, including them changes the standard errors of the estimates dramatically.
I have three DVs, two of which mediate the third, and it is these regression standard errors that are changing.
Any help/reference would be greatly appreciated.
BMuthen posted on Wednesday, March 23, 2005 - 4:46 pm
No reference comes to mind although I'm sure books like Bollen's SEM book discuss correlated errors. Generally speaking, allowing for residual correlations channels some of the correlations between variables through the residuals and therefore can alter the regression relationships between the variables and their standard errors. In a situation like you describe, it appears that your model is sensitive to minor changes and may be less likely to be replicated in a new sample.
I am running an SEM with continuous latent variables (several IVs and 3 DVs, where the first two DVs are mediators). Two of the IVs are highly correlated, which seems to cause the estimates to go haywire. Is this correlation also causing the issues with the variances and residuals, as well as the undefined R-square results?
Variances
    T                  1.000      0.000    999.000    999.000
    C                  1.000      0.000    999.000    999.000
    R                  1.000      0.000    999.000    999.000

Residual Variances
    ETH              999.000    999.000    999.000    999.000
    TR                 0.351      0.284      1.237      0.216
    INTER              0.556      0.108      5.138      0.000

R-SQUARE
    ETH            Undefined   0.13257E+01
    TR                 0.649      0.284      2.290      0.022
    INTER              0.444      0.108      4.101      0.000
I ran an SEM model including one latent variable (F1) and got the following warning: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE "y1".
Could you help me understand what is going on here? The model is as follows.
x1 WITH x2@0;
F1@1;
F1 WITH x1@0;
F1 WITH x2@0;
F1 BY y1 y2 (1);
y1 ON x1 x2;
y2 ON x1 x2;
y3 ON x1 x2 y2;
y4 ON x1 x2 y1 y2 F1;
x1, x2, y1, y3, and y4 are all observed continuous variables, while y2 is an observed dummy variable.
See the results in your output. I suspect y1 has a negative residual variance which makes the model inadmissible. You need to change the model. If this is not it, please send the full output and your license number to email@example.com.
I added a latent variable to my model to confirm convergent validity of a second-order construct. Now I get the same message that "THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE." Indeed, two correlations have absolute values greater than one:
MAS2 WITH
    FEM               -1.781      0.761     -2.339      0.019

ACHV WITH
    FEM                0.877      0.490      1.791      0.073
    MAS2               3.742      1.123      3.332      0.001
I have tried making minor changes to the model, but I feel like I'm shooting in the dark. What ordinarily causes this to happen? Do you have any general recommendations or references that would guide me to a solution?
Hi, I'm unsure whether I have specified my model correctly. I have three latent variables x, y, and z. I regress y and x on z and want to estimate the residual covariance between y and x after accounting for the common variance they share with z.
I've specified the model as follows:

y ON z;
x ON z;
y WITH x;
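For context, a fuller sketch of this specification including the measurement models might look as follows (assuming three illustrative indicators per factor; the indicator names are not from the original post):

```
MODEL:
  z BY z1-z3;
  x BY x1-x3;
  y BY y1-y3;
  y ON z;
  x ON z;
  y WITH x;    ! residual covariance of y and x, net of z
```

With y and x both regressed on z, the WITH statement estimates their residual covariance, i.e., the association remaining after the common variance with z has been accounted for.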
Hi, I've got an additional question about residuals. I have four indicators on the predictor side which all load on one latent variable F1: three tests (a, b, and c) and HSGPA (d). On the criterion side I have three indicators, all grades (x, y, and z), which load on one latent variable F2.
I want to regress F2 on F1. I expect an additional "methods effect" from indicator d which I would estimate as a path from d's residual to F2.
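One common way to set up a path from an indicator's residual in Mplus is a single-indicator "residual factor" (a sketch only; res is an illustrative name, and the zero constraints reflect assumptions of this setup rather than the original post):

```
  F1 BY a b c d;
  F2 BY x y z;
  res BY d@1;       ! residual factor carrying d's unique variance
  d@0;              ! fix d's residual variance at zero
  res WITH F1@0;    ! keep the residual factor orthogonal to F1
  F2 ON F1 res;     ! method-effect path from d's residual to F2
```

Fixing d's residual variance at zero moves that variance into res, so the F2 ON res path captures the method effect of HSGPA beyond what F1 explains.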
The path for res ON F1 is .183 (unstandardized) in the model above and .213 (unstandardized) in a model where F2 is only regressed on F1.
What confuses me even more is that the path for F2 ON F1 changes a lot depending on whether I add res as an additional predictor or not. As I understand it, the variance of d that is explained by F1 is the same in both models, so F1 is defined in the same way in both models. What I add in model 1 is the path from the residual of d to F2. Why does that change the relation between F1 and F2? Why would the path for F2 ON F1 get smaller when F2 ON res is added, actually becoming smaller than the latter?
Have I specified something in a wrong way when establishing my model? Or am I overlooking some important point? Thanks for your help, it's very much appreciated!