I am evaluating several models featuring 3 latent factors (f1, f2, f3) predicting one outcome per model (e.g., y). I have noticed that in many of these models, if I constrain the paths from the factors to the outcome to be equal to one another, the path coefficients are quite small (e.g., unstandardized = .1, standardized = .04) and statistically significant (signifying a small unique effect of each factor on the outcome of interest). However, if I estimate the same models without these equality constraints, the path coefficients are larger (e.g., unstandardized = .3, standardized = .14) but have much larger standard errors than those mentioned above (e.g., .32 vs. .02), which leads to these paths all being non-significant (suggesting no unique effect of any factor on the outcome of interest).
My question concerns the standard errors of path coefficients. As mentioned above, when the paths are freely estimated, the standard errors are much larger than when the paths are constrained to be equal to one another. Is it a general rule that paths constrained to be equal will have smaller standard errors than paths that are freely estimated? If so, why? If not, can anyone give any suggestions as to why this may be happening with my data?
Here is some more background on my data and models which may be helpful: All of the indicators of the latent factors (and most of the outcomes) are binary. The models are being estimated using robust WLS in Mplus (v3). The data (n = 4,000 or so) were collected using a complex sampling design (i.e., weights, clusters, strata) which is being modeled with the application of weight, strata, and cluster variables, and the use of the “complex” command in Mplus.
Thank you in advance, Jim
bmuthen posted on Wednesday, August 10, 2005 - 12:18 pm
Typically SEs will be smaller for models that are more restrictive - if the restrictions fit the data well (otherwise all bets are off). This is because you have a higher ratio of information/parameters. The fact that the point estimates also change, however, might point to model misspecification problems. For data that match a model of equal unstandardized slopes, a model with no such equalities applied should give approximately the same point estimates.
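The "higher ratio of information/parameters" intuition can be sketched with a toy calculation (Python, with made-up numbers, not values from the actual model). Pooling several estimates into one common parameter gives roughly the inverse-variance weighted average, whose standard error is smaller than any of the individual standard errors. This sketch assumes the free estimates are independent; in a real SEM they are correlated, so the constrained estimate and its SE would differ somewhat.

```python
import numpy as np

# Hypothetical free (unconstrained) path estimates and their SEs
# (illustrative numbers only, not from the model in the post)
b = np.array([0.30, 0.25, 0.35])
se = np.array([0.32, 0.30, 0.34])

# Constraining the paths to be equal pools their information:
# the common estimate is roughly the precision-weighted average,
# and its SE shrinks because one parameter now uses all the data.
w = 1.0 / se**2
b_common = np.sum(w * b) / np.sum(w)
se_common = 1.0 / np.sqrt(np.sum(w))

print(b_common)   # close to the precision-weighted mean of the three
print(se_common)  # ~0.18 here, smaller than any individual SE
```

The same logic explains why the constrained SEs (.02 in the post) are so much smaller than the free ones (.32): one parameter is estimated from the information that previously had to support three.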
Thanks for your advice. Could you give any further recommendations on how to locate the source of misspecification (or say whether there is a particular sort of misspecification that would lead to the varying point estimates)?
I was under the impression that the unstandardized constrained parameter estimate reflected a (rough) average of the unstandardized parameters from the corresponding unconstrained model. I was further under the impression that the constrained model did not represent a significant decrement in model fit from the unconstrained model because the unconstrained parameters were "close enough" to the ("averaged") constrained parameter. For example, let's say my unconstrained parameters are -.2, .3, and .2; when I constrain these parameters to be equal, I get an unstandardized estimate of .1 (i.e., the average of the unconstrained parameters, which is again close enough to the unconstrained estimates to not lead to a decrement in model fit). I would appreciate it if you could let me know how off my thinking is in this matter. Thanks so much.
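The averaging intuition in this example can be checked directly (a minimal Python sketch; it assumes the three free estimates have roughly equal standard errors, in which case the constrained estimate is approximately their plain average rather than a precision-weighted one):

```python
# Unconstrained estimates from the example in the post
b_free = [-0.2, 0.3, 0.2]

# With (roughly) equal standard errors, the constrained estimate is
# approximately the plain average of the free estimates.
b_constrained = sum(b_free) / len(b_free)
print(b_constrained)  # ~0.1, matching the example
```

Note that when the free estimates straddle zero like this, the averaged constrained estimate can be small in magnitude yet precisely estimated, which matches the pattern of small-but-significant constrained paths described in the original question.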
bmuthen posted on Thursday, August 11, 2005 - 10:57 am
If your unrestricted model fits reasonably well in terms of fit indices, you could use DIFFTEST to see whether the restricted model fits significantly worse than the unrestricted one. If it does not fit significantly worse and yet the point estimates still differ this much, the results are a bit strange. If the unconstrained, raw estimates are not far from their average in terms of their SEs, you would expect the constrained model to give approximately that average as the common estimate.
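For readers of this thread: the generic nested-model chi-square difference test that DIFFTEST replaces looks like the sketch below (Python, with made-up fit statistics). Note that with the WLSMV estimator the chi-square values cannot simply be differenced like this, which is precisely why Mplus provides DIFFTEST; the sketch only illustrates the ordinary ML-style comparison.

```python
from scipy.stats import chi2

# Hypothetical fit statistics for two nested models (made-up numbers).
# WARNING: WLSMV chi-squares must NOT be differenced directly --
# that is what Mplus's DIFFTEST corrects for. This shows only the
# generic nested chi-square difference test.
chisq_free, df_free = 152.4, 48
chisq_constrained, df_constrained = 155.1, 50  # 2 equality constraints

diff = chisq_constrained - chisq_free
df_diff = df_constrained - df_free
p = chi2.sf(diff, df_diff)
print(p)  # p > .05 -> constraints do not significantly worsen fit
```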