It is fairly well known that true differences in residual variation across groups confound cross-group comparisons in logit and probit regression (e.g., Allison, 1999). This is because the residual variance of y* is fixed at 1 (probit) or pi-squared/3 (logit) for identification purposes. Thus one needs to control in some way for true differences in residual variation when doing cross-group comparisons, or the comparisons are not meaningful.
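The scale confound can be made concrete with a small numeric sketch (the group labels and values below are hypothetical, purely for illustration): in a probit, only the ratio b/sigma is identified, so two groups with the same true slope on y* but different residual SDs yield different identified coefficients.

```python
# Hypothetical illustration of the Allison (1999) problem: same true slope b
# on the latent y*, but different residual standard deviations across groups.
b_true = 0.8
sigma = {"group1": 1.0, "group2": 1.5}

# A probit fixes the residual variance at 1, so only b / sigma is identified;
# the estimable coefficient is attenuated by each group's true sigma.
identified = {g: b_true / s for g, s in sigma.items()}
print(identified)  # group2's coefficient looks smaller despite equal b_true
```

The apparent group difference in coefficients here is entirely an artifact of the difference in residual variation, which is the confound the post describes.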
Therefore, I assume that the THETA parameterization is precisely one way of doing this (although I do not think the connection has been made explicitly in the literature). Under PARAMETERIZATION=THETA the residual variances of the y*'s are fixed at 1.0 in one group and free in the other. Does this strategy effectively "control" for possible differences in residual variance and allow you to say that any significant differences in coefficients (e.g., tested with equality constraints or just manual z-tests of the differences) are likely to be "true", as opposed to artifacts of differences in residual variation? (I know this would only currently deal with the probit case in Mplus).
Thank you for any clarification/confirmation,
Allison, P. D. (1999). Comparing logit and probit coefficients across groups. Sociological Methods & Research, 28(2), 186-208.
Yes, I think you are looking at this correctly. I saw a related question by you on SEMNET and was going to respond, but ran out of time. The residual variance can be thought of as "existing" but not always separately identified; instead it is confounded with the other curve parameters (slope and intercept/threshold). The larger the residual variance, the flatter the curve, i.e., the more attenuated the relationship, which makes sense.
Both the Theta and the Delta parameterizations accommodate the possibly different residual variances. A delta parameter is a function of lambda, psi, and theta. So if lambda and psi are group-invariant, delta can still differ across groups due to group-varying thetas.
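A one-factor, one-item sketch of that dependence, using the standard relation Var(y*) = lambda^2 * psi + theta with the delta scale factor taken as the inverse SD of y* (values below are hypothetical):

```python
from math import sqrt

# Sketch of the relation described above (one factor, one item):
# Var(y*) = lambda^2 * psi + theta, and delta = 1 / sqrt(Var(y*)).
lam, psi = 0.7, 1.0             # group-invariant loading and factor variance
theta = {"g1": 1.0, "g2": 2.0}  # group-varying residual variances

delta = {g: 1.0 / sqrt(lam**2 * psi + th) for g, th in theta.items()}
print(delta)  # deltas differ across groups even though lam and psi do not
```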
How (if at all) does this relate to the discussion of "The Special Case of One Factor" in Muthen & Christoffersson (1981) on page 411? Aren't you and Allison talking about the same problem?
Doesn't the MIMIC model implicitly constrain the residual variances (using the theta parameterization) to be equal across groups, making this model, at best, overly restrictive, and at worst, potentially misleading with respect to group differences in factor means?
Q1. Yes, I think we are talking about the same thing.
Q2. I think you mean a MIMIC where a covariate is a grouping variable. In that case, yes I think that there is a certain risk of a distortion. MIMIC can be a useful first step to see which groupings appear most important, followed by the more flexible multiple-group approach.
Tait Medina posted on Sunday, December 07, 2014 - 4:54 pm
I am wondering how the Allison problem discussed above is handled by the Alignment method for dichotomous indicators, which I am very interested in using. Are the residual variances fixed at one in all groups in this approach? I might be misreading the documentation, but it looks like the Theta parameterization is used, and the resid vars are fixed in all groups. If yes, might true differences in the variances of the y*'s, which are constrained to be invariant for identification, be biasing the free parameters and thus the across-group comparisons?
We are currently using residual variance fixed to 1 for binary. There are a number of variations as you point out in the underlying assumptions about group invariance with binary and categorical items. These variations are not implemented with alignment currently but could be implemented in the future and certainly are possible and make sense in some situations.
If you relax the residual variance assumption with binary items you can assume that all thresholds are invariant - there is an H1 model that has invariant thresholds and unequal residual variances and has the same log-likelihood. If all thresholds are invariant, however, you get factor mean = 0.
Thus you can explain observed group difference by differences in the factor mean, or by difference in the residual variances but probably not by both. I am not categorically disputing the possibility but it is definitely a more advanced optimization.
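The trade-off can be illustrated for a single binary item (all numbers hypothetical): with loading lambda = 1, factor variance psi = 1, and threshold tau = 0, the marginal endorsement probability is P(y=1) = Phi(alpha / sqrt(psi + theta)), where alpha is the factor mean and theta the residual variance. Two different (alpha, theta) pairs can then produce identical probabilities, so the data cannot attribute the group difference to both sources at once.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def item_prob(alpha, theta, psi=1.0):
    # Marginal P(y=1) for one binary item with lambda = 1, tau = 0:
    # Phi(alpha / sqrt(psi + theta)).
    return Phi(alpha / sqrt(psi + theta))

# Two different (factor mean, residual variance) pairs, same probability:
p1 = item_prob(alpha=0.5, theta=1.0)                    # z = 0.5 / sqrt(2)
p2 = item_prob(alpha=0.5 * sqrt(3.0 / 2.0), theta=2.0)  # z = 0.5 / sqrt(2)
print(round(p1, 6), round(p2, 6))  # identical marginal probabilities
```

This is the sense in which a factor-mean explanation and a residual-variance explanation of an observed group difference can be observationally equivalent.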
As it stands, alignment approximates 2*P*G parameters with 2*G parameters. If we add the residual variances to the mix we will have 2*G + P*G. I am not sure how well that would work in practice, or even in simulation studies - how well these parameters would be identified and how easy it would be to reach a global minimum for the alignment optimization.
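The parameter bookkeeping above, with illustrative (hypothetical) values of P items and G groups:

```python
# Hypothetical counts for the alignment bookkeeping described above,
# for P items and G groups (numbers purely illustrative).
P, G = 10, 5
configural = 2 * P * G       # loadings and thresholds, all groups
alignment = 2 * G            # factor means and variances
with_resid = 2 * G + P * G   # adding group-specific residual variances
print(configural, alignment, with_resid)  # 100 10 60
```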
Tait Medina posted on Monday, December 08, 2014 - 7:39 pm