If one were to regress class membership derived from a latent profile analysis of a dependent variable on a latent variable composed of indicators from independent variables that, for example, share common scale endpoints, would this seem a reasonable test for common method bias?
It seems to me that this is a somewhat more rigorous test than the Harman one-factor test that is commonly cited in our literature. However, I've not seen any application or citation of Mplus in this context ...
Thanks for any insights you might be able to offer on this subject.
I am wrestling with understanding this question. I don't see the connection between Harman's test - which I don't know but must refer to latent variable modeling with continuous latent variables (factors) - and your model which has a latent class variable regressed on a factor with multiple indicators (which in turn have "common scale endpoints"). I guess it boils down to: what is the common method bias that you mention and why do latent classes play a role here?
Let me clarify and elaborate. I apologize for the need to split my question into two posts.
Common method variance is variance attributable to the measurement method rather than to the constructs the measures represent. Bagozzi and Yi (1991) note that method variance is likely to be one of the main sources of systematic measurement error. It can arise from sources such as item content, scale type, response format, halo effects, and social desirability.
In the past, researchers in my area have typically satisfied reviewer concerns over CMB with what has become known as Harman's One-Factor test. This test basically consists of running a factor analysis of all suspect IV and DV indicators. If the first unrotated factor accounts for a relatively small portion of the total variance (no more than 50%, but the smaller the better), the implication is that CMB is not likely to be a significant problem. Increasingly, this test is insufficient to pass muster at our top journals.
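As a concrete illustration, the Harman check amounts to a single-factor EFA of all suspect indicators. A minimal Mplus sketch (the data file name is hypothetical; variable names follow the example below):

```
TITLE:    Harman one-factor check (sketch);
DATA:     FILE IS mydata.dat;      ! hypothetical file name
VARIABLE: NAMES ARE v1-v12;
ANALYSIS: TYPE IS EFA 1 1;         ! extract a single unrotated factor
```

The proportion of total variance accounted for by that first factor can be read from the eigenvalues in the output.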
As an alternative, I am interested in the potential of finite mixture modeling as a post hoc means of assessing CMB. Let's say we have latent dependent variable (Y), measured by v1, v2 and v3. Our results support hypotheses in which the Y was regressed upon three IVs, X1 (measured by v4, v5 and v6), X2 (measured by v7, v8 and v9) and X3 (measured by v10, v11 and v12). In this case, let's say all indicators (v1 through v12) utilized the same scale type.
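For reference, the substantive model just described could be written in Mplus roughly as follows (a sketch; the data file name is hypothetical):

```
TITLE:    Baseline structural model (sketch);
DATA:     FILE IS mydata.dat;      ! hypothetical file name
VARIABLE: NAMES ARE v1-v12;
MODEL:
  y  BY v1-v3;                     ! latent DV
  x1 BY v4-v6;
  x2 BY v7-v9;
  x3 BY v10-v12;
  y ON x1 x2 x3;                   ! hypothesized structural paths
```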
Presumably (Y) contains some degree of unobserved heterogeneity of an undetermined source. The question is whether this heterogeneity is explained by our use of a common measurement method.
So I'm thinking my Mplus syntax would look something like this:
VARIABLE:
  NAMES ARE v1-v12;
  USEVARIABLES ARE v1-v12;
  CLASSES = c(2);
ANALYSIS:
  TYPE IS MIXTURE;
  ALGORITHM = INTEGRATION;  ! needed when c is regressed on a continuous latent variable
MODEL:
  %OVERALL%
  y BY v1 v2 v3;
  CMB BY v4-v12;
  c#1 ON CMB;
  %c#1%
  [y]; y;
If the logistic regression of class membership upon the common method factor is not significant, would this not provide some degree of evidence that CMB is not likely to be affecting the results? At a minimum, I'm thinking this might be a better test than Harman's factor analytic approach.
I hope this clarifies my question. Thanks in advance for your response.
I'm thinking that if method bias IS indeed a problem, then this will most likely result in a class with means and/or variances in y that differ from that of a class in which method bias is not a factor.
The latent variable with all IV indicators (which share scale types) would represent CMB. So if the logistic regression of class on CMB is not significant, this would provide some evidence that CMB is not an issue.
So the methods bias class would have the CMB factor and the other class not? I am not sure this is likely to work out well because I am quite skeptical about the Harman test to begin with. Having a factor influence all items in a model would seem like it could pick up all kinds of sins beyond methods bias - model misspecifications of other kinds. For instance, there could be left-out demographic covariates influencing several indicators, or there could be indicators for the IV factors directly influencing indicators for the DV factors (have different relations than the factor relations prescribe). In general, however, I am in favor of formulating models with one class following a simple model and another class formed by those who don't fit this model. But in practice there is the complication that the mixture would pick up other differences not related to this, such as outlying observations or different ranges of non-normal outcomes.
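For comparison, the "factor influencing all items" idea mentioned above is usually set up as an unmeasured latent method factor: an orthogonal factor loading on every indicator alongside the substantive factors. A sketch using the variable names from the earlier example (the caveat above applies — such a factor can absorb misspecifications of many kinds, not just method variance):

```
MODEL:
  y  BY v1-v3;
  x1 BY v4-v6;
  x2 BY v7-v9;
  x3 BY v10-v12;
  y ON x1 x2 x3;
  method BY v1-v12*;               ! method factor loads on every item; all loadings free
  method@1;                        ! variance fixed at 1 for identification
  method WITH y@0 x1@0 x2@0 x3@0;  ! orthogonal to the substantive factors
```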
Seth Frndak posted on Thursday, May 31, 2018 - 11:45 am
I think my question is related to this discussion. I am considering including multiple observed variables that are part of the same computerized assessment tool in an LPA. For example, I wish to include the number of errors made during the task and the longest length of memorized sequences during the task (see the spatial span task). Understandably, the number of errors and the ability to memorize a long sequence are interrelated. You could say that there is common method variance between these two variables. However, from a psychological standpoint, they are also measuring different latent constructs.
My question is: should I allow for covariation between these variables, and will that be enough to account for common method variance? Alternatively, if I do not include the covariance, how will this affect my model, theoretically?
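To make the first option concrete: LPA assumes conditional independence by default (covariances fixed at zero within class), and a WITH statement in the overall part of the model frees the within-class covariance. A sketch with hypothetical variable names and an illustrative number of classes:

```
VARIABLE:
  NAMES ARE errors span;           ! hypothetical names for the two task scores
  USEVARIABLES ARE errors span;
  CLASSES = c(3);                  ! number of classes is illustrative
ANALYSIS:
  TYPE IS MIXTURE;
MODEL:
  %OVERALL%
  errors WITH span;                ! frees the within-class covariance (held equal across classes)
```

Omitting the WITH statement keeps the conditional-independence assumption; if the method covariance is real, the model may then compensate by extracting additional classes rather than by modeling the dependence directly.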