I have a question concerning fixed parameters within formative (or causal) measurement models (FMM), such as SES. My aim is to estimate an SEM with an exogenous FMM and an endogenous ordinal reflective measurement model (RMM). The model fit of WRMR = .907 looks appropriate, and a comparable parameter-estimate structure was obtained with a PLS model. My problem is the following: apparently the FMM regression weights are very sensitive to the fixing value. The following model, with a fixing value of .01, provides a nice approximation.
Is there a rule of thumb for the fixing value within FMMs? Here is the corresponding model:
CATEGORICAL: pride1 pride2 pride3;
MODEL:
repu by;
repu on repu1@.01 repu2 repu3;
repu@0;
pride by pride1 pride2 pride3;
pride on repu;
Linda, thank you for the immediate reply. I have an additional question concerning the 'statistical fit' of formative indicators, potentially with regard to scale purification. Let's assume one has no access to PLS (within a theory-'building' approach); then there would be no guidance for setting the metric via a discrete indicator weight (in order to obtain trustworthy t-values).
Is there an alternative to the PLS pre-'testing', or would you consider this procedure appropriate?
Hi Linda and/or Bengt, A student and I are trying to fit our first causal-indicator measurement model. From the syntax on slide 246 of the Mplus Short Courses Topic 1 handout, it looks to us as if there is no disturbance term for f, your formative construct. Are we interpreting that correctly? If so, this would seem to us more akin to what Bollen would call a composite indicator model than a causal indicator model, and we would appreciate guidance on the syntax we should use to give the construct a disturbance term. Thanks very much!
Yes, the formative model we give the specification for is what Bollen and Bauldry (2011, Psychological Methods) call composite indicators. What they refer to as causal indicators can simply be specified as a MIMIC-type model; no special syntax is needed. See their Figure 4, where the causal indicators behave like regular covariates. To me, it is more of a conceptual distinction.
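A minimal MIMIC-type sketch along these lines (the variable names y1-y3 and x1-x4 and the data file name are placeholders, not from the thread):

```
DATA:     FILE = data.dat;    ! hypothetical data file
VARIABLE: NAMES = y1-y3 x1-x4;
MODEL:
  f BY y1-y3;    ! reflective indicators set the factor's metric
  f ON x1-x4;    ! causal indicators enter as ordinary covariates
```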
Bollen and Bauldry suggest on p. 279 (or p. 14) looking at an indicator's unique validity variance, which they define as the difference between the R-square for eta with all causal indicators and the R-square for eta with all causal indicators except x_i.
Mplus provides the R-square for the latent variable. How can I get (or compute) the second value to obtain the unique validity variance?
I think you would have to do 2 different runs and get the 2 R-square values.
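In symbols, with eta the construct and x_1, ..., x_K the causal indicators, the unique validity variance of x_i is simply the difference between the two R-squares from those runs:

```latex
\mathrm{UVV}_i \;=\; R^2_{\eta \mid x_1,\dots,x_K} \;-\; R^2_{\eta \mid x_1,\dots,x_K \setminus \{x_i\}}
```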
Lois Downey posted on Thursday, June 30, 2016 - 1:20 pm
I used a suggestion from Bollen and Bauldry's article to build the following syntax for estimating a model with causal indicators:
USEVARIABLES = y1 y2 x1-x4;
CATEGORICAL = y1 y2;
MODEL:
Factor by;
Factor on x1@1 x2-x4;
y1 y2 on Factor;
y1 with y2@0;
Although that is a different method for achieving model identification than the one you suggest in your course handout, it avoids having to use a composite variable between the indicators and the latent variable of interest, and the model ran as expected.
I then wanted to test the model for between-group invariance of the factor indicators. To do that, I added a grouping variable and made changes to the MODEL statement as follows:
GROUPING = country (0=US 1=Canada);
MODEL:
Factor by;
Factor on x1@1
  x2 (1)
  x3 (2)
  x4 (3);
[Factor@0];
y1 y2 on Factor;
y1 with y2@0;
MODEL Canada: [Factor];
However, this model resulted in an error message indicating that the model may not be identified, pointing to the factor intercept in the Canada group as the problematic component.
What additional constraint(s) does the model need for statistical identification?
Your single-group model seems to have a free residual variance for the factor. Is that really identified?
Your two-group model is not identified because the two intercepts of y1, y2 in their regressions on the factor cannot be identified together with the factor intercepts. It would be identified if you hold those y intercepts equal across the groups.
Lois Downey posted on Saturday, July 02, 2016 - 9:18 am
I get a solution for the single-group model, so it is presumably identified. It includes estimates for 14 free parameters: the factor on the 3 causal indicators, the 2 outcomes on the factor, the 8 thresholds for the 2 outcomes, and the residual variance for the factor.
With regard to the additional constraints needed for the two-group model: Since the two y-variables are polytomous, do I constrain the thresholds, rather than the intercepts, to equality between groups? (I did that earlier, and that model was identified. I just wasn't sure whether that constraint was "reasonable.")
I see now that you have 2 y's that the factor points to, and that they are uncorrelated conditional on the factor, so they act like 2 factor indicators, which of course makes the factor residual variance identified. This is just a MIMIC model if you fix the y1-on-factor coefficient instead of the x1 coefficient.
Right, hold the thresholds equal across groups.
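A sketch of that two-group specification with the thresholds held equal across groups (assuming, for illustration, four thresholds per outcome; adjust the labels to the actual number of categories):

```
GROUPING = country (0=US 1=Canada);
MODEL:
  Factor by;
  Factor on x1@1
    x2 (1)
    x3 (2)
    x4 (3);
  [Factor@0];
  y1 y2 on Factor;
  y1 with y2@0;
  [y1$1-y1$4] (t1-t4);    ! hold y1 thresholds equal across groups
  [y2$1-y2$4] (t5-t8);    ! hold y2 thresholds equal across groups
MODEL Canada:
  [Factor];               ! factor intercept free in the second group
```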
Lois Downey posted on Tuesday, July 05, 2016 - 10:29 am
Thanks. One more question. I think that in testing for between-group measurement invariance with reflective indicators, one is supposed to fix the delta scale factors at 1.0 in the first group, and estimate them in the remaining groups. Should this also be done when the indicators are causal?
No, they act as covariates. Deltas are only for DVs.
Lois Downey posted on Wednesday, July 06, 2016 - 12:39 pm
Yes -- I meant delta scale factors for the outcome variables. But I think one shouldn't constrain those in any way. Correct?
In terms of parameterization, is there any reason not to use theta parameterization for these models with causal indicators (and for testing them for between-group invariance)? I suspect that there is a reason why delta parameterization is the Mplus default, and there may be some advantage to using it. However, residual variance is much easier to understand than scale factors, so is appealing to some of us who aren't statistically sophisticated.
I am following a procedure discussed by Diamantopoulos (2011, MISQ) for handling formative indicators in CB-SEM. The model contains 6 IVs, 1 mediator, and 1 outcome variable; all variables are measured with formative items. I use @1 to fix the error variance of each of these variables.
I hope @1 is the correct command for handling these types of items.
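A minimal sketch of that specification for one such construct (the item names m1-m4 are hypothetical):

```
MODEL:
  M BY;          ! formatively measured construct, no reflective indicators
  M ON m1-m4;    ! hypothetical formative items
  M@1;           ! error (disturbance) variance fixed at 1 for identification
```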
Dear Drs. Muthen, I have a model with three latent variables: P -> B -> D (all latent 'effect' variables, with 4 ordinal indicators each), and I also have 1 observed continuous moderator variable (FI) that moderates the P -> B path.

As a first step I am trying to fit a measurement model (MM). My understanding is that I first need to (A) fit the MM with the latent constructs (i.e., P, B, D), and then (B) somehow incorporate the observed moderator variable (FI) into the MM. Given that I plan to use FI as a moderator variable, I am not sure how to incorporate it. I read in earlier posts that I need to include it as a covariate using an 'ON' statement. My measurement model is:

P BY P1-P4;
B BY B1-B4;
D BY D1-D4;
How do I incorporate FI into the MM?
1) Do I regress all latent variables on FI, or only B on FI, since FI will later moderate the P -> B path?
2) Do I regress only the latent variable(s) on FI, or all indicators of the latent variable(s) on FI, or both?
3) Do I need to constrain the variance of any factor to 0, or set the metric of the observed FI to 1?
4) Does the interaction between a latent and an observed variable result in a latent or an observed interaction term?
I am sorry for the many questions; I read the Mplus handouts, but none of the 4 formative models perfectly resembles this case.
An addition to the last post: since the moderator is an observed continuous variable, would it instead be correct to simply correlate it with all latent variables within the measurement model using a 'WITH' statement?
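For concreteness, the two alternatives I am weighing would be written as:

```
! Alternative 1: FI as a covariate of the latent variables
P BY P1-P4;
B BY B1-B4;
D BY D1-D4;
P B D ON FI;

! Alternative 2: FI correlated with the latent variables
P BY P1-P4;
B BY B1-B4;
D BY D1-D4;
P B D WITH FI;
```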