I am attempting to test a moderated mediation model where X -> M -> Y (all observed variables) is moderated by W, a binary variable (gender). I know how to examine this with a multiple group approach, but I am attempting the approach described in Preacher, Rucker & Hayes (2007), Figure 2, Model 5, where W influences paths a and b of the mediation chain.
With some help from Preacher & Hayes, I have modeled this as pasted below. However, all of the estimates, SEs, ps, and CIs are exactly the same for all of the paths, with the exception of X -> M and M -> Y. This seems very odd to me. Can you point me to what I might be doing wrong? Thank you.
ANALYSIS:
  BOOTSTRAP = 1000;
MODEL:
  M WITH MW;
  M ON X (a1)
       W
       XW (a3);
  Y ON M (b1)
       X W
       XW
       MW (b2);
MODEL CONSTRAINT:
  NEW(eff1 eff2);
  eff1 = (a1 + a3*1)*(b1 + b2*1);
  eff2 = (a1 + a3*2)*(b1 + b2*2);
OUTPUT:
  CINTERVAL(bcbootstrap);
Thank you. The results make much more sense now. Now only the M ON W and M ON MW results are identical. Does it make sense that this would be the case? Might this be because the MW cross-product involves a binary variable (coded 1 and 2)?
Drs. Muthen, I am attempting to fit a moderated mediation model with simple (single-level) survey data. Specifically, I wish to examine the moderating influence of a dichotomous variable (Mod), effect coded (-1, +1), on the 'A', 'B', and 'Cprime' paths as seen in Edwards and Lambert (2007; Model H). I realize that the multigroup function of Mplus may be used in this specific instance (e.g., a categorical moderator); however, I would like to retain the more traditional cross-product approach for two reasons: 1) to replicate OLS estimates for all paths, and 2) I am often interested in similar models with continuous moderators, where multigroup analysis is not appropriate. IV and Mod were centered prior to importing into Mplus, and product terms were created using a DEFINE statement. I use the following code:
MODEL:
  Med ON Mod IV IVxMod;            ! 'A' path
  Y ON Mod IV IVxMod Med ModxMed;  ! 'B' and 'Cprime' paths
MODEL INDIRECT: Y IND Med IV;
The resulting model has df = 1 and a significant chi-square. Examination of the RESIDUAL matrix suggests a notable covariance between Med and ModxMed. Adding a covariance parameter results in a non-positive definite first-order derivative product matrix message.
I have run into this problem before, and I have a hunch that it stems from the fact that Med is an endogenous variable. Any advice? Thanks
If you obtain standard errors, the message most likely comes from the fact that the mean and the variance of your binary variable are not orthogonal. You can ignore the message if this is the case.
Hong Deng posted on Thursday, May 10, 2012 - 8:10 pm
Dear Drs. Muthen, I'm trying to test a moderated mediation model with nested data. Data for all my variables were collected from individuals, and the level of interest is the individual level too. Basically, it is a 1-1-1 mediation model (x-m-y) with a level 1 moderator (w). It would be an easy one if my data weren't nested in several clusters. Is it possible to test such a model in Mplus? How should I write my syntax? Thanks very much.
Moderation can be handled by a multiple group analysis if the moderator is categorical or by creating an interaction term between the moderator and the variable being moderated. You can use TYPE=COMPLEX to take the non-independence of observations into account. Modify Example 3.11 and add CLUSTER to the VARIABLE command and TYPE=COMPLEX to the ANALYSIS command.
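A setup along those lines might look roughly like the following sketch (variable names are placeholders, not from Example 3.11 itself; the interaction term xw is created in DEFINE and therefore listed at the end of USEVARIABLES):

  VARIABLE:
    NAMES = clus x w m y;
    USEVARIABLES = x w m y xw;
    CLUSTER = clus;
  DEFINE:
    xw = x*w;
  ANALYSIS:
    TYPE = COMPLEX;
  MODEL:
    m ON x w xw;
    y ON m x w xw;

TYPE = COMPLEX with CLUSTER corrects the standard errors for the non-independence of observations without modeling the clusters explicitly.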
I am attempting to conduct a moderated mediation analysis where all variables are latent variables formed with continuous indicators. In my model, the independent variable (X) functions as a moderator of the b1 path (the path from the mediating variable to outcome). This model is illustrated in Preacher, Rucker & Hayes (2007) - Figure 2, Model 1.
I get an error message that says: MODEL INDIRECT is not available for TYPE=RANDOM.
1. Is it possible to do moderated mediation with a latent interaction variable?
2. Any guidance as to how to generate syntax for this model would be greatly appreciated.
This type of model is discussed in Section 3 and Section 5 of
Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus.
which is on our web site under Papers, Mediational Modeling. As stated in Section 3, Model Indirect cannot be used for this type of model. Section 5 shows how to define the direct and indirect effects instead using Model Constraint.
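To illustrate the mechanics only (the exact effect expressions should be taken from Sections 3 and 5 of the paper), a model with a latent interaction and effects assembled in MODEL CONSTRAINT might be sketched as follows; all factor names, indicators, and the conditioning values of the moderator are placeholders:

  ANALYSIS:
    TYPE = RANDOM;
    ALGORITHM = INTEGRATION;
  MODEL:
    fx BY x1-x3;
    fm BY m1-m3;
    fy BY y1-y3;
    fm ON fx (a);
    fmx | fm XWITH fx;
    fy ON fm (b1)
          fx
          fmx (b2);
  MODEL CONSTRAINT:
    NEW(indlo indhi);
    ! conditional indirect effects at two illustrative moderator
    ! values (-1 and +1); replace with the expressions from the paper
    indlo = a*(b1 - b2);
    indhi = a*(b1 + b2);

Each NEW parameter is reported with an estimate, standard error, and z-test.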
Dear Drs. Muthen, I am currently running a moderated mediation model (X -> A -> Y with the pathway from A to Y being moderated by gender) using the following syntax:
MODEL:
  Y ON X;
  A ON X (Med1);
  Y ON A (Med2);
  Y ON Gender AxGender;  ! AxGender is the A-by-Gender product term, created in DEFINE
MODEL CONSTRAINT:
  NEW(Indir1);
  Indir1 = Med1*Med2;
This model revealed significant moderated mediation; however, correlations suggested that moderation could also be expected on the pathway between X and A. When I checked for moderation of this pathway, it also revealed significant moderated mediation. I've been told that presenting a model with both moderated pathways is inappropriate and that I need to check which model is best; however, I am unsure how to do this. Do you have any advice or syntax for such a problem?
I would suggest a 2-group analysis with gender as the grouping variable. You can then easily test if the two paths are the same or different across gender. I can't see why a model with both paths differing across gender would be inappropriate.
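A sketch of that two-group setup (names and group codes hypothetical; MODEL TEST gives a joint Wald test of whether the labeled paths differ across groups):

  VARIABLE:
    GROUPING = gender (1 = male 2 = female);
  MODEL:
    A ON X;
    Y ON A X;
  MODEL male:
    A ON X (a1);
    Y ON A (b1);
  MODEL female:
    A ON X (a2);
    Y ON A (b2);
  MODEL TEST:
    0 = a1 - a2;
    0 = b1 - b2;

Dropping one of the two constraints in MODEL TEST tests each path's gender difference separately.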
Luisa Rossi posted on Thursday, October 25, 2012 - 4:32 am
Dear Drs Muthen,
I have been running some mediation analyses and found that the x-->y relationship was mediated by m.
I would now like to see if the a and b paths of the indirect models are different depending on whether respondents score high or low on a number of personality measures.
So far, I have used multiple group comparisons with difftest to assess this.
I am now thinking there may be a better way to do it but I am not sure. Can you help?
Why are you dissatisfied with your approach? Is it because your moderator is continuous and you are categorizing it? See Example 3.18 in the Version 7 Mplus User's Guide on the website for another approach to moderated mediation.
Luisa Rossi posted on Thursday, October 25, 2012 - 10:08 am
Thanks for the tip! I will look at the example. I am unsure whether multiple group comparison is the most effective way to look at moderated mediation or whether reviewers may criticize it.
With latent variables, you should use the XWITH option for interactions. DEFINE is for observed variables. DEFINE is not placed inside the MODEL command; it goes in its own command, before or after MODEL.
Luisa Rossi posted on Wednesday, October 31, 2012 - 5:15 am
As I am interested in the potential moderating role of w on paths a and b, I adapted the Model 5 example in Preacher, Rucker, and Hayes (2007), but I used the XWITH command to obtain the two interaction terms
mw | m XWITH w;
xw | x XWITH w;
and included these interaction terms in the model as they suggest:
MODEL:
  y ON m (b1)
       x w
       mw (b2)
       xw;
  m ON x (a1)
       w
       xw (a3);
I find no evidence of a moderating role for either mw or xw (ps > .05).
However, when I had run the multiple group comparisons using w as the grouping variable I had found that it moderated path a (not b)...
I am slightly confused about why this might be... can you help? Are the two approaches not 100% comparable?
These two approaches should yield identical results. Please send the two outputs and your license number to email@example.com.
C. Lechner posted on Monday, January 14, 2013 - 7:58 am
Dear Drs. Muthén,
I am testing a moderated mediation model where the moderator is a latent variable; i.e., there is a latent interaction using the XWITH command involved. Because this requires TYPE=RANDOM, the STANDARDIZED output is not available in these analyses. However, I would like to report R-square values for my outcomes. --> Is there any way to obtain R-square values for these analyses?
See the FAQ Latent Variable Interactions on the website.
C. Lechner posted on Thursday, January 17, 2013 - 7:38 am
Thanks, Linda. I have a follow-up question:
Using the formulas provided in the FAQ, I built an Excel spreadsheet that calculates R-square, change in R-square, and standardized path coefficients. It reproduces the numbers from your example on p. 6 of the FAQ sheet perfectly.
However, I wonder how one would generalize the equations from the FAQ sheet to include covariates. In my model (otherwise identical to Fig.2 on p.7), all three latent variables are regressed on a set of covariates. -> How do I get the total variances of the latent variables in this case?
My approach was to sum, over predictors, the product of each squared regression weight and the variance of the corresponding predictor, plus the residual variance of the latent variable (which I get in the output). E.g., for a latent variable eta2 regressed on eta1 and covariates x1 and x2, where b1 to b3 denote the regression coefficients and zeta2 the residual of eta2, the total variance would be obtained by computing: Var(eta2) = b1^2*Var(x1) + b2^2*Var(x2) + b3^2*Var(eta1) + Var(zeta2).
However, this seems to systematically underestimate r-square. -> Have I overlooked anything? -> Is there any way to directly get the total variance of a latent variable that is regressed on a set of covariates in the Mplus output?
You need the variances of the two factors in the interaction. I think you are using the residual variances. If the variances are not available in TECH4, you will need to compute them.
C. Lechner posted on Thursday, January 17, 2013 - 9:42 am
TECH4 is unavailable for TYPE = RANDOM. I tried to compute the variances of all latent variables in the way described above. Referring to the notation in Figure 2 in the FAQ sheet, extended by two covariates x1 and x2 for each latent variable, I compute
where eta1 is regressed on eta2 (coefficient ß), eta3 is regressed on eta1 (coefficient ß1), eta2 (coefficient ß2), and their interaction (coefficient ß3), all three latent are regressed on the covariates x1 and x2 (coefficients bij), zeta_i denote residual variances for the latent variables; and where Var(eta1*eta2) = Var(eta1)*Var(eta2)+[Cov(eta1,eta2)]^2 and Cov(eta1,eta2) = ß*Var(eta2).
-> Is this correct? There must be something missing. R-squares are substantially smaller than the ones Mplus computes for a model without interaction.
In your eta1 equation you have to express eta2 in terms of the x's so that eta1 is written as a function of these x's.
You also have to take into account that the x's are correlated and add a covariance term, as in the eta1 equation.
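For instance, for an equation with two correlated covariates and one latent predictor, the model-implied variance needs all the cross terms, not just the squared-weight terms:

  Var(eta2) = b1^2*Var(x1) + b2^2*Var(x2) + b3^2*Var(eta1)
            + 2*b1*b2*Cov(x1,x2) + 2*b1*b3*Cov(x1,eta1) + 2*b2*b3*Cov(x2,eta1)
            + Var(zeta2)

where Cov(x_j, eta1) is obtained by substituting eta1's own regression equation in the x's. Omitting the (typically positive) covariance terms understates the total variance, which is consistent with the systematically low R-squares described above.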
C. Lechner posted on Friday, January 18, 2013 - 1:04 am
Thank you, Bengt – I think the covariance term is what I had overlooked. I'll add it and see whether the numbers add up to something that makes sense.
As this involves a lot of manual computation when more covariates are involved, I wonder whether there is any easy workaround? E.g., could one simply estimate the factor variances in a measurement-part only model (without the covariates and structural paths) and use those as input for the calculations of standardized parameters and r-square in the final model? I'm afraid that would bias the variance estimates, wouldn't it?
Seems like you have set it up right. One thing you want to test is the difference between eff1 and eff2. To test significance of the moderation in the mediation, don't you want to test that a2*(b1+b2) is significant? Both the difference and this last term can be given NEW names so you get z-tests.
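Concretely, the difference can be added to the existing MODEL CONSTRAINT as another NEW parameter, so Mplus reports an estimate, standard error, and z-test for it:

  MODEL CONSTRAINT:
    NEW(eff1 eff2 diff);
    eff1 = (a1 + a3*1)*(b1 + b2*1);
    eff2 = (a1 + a3*2)*(b1 + b2*2);
    diff = eff2 - eff1;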
I wanted to ask a question about an earlier post in this thread regarding a model built from your 2007 conditional indirect effects paper, working from "Model 5". In line with that model, I am confused about the role of eff1 and eff2.
Having performed previous mediations with eff1 = a*b, the setup makes sense to me based on the M ON (X & XW) and Y ON (M & MW) paths. I am confused about eff2 multiplying each term by 2 -- what would this be representing?
Thank you! Leslie
MODEL CONSTRAINT:
  NEW(eff1 eff2);
  eff1 = (a1 + a3*1)*(b1 + b2*1);
  eff2 = (a1 + a3*2)*(b1 + b2*2);
OUTPUT:
  CINTERVAL(bcbootstrap);
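Since W in that input was coded 1 and 2, the "2" is not an arbitrary multiplier: eff1 and eff2 are the conditional indirect effect (a1 + a3*W)*(b1 + b2*W) evaluated at the two codes of the moderator,

  eff1 = (a1 + a3*1)*(b1 + b2*1)   ! indirect effect when W = 1
  eff2 = (a1 + a3*2)*(b1 + b2*2)   ! indirect effect when W = 2

so each is the a-path times the b-path with both paths adjusted for that gender code.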
I am attempting to complete a moderated mediation in which my predictor and mediator variables are latent and I have a categorical and latent outcome. The format follows Preacher’s Model 5 in which the moderator impacts both paths a and b. The moderator variable has 4 categories (Black, White, Hispanic, Other). Would it be best to utilize the multiple group function, comparing all 4 racial groups together? Though, I am unsure of how to complete group comparisons from there. Or would it be better to run models with several dummy codes representing the moderator?
I found that most other researchers calculate critical ratios of differences (CRD) by dividing the difference between two estimates by an estimate of the standard error of the difference (Arbuckle, 2003). A CRD greater than 1.96 indicates a significant difference between the two parameter estimates at p < 0.05. However, they usually do this in the Amos software, and I cannot find the standard error of the difference in the Mplus output.
I am wondering how I can do this with the results provided by Mplus?
My question is, I want to do a multigroup analysis to identify whether the path coefficients differ significantly between east and west. To examine the cultural differences, we compared a first model, which allows the structural paths to vary across cultures, with a second model, which constrains the structural paths to be equal across cultures. All other parameters (i.e., factor loadings, error variances, and structural covariances) were constrained to be equal. However, I found that the factor loadings are still different in the Mplus output. Are there other things that need to be constrained to be equal?
By the way, do you have any sample code for me to do such a multiple group comparison of the mediation model?
You can use MODEL TEST to do the CRD test. See the user's guide.
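For example, labeling the same path in the two groups and testing the difference might look like this (group names and labels hypothetical):

  MODEL east:
    y ON x (p1);
  MODEL west:
    y ON x (p2);
  MODEL TEST:
    0 = p1 - p2;

MODEL TEST prints a Wald chi-square; for a single constraint like this, it is equivalent to the squared critical ratio of the difference, so no hand calculation of the SE of the difference is needed.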
See the Topic 1 course handout on the website under multiple group analysis for the inputs for testing measurement invariance.
JOEL WONG posted on Tuesday, October 08, 2013 - 5:28 pm
I am attempting to test a first and second stage moderated mediation model. I have one predictor, one outcome, one mediator, and one moderator (W). W is hypothesized to moderate the relationship between the predictor and the mediator and the relationship between the mediator and the outcome.
UG ex 3.18 describes the case of a moderator Z that moderates the influence of the predictor X on M and X on Y. That involves creating X*Z in Define. A plot of the effects and their confidence bands are obtained by LOOP. So that's the first part of your question.
The second part is the moderation of the M->Y relationship. This calls for creating M*Z and regressing Y on it. LOOP could be used here as well.
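Putting the two parts together, a sketch along the lines of ex 3.18 (variable names hypothetical; z is the moderator, and the LOOP range is arbitrary):

  DEFINE:
    xz = x*z;
    mz = m*z;
  MODEL:
    m ON x (a1)
         z
         xz (a3);
    y ON m (b1)
         x z xz
         mz (b2);
  MODEL CONSTRAINT:
    PLOT(indirect);
    LOOP(mod, -2, 2, 0.1);
    indirect = (a1 + a3*mod)*(b1 + b2*mod);
  PLOT:
    TYPE = PLOT2;

The LOOP plot then shows the conditional indirect effect with its confidence band across values of the moderator.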
Thank you for your previous suggestion for running my moderated mediation with 2 latent predictors, a continuous mediator, and a binary outcome. I have 4 groups and have tested the model using the MODEL INDIRECT command and bootstrapping (with bias-corrected confidence intervals). To compare the size of the indirect effects across groups, would it be acceptable to examine whether the confidence intervals overlap between groups as a means of testing for significant differences? Or is there a better way to test whether the effect sizes differ significantly (e.g., a difference test where I constrain paths a and b and compare to an unconstrained model)?
Also, when I run the mediation for the whole sample, the indirect effect is significant. However, when I run it as a multigroup model, the indirect effect is no longer significant in any of my four groups. Would it be safe to say that this could be due to sample size (I have already established measurement invariance for my predictors)?
Laura Baams posted on Tuesday, October 29, 2013 - 9:02 am
I am running a bootstrap multigroup mediation model with observed variables. Two predictors (x1, x2), two mediators (m1, m2) and one outcome (y). There are 5 groups for the multigroup part.
I have compared the fit of a model in which I constrain all paths to be equal across groups, a model in which they are free, and variations of this. The model with the best fit is the one where groups 1 and 2 are constrained to be equal, and groups 3 and 4 are constrained to be equal.
I need to report the standardized estimates, but these are not equal for groups 1 and 2, or 3 and 4, while the unstandardized estimates are.
From other posts I understand that Mplus does not constrain standardized estimates, does this mean I cannot report standardized estimates in this case?
Is there a way I can still obtain standardized estimates (that are equal across groups 1 and 2; and 3 and 4)?
The standardized coefficients are standardized using different standard deviations for each group. This is why the coefficients are different. It is not because they are not constrained to be equal in the analysis.
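If a single standardized value is wanted for a path held equal across groups, one workaround is to standardize the common unstandardized coefficient by hand using SDs from the pooled groups, or to build it in MODEL CONSTRAINT from labeled variance parameters. A minimal single-predictor sketch (names hypothetical; it assumes the variances are also held equal across the groups being pooled, and with several predictors the model-implied variance of y needs the additional covariance terms):

  MODEL:
    y ON m1 (b);    ! path held equal across groups
    m1 (vm);        ! variance of m1
    y (vres);       ! residual variance of y
  MODEL CONSTRAINT:
    NEW(bstd);
    bstd = b*SQRT(vm)/SQRT(b*b*vm + vres);

This yields one standardized estimate with a standard error, rather than the per-group values in the STANDARDIZED output.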