Bonnie posted on Thursday, February 17, 2005 - 2:41 pm
I am running SEM models with two mediators; how can I decide whether there is a mediation effect? When there is only one mediator, I can compare the indirect effect to the direct effect, or test the product of the a and b path coefficients. With two mediators, can I also judge it by calculating the ratio of the total indirect effect to the direct effect, like (b*c + d*e)/a? Thanks a lot!
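To make the quantities in this question concrete, here is a minimal sketch (with invented coefficients, using the poster's path labels) of how specific and total indirect effects combine in a two-mediator model:

```python
# Hypothetical path estimates, following the poster's labels:
# a = direct X -> Y path; b, c = path through mediator 1; d, e = through mediator 2.
b, c = 0.40, 0.50   # X -> M1, M1 -> Y
d, e = 0.30, 0.20   # X -> M2, M2 -> Y
a = 0.10            # direct X -> Y path

specific_1 = b * c               # specific indirect effect via M1
specific_2 = d * e               # specific indirect effect via M2
total_indirect = specific_1 + specific_2
total_effect = a + total_indirect
ratio = total_indirect / a       # the (b*c + d*e)/a ratio asked about

print(specific_1, specific_2, total_indirect, ratio)
```

Note that the ratio is only descriptive; significance of the total and specific indirect effects should still be tested (e.g., delta method or bootstrap), as the replies below discuss.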
bmuthen posted on Thursday, February 17, 2005 - 4:54 pm
Please search for David MacKinnon on Mplus Discussion, where he discusses mediation and gives references to papers. And do a literature search on MacKinnon.
I am running a multiple mediator model involving 1 IV, 4 mediators, and 1 DV. Initially, these models were run using the Preacher & Hayes SAS Macro for Multiple Mediators; however, two mediators are dichotomous and one is ordinal -- so I am re-running in Mplus to re-assess effects.
A couple questions: 1) I initially ran the model in Mplus without declaring the categorical data, to serve as a cross-check of output between the SAS Macro and Mplus indirect effects. Although results are similar, I see some differences between standard errors and the resulting 95% CIs. Any thoughts as to why this might be the case? I have specified covaried mediator error terms. The SAS Macro provides both normal theory tests and bootstrap tests, but the SE estimates are slightly different from each (I'm using the bcbootstrap option).
2) I see from a search of the discussion postings that the logic/approach for dichotomous predictors using Model Indirect is still appropriate -- any issues with ordinal data (a 5-level ordinal response that is not normally distributed)?
An additional follow-up based on this path model (with no latent variables): when I indicate the appropriate mediators are categorical, I get a warning indicating: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE.... The problem variable is one of the dichotomous mediators.
1) Given I'm running path analysis (rather than an LV model) -- should I just interpret this warning to indicate the resulting covariance matrix is not positive definite and the problem is arising from manifest variable relations?
2) Tech4 does not reveal any correlations >= 1, and I don't see any negative residual variances in the Model Results (for my DV or the one continuous mediator). Is there some additional check that can be requested to try to identify the model problem?
In reference to my first post, might the discrepancy also be the result of the bootstrapping procedure (and resulting resampling that occurs) if the point estimates match but the standard error estimates differ somewhat?
In reference to my second post, I discovered that the warning disappears if I fix the correlations of mediator error terms to zero (i.e., don't model them with a WITH statement). Preacher & Hayes indicate that these should be estimated if the product-of-coefficients method is used (as fixing them at 0 can affect the validity of inferences), but indicate this is not a requirement of the bootstrapping method (since their confidence intervals are not dependent on the correlation of mediator terms). However, the results of my Model Indirect statement differ significantly across these two approaches. Which is most appropriate in this instance?
Regarding your first post, SAS may be using ML and you may be using MLR, which would give different SEs. I don't know why the bootstrap procedure of SAS would give different results; as you say, when the point estimates are the same, the difference lies in the bootstrap procedure. An ordinal mediator is still ok for indirect product effects, since this considers its underlying latent response variable.
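To illustrate why two bootstrap implementations rarely agree exactly, here is a stripped-down percentile bootstrap of the indirect effect a*b for a single continuous mediator, on simulated data (full mediation with no direct path, so simple OLS slopes suffice). The seed, sample size, and number of resamples are all arbitrary choices here; changing any of them shifts the interval slightly, which is exactly the discrepancy the poster observed between programs:

```python
import random
import statistics

random.seed(0)
n = 200
# Simulated full-mediation data: x -> m (a = 0.5), m -> y (b = 0.4)
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.4 * mi + random.gauss(0, 1) for mi in m]

def slope(u, v):
    # OLS slope of v regressed on u
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

def indirect(xs, ms, ys):
    return slope(xs, ms) * slope(ms, ys)   # a-hat * b-hat

boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect([x[i] for i in idx],
                          [m[i] for i in idx],
                          [y[i] for i in idx]))
boots.sort()
lo, hi = boots[24], boots[974]   # 2.5th and 97.5th percentile bounds
print(lo, hi)
```

A bias-corrected (BC) bootstrap, as in the SAS macro's bcbootstrap option, would additionally shift these percentile cutpoints; the randomness of resampling is the same in either case.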
Regarding your second post, the model with correlated mediator residuals should be used a priori because it is likely that you have correlated, left-out predictors of the mediators. Otherwise you will get a seriously misspecified (and probably misfitting) model. You have a high correlation between 2 of your mediator residuals:
YCHPROV2 WITH YCHWITH3 0.775
and this correlation being much higher than the others may cause the non-pos-def res corr matrix. That may not be a serious problem given that your sample size is rather low - a larger sample might give pos-def. But you may ask why this 0.775 res corr occurs - that's a lot of common correlated left-out causes for these 2 mediators.
I've tried a few different approaches to address the 'not positive definite' warning, including: (a) dropping one or the other of the offending binary mediators, (b) creating an ordinal variable that combines the binary mediators in a logical progression, (c) switching the 3rd ordinal mediator for a conceptually similar variable that is continuous, and (d) removing the correlated mediator residuals from the model.
The latter is the only solution that removes this warning, but the results are different -- the correlated-residual model behaves more like the results I get when I treat all the variables as continuous (i.e., don't specify that the two mediators are binary).
Is there some other diagnostic option or modeling solution that I should examine? If not, is there any guidance/literature that can indicate when such a warning is "serious" and when it may indicate a problem that is "explainable" (e.g., may be the result of sample size issues/concerns)?
Relatedly -- is there any simulation research that examines the performance of categorical mediators with correlated errors? I'm just wondering to what extent this issue results from the type of data.
It sounded to me like two of the mediators were similar and therefore might share left-out covariates to a large degree - left-out covariate influence ending up in the residual. So one solution is to include more relevant covariates for those mediators to reduce the residual (and hopefully the residual correlation). Another is to remove one of the two highly correlated mediators.
Dear Linda and Bengt, I think it is wonderful that Mplus offers an option for testing mediation effects involving more than one mediator using the delta method. I thought the article published by Taylor, MacKinnon, & Tein (2008) in Organizational Research Methods might be a good reference.
I tested indirect effects using the VIA command (dy1 VIA ln7 mal;) and found that 3 mediation pathways were significant out of 8 possible mediation pathways. May I go by the p < .05 level, or should I worry about using some kind of adjustment for the possibility of inflated Type I error due to conducting multiple significance tests?
Whenever you do more than one test, this should be taken into consideration by being conservative about your p-values. I don't know of any rule for your situation.
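Two common ways of being conservative in this situation are the Bonferroni and Holm adjustments. Here is a sketch applying both to eight p-values; the p-values are hypothetical, not from the poster's model:

```python
# Hypothetical p-values for the 8 indirect-effect tests mentioned above.
pvals = [0.004, 0.012, 0.030, 0.081, 0.150, 0.240, 0.430, 0.720]
m = len(pvals)

# Bonferroni: multiply every p-value by the number of tests, cap at 1.
bonferroni = [min(1.0, p * m) for p in pvals]

# Holm step-down: multiply the k-th smallest p by (m - k), then enforce
# monotonicity so adjusted p-values never decrease with rank.
order = sorted(range(m), key=lambda i: pvals[i])
holm = [0.0] * m
running = 0.0
for k, i in enumerate(order):
    running = max(running, min(1.0, pvals[i] * (m - k)))
    holm[i] = running

print(bonferroni)
print(holm)
```

Holm is uniformly at least as powerful as Bonferroni while controlling the same familywise error rate, so it is usually the better default when several indirect effects are tested at once.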
Emily Yeend posted on Tuesday, August 24, 2010 - 3:12 am
I'm running a mediation model containing one independent variable and one outcome variable (both continuous). The effect of the IV is suggested to be mediated by two mediators (one binary and one continuous). I have allowed for correlation between the residuals of the mediators which I find to be non-significant. I have a few questions.
Firstly, am I right in thinking that this correlation can now be dropped from the model, or should it still be retained even though it is non-significant?
Secondly, I find that one of the proposed mediators does not actually carry an indirect effect, nor does the IV influence it. So my model now involves one mediator and a covariate of the outcome (the old mediator). This means I now have two exogenous variables (the IV and the covariate / old mediator), which Mplus will automatically allow to correlate. Does this then mean that the correlation I've specified between the residuals of the "mediators" is now actually the correlation between the mediator and the covariate? And presumably this no longer needs to be in the model.
In a posting it is written that Mplus uses the delta method, not the Sobel test. I'm somewhat confused about the difference. For example, MacKinnon (2008: 52) gives the SE as SE² = a²s²(b) + b²s²(a) and writes that computer programs (Mplus, LISREL, EQS) use this formula, based on the delta method. As far as I know, this is also the SE for the Sobel test.
Or does Mplus use the formula SE² = a²s²(b) + b²s²(a) + s²(a)s²(b)?
The MacKinnon (2008) book describes the Sobel method and the delta method for the indirect effect a*b in Section 4.14. See especially page 92. The delta method uses formula (4.27) with an added covariance term between the a and b estimates (see second line below 4.27). For some models, such as the mediation model for continuous observed variables, the covariance term is zero so that the delta method simplifies to the formula of (4.27). I believe (4.27) is what is referred to as the Sobel method. Mplus uses the delta method in Model Indirect and also in Model Constraint.
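For readers following the formulas in this exchange, here is a sketch computing the first-order (Sobel/delta) SE and the variant with the added s²(a)s²(b) product term; the estimates and SEs are invented for illustration:

```python
import math

# Hypothetical path estimates and their standard errors.
a, se_a = 0.50, 0.10   # X -> M path
b, se_b = 0.40, 0.12   # M -> Y path

# First-order delta / Sobel variance of a*b (zero a,b covariance assumed,
# as in the continuous observed-variable mediation model described above).
var_sobel = a**2 * se_b**2 + b**2 * se_a**2

# Variant with the extra product-of-variances term from the second formula.
var_exact = var_sobel + se_a**2 * se_b**2

se_sobel = math.sqrt(var_sobel)
se_exact = math.sqrt(var_exact)
z = (a * b) / se_sobel   # z-test of the indirect effect

print(se_sobel, se_exact, z)
```

With typical SEs, the extra s²(a)s²(b) term is an order of magnitude smaller than the other two, which is why the two formulas usually give nearly identical results.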
What is the best way to test mediation models with SEM?
I am testing a 2-wave latent difference score mediation model (2 mediators).
The problem is (just for t1, but similar for the full model):
First, I run a full mediation model. The paths from the independent latent F1 to the two latent mediators (M1 and M2) are significant, and the paths from the mediators to the latent dependent variable F2 are also significant. There are also significant indirect effects. Then, I use the chi²-difference test to compare the full mediation model with a "partial mediation" model (adding the direct path from F1 to F2). The results favour the full mediation model. --> This approach is similar to the ones outlined by Holmbeck (1997) or Cole and Maxwell (2003).
But in the partial mediation model, there are no significant effects (F1->F2, M1->F2, M2->F2), although the standardized effects are quite "high" (about 0.2). I get big SEs due to collinearity between F1, M1, and M2. Consequently, the indirect effects are also not significant.
If I instead followed other recommendations, like the ones outlined in Iacobucci et al. (2007), and tested direct and indirect paths simultaneously in the first step, I would conclude that there is no mediation.
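The chi²-difference step described above can be sketched numerically. The fit values below are hypothetical, and the computation assumes plain ML chi-squares (with MLR, the scaled Satorra-Bentler difference would be needed instead):

```python
import math

# Hypothetical fit statistics: the full mediation model fixes the direct
# F1 -> F2 path at 0, so it is the more restrictive (higher-df) model.
chi2_full, df_full = 48.3, 25
chi2_partial, df_partial = 46.9, 24

diff = chi2_full - chi2_partial    # chi-square difference
df_diff = df_full - df_partial     # 1 restricted path

# For df = 1 the chi-square survival function reduces to a normal tail:
# P(chi2 > x) = 2 * (1 - Phi(sqrt(x))), with Phi built from math.erf.
phi = 0.5 * (1 + math.erf(math.sqrt(diff) / math.sqrt(2)))
p = 2 * (1 - phi)
print(round(diff, 3), df_diff, round(p, 3))
```

Here p > .05, so the direct path does not significantly improve fit and the full mediation model is retained, matching the logic of the Holmbeck / Cole and Maxwell approach the poster cites.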
I'm running multiple mediation models and requesting the model indirect effects. In the output I'm not seeing the R-squared values that tell me how much each mediated effect contributes to the model. Is there code I need to add to get this? Thanks!