Mediation and bootstrap standard errors
 Daniel posted on Friday, June 18, 2004 - 5:16 am
I am running several mediation models in which my dependent variable is ordered categorical. I am using the bootstrap method to estimate standard errors for the indirect effects, with the bootstrap analysis command. I asked for confidence intervals and am given the appropriate intervals. Can I use these intervals along with the effects to estimate odds ratios, or is this incorrect if my mediator is continuous?
 bmuthen posted on Friday, June 18, 2004 - 8:42 am
Which estimator are you using, WLSMV or ML?
 Daniel posted on Friday, June 18, 2004 - 9:02 am
I was using the default estimator. I believe it is WLSMV since I am modeling with categorical dependent variables, although I may be wrong.
 bmuthen posted on Friday, June 18, 2004 - 9:20 am
The default WLSMV works with probit regressions so the estimates are not directly in odds ratio metric. The indirect effects are with respect to a continuous y* variable behind the dependent observed categorical variable, where y* is the response propensity. I think this idea has been discussed in David MacKinnon's work.
 Daniel posted on Saturday, June 19, 2004 - 9:00 am
I have a question regarding the indirect effect. If the two arcs (paths) in the specific indirect effect (a to b [path a'] and b to c [path b']) are each significant (i.e., a' and b'), shouldn't the indirect effect [a' * b'] also be significant? Or is it possible for a' and b' to be significant without the specific indirect effect (a' * b') being significant?
 bmuthen posted on Saturday, June 19, 2004 - 11:59 am
Seems like this is possible because the indirect effect is a product of the two estimates and the SE of this product is a function not only of each of the two SEs, but also the covariance between the two estimates - which might be positive and therefore make the denominator of the test larger.
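For reference, the first-order delta-method approximation behind this point is

Var(a'*b') ≈ b'^2*Var(a') + a'^2*Var(b') + 2*a'*b'*Cov(a',b')

which shows how a positive covariance between the two estimates can inflate the standard error of the product even when each path on its own is precisely estimated.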
 Daniel posted on Monday, June 21, 2004 - 8:04 am
Ok, if I have the case where each path is significant, but the total indirect effect is not significant, what could I conclude about mediation?
 bmuthen posted on Monday, June 21, 2004 - 4:53 pm
I would say there is no significant mediation.
 Daniel posted on Tuesday, June 22, 2004 - 6:12 am
Thanks. My sample size for the study is 913, and I am modeling mediation in an associative process model between two LGMs, each with two random effects (trend and intercept), and about 5 covariates. The observed measures are ordered categorical. What would you suggest I set the bootstrap to (i.e., BOOTSTRAP = ?) in the ANALYSIS command?
 Linda K. Muthen posted on Tuesday, June 22, 2004 - 8:21 am
There is no rule for this. You should experiment. Start with 250. Then try 500. Compare the standard errors to see if there is much difference.
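For illustration, a minimal sketch of the kind of run being compared here; the variable names x, m, and y and the simple model are hypothetical placeholders, and the point is only where the BOOTSTRAP and CINTERVAL settings go:

ANALYSIS: BOOTSTRAP = 250;   ! then rerun with 500 and compare the SEs
MODEL: m ON x;
y ON m x;
MODEL INDIRECT: y IND m x;
OUTPUT: CINTERVAL (BOOTSTRAP);   ! bootstrap confidence intervals for all effects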
 Daniel posted on Tuesday, June 22, 2004 - 10:49 am
I ran the bootstrap at 250, 300, 350, 400, and 450, and it ran fine each time, with each increase resulting in a proportional increase in run time. However, as soon as I run the bootstrap at 500, it runs for hours without end. Last night I tried to run it with 1000 and left the program running all night after leaving work at about 4 PM. I returned to work the next morning, and it was still running. Do you know why I cannot get a solution with values greater than or equal to 500? Does it have something to do with the associative processes or the categorical outcome variables?
 Linda K. Muthen posted on Tuesday, June 22, 2004 - 11:04 am
Why don't you send the 400 run output, the 500 run input, and the data to support@statmodel.com so I can take a look at it.
 Tom Hildebrandt posted on Monday, June 27, 2005 - 8:09 pm
Using a WLSMV estimator, why would a chi-square test not be calculated when using Dr. MacKinnon's bias-corrected bootstrap method of estimating SEs and confidence intervals in a path analysis with multiple mediational pathways?
 Linda K. Muthen posted on Tuesday, June 28, 2005 - 8:04 am
There is no reason. We have so far only implemented bootstrap for standard errors.
 Tom Hildebrandt posted on Tuesday, June 28, 2005 - 9:22 am
Thank you very much for your quick response.

Would it be appropriate then to report the chi-square goodness-of-fit test calculated when not using the bootstrap function, as long as the WLSMV estimator is used?
 Linda K. Muthen posted on Wednesday, June 29, 2005 - 7:19 am
Yes but you should make it clear that although the standard errors are bootstrapped, the chi-square is not.
 Charles Green posted on Monday, February 20, 2006 - 5:58 pm
I am currently running a mediational structural equation model dealing with domestic violence. The observed measures are primarily indices derived from self-report scales, some with ranges as broad as 0 to 177 (the item endorsements are ordinal values, each of which represents a frequency range for specific behaviors, e.g., 0 = 0-10, 1 = 10-20, etc.; these items are then summed to produce the indices of interest for the current model). I decided to model the data as censored continuous, but received the following error message:

INPUT READING TERMINATED NORMALLY

*** FATAL ERROR
Internal Error Code: GH1006.
An internal error has occurred. Please contact us about the error,
providing both the input and data files if possible.

I am forwarding the requested information to you.

In the meantime however, I am trying to resolve two questions regarding the model:

1) Regarding overall fit indices, I am contemplating the use of the MLR estimator, treating the data as continuous.
a) Given non-normal data and censoring from below (at zero), to what degree might this yield misleading results?
b) Does applying a Bollen-Stine bootstrap procedure provide a means to address this more effectively?

2) Regarding the standard errors of the parameter estimates in the model, I would prefer to use a bootstrap procedure since this will provide me with confidence intervals for the indirect effects.
a) Do you detect anything problematic with using the MLR approach for the overall model fit indices, followed by reporting confidence intervals for the parameter estimates derived from a bootstrap procedure?

Any guidance you might offer would be greatly valued.
 bmuthen posted on Monday, February 20, 2006 - 6:52 pm
1.
a. With a high degree of censoring (say > 25-50%), the SEs and chi-square based fit indices may be off. The basic problem is that the linear model assumed is wrong with strong censoring, so non-normality robustness in SEs and chi-square doesn't help. Overall fit indices are perhaps less important than getting the right parameter estimates and checking fit by 2*LL for nested, neighbouring models.

b. I don't think so.

2. That's fine.

a. Not in principle.
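For the nested-model checking mentioned under 1a, the likelihood-ratio statistic is 2*(LL of the less restricted model - LL of the more restricted model), referred to a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters.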
 Charles Green posted on Monday, February 20, 2006 - 7:24 pm
Thank you so much for your prompt answer. If I might clarify this: indeed, I do have proportions of censored observations that are above 25%.

The error message I reported evidently occurs in version 3.01 but has since been corrected in version 3.14. If I run the analysis using the updated (v. 3.14) program, specifying which variables are censored, I would obtain appropriate log-likelihoods from which -2*LL could be used for tests of nested models.

Am I correct in saying that the log-likelihoods obtained without the censor specification would be misleading?

Having obtained the -2*LL, I can use the Baron and Kenny (1986) approach to evaluating mediation. However, would there be a problem with removing the censoring specification and bootstrapping the parameter estimates so that I can use the MacKinnon (2004) approach to obtaining indirect effects and confidence intervals?

Finally, is there some reference you would recommend where I might find a primer on bootstrapping specifically regarding how to choose among the different bootstrap confidence intervals?

Many thanks.
 bmuthen posted on Tuesday, February 21, 2006 - 3:23 pm
Yes, you are correct that the loglikelihood would be misleading if censoring is not taken into account, as it is when using the censored approach.

You should use the same model for parameter estimation and testing as for the bootstrapping.

Efron, B. & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.

MacKinnon, D.P., Lockwood, C.M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128.
 Daniel Rodriguez posted on Wednesday, May 31, 2006 - 10:36 am
Hello,
I am back on the mediation analysis trail. This time, unlike most of the data I analyze, my sample size is not that large (n=376) and my ultimate outcome (smoking) is ordered categorical with four levels. I am running a SEM with measured variables only (no factors). Is it better to calculate standard errors with bootstrapping or Delta method in this case, due to the relatively small sample size?
 Linda K. Muthen posted on Thursday, June 01, 2006 - 6:40 am
I would use the default standard errors for the estimator you choose. I don't think that you would benefit from bootstrapping.
 Daniel Rodriguez posted on Thursday, June 01, 2006 - 12:24 pm
Thanks
 Daniel Rodriguez posted on Monday, July 17, 2006 - 9:39 am
Hello Linda and Bengt. I am asked by a reviewer to estimate the size of an effect in my model. I actually sent you this data before. My finding is that the significant indirect effect with 95% confidence interval is .054 (.008, .101). You mentioned that this is a small effect. How should I word this in the results/discussion section to indicate the strength of this effect? I'd appreciate any clues if you have them. By the way, this was calculated with the delta method.
 Bengt O. Muthen posted on Monday, July 17, 2006 - 5:50 pm
To know how small it is, wouldn't you want to evaluate it in terms of the SD of the independent and dependent variables, so using a standardized value?
 Daniel Rodriguez posted on Tuesday, July 18, 2006 - 5:00 am
Ok, I see. Thank you very much.
DR
 Yi-fu Chen posted on Friday, July 21, 2006 - 7:13 am
Hi, Dr. Muthen,

I am working on a model to test mediation effects. I have two predictors, four mediators and two outcomes. The outcomes are all continuous. I've tried to use MODEL INDIRECT with BOOTSTRAP to estimate the standard errors of the indirect effect.

The question I have is that:
When I ran a recursive model in which outcome 1 predicted outcome 2, the MODEL INDIRECT output showed the standard errors of the indirect effects for the predictors via each mediator.
However, when I estimated the reciprocal relationship between the two outcomes, the output showed only the total indirect effect for each predictor, but no printout of the contribution of each mediator.

I don't know if what I got is what Mplus is supposed to give when reciprocal models are estimated.
Is there any way that I can get more detailed indirect-effect information for this kind of model?

I am using Mplus 3.0.

Thanks!
 Linda K. Muthen posted on Saturday, July 22, 2006 - 11:33 am
I don't think this is possible. See the Bollen SEM book to check.
 Marco Haferburg posted on Monday, August 14, 2006 - 12:34 am
Dear Mplus team, I have read in an article by MacKinnon and colleagues that there are different ways to calculate SEs for indirect effects using the delta method (e.g., Freedman & Schatzkin, 1992, or Olkin & Finn, 1995). I would be interested to know which one is implemented in Mplus.

MacKinnon, D.P., Lockwood, C.M., Hoffman, J.M., West, S.G. & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7 (1), 83-104.
 Linda K. Muthen posted on Monday, August 14, 2006 - 8:25 am
We calculate standard errors for indirect effects using both the Delta method and the bootstrap as described in the MacKinnon et al. article. I am not aware that there are different Delta methods.
 Claire Hofer posted on Thursday, December 07, 2006 - 8:02 am
Could you tell me about the differences in the bootstrap method between Mplus version 3 and version 4? I am getting very different results: my model run in version 4 with the Bollen-Stine bootstrap method matches closely what I get in the regular model using ML or MLR estimation, but when I run the model in version 3 with the bootstrap method there, I get completely different results. We do have missing data. Could you tell me a little bit about why the results might be so different?
Thank you.
 Linda K. Muthen posted on Thursday, December 07, 2006 - 9:06 am
I don't know of any reason offhand there would be a difference. If you send your input, data, output, and license number to support@statmodel.com, I can take a look at it.
 Garth Rauscher posted on Wednesday, August 08, 2007 - 8:30 am
Dear Drs. Muthen,

We are running a mediation model with three exogenous variables (one continuous and two indicators for race/ethnicity), two mediating variables (one continuous and one dichotomous), and one outcome variable (dichotomous). For different paths we are calculating the mediation proportion, defined as the indirect effect divided by the total effect (indirect + direct effect). We would like to be able to calculate confidence intervals for the mediation proportion by using the estimates from each individual bootstrapped dataset. The question is: Can Mplus output into a separate dataset the individual bootstrapped estimates of the direct and indirect effects for a given model?
 Linda K. Muthen posted on Tuesday, August 14, 2007 - 4:32 pm
Mplus does not save indirect effects and does not save results from each bootstrap replication.
 Emily Blood posted on Tuesday, October 09, 2007 - 6:15 pm
Within the MC facility, is there a way to output indirect effect values and standard errors of indirect effects for each MC replication? I am currently outputting the parameters from each MC replication, but am not able to output the indirect effects and their standard errors from each replication, only the mean and se of all indirect effects from all MC replications. Is this possible in Mplus?
Thanks.
 Linda K. Muthen posted on Wednesday, October 10, 2007 - 1:34 pm
No, results from MODEL INDIRECT are not saved. The only way to obtain them would be to save all of the data sets and analyze them one at a time.
 Eric posted on Monday, June 16, 2008 - 10:20 pm
I am using cinterval(bcbootstrap) to get confidence intervals for indirect effects in a path analysis model with 4 mediators. Though I get confidence intervals for the specific indirect effects, the confidence intervals for the rest of the path estimates are all zeros. Does this mean that I should not trust the CIs for the specific indirect effects?
 Linda K. Muthen posted on Tuesday, June 17, 2008 - 6:10 am
It sounds like you are using an old version of the program. I think there may have been a problem some time ago. I suggest using Version 5.1.
 Eric posted on Tuesday, June 17, 2008 - 9:56 am
Is it possible to get more decimal places for the confidence intervals when using cinterval(bcbootstrap)? One of my confidence intervals ranges from 0.000 to 0.050 and I would like to be able to say that the effect is significant. I have tried using the SAVEDATA command, but I am not sure what to ask for, since the RESULTS option does not seem to include the confidence intervals. Thanks for your help.
 Linda K. Muthen posted on Tuesday, June 17, 2008 - 12:06 pm
Confidence intervals are not saved. You can rescale your variables by dividing them by a constant using the DEFINE command.
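A hedged sketch of that rescaling idea, assuming the predictor in the indirect effect is named x (any variable in the product could be rescaled instead):

DEFINE: x = x/100;   ! paths from x, and hence the indirect effect, are multiplied
                     ! by 100, so more of the interval is visible at three decimals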
 krisitne amlund hagen posted on Tuesday, September 09, 2008 - 2:33 am
Dear Drs. Muthen,
We're running a mediational SEM model in which we have an X, a Y, and two mediators, Ma and Mb. When we run two separate models, both Ma and Mb fully mediate the X-->Y relationship (using bootstrapped standard errors). But when we model both mediators in the same SEM model, the Ma mediator is no longer significantly related to Y. All other paths are significant, including X-->Ma. We also ran a regression analysis and found that Ma predicts unique variance in Y after controlling for both X and Mb.
1. What can we conclude about Ma as a mediator of X-->Y?
2. Could it be that the finding that only Mb (and not Ma) mediates the X-->Y relationship when tested in the same model is a statistical artifact? And if that is the case, how does that happen?
3. Alternatively, if it is not an artifact, can we conclude that Mb is a more important mediator than Ma when they are compared in the same model? How should one then report that Ma functioned as a mediator when tested alone, and that, when tested in a regression model, it was found to predict unique variance in Y?

I thank you in advance and for a great discussion board.
 Linda K. Muthen posted on Tuesday, September 09, 2008 - 9:00 am
If Ma and Mb are highly correlated, there may not be anything left in y to predict beyond what one of the mediators predicts.
 krisitne amlund hagen posted on Wednesday, September 10, 2008 - 7:09 am
Thank you so much for your prompt reply.
That's probably right, that the high correlation between Ma and Mb messes this up. My question is still, though, what can we conclude about Mb as a mediator? Is it an artifact, that is, could it just as easily have been Ma that ended up with the significant path, or neither?
If Ma and Mb are so highly correlated that the Ma --> Y path becomes non-sig., that doesn't explain why the Mb --> Y path is significant, nor why we found that the unique contribution of Ma was sig. after controlling for X and Mb in a regression model. Or does it?
 Bengt O. Muthen posted on Wednesday, September 10, 2008 - 8:44 am
This topic - without the mediation angle - is discussed in the linear regression literature under the heading multicollinearity. You may want to take a look at that. I don't think it is possible to conclude about the joint role of Ma and Mb in such a situation, only that each entered separately is a mediator. You may also want to consult the new mediation book by David MacKinnon to see if he has some wisdom on this topic.
 Metin Ozdemir posted on Friday, November 14, 2008 - 4:49 pm
I have a question regarding the Mplus output. I used bootstrapping to test a mediation effect. In the output for the MODEL INDIRECT command, I have columns for "Estimates S.E. Est./S.E. StdYX StdYX SE StdYX/SE."

Can you please explain to me what StdYX, StdYX SE, and StdYX/SE refer to?

Which one is the test of indirect effect?

Thanks.

Metin
 Linda K. Muthen posted on Friday, November 14, 2008 - 4:57 pm
The test is the ratio of the estimate to the standard error of the estimate. Please see Chapter 17 for a description of the columns of the Mplus output and information about the various standardizations.
 miriam gebauer posted on Sunday, November 01, 2009 - 7:57 am
Hello,
Can I use FIML to test mediation (with bootstrapping) or to model an interaction (both with latent variables)?

And in the case of using multiple imputation, how do I treat the fit values and the indirect and direct coefficients? Can I just use the so-called Rubin formula (which would be like taking a mean)?

Thanks for your help, Miriam
 miriam gebauer posted on Sunday, November 01, 2009 - 8:26 am
I would like to explain my post above a little more; maybe it is not so clear what I am trying to ask, please excuse that:
I am trying to model mediation (with bootstrapping) and moderation (with an interaction), but I have missing data that I would like to impute. I now have 5 imputed datasets and am doing my analysis with each of them, because I cannot read the 5 data sets in at once; the aforementioned models won't allow that. But I don't know how to handle the coefficients or fit values of those 5 analyses. Could you give me advice on how to handle this? (Is that done with the Rubin formula?)
Further, I read that FIML is an appropriate way to handle missing data, but as far as I can tell from this board it is more often used in multilevel or group analyses. So my idea was that this could also be a way to handle my issue.
Thanks for your help, Miriam
 Linda K. Muthen posted on Sunday, November 01, 2009 - 9:43 am
You can use the IMPUTATION option of the DATA command to analyze a set of imputed datasets. Correct parameter estimates and standard errors are calculated. Fit statistics are provided.
 miriam gebauer posted on Monday, November 02, 2009 - 12:50 am
Maybe I did something wrong, but I had problems using this command when modeling the interaction or the mediation (with bootstrapping).
 miriam gebauer posted on Monday, November 02, 2009 - 1:17 am
Maybe I did something wrong, but I had problems using the command IMPUTATION. This is the error I get

*** ERROR
MODEL INDIRECT is not allowed with TYPE=IMPUTATION.
The same error shows up when I try to model interaction.
So that's why I am modeling it with each of the five data sets, and I would like to ask whether the fit values can be integrated by calculating their mean.
 Linda K. Muthen posted on Monday, November 02, 2009 - 9:22 am
You cannot use MODEL INDIRECT with TYPE=IMPUTATION but you should be able to use XWITH. I would use MODEL CONSTRAINT with TYPE=IMPUTATION to define the indirect effects. Although the parameter estimates are simply an average across imputed data sets, the standard errors and chi-square are not and cannot be computed by hand. If you have further problems along this line, please send them along with your license number to support@statmodel.com.
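A minimal sketch of the MODEL CONSTRAINT route for the indirect effect with imputed data; the file name implist.dat and the variable names x, m, and y are hypothetical:

DATA: FILE = implist.dat;   ! file listing the imputed data sets
TYPE = IMPUTATION;
MODEL: m ON x (a);
y ON m (b)
x;
MODEL CONSTRAINT: NEW(ind);
ind = a*b;   ! indirect effect, pooled over imputations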
 miriam gebauer posted on Wednesday, November 04, 2009 - 1:49 am
Thank you so much for your help. I will first try to model it with the commands you recommended. If this will not work out - I will come back to you later and send you my data.
 Marco DiBonaventura posted on Friday, February 11, 2011 - 2:23 pm
Hello--

I'm running a fairly basic mediation with a dichotomous IV, a dichotomous mediator, and a non-normal continuous DV (count data). I'm using the bootstrapping command and requesting the indirect effect. However, I can only seem to do this with the WLSMV estimator and was not able to specify a negative binomial distribution for the DV.

1. Is the skewness of the DV a problem, given that I'm bootstrapping? If so, is there anything to be done, since I'm unable to execute the (NB) command?

2. The WLSMV estimates are much different than with ML. Is the interpretation of the estimates the same? Can I exponentiate them to get odds ratios of the IV --> M relationship?

Thanks for any help!
 Linda K. Muthen posted on Sunday, February 13, 2011 - 2:54 pm
1. In Mplus, when mediators are categorical, indirect effects can be computed only when using weighted least squares estimation.

2. WLSMV estimates are in a probit metric. ML estimates are in a logit metric. WLSMV estimates should not be exponentiated.
 yan liu posted on Sunday, August 28, 2011 - 9:34 am
Hi, Linda and Bengt

I am running a multilevel SEM mediation model:
mediator1=b*predictor;
mediator2=b1*predictor+b2*mediator1;
outcome=b1*predictor+b2*mediator1+b3*mediator2;

I am trying to calculate the indirect effects and test whether they are significant, using the formula provided by Hayes (2009). I found that at the between level, although none of the specific mediation effects was significant, the sum (total indirect effect) turned out to be significant, which does not make sense to me. Is the way I test "indtotw" and "indtotb" correct? Thanks!

%WITHIN%
PNS ON teach (a1w);
movat ON teach (a2w);
movat ON PNS (a3w);
engage ON PNS (b1w);
engage ON movat (b2w);
engage ON teach;

%BETWEEN%
PNS ON teach (a1b);
movat ON teach (a2b);
movat ON PNS (a3b);
engage ON PNS (b1b);
engage ON movat (b2b);
engage ON teach;

MODEL CONSTRAINT:
NEW(ind1w ind2w ind3w ind1b ind2b ind3b indtotw indtotb);
ind1w=a1w*b1w;
ind2w=a2w*b2w;
ind3w=a1w*a3w*b2w;

ind1b=a1b*b1b;
ind2b=a2b*b2b;
ind3b=a1b*a3b*b2b;

indtotw= ind1w+ind2w+ind3w;
indtotb= ind1b+ind2b+ind3b;
 yan liu posted on Sunday, August 28, 2011 - 11:59 am
Just want to follow up on the question I just posted. The equation for computing the total effect of several mediation pathways can be found in Hayes (2009).

Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical Mediation Analysis in the New Millennium , Communication Monographs, 76(4), 408-420.
http://www.tandfonline.com/doi/pdf/10.1080/03637750903310360

Thanks a lot!
 Bengt O. Muthen posted on Sunday, August 28, 2011 - 2:20 pm
It looks correct to me. If your sample size is small you may want to try Bayesian analysis which allows indirect effects to have a non-normal distribution.
 yan liu posted on Sunday, September 11, 2011 - 11:33 am
Hi, Bengt

Thank you so much for your reply. Following up your suggestion to my question (posted above, Aug.28), I tried Bayes estimation. I added the following code to my original Mplus syntax

ANALYSIS:
TYPE = TWOLEVEL;
estimator = bayes;
process = 2;
fbiter = 10000;

However, I got the following error message:
"Unrestricted x-variables for analysis with TYPE=TWOLEVEL and ESTIMATOR=BAYES must be specified as either a WITHIN or BETWEEN variable. The following variable cannot exist on both levels: TEACH"

(TEACH=predictor, PNS=mediator, movat=outcome)

Is something wrong with my code? Or can I not use Bayes estimation for Preacher et al.'s multilevel SEM mediation approach because Bayes estimation doesn't allow a predictor to be at both levels? Thanks.
 Bengt O. Muthen posted on Tuesday, September 13, 2011 - 9:14 am
Bayes does not do the latent variable decomposition of the predictor variable, but uses the usual MLM approach. This means that you would have to specify TEACH as a Within variable. If you want it on Between as well, you have to create the cluster-mean version of the variable yourself (there is an Mplus option for this) and enter it as a Between variable.
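A sketch of that setup using the variable names from the post; CLUSTER_MEAN is assumed to be the relevant DEFINE function, and teachm and school are hypothetical names for the new between-level variable and the cluster variable:

DEFINE: teachm = CLUSTER_MEAN(teach);   ! cluster-mean version of teach
VARIABLE: USEVARIABLES = engage movat PNS teach teachm;   ! DEFINE-created variables go last
CLUSTER = school;
WITHIN = teach;
BETWEEN = teachm;

teach would then appear only in the %WITHIN% part of the model and teachm only in the %BETWEEN% part.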
 Michelle Finney posted on Thursday, October 06, 2011 - 4:24 pm
Hello,
I am estimating bootstrap CIs for testing indirect effects hypotheses. Is it possible to have contradictory results across "Total indirect effects" and "Standardized total indirect effects"? In the Mplus output, using the first one as a reference, the indirect effects are significant, but using the second one as a reference, they are not significant.
Thanks for your response.
MF
 Linda K. Muthen posted on Thursday, October 06, 2011 - 9:27 pm
Standardized and raw parameters have different sampling distributions and can therefore have different significance levels.
 Patrick A. Palmieri posted on Monday, October 17, 2011 - 8:26 am
Hello. I have a multiple mediation path model with 1 IV, 2 meds, and 1 outcome. I want the specific indirect effect for each mediator and to contrast them to see if one is larger than the other. I also would like to do a simulation to determine the sample size needed to power this study.

Here is a program I am working from.

TITLE: 2 mediator example with contrast
DATA: FILE IS data.dat;
VARIABLE: NAMES ARE x m1 m2 y;
MODEL: m1 ON x (a1);
m2 ON x (a2);
y ON m1 (b1);
y ON m2 (b2);
y ON x;
m1 WITH m2;
MODEL INDIRECT: y IND m1 x; y IND m2 x;
MODEL CONSTRAINT: NEW(a1b1 a2b2 con);
a1b1=a1*b1; a2b2=a2*b2; con=a1b1-a2b2;
OUTPUT: CINTERVAL (BCBOOTSTRAP);

It is similar to Mplus example program 3.16, but the latter doesn't include contrasts of specific indirect effects, and it specifies bootstrap in the ANALYSIS section rather than in the OUTPUT section as in the code listed above. Do these programs otherwise do the same thing?

Also, I didn't see a MC counterpart to example 3.16 in the Mplus example programs folder - does one exist? Or perhaps I accidentally deleted it at some point. Can you include that code or provide other assistance that might help with determining sample size for this analysis?
Thank you.
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 11:21 am
The data for Example 3.16 comes from Example 3.11. The BOOTSTRAP option is not available with the MONTECARLO command.

If you want to test two indirect effects, define them in MODEL CONSTRAINT and use MODEL TEST to see if they are different.
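With the labels already defined in the MODEL CONSTRAINT of the program above, the test of the difference might look like this (a hedged sketch):

MODEL TEST: 0 = a1b1 - a2b2;   ! Wald test that the two specific indirect effects are equal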
 Patrick A. Palmieri posted on Wednesday, October 19, 2011 - 1:48 pm
Thank you.

Is there a way to perform a simulation using MPlus to calculate the sample size necessary to be able to detect a certain size specific indirect effect in a multiple mediation path model?
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 3:00 pm
Yes. Use mcex3.11.inp as a starting point.
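A rough sketch of the kind of Monte Carlo setup meant here; the population values, sample size, and number of replications are placeholders, and mcex3.11.inp has the full template. The proportion of replications in which a1b1 is significant is read as its power:

MONTECARLO: NAMES = x m1 m2 y;
NOBSERVATIONS = 200;   ! vary this to find the n that gives adequate power
NREPS = 1000;
MODEL POPULATION:
[x@0]; x@1;
m1 ON x*.3;
m2 ON x*.3;
m1 WITH m2*.2;
y ON m1*.3 m2*.1 x*.1;
m1*.9; m2*.9; y*.8;
MODEL:
m1 ON x*.3 (a1);
m2 ON x*.3 (a2);
y ON m1*.3 (b1);
y ON m2*.1 (b2);
y ON x*.1;
m1 WITH m2*.2;
m1*.9; m2*.9; y*.8;
MODEL CONSTRAINT:
NEW(a1b1*.09 a2b2*.03);   ! true values .3*.3 and .3*.1, used for coverage and power
a1b1 = a1*b1;
a2b2 = a2*b2;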
 Scott R. Colwell posted on Friday, November 11, 2011 - 12:55 pm
Is there a way to label the indirect effect (of say x -> m -> y) in a mediation in order to test the equality of the indirect effect across multiple groups using the Model Test command?
 Linda K. Muthen posted on Friday, November 11, 2011 - 1:59 pm
You would have to label the components of the indirect effect in the group-specific MODEL commands and define the indirect effects in MODEL CONSTRAINT.
 Scott R. Colwell posted on Friday, November 11, 2011 - 2:26 pm
Thank you. Do you mean calculate the product of the coefficients of the paths in MODEL CONSTRAINT?
 Linda K. Muthen posted on Friday, November 11, 2011 - 5:54 pm
Yes.
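For the multi-group case, a hedged sketch of that labeling approach; the grouping variable g, the group names g1/g2, and the variables x, m, and y are hypothetical:

VARIABLE: GROUPING = g (1 = g1 2 = g2);
MODEL: m ON x;
y ON m x;
MODEL g1: m ON x (a1);
y ON m (b1);
MODEL g2: m ON x (a2);
y ON m (b2);
MODEL CONSTRAINT: NEW(ind1 ind2);
ind1 = a1*b1;
ind2 = a2*b2;
MODEL TEST: 0 = ind1 - ind2;   ! tests equality of the indirect effect across the two groups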
 Heike B. posted on Thursday, December 15, 2011 - 4:06 am
I am using WLSMV to estimate a manifest model with categorical endogenous variables (4 levels each). My sample is small (360 cases).

Earlier in this thread, Linda recommended in a similar case using the default standard errors rather than bootstrap standard errors.

With respect to the p-values

1.) Should I use the p-values of the default estimation to decide on significance, or the confidence intervals / p-values from the bootstrap?

2.) What would be the rationale behind the recommendation?

3.) If I can use the bootstrap confidence intervals, is there a possibility to derive one-sided intervals from the two-sided intervals?

4.) If not - is there another way to determine one-sided confidence intervals in Mplus?

Many thanks in advance.
Heike
 Linda K. Muthen posted on Thursday, December 15, 2011 - 11:25 am
It's really up to you to decide on which p-values to use. You would need to investigate how to compute one-sided confidence intervals. Mplus does not compute them.
 Heike B. posted on Thursday, December 15, 2011 - 12:18 pm
Thank you, Linda. Does this mean that both the default estimation and the bootstrap work similarly well under my circumstances?

I mean, are there some guidelines for when one or the other approach produces better results?

Many thanks in advance.

Heike
 Linda K. Muthen posted on Thursday, December 15, 2011 - 12:21 pm
It's difficult to say. All circumstances differ in many respects. You would need to do a Monte Carlo study that reflects your situation to answer that question.
 Kristine Amlund Hagen posted on Thursday, January 26, 2012 - 4:19 pm
Dear Drs. Muthen,
We are running a mediational model with bootstrapped standard errors in which variable X is predicting variable M, which in turn predicts Y. This model looks fine, with good fit indices, all paths and indicators significant and the total indirect effects also significant. Because the data are cross sectional and because it could be argued that the direction of effects is actually X to Y to M, we wanted to see if this alternative model would also fit the data. Output for this model showed that the fit indices were very good, but some of the indicators were no longer significant, and the Y to M path was no longer significant (even though in the original X-M-Y model the M to Y path was significant and the bivariate correlations between all the indicators are significant).
1. Does this mean that our original model is in fact better?
2. Why would the Y-M path no longer be significant in the alternative model?
3. Why is the number of bootstrap draws less than what we specified in the input?
4. We sometimes get the 'THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE' message and see that the residual variance of one of the indicators is negative; what is the appropriate action to deal with this?
5. Finally, why is the 2-tailed p-value for the unstandardized estimates often different from the 2-tailed p-value for the StdY and StdYX estimates?
Thank you.
 Bengt O. Muthen posted on Friday, January 27, 2012 - 8:07 pm
1-2. The two models are different and fit differently to the covariance matrix of all variables. In your case, you should not base your choice of model on fit but substantive reasoning.

3. Send output to Support

4. This indicates that the model needs to be modified.

5. Unstand and stand coefficients have different sampling distributions and the assumption of a normal distribution may be differently well approximated in the two cases. If they differ, it may be better to use the unstand results.
 Kristine Amlund Hagen posted on Sunday, January 29, 2012 - 2:27 pm
Ok. Thank you.
Regarding question 1, the theory isn't all that clear here, and while we have reason to believe that the X-M-Y model is the better one substantively, it would be nice to test the alternative model as well; since the two models are not nested, we figured we could look at the fit indices and path coefficients as an indication of which model best fits the data.
 Christoph Weber posted on Tuesday, February 28, 2012 - 7:00 am
Dear Drs. Muthen,
I'm running a mediation model (using the delta method). There are some significant indirect effects, but they are really small (e.g., standardized b = .04, p < .05). Would you report such small effects? Is there a rule of thumb? It's clear that indirect effects are small, but is there a cut point?

thanks

Christoph Weber
 Bengt O. Muthen posted on Wednesday, February 29, 2012 - 8:47 am
I would also report small effects. The size of effects can be discussed in Cohen's terms.
 Xu, Man posted on Thursday, March 22, 2012 - 8:15 am
Could I just follow up on this thread:

Since the regular significance testing of the mediation effect might be biased, I am trying to get a confidence interval from bootstrapping (I created the mediation effect using NEW & MODEL CONSTRAINT).

1. There are apparently two bootstrapping options, CINTERVAL (BCBOOTSTRAP) and CINTERVAL (BOOTSTRAP); which one is more suitable?

2. With a sample size of around 3000 to 4000, what would be an appropriate number of bootstrap draws?

I could not request indirect results because the analysis uses TYPE=RANDOM.

Thanks!

Kate
 Xu, Man posted on Thursday, March 22, 2012 - 10:57 am
Oh, actually BOOTSTRAP cannot be used with TYPE=RANDOM, but I had to have TYPE=RANDOM because I used TSCORES to adjust for time at data collection - there is an embedded second-order growth curve model and I am looking at mediators of the growth intercept and slopes.

Is there any way around this, to get good standard errors for the mediation effect?
 Linda K. Muthen posted on Thursday, March 22, 2012 - 2:11 pm
I would use BCBOOTSTRAP with from 500-1000 draws.
 Xu, Man posted on Thursday, March 22, 2012 - 3:51 pm
Thank you. But it seems BCBOOTSTRAP cannot be used together with TYPE=RANDOM? In this situation, is there any way to get bootstrapped standard errors for parameters created using NEW & MODEL CONSTRAINT (the mediation effect in my case)?

Thank you!
 Linda K. Muthen posted on Thursday, March 22, 2012 - 3:59 pm
No, there is not.
 Xu, Man posted on Thursday, March 22, 2012 - 4:04 pm
I see. I will stick to the given output then. Thanks for letting me know.
 Sofie Wouters posted on Friday, March 23, 2012 - 2:03 am
I was wondering what to report when examining indirect effects and their significance. Do you report the unstandardized or the standardized coefficients?
Because my direct effects are only displayed in a path model with betas (standardized estimates), I thought it best to report the Sobel/delta-method test statistic and p-value from the standardized section of the MODEL INDIRECT output, but is this correct?
 Linda K. Muthen posted on Friday, March 23, 2012 - 10:26 am
Whether to report unstandardized or standardized coefficients should be guided by the journal you plan to publish in. Whichever you report, you should report their standard errors and p-values. You should not use unstandardized p-values with standardized coefficients.
 Sofie Wouters posted on Monday, March 26, 2012 - 6:44 am
OK, thank you!
 Dustin Pardini posted on Tuesday, May 15, 2012 - 6:34 am
When running bootstrapped standard errors to test for mediation using theta parameterization I am getting confidence intervals indicating a significant indirect effect for the unstandardized estimates, but non-significant indirect effects for the standardized estimates. I am curious why this is occurring and how I should handle this in terms of reporting results. I have historically reported unstandardized coefficients.
 Linda K. Muthen posted on Tuesday, May 15, 2012 - 10:21 am
Raw and standardized coefficients have different sampling distributions so can have different significance levels. If you usually report raw coefficients, I would do that. I would not decide what to report based on significance.
 Jo Brown posted on Thursday, June 14, 2012 - 3:22 am
How many bootstrap draws do you normally need to obtain accurate standard errors for the indirect effect?
 Linda K. Muthen posted on Thursday, June 14, 2012 - 11:15 am
This can differ depending on the data and model. I would experiment with different numbers until the results stabilize.
 Jo Brown posted on Friday, June 15, 2012 - 3:03 am
Thanks! I tried 1000, 5000, and 10000 draws, and I must say that there is not much difference among them. Could this be an argument in favour of using 1000 draws?
 Linda K. Muthen posted on Friday, June 15, 2012 - 12:54 pm
You might need only 500. Try that.
 Michelle Little posted on Friday, July 13, 2012 - 2:21 pm
Hello,

I have a question about using bias-corrected bootstrapping in mediation.

I ran a mediation model (multi-group, 1 latent IV, 2 latent mediators, 2 latent outcomes + 1 covariate) without bootstrapping and found several moderate to large direct effects that were significant (p < .05 to p < .001) in one group, as well as significant indirect effects in the same group. I used both ML and MLR and found this result. When I ran the same model with bootstrapping, some of those direct effects dropped to ns, yet some corresponding indirect effects are significant according to the bootstrapped result. I can't get a sense from my reading whether it is customary to report the significance of direct effects from the bootstrap results or from an analysis without bootstrapping. Does anyone know a reference on this point?
It seems odd to have ns direct effects + significant indirects for the same path... Not sure how to explain that in my results.

Any help would be appreciated.

Thank you.
 Bengt O. Muthen posted on Friday, July 13, 2012 - 4:20 pm
Bootstrap SEs are often bigger, so the ns direct effects are natural. I would report the bootstrapped SEs for all effects.

I don't see why it would be odd for a variable to have a ns direct effect and a sig indirect effect, if that is what you are asking - that represents complete mediation.
 Michelle Little posted on Friday, July 13, 2012 - 7:11 pm
Thanks for the fats reply. I should have pointed out that it was the b effect linking the mediator to the DV that was ns, thus the concern. I am accustomed to finding joint effects significant when the indirect effects are.

Thanks for your help,
ML
 Michelle Little posted on Friday, July 13, 2012 - 7:12 pm
Sorry,thanks for the "fast" reply
 Bengt O. Muthen posted on Friday, July 13, 2012 - 8:15 pm
See also our FAQ:

11/18/11: Indirect effect insignificant while both paths significant
 Michelle Little posted on Friday, January 11, 2013 - 1:06 pm
Hello,

I have two questions pertaining to a peer review for an article.

1. I bootstrapped the CIs and SEs of direct/indirect effects for a mediation model with latent variables. I therefore couldn't use MLR. Is bootstrapping robust to violations of multivariate normality? I am reluctant to use bias-corrected bootstrapping because of my sample size and the size of my effects (per the Fritz and MacKinnon recommendation).

2. I did a multiple-group mediation and compared unstandardized effects across groups. A reviewer asked about effect sizes. The standardized effects are not comparable across groups, so I don't want to report them.
What is the best thing to report in this situation for an effect size, particularly for an indirect effect?

any help would be appreciated,

Michelle
 Bengt O. Muthen posted on Friday, January 11, 2013 - 4:41 pm
1. Yes.

2. By effect size in this context, I assume you mean: As X increases 1 SD (or changes from control to tx), Y changes ? SD. To compute this, I would take the unstandardized model coefficient estimates (a, b, c) for each group and use the X and Y SDs to compute the group-specific effect sizes which won't be the same since the SDs aren't the same in the different groups even if the model coeff estimates are the same.
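One hedged sketch of how this could be carried out within the run: plug the sample SDs in as numeric constants in MODEL CONSTRAINT. The labels a1, b1, a2, b2 are assumed to come from group-specific MODEL statements, and the SD values are placeholders for the group-specific SDs of X and Y:

MODEL CONSTRAINT: NEW(es1 es2);
es1 = a1*b1*(1.25/2.50);   ! group 1: a*b times SD(X)/SD(Y)
es2 = a2*b2*(1.10/2.10);   ! group 2: a*b times SD(X)/SD(Y)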
 Michelle Little posted on Sunday, January 13, 2013 - 7:48 am
Thanks so much for your quick reply.

I just had one additional follow-up: The reviewer did not specify a particular effect size measure... I was thinking I could report the R-squared and standardized betas for each group. But, given that these effect sizes are all group-specific and my focus is entirely on group differences, I am reluctant to do this. In lieu of this, I could calculate a standardized effect based on the pooled SD of X/Y for each group? And for the indirect effect it would be ab*(SDxpooled/SDypooled). MacKinnon suggests ab*(SDx/SDy), but this is for a single-group model. Also, I think I could use the same method for the direct effects (the a effect and the b effect).
Does this sound ok?
 Linda K. Muthen posted on Monday, January 14, 2013 - 10:19 am
This seems reasonable. I don't think there is only one acceptable way to do this. You may want to ask this question on a general discussion forum like SEMNET.

Bengt
 Milena Batanova Payzant posted on Friday, January 18, 2013 - 9:07 am
Hello,

I am running a path analysis with my predictor and 3 mediator variables at time 1 and two outcome variables at time 2, with a sample of 499. In my ANALYSIS command, I indicated BOOTSTRAP = 10000; and in my OUTPUT command, I asked for STANDARDIZED MODINDICES (3.84) SAMPSTAT TECH1; CINTERVAL (BOOTSTRAP);

I want to make sure this is the correct syntax, and also, I'm unsure if the command is for percentile bootstrapping or bias-corrected bootstrapping. Given my sample size and that the predictor does NOT lead to the outcomes, which might be better to use in this case? If the CIs do not contain 0, can I assume mediation even though the predictor does not lead to the outcomes?
 Bengt O. Muthen posted on Friday, January 18, 2013 - 2:52 pm
If you say

CINTERVAL(BCBOOTSTRAP);

you get the bias-corrected version.

Before trusting the CIs, you want to make sure that your model fits, that is, that the direct effects are zero.
 Dexin Shi posted on Sunday, September 08, 2013 - 5:45 pm
Hello,

I am running a mediation analysis with a categorical mediator (no latent variable involved). I used both WLSMV and the BC bootstrap. However, for one path (from the independent variable (x) to the categorical mediator), the two methods did not agree with each other on the significance test. WLSMV gave p = 0.819, whereas the BC bootstrap CI was [0.399, 2.496]. To report the results, which one is recommended? Thank you for your help.
 Linda K. Muthen posted on Sunday, September 08, 2013 - 7:56 pm
WLSMV gives a symmetric confidence interval around a bootstrapped standard error. BCBOOTSTRAP gives a non-symmetric confidence interval around a bootstrapped standard error. This is why they may not agree.
 Stephanie Vezich posted on Saturday, September 21, 2013 - 1:44 am
Dear Drs. Muthen,

We have run several studies with this path model:

MODEL:
empathy ON cond (a1);
anc ON cond (a2);
liking ON empathy (b1)
anc (b2)
cond (c1);
gameinv ON liking (e1)
empathy (d1)
anc (d2)
cond(f1);

Gameinv is categorical, and we are using bootstrapping. Our reviewers were interested in an alternative theoretical model, leading us to test these path models:

MODEL:
empathy ON cond (a1);
liking ON cond (a2);
anc ON empathy (b1)
liking (b2)
cond (a3);
gameinv ON anc (c1)
empathy (d1)
liking (d2)
cond (a4);

MODEL:
empathy ON cond (a1)
liking (b1);
anc ON cond (a2)
liking (b2);
liking ON cond (c1);
gameinv ON empathy (e1)
anc (e2)
liking (f1)
cond(d1);

The relevant indirect paths are significant in all three. Is there a way to argue statistically that our original proposed model is better (such as comparing goodness of fit, although the only fit statistic reported from bootstrapping has been WRMR), or should we make a theoretical argument?

Any advice would be much appreciated.

Stephanie
 Linda K. Muthen posted on Saturday, September 21, 2013 - 11:53 am
You can compare the fit of the models. With the BOOTSTRAP option, only standard errors are bootstrapped so we don't give fit statistics. You can run the three models without the BOOTSTRAP option to obtain the fit statistics.
 Stephanie Vezich posted on Sunday, September 22, 2013 - 11:11 pm
Thanks for your quick response! Which fit statistic would you recommend comparing across models?

The fit statistics reported when I eliminate bootstrapping are Chi-square, RMSEA, CFI, and WRMR, but it's my understanding that these cannot be used to compare non-nested models.
 Linda K. Muthen posted on Monday, September 23, 2013 - 2:12 pm
There is no way of statistically testing which model is better than another unless the models are nested. You can use any of the fit statistics listed above for comparison purposes. I would not use WRMR as it is an experimental fit statistic.
 Stephanie Vezich posted on Monday, September 23, 2013 - 6:17 pm
Great, thanks so much for your feedback.
 Melissa Kull posted on Saturday, November 09, 2013 - 10:50 am
Using raw data with a small amount of missing data, I've been running basic path models with one exogenous predictor, three mediators (all continuous, mostly normally distributed), and one outcome. I've been trying to run these models using a sampling weight (to adjust for non-response), although I'm not interested in stratification or clustering, so I have not identified these data as complex. When I try to estimate bootstrapped SEs in these models, the models will run using ML but will not run using MLR (which is the default for these models when the bootstrap is not applied). Can someone explain why this is happening and suggest some references with information on selecting the most appropriate estimator? I've looked over some of the MacKinnon articles cited in this thread but am not sure whether my models are correctly specified with the ML estimator and bootstrapped SEs. Many thanks.
 Bengt O. Muthen posted on Saturday, November 09, 2013 - 6:24 pm
We don't do bootstrap with sampling weights. It is not clear how that should be done.
 Melissa Kull posted on Wednesday, November 13, 2013 - 8:06 pm
Dr. Muthen, Thanks for your response. It seems peculiar that the models are converging and providing estimates that have been similar to results from other iterations of these models that I've been running. I thought maybe the syntax was just ignoring the population weight, but when I took the weight out, the results were different. Despite this, I suppose these estimates are not to be trusted? Can you explain why or suggest a reading that indicates why bootstrapping should not work with population weights? This would be tremendously helpful as I move forward with trying to appropriately specify these models. Thanks very much for your assistance.
 Linda K. Muthen posted on Thursday, November 14, 2013 - 9:56 am
I think the issue here is that the BOOTSTRAP option is not available with TYPE=COMPLEX. It is available with the WEIGHT option. I think this explains what you are seeing.
 Patrícia Costa posted on Monday, February 24, 2014 - 6:33 am
Dear Drs Muthen,

I have run a simple mediation as follows:


MODEL:
Perf2 ON Res;
Perf2 ON TWE;

Model indirect:
Perf2 IND TWE;

From the output below, I conclude that my model is saturated, and the paths are nonsignificant.

I have two questions:

(1) Why is the model saturated? I am unable to see how it is possible that I have 80+ parameters to estimate...

(2) Based on this, do I conclude that I have no evidence to support the mediation hypothesis?

Thank you in advance.


Chi-Square Test of Model Fit

Value 0.000
Degrees of Freedom 0
P-Value 0.0000

RMSEA
Estimate 0.000
90 Percent C.I. 0.000 0.000
Probability RMSEA <= .05 0.000

CFI 1.000
TLI 1.000

Chi-Square Test of Model Fit for the Baseline Model

Value 4.964
Degrees of Freedom 2
P-Value 0.0836

SRMR
Value 0.000
 Linda K. Muthen posted on Monday, February 24, 2014 - 10:29 am
Your model is not a mediation model. It should be

MODEL:
Perf2 ON Res;
Res ON TWE;

if Res is the mediator.
 Patrícia Costa posted on Wednesday, February 26, 2014 - 5:05 am
Dear Dr. Muthén,

Thank you for your answer. My model is:

X = Res
Mediator = TWE
Y = Perf2

Is there anything wrong? My questions remain.
 Linda K. Muthen posted on Wednesday, February 26, 2014 - 6:35 am
Then the model should be

twe ON x;
perf2 ON twe;
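Put together with the variable names from the earlier post, one sketch of the full mediation input; the direct path from Res to Perf2 and the bootstrap settings are optional additions:

ANALYSIS: BOOTSTRAP = 1000;
MODEL: TWE ON Res;
Perf2 ON TWE Res;
MODEL INDIRECT: Perf2 IND TWE Res;
OUTPUT: CINTERVAL (BCBOOTSTRAP);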
 Betsy Lehman posted on Sunday, March 23, 2014 - 4:39 pm
Dear Drs. Muthen,

I am hoping to look at indirect effects in a path analysis. I know that I can just request indirect effects using the IND command; however, per the Preacher and Hayes (2008) article, it appears that the bootstrapping method is recommended particularly for samples that are not normally distributed (as mine is not). With that said, though, I have used the MLR estimator to help me manage my missing data and non-normality. When I tried to use bootstrapping as a way to identify indirect effects, I received the error message saying that bootstrapping can't be used with MLR.

I'm wondering what you might suggest in a situation like this. I'm imagining that I could either report the indirect effects that are provided without bootstrapping (e.g., just multiplying the constituent direct paths), or not use MLR and run the bootstrapping procedure to get the bootstrapped indirect effects.

Do you have thoughts about how to best proceed? Thank you so much!
 Linda K. Muthen posted on Monday, March 24, 2014 - 8:15 am
Use ML not MLR and you can do bootstrapping. All of the maximum likelihood estimators give the same parameter estimates. Bootstrapped standard errors are implemented in ML.

MLR and bootstrapped standard errors are usually very close.
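A minimal sketch of the ANALYSIS and OUTPUT settings this implies; the number of draws is just a placeholder:

ANALYSIS: ESTIMATOR = ML;
BOOTSTRAP = 1000;
OUTPUT: CINTERVAL (BCBOOTSTRAP);   ! or CINTERVAL (BOOTSTRAP) for percentile intervals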
 Betsy Lehman posted on Monday, March 24, 2014 - 9:43 am
Thanks for your help- will do!
 RuoShui posted on Sunday, April 06, 2014 - 5:58 pm
Dear Drs. Muthen,

I used ML and bootstrapping in my SEM and asked for STANDARDIZED in the output. However, there are no standard errors or p-values for the standardized parameter estimates. Is this normal?
Is there any way I can obtain them?

Thank you very much!
 Linda K. Muthen posted on Monday, April 07, 2014 - 6:19 am
Standardized estimates are not available using the BOOTSTRAP option.