Mediation and bootstrap standard errors
Mplus Discussion > Structural Equation Modeling >
 Daniel posted on Friday, June 18, 2004 - 5:16 am
I am running several mediation models in which my dependent variable is ordered categorical. I am using the bootstrap method to estimate standard errors for the indirect effects, with the bootstrap analysis command. I asked for confidence intervals and am given the appropriate intervals. Can I use these intervals along with the effects to estimate odds ratios, or is this incorrect if my mediator is continuous?
 bmuthen posted on Friday, June 18, 2004 - 8:42 am
Which estimator are you using, WLSMV or ML?
 Daniel posted on Friday, June 18, 2004 - 9:02 am
I was using the default estimator. I believe it is WLSMV since I am modeling with categorical dependent variables, although I may be wrong.
 bmuthen posted on Friday, June 18, 2004 - 9:20 am
The default WLSMV works with probit regressions so the estimates are not directly in odds ratio metric. The indirect effects are with respect to a continuous y* variable behind the dependent observed categorical variable, where y* is the response propensity. I think this idea has been discussed in David MacKinnon's work.
 Daniel posted on Saturday, June 19, 2004 - 9:00 am
I have a question regarding the indirect effect. If the two paths in the specific indirect effect (a to b [path a'] and b to c [path b']) are each significant (i.e., a' and b'), shouldn't the indirect effect [a' * b'] also be significant? Or is it possible for a' and b' to be significant without the specific indirect effect (a' * b') being significant?
 bmuthen posted on Saturday, June 19, 2004 - 11:59 am
Seems like this is possible because the indirect effect is a product of the two estimates and the SE of this product is a function not only of each of the two SEs, but also the covariance between the two estimates - which might be positive and therefore make the denominator of the test larger.
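In delta-method terms (the standard result used in MacKinnon's work), the variance of the product of two estimates is approximately:

```latex
\operatorname{Var}(\hat a \hat b) \approx \hat b^{2}\operatorname{Var}(\hat a)
 + \hat a^{2}\operatorname{Var}(\hat b)
 + 2\,\hat a\,\hat b\,\operatorname{Cov}(\hat a,\hat b),
\qquad
z = \frac{\hat a \hat b}{\sqrt{\operatorname{Var}(\hat a \hat b)}}
```

so a positive covariance term (with positive paths) inflates the standard error in the denominator of the z-test, which is why each path can be significant while the product is not.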
 Daniel posted on Monday, June 21, 2004 - 8:04 am
Ok, if I have the case where each path is significant, but the total indirect effect is not significant, what could I conclude about mediation?
 bmuthen posted on Monday, June 21, 2004 - 4:53 pm
I would say there is no significant mediation.
 Daniel posted on Tuesday, June 22, 2004 - 6:12 am
Thanks. My sample size is 913 for the study, and I am modeling mediation in an associative process model between two LGMs, each with two random effects (trend and intercept), and about 5 covariates. The observed measures are ordered categorical. What would you suggest I set the bootstrap to (i.e., BOOTSTRAP = ?) in the ANALYSIS command?
 Linda K. Muthen posted on Tuesday, June 22, 2004 - 8:21 am
There is no rule for this. You should experiment. Start with 250. Then try 500. Compare the standard errors to see if there is much difference.
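For reference, the number of draws is set with the BOOTSTRAP option of the ANALYSIS command; a minimal sketch (variable names hypothetical):

```
ANALYSIS:
  BOOTSTRAP = 250;          ! number of bootstrap draws; rerun with 500 and compare SEs
MODEL:
  m ON x;
  y ON m x;
MODEL INDIRECT:
  y IND m x;                ! indirect effect of x on y via m
OUTPUT:
  CINTERVAL (BOOTSTRAP);    ! bootstrap confidence intervals
```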
 Daniel posted on Tuesday, June 22, 2004 - 10:49 am
I ran the bootstrap at 250, 300, 350, 400, and 450, and it ran fine each time, with each increase resulting in a proportional increase in run time. However, as soon as I run the bootstrap at 500, it runs for hours without end. Last night I tried to run it with 1000 and left the program running all night after leaving work at about 4 PM. I returned to work the next morning, and it was still running. Why do you believe I cannot get a solution with values greater than or equal to 500? Does it have something to do with the associative processes or the categorical outcome variables?
 Linda K. Muthen posted on Tuesday, June 22, 2004 - 11:04 am
Why don't you send the 400 run output, the 500 run input, and the data to us so I can take a look at it.
 Tom Hildebrandt posted on Monday, June 27, 2005 - 8:09 pm
Using a WLSMV estimator, why would a chi-square test not be calculated using Dr. MacKinnon's bias-corrected bootstrap method of estimating SE and confidence intervals in a path analysis with multiple mediational pathways?
 Linda K. Muthen posted on Tuesday, June 28, 2005 - 8:04 am
There is no reason. We have so far only implemented bootstrap for standard errors.
 Tom Hildebrandt posted on Tuesday, June 28, 2005 - 9:22 am
Thank you very much for your quick response.

Would it then be appropriate to report the chi-square goodness-of-fit test calculated when not using the bootstrap function, as long as the WLSMV estimator is used?
 Linda K. Muthen posted on Wednesday, June 29, 2005 - 7:19 am
Yes but you should make it clear that although the standard errors are bootstrapped, the chi-square is not.
 Charles Green posted on Monday, February 20, 2006 - 5:58 pm
I am currently running a mediational structural equation model dealing with domestic violence. The observed measures are primarily indices derived from self-report scales with ranges as broad as 0 to 177. (The item endorsements are ordinal values, each of which represents a frequency range for specific behaviors: 0 = 0-10, 1 = 10-20, etc. These items are then summed to produce the indices of interest for the current model.) I decided to model the data as continuous censored, but received the following error message:


Internal Error Code: GH1006.
An internal error has occurred. Please contact us about the error,
providing both the input and data files if possible.

I am forwarding the requested information to you.

In the meantime however, I am trying to resolve two questions regarding the model:

1) Regarding overall fit indices, I am contemplating the use of the MLR estimator, treating the data as continuous.
a) Given non-normal data and censoring from below (at zero), to what degree might this yield misleading results?
b) Does applying a Bollen-Stine bootstrap procedure provide a means to address this more effectively?

2) Regarding the standard errors of the parameter estimates in the model, I would prefer to use a bootstrap procedure, since this will provide me with confidence intervals for the indirect effects.
a) Do you detect anything problematic with using the MLR approach for the overall model fit indices, followed by reporting confidence intervals for the parameter estimates derived from a bootstrap procedure?

Any guidance you might offer would be greatly valued.
 bmuthen posted on Monday, February 20, 2006 - 6:52 pm
a. With a high degree of censoring (say > 25-50%), the SEs and chi-square based fit indices may be off. The basic problem is that the linear model assumed is wrong with strong censoring, so non-normality robustness in SEs and chi-square doesn't help. Overall fit indices are perhaps less important than getting the right parameter estimates and checking fit by 2*LL for nested, neighbouring models.

b. I don't think so.

2. That's fine.

a. Not in principle.
 Charles Green posted on Monday, February 20, 2006 - 7:24 pm
Thank you so much for your prompt answer. If I might clarify: indeed, I do have proportions of censored observations that are above 25%.

The error message I reported evidently occurs in version 3.01 but has since been corrected in version 3.14. If I run the analysis using the updated (v. 3.14) program, specifying which variables are censored, I would obtain appropriate log-likelihoods from which -2*LL could be used for tests of nested models.

Am I correct in saying that the log-likelihoods obtained without the censoring specification would be misleading?

Having obtained the -2*LL, I can use the Baron and Kenny (1986) approach to evaluating mediation. However, would there be a problem with removing the censoring specification and bootstrapping the parameter estimates so that I can use the MacKinnon (2004) approach to obtaining indirect effects and confidence intervals?

Finally, is there some reference you would recommend where I might find a primer on bootstrapping specifically regarding how to choose among the different bootstrap confidence intervals?

Many thanks.
 bmuthen posted on Tuesday, February 21, 2006 - 3:23 pm
Yes, you are correct that the loglikelihood would be misleading if censoring is not taken into account, such as when the censored approach is not used.

You should use the same model for parameter estimation and testing as for the bootstrapping.

Efron, B. & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.

MacKinnon, D.P., Lockwood, C.M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128.
 Daniel Rodriguez posted on Wednesday, May 31, 2006 - 10:36 am
I am back on the mediation analysis trail. This time, unlike most of the data I analyze, my sample size is not that large (n=376), and my ultimate outcome (smoking) is ordered categorical with four levels. I am running an SEM with measured variables only (no factors). Is it better to calculate standard errors with bootstrapping or with the Delta method in this case, given the relatively small sample size?
 Linda K. Muthen posted on Thursday, June 01, 2006 - 6:40 am
I would use the default standard errors for the estimator you choose. I don't think that you would benefit from bootstrapping.
 Daniel Rodriguez posted on Monday, July 17, 2006 - 9:39 am
Hello Linda and Bengt. I am asked by a reviewer to estimate the size of an effect in my model. I actually sent you this data before. My finding is that the significant indirect effect with 95% confidence interval is .054(.008,.101). You mentioned that this is a small effect. How should I word this in the results/discussion section to indicate the strength of this effect? I'd appreciate any clues if you have them. By the way, this was calculated with the delta method.
 Bengt O. Muthen posted on Monday, July 17, 2006 - 5:50 pm
To know how small it is, wouldn't you want to evaluate it in terms of the SD of the independent and dependent variables, so using a standardized value?
 Daniel Rodriguez posted on Tuesday, July 18, 2006 - 5:00 am
Ok, I see. Thank you very much.
 Yi-fu Chen posted on Friday, July 21, 2006 - 7:13 am
Hi, Dr. Muthen,

I am working on a model to test mediation effects. I have two predictors, four mediators and two outcomes. The outcomes are all continuous. I've tried to use MODEL INDIRECT with BOOTSTRAP to estimate the standard errors of the indirect effect.

My question is this:
When I ran a recursive model in which outcome 1 predicted outcome 2, the MODEL INDIRECT output showed the standard errors of the indirect effects for the predictors via each mediator.
However, when I estimated the reciprocal relationship between the two outcomes, the output showed only the total indirect effect for each predictor, with no printout of the contribution of each mediator.

I don't know whether this is what Mplus should give when a reciprocal model is estimated.
Is there any way I can get more detailed indirect effect information for this kind of model?

I am using MPLUS 3.0.

 Linda K. Muthen posted on Saturday, July 22, 2006 - 11:33 am
I don't think this is possible. See the Bollen SEM book to check.
 Marco Haferburg posted on Monday, August 14, 2006 - 12:34 am
Dear Mplus team, I have read in an article by MacKinnon and colleagues that there are different ways to calculate SEs for indirect effects using the delta method (e.g., Freedman & Schatzkin, 1992, or Olkin & Finn, 1995). Which one is implemented in Mplus?

MacKinnon, D.P., Lockwood, C.M., Hoffman, J.M., West, S.G. & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7 (1), 83-104.
 Linda K. Muthen posted on Monday, August 14, 2006 - 8:25 am
We calculate standard errors for indirect effects using both the Delta method and the bootstrap, as described in the MacKinnon et al. article. I am not aware that there are different Delta methods.
 Claire Hofer posted on Thursday, December 07, 2006 - 8:02 am
Could you tell me about the differences in the bootstrap method between Mplus version 3 and version 4? I am getting very different results: my model run in version 4 with the Bollen-Stine bootstrap method closely matches what I get in the regular model using ML or MLR estimation, but when I run the model in version 3 with the bootstrap method there, I get completely different results. We do have missing data. Could you tell me a little bit about why the results might be so different?
Thank you.
 Linda K. Muthen posted on Thursday, December 07, 2006 - 9:06 am
I don't know offhand of any reason there would be a difference. If you send your input, data, output, and license number to us, I can take a look at it.
 Garth Rauscher posted on Wednesday, August 08, 2007 - 8:30 am
Dear Drs. Muthen,

We are running a mediation model with three exogenous variables - (1 continuous and two indicators for race/ethnicity), two mediating variables (one continuous and one dichotomous) and one outcome variable (dichotomous). For different paths we are calculating the mediation proportion, defined as the indirect effect divided by the total effect (indirect + direct effect). We would like to be able to calculate confidence intervals for mediation proportion by using the estimates from each individual bootstrapped dataset. The question is: Can M-Plus output into a separate dataset the individual bootstrapped estimates of the direct and indirect effects for a given model?
 Linda K. Muthen posted on Tuesday, August 14, 2007 - 4:32 pm
Mplus does not save indirect effects and does not save results from each bootstrap replication.
 Emily Blood posted on Tuesday, October 09, 2007 - 6:15 pm
Within the MC facility, is there a way to output indirect effect values and standard errors of indirect effects for each MC replication? I am currently outputting the parameters from each MC replication, but am not able to output the indirect effects and their standard errors from each replication, only the mean and se of all indirect effects from all MC replications. Is this possible in Mplus?
 Linda K. Muthen posted on Wednesday, October 10, 2007 - 1:34 pm
No, results from MODEL INDIRECT are not saved. The only way to obtain them would be to save all of the data sets and analyze them one at a time.
 Eric posted on Monday, June 16, 2008 - 10:20 pm
I am using cinterval(bcbootstrap) to get confidence intervals for indirect effects in a path analysis model with 4 mediators. Though I get confidence intervals for the specific indirect effects, the confidence intervals for the rest of the path estimates are all zeros. Does this mean that I should not trust the CIs for the specific indirect effects?
 Linda K. Muthen posted on Tuesday, June 17, 2008 - 6:10 am
It sounds like you are using an old version of the program. I think there may have been a problem some time ago. I suggest using Version 5.1.
 Eric posted on Tuesday, June 17, 2008 - 9:56 am
Is it possible to get more decimal places for the confidence intervals when using CINTERVAL(BCBOOTSTRAP)? One of my confidence intervals ranges from 0.000 to 0.050, and I would like to be able to say that the effect is significant. I have tried using the SAVEDATA command, but I am not sure what to ask for, since the RESULTS option does not seem to include the confidence intervals. Thanks for your help.
 Linda K. Muthen posted on Tuesday, June 17, 2008 - 12:06 pm
Confidence intervals are not saved. You can rescale your variables by dividing them by a constant using the DEFINE command.
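A sketch of the rescaling idea, assuming a hypothetical predictor x whose indirect effect is too small for the three-decimal output:

```
DEFINE:
  x = x/10;   ! dividing x by 10 multiplies the x->m path, and hence the
              ! indirect effect, by 10, so more of it is visible at 3 decimals
```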
 krisitne amlund hagen posted on Tuesday, September 09, 2008 - 2:33 am
Dear Drs. Muthen,
We're running a mediational SEM model with an X, a Y, and two mediators, Ma and Mb. When we run two separate models, both Ma and Mb fully mediate the X-->Y relationship (using bootstrapped standard errors). But when we model both mediators in the same SEM model, the Ma mediator is no longer significantly related to Y. All other paths are significant, including X-->Ma. We also ran a regression analysis and found that Ma predicts unique variance in Y after controlling for both X and Mb.
1. What can we conclude about Ma as a mediator of X-->Y?
2. Could the finding that only Mb (and not Ma) mediates X-->Y when tested in the same model be a statistical artifact? And if so, how does that happen?
3. Alternatively, if it is not an artifact, can we conclude that Mb is a more important mediator than Ma when compared in the same model? How should one then report that Ma functioned as a mediator when tested alone and, when tested in a regression model, was found to predict unique variance in Y?

I thank you in advance and for a great discussion board.
 Linda K. Muthen posted on Tuesday, September 09, 2008 - 9:00 am
If Ma and Mb are highly correlated, there may not be anything left in y to predict beyond what one of the mediators predicts.
 krisitne amlund hagen posted on Wednesday, September 10, 2008 - 7:09 am
Thank you so much for your prompt reply.
That's probably right, that the high correlation between Ma and Mb messes this up. My question is still, though: what can we conclude about Mb as a mediator? Is it an artifact, that is, could it just as easily have been Ma that ended up with the significant path, or neither?
If Ma and Mb are so highly correlated that the Ma --> Y path becomes non-significant, that doesn't explain why the Mb --> Y path is significant, nor why we found that the unique contribution of Ma was significant after controlling for X and Mb in a regression model. Or does it?
 Bengt O. Muthen posted on Wednesday, September 10, 2008 - 8:44 am
This topic, without the mediation angle, is discussed in the linear regression literature under the heading of multicollinearity. You may want to take a look at that. I don't think it is possible to draw conclusions about the joint role of Ma and Mb in such a situation, only that each, entered separately, is a mediator. You may also want to consult the new mediation book by David MacKinnon to see if he has some wisdom on this topic.
 Metin Ozdemir posted on Friday, November 14, 2008 - 4:49 pm
I have a question regarding the Mplus output. I used bootstrapping to test a mediation effect. In the output for the MODEL INDIRECT command, I have columns for "Estimates S.E. Est./S.E. StdYX StdYX SE StdYX/SE."

Can you please explain what StdYX, StdYX SE, and StdYX/SE refer to?

Which one is the test of indirect effect?


 Linda K. Muthen posted on Friday, November 14, 2008 - 4:57 pm
The test is the ratio of the estimate to the standard error of the estimate. Please see Chapter 17 for a description of the columns of the Mplus output and information about the various standardizations.
 miriam gebauer posted on Sunday, November 01, 2009 - 7:57 am
Can I use FIML to test mediation (with bootstrapping) or to model interactions (both for latent variables)?

And in the case of multiple imputation, how do I treat the fit values and the indirect and direct coefficients? Can I just use the so-called Rubin formula (which would be like a mean)?

Thanks for your help, Miriam
 miriam gebauer posted on Sunday, November 01, 2009 - 8:26 am
I would like to explain my post above a little more; maybe it is not so clear what I am trying to ask, so please excuse that:
I am trying to model mediation (with bootstrapping) and moderation (with an interaction), but I have missing values that I would like to impute. I now have 5 data sets and am doing my analysis with each of them, because I cannot read in the 5 data sets at once; the aforementioned models won't allow that. But I don't know how to handle the coefficients or fit values from those 5 analyses. Could you give me advice on how to handle this? (Is that done with Rubin's formula?)
Further, I read that FIML is an appropriate way to handle missing data, but as far as I have read here, it is used more in multilevel or group analysis. So my idea was that this could be a way to address my issue.
Thanks for your help, Miriam
 Linda K. Muthen posted on Sunday, November 01, 2009 - 9:43 am
You can use the IMPUTATION option of the DATA command to analyze a set of imputed data sets. Correct parameter estimates and standard errors are calculated. Fit statistics are provided.
 miriam gebauer posted on Monday, November 02, 2009 - 12:50 am
Maybe I did something wrong, but I had problems using this command for modeling interaction or mediation (bootstrapping).
 miriam gebauer posted on Monday, November 02, 2009 - 1:17 am
Maybe I did something wrong, but I had problems using the IMPUTATION command. This is the error I get:

The same error shows up when I try to model an interaction.
So that's why I am modeling it with each of the five data sets separately, and I would like to ask whether the fit values can be integrated by calculating the mean of them?
 Linda K. Muthen posted on Monday, November 02, 2009 - 9:22 am
You cannot use MODEL INDIRECT with TYPE=IMPUTATION, but you should be able to use XWITH. I would use MODEL CONSTRAINT with TYPE=IMPUTATION to define the indirect effects. Although the parameter estimates are simply an average across imputed data sets, the standard errors and chi-square are not and cannot be computed by hand. If you have further problems along this line, please send them along with your license number to us.
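A minimal sketch of this approach, assuming hypothetical variables x, m, y and an imputation list file implist.dat:

```
DATA:
  FILE = implist.dat;       ! file listing the imputed data sets
  TYPE = IMPUTATION;
MODEL:
  m ON x (a);
  y ON m (b);
  y ON x;
MODEL CONSTRAINT:           ! used instead of MODEL INDIRECT, which is not
  NEW(ind);                 ! available with TYPE=IMPUTATION
  ind = a*b;
```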
 miriam gebauer posted on Wednesday, November 04, 2009 - 1:49 am
Thank you so much for your help. I will first try to model it with the commands you recommended. If this will not work out - I will come back to you later and send you my data.
 Marco DiBonaventura posted on Friday, February 11, 2011 - 2:23 pm

I'm running a fairly basic mediation with a dichotomous IV, a dichotomous mediator, and a non-normal continuous DV (count data). I'm using the bootstrapping command and requesting the indirect effect. However, I can only seem to do this with the WLSMV estimator and was not able to specify a negative binomial distribution for the DV.

1. Is the skewness of the DV a problem, given that I'm bootstrapping? If so, is there anything to be done, since I'm unable to execute the (NB) command?

2. The WLSMV estimates are very different from the ML estimates. Is the interpretation of the estimates the same? Can I exponentiate them to get odds ratios for the IV --> M relationship?

Thanks for any help!
 Linda K. Muthen posted on Sunday, February 13, 2011 - 2:54 pm
1. In Mplus, indirect effects with categorical mediators can be computed only using weighted least squares estimation.

2. WLSMV estimates are in a probit metric. ML estimates are in a logit metric. WLSMV estimates should not be exponentiated.
 yan liu posted on Sunday, August 28, 2011 - 9:34 am
Hi, Linda and Bengt

I am running a multilevel SEM mediation model:

I am trying to calculate the indirect effects and test whether they are significant, using the formula provided by Hayes (2009). I found that for the between level, although none of the individual mediation effects was significant, the sum (the total indirect effect) turned out to be significant, which does not make sense to me. Is the way I test "indtotw" and "indtotb" correct? Thanks!

MODEL:
%WITHIN%
PNS ON teach (a1w);
movat ON teach (a2w);
movat ON PNS (a3w);
engage ON PNS (b1w);
engage ON movat (b2w);
engage ON teach;

%BETWEEN%
PNS ON teach (a1b);
movat ON teach (a2b);
movat ON PNS (a3b);
engage ON PNS (b1b);
engage ON movat (b2b);
engage ON teach;

MODEL CONSTRAINT:
NEW(ind1w ind2w ind3w ind1b ind2b ind3b indtotw indtotb);


indtotw = ind1w+ind2w+ind3w;
indtotb = ind1b+ind2b+ind3b;
 yan liu posted on Sunday, August 28, 2011 - 11:59 am
Just want to follow up on the question I just posted. The equation for computing the total effect of several mediation pathways can be found in Hayes (2009).

Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76(4), 408-420.

Thanks a lot!
 Bengt O. Muthen posted on Sunday, August 28, 2011 - 2:20 pm
It looks correct to me. If your sample size is small you may want to try Bayesian analysis which allows indirect effects to have a non-normal distribution.
 yan liu posted on Sunday, September 11, 2011 - 11:33 am
Hi, Bengt

Thank you so much for your reply. Following up your suggestion to my question (posted above, Aug.28), I tried Bayes estimation. I added the following code to my original Mplus syntax

estimator = bayes;
processors = 2;
fbiter = 10000;

However, I got the following error message:
"Unrestricted x-variables for analysis with TYPE=TWOLEVEL and ESTIMATOR=BAYES must be specified as either a WITHIN or BETWEEN variable. The following variable cannot exist on both levels: TEACH"

(TEACH=predictor, PNS=mediator, movat=outcome)

Is something wrong with my code? Or can I not use Bayes estimation for Preacher et al.'s multilevel SEM mediation approach because Bayes estimation doesn't allow a predictor to be at both levels? Thanks.
 Bengt O. Muthen posted on Tuesday, September 13, 2011 - 9:14 am
Bayes does not do the latent variable decomposition of the predictor variable, but uses the usual MLM approach. This means that you would have to specify TEACH as a Within variable. If you want it on Between as well, you have to create the cluster-mean version of the variable yourself (there is an Mplus option for this) and enter it as a Between variable.
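A sketch of that setup, with hypothetical names (teachm for the cluster mean, class for the cluster variable):

```
VARIABLE:
  USEVARIABLES = teach PNS movat teachm;   ! DEFINE-created variable goes last
  CLUSTER = class;
  WITHIN  = teach;          ! raw predictor used on the within level
  BETWEEN = teachm;         ! cluster mean used on the between level
DEFINE:
  teachm = CLUSTER_MEAN(teach);
```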
 Michelle Finney posted on Thursday, October 06, 2011 - 4:24 pm
I am estimating bootstrap CIs for testing indirect effects hypotheses. Is it possible to have contradictory results between "Total indirect effects" and "Standardized total indirect effects"? In the Mplus output, using the first as a reference, the indirect effects are significant, but using the second as a reference, they are not significant.
Thanks for your response.
 Linda K. Muthen posted on Thursday, October 06, 2011 - 9:27 pm
Standardized and raw parameters have different sampling distributions and can therefore have different significance levels.
 Patrick A. Palmieri posted on Monday, October 17, 2011 - 8:26 am
Hello. I have a multiple mediation path model with 1 IV, 2 meds, and 1 outcome. I want the specific indirect effect for each mediator and to contrast them to see if one is larger than the other. I also would like to do a simulation to determine the sample size needed to power this study.

Here is a program I am working from.

TITLE: 2 mediator example with contrast
DATA: FILE IS data.dat;
MODEL: m1 ON x(a1); m2 ON x(a2); y ON m1(b1);
y ON m2(b2); y ON x; m1 WITH m2;
MODEL INDIRECT: y IND m1 x; y IND m2 x;
MODEL CONSTRAINT: NEW(a1b1 a2b2 con);
a1b1=a1*b1; a2b2=a2*b2; con=a1b1-a2b2;

It is similar to Mplus example program 3.16, but the latter doesn't include contrasts of specific indirect effects, and it specifies bootstrap in the ANALYSIS section rather than in the OUTPUT section as in the code listed above. Do these programs otherwise do the same thing?

Also, I didn't see a MC counterpart to example 3.16 in the MPlus example programs folder - does one exist? or perhaps I accidentally deleted it sometime. Can you include that code or provide other assistance that might help with determining sample size for this analysis?
Thank you.
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 11:21 am
The data for Example 3.16 comes from Example 3.11. The BOOTSTRAP option is not available with the MONTECARLO command.

If you want to test two indirect effects, define them in MODEL CONSTRAINT and use MODEL TEST to see if they are different.
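A sketch of that approach for the two-mediator model above (labels as in the posted program):

```
MODEL:
  m1 ON x (a1);
  m2 ON x (a2);
  y  ON m1 (b1);
  y  ON m2 (b2);
  y  ON x;
  m1 WITH m2;
MODEL CONSTRAINT:
  NEW(a1b1 a2b2);
  a1b1 = a1*b1;             ! specific indirect effect via m1
  a2b2 = a2*b2;             ! specific indirect effect via m2
MODEL TEST:
  a1b1 = a2b2;              ! Wald test that the two indirect effects are equal
```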
 Patrick A. Palmieri posted on Wednesday, October 19, 2011 - 1:48 pm
Thank you.

Is there a way to perform a simulation in Mplus to calculate the sample size necessary to detect a specific indirect effect of a certain size in a multiple mediation path model?
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 3:00 pm
Yes. Use mcex3.11.inp as a starting point.
 Scott R. Colwell posted on Friday, November 11, 2011 - 12:55 pm
Is there a way to label the indirect effect (of say x -> m -> y) in a mediation in order to test the equality of the indirect effect across multiple groups using the Model Test command?
 Linda K. Muthen posted on Friday, November 11, 2011 - 1:59 pm
You would have to label the components of the indirect effect in the group-specific MODEL commands and define the indirect effects in MODEL CONSTRAINT.
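A sketch, assuming two hypothetical groups g1 and g2 set up with the GROUPING option:

```
MODEL:
  m ON x;
  y ON m x;
MODEL g1:
  m ON x (a1);
  y ON m (b1);
MODEL g2:
  m ON x (a2);
  y ON m (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2);
  ind1 = a1*b1;             ! indirect effect in group g1
  ind2 = a2*b2;             ! indirect effect in group g2
MODEL TEST:
  ind1 = ind2;              ! Wald test of equality across groups
```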
 Scott R. Colwell posted on Friday, November 11, 2011 - 2:26 pm
Thank you. Do you mean calculate the product of the coefficients of the paths in MODEL CONSTRAINT?
 Heike B. posted on Thursday, December 15, 2011 - 4:06 am
I am using WLSMV to estimate a manifest model with categorical endogenous variables (4 levels each). My sample is small (360 cases).

Linda recommended in a similar case earlier in this thread to use the default standard errors rather than bootstrap standard errors.

With respect to the p-values:

1.) Should I use the p-values of the default estimation to decide on significance, or the confidence intervals / p-values from the bootstrap?

2.) What would be the rationale behind the recommendation?

3.) If I can use the bootstrap confidence intervals, is there a possibility to derive one-sided intervals from the two-sided intervals?

4.) If not, is there another way to determine one-sided confidence intervals in Mplus?

Many thanks in advance.
 Linda K. Muthen posted on Thursday, December 15, 2011 - 11:25 am
It's really up to you to decide on which p-values to use. You would need to investigate how to compute one-sided confidence intervals. Mplus does not compute them.
 Heike B. posted on Thursday, December 15, 2011 - 12:18 pm
Thank you, Linda. Does this mean that both the default estimation and the bootstrap work similarly well under my circumstances?

I mean, are there some guidelines for when one or the other approach produces better results?

Many thanks in advance.

 Linda K. Muthen posted on Thursday, December 15, 2011 - 12:21 pm
It's difficult to say. All circumstances differ in many respects. You would need to do a Monte Carlo study that reflects your situation to answer that question.
 Kristine Amlund Hagen posted on Thursday, January 26, 2012 - 4:19 pm
Dear Drs. Muthen,
We are running a mediational model with bootstrapped standard errors in which variable X is predicting variable M, which in turn predicts Y. This model looks fine, with good fit indices, all paths and indicators significant and the total indirect effects also significant. Because the data are cross sectional and because it could be argued that the direction of effects is actually X to Y to M, we wanted to see if this alternative model would also fit the data. Output for this model showed that the fit indices were very good, but some of the indicators were no longer significant, and the Y to M path was no longer significant (even though in the original X-M-Y model the M to Y path was significant and the bivariate correlations between all the indicators are significant).
1. Does this mean that our original model is in fact better?
2. Why would the Y-M path no longer be significant in the alternative model?
3. Why is the number of bootstrap draws less than what we specified in the input?
4. We sometimes get the 'THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE' message and see that the residual variance of one of the indicators is negative; what is the appropriate action to deal with this?
5. Finally, why is the 2-tailed p-value for the unstandardized estimates often different from the 2-tailed p-value for the StdY and StdYX estimates?
Thank you.
 Bengt O. Muthen posted on Friday, January 27, 2012 - 8:07 pm
1-2. The two models are different and fit the covariance matrix of all the variables differently. In your case, you should not base your choice of model on fit but on substantive reasoning.

3. Send output to Support

4. This indicates that the model needs to be modified.

5. Unstandardized and standardized coefficients have different sampling distributions, and the assumption of a normal distribution may be approximated differently well in the two cases. If they differ, it may be better to use the unstandardized results.
 Kristine Amlund Hagen posted on Sunday, January 29, 2012 - 2:27 pm
Ok. Thank you.
Regarding question 1, the theory isn't all that clear here. While we have reason to believe that the X-M-Y model is the better one substantively, it would be nice to test the alternative model as well. Since the two models are not nested, we figured we could look at the fit indices and path coefficients as an indication of which model best fits the data.
 Christoph Weber posted on Tuesday, February 28, 2012 - 7:00 am
Dear Drs. Muthen,
I'm running a mediation model (using the delta method). There are some significant indirect effects, but they are really small (e.g., standardized b = .04, p < .05). Would you report such small effects? Is there a rule of thumb? It's clear that indirect effects tend to be small, but is there a cut point?


Christoph Weber
 Bengt O. Muthen posted on Wednesday, February 29, 2012 - 8:47 am
I would also report small effects. The size of effects can be discussed in Cohen's terms.
 Xu, Man posted on Thursday, March 22, 2012 - 8:15 am
Could I just follow up on this thread:

Since the regular significance testing of the mediation effect might be biased, I tried to get a confidence interval from bootstrapping (I created the mediation effect using NEW and MODEL CONSTRAINT).

1. There are apparently two bootstrapping options, CINTERVAL(BCBOOTSTRAP) and CINTERVAL(BOOTSTRAP). Which one is more suitable?

2. With a sample size of around 3000 to 4000, what would be an appropriate number of bootstrap draws?

I could not request MODEL INDIRECT results because the analysis uses TYPE=RANDOM.


 Xu, Man posted on Thursday, March 22, 2012 - 10:57 am
Oh, actually BOOTSTRAP cannot be used with TYPE=RANDOM.
But I had to have TYPE=RANDOM because I used TSCORES to adjust for time of data collection - there is an embedded second-order growth curve model and I am looking at mediators of the growth intercept and slopes.

Is there any way around this, to get good standard errors for the mediation effect?
 Linda K. Muthen posted on Thursday, March 22, 2012 - 2:11 pm
I would use BCBOOTSTRAP with 500-1000 draws.
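As a sketch, a basic run of this kind - with placeholder names x (predictor), m (mediator), and y (outcome), none of which are from this thread - could look like:

```
ANALYSIS:
  BOOTSTRAP = 1000;          ! number of bootstrap draws
MODEL:
  m ON x;
  y ON m x;
MODEL INDIRECT:
  y IND m x;
OUTPUT:
  CINTERVAL(BCBOOTSTRAP);    ! bias-corrected bootstrap confidence intervals
```

Note that the BOOTSTRAP option cannot be combined with TYPE=RANDOM, which is the constraint discussed in this exchange.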
 Xu, Man posted on Thursday, March 22, 2012 - 3:51 pm
Thank you. But it seems BCBOOTSTRAP cannot be used together with TYPE=RANDOM? In this situation, is there any way to get bootstrapped standard errors for parameters created using NEW and MODEL CONSTRAINT (the mediation effect in my case)?

Thank you!
 Linda K. Muthen posted on Thursday, March 22, 2012 - 3:59 pm
No, there is not.
 Xu, Man posted on Thursday, March 22, 2012 - 4:04 pm
I see. I will stick to the given output then. Thanks for letting me know.
 Sofie Wouters posted on Friday, March 23, 2012 - 2:03 am
I was wondering what to report when examining indirect effects and their significance. Do you report the unstandardized or the standardized coefficients?
Because my direct effects are only displayed in a path model with betas (standardized estimates), I thought it best to report the Sobel/delta method test statistic and p-value from the standardized section of the MODEL INDIRECT output, but is this correct?
 Linda K. Muthen posted on Friday, March 23, 2012 - 10:26 am
Whether to report unstandardized or standardized results should be guided by the journal you plan to publish in. Whichever you report, you should report their standard errors and p-values. You should not use unstandardized p-values with standardized coefficients.
 Sofie Wouters posted on Monday, March 26, 2012 - 6:44 am
OK, thank you!
 Dustin Pardini posted on Tuesday, May 15, 2012 - 6:34 am
When running bootstrapped standard errors to test for mediation using theta parameterization I am getting confidence intervals indicating a significant indirect effect for the unstandardized estimates, but non-significant indirect effects for the standardized estimates. I am curious why this is occurring and how I should handle this in terms of reporting results. I have historically reported unstandardized coefficients.
 Linda K. Muthen posted on Tuesday, May 15, 2012 - 10:21 am
Raw and standardized coefficients have different sampling distributions so can have different significance levels. If you usually report raw coefficients, I would do that. I would not decide what to report based on significance.
 Jo Brown posted on Thursday, June 14, 2012 - 3:22 am
how many bootstrap cycles do you normally need to obtain accurate standard errors for the indirect effect?
 Linda K. Muthen posted on Thursday, June 14, 2012 - 11:15 am
This can differ depending on the data and model. I would experiment with different numbers until the results stabilize.
 Jo Brown posted on Friday, June 15, 2012 - 3:03 am
Thanks! I tried 1000, 5000, and 10000, and I must say there is not much difference between these numbers of draws. Could this be an argument in favour of using 1000 draws?
 Linda K. Muthen posted on Friday, June 15, 2012 - 12:54 pm
You might need only 500. Try that.
 Michelle Little posted on Friday, July 13, 2012 - 2:21 pm

I have a question about using bias-corrected bootstrapping in mediation.

I ran a mediation model (multi-group, 1 latent IV, 2 latent mediators, 2 latent outcomes + 1 covariate) without bootstrapping and found several moderate to large direct effects that were significant (p < .05 to p < .001) in one group, as well as significant indirect effects in the same group. I found this result with both ML and MLR. When I ran the same model with bootstrapping, some of those direct effects dropped to ns, yet some corresponding indirect effects are significant according to the bootstrapped results. I can't get a sense from the literature whether it is customary to report the significance of direct effects from the bootstrap results or from an analysis without bootstrapping. Does anyone know a reference on this point?
It seems odd to have ns direct effects + significant indirects for the same path... Not sure how to explain that in my results.

Any help would be appreciated.

Thank you.
 Bengt O. Muthen posted on Friday, July 13, 2012 - 4:20 pm
Bootstrap SEs are often bigger, so the ns direct effects are natural. I would report the bootstrapped SEs for all effects.

I don't see why it would be odd for a variable to have a ns direct effect and a significant indirect effect, if that is what you are asking - that represents complete mediation.
 Michelle Little posted on Friday, July 13, 2012 - 7:11 pm
Thanks for the fats reply. I should have pointed out that it was the b effect linking the mediator to the DV that was ns, thus the concern. I am accustomed to finding joint effects significant when the indirect effects are.

Thanks for your help,
 Michelle Little posted on Friday, July 13, 2012 - 7:12 pm
Sorry,thanks for the "fast" reply
 Bengt O. Muthen posted on Friday, July 13, 2012 - 8:15 pm
See also our FAQ:

11/18/11: Indirect effect insignificant while both paths significant
 Michelle Little posted on Friday, January 11, 2013 - 1:06 pm

I have two questions pertaining to a peer review of an article.

1. I bootstrapped the CIs and SEs of direct/indirect effects for a mediation model with latent variables, so I couldn't use MLR. Is bootstrapping robust to violations of multivariate normality? I am reluctant to use bias-corrected bootstrapping because of my sample size and the size of the effects (per the Fritz and MacKinnon recommendation).

2. I did multiple-group mediation and compared unstandardized effects across groups. A reviewer asked about effect sizes. The standardized effects are not comparable across groups, so I don't want to report them.
What is the best thing to report in this situation for an effect size, particularly for an indirect effect?

any help would be appreciated,

 Bengt O. Muthen posted on Friday, January 11, 2013 - 4:41 pm
1. Yes.

2. By effect size in this context, I assume you mean: as X increases 1 SD (or changes from control to treatment), Y changes by ? SD. To compute this, I would take the unstandardized model coefficient estimates (a, b, c) for each group and use the group's X and Y SDs to compute group-specific effect sizes. These won't be the same across groups, since the SDs differ between groups even when the model coefficient estimates are the same.
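One way to sketch the computation, assuming the group-1 unstandardized paths are labeled a1 and b1 and plugging in that group's sample SDs as fixed constants (the labels and the SD values 0.8 and 1.5 are made up for illustration):

```
MODEL CONSTRAINT:
  NEW(es1);
  es1 = a1*b1*0.8/1.5;   ! a1*b1 * SD(X)/SD(Y) for group 1: SD(X)=0.8, SD(Y)=1.5
```

The same expression with group 2's labels and SDs gives that group's effect size; treating the SDs as known constants ignores their sampling variability.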
 Michelle Little posted on Sunday, January 13, 2013 - 7:48 am
Thanks so much for your quick reply.

I just had one additional follow-up: The reviewer did not specify a particular effect size measure. I was thinking I could report the R-squared and standardized betas for each group, but given that these effect sizes are all group-specific and my focus is entirely on group differences, I am reluctant to do this. In lieu of this, I could calculate a standardized effect based on the pooled SDs of X and Y across groups? For the indirect effect it would be ab*(SDxpooled/SDypooled). MacKinnon suggests ab*(SDx/SDy), but that is for a single-group model. I think I could use the same method for the direct effects (the a effect and the b effect).
Does this sound ok?
Does this sound ok?
 Linda K. Muthen posted on Monday, January 14, 2013 - 10:19 am
This seems reasonable. I don't think there is only one acceptable way to do this. You may want to ask this question on a general discussion forum like SEMNET.

 Milena Batanova Payzant posted on Friday, January 18, 2013 - 9:07 am

I am running a path analysis with my predictor and 3 mediator variables at time 1 and two outcome variables at time 2, with a sample of 499. In my ANALYSIS command, I indicated BOOTSTRAP = 10000; and in my OUTPUT command, I asked for STANDARDIZED MODINDICES (3.84) SAMPSTAT TECH1 CINTERVAL(BOOTSTRAP);

I want to make sure this is the correct syntax, and also, I'm unsure whether this command gives percentile bootstrapping or bias-corrected bootstrapping. Given my sample size and that the predictor does NOT lead to the outcomes, which might be better to use in this case? If the CIs do not contain 0, can I assume mediation even though the predictor does not lead to the outcomes?
 Bengt O. Muthen posted on Friday, January 18, 2013 - 2:52 pm
If you say

CINTERVAL(BCBOOTSTRAP);

you get the bias-corrected version.

Before trusting the CIs, you want to make sure that your model fits, that is, that the direct effects are zero.
 Dexin Shi posted on Sunday, September 08, 2013 - 5:45 pm

I am running a mediation analysis with a categorical mediator (no latent variables involved). I used both WLSMV and the BC bootstrap. However, for one path (from the independent variable x to the categorical mediator), the two methods did not agree on the significance test. WLSMV gave p = 0.819, whereas the BC bootstrap CI was [0.399, 2.496]. Which one is recommended for reporting the results? Thank you for your help.
 Linda K. Muthen posted on Sunday, September 08, 2013 - 7:56 pm
WLSMV gives a symmetric confidence interval around a bootstrapped standard error. BCBOOTSTRAP gives a non-symmetric confidence interval around a bootstrapped standard error. This is why they may not agree.
 Stephanie Vezich posted on Saturday, September 21, 2013 - 1:44 am
Dear Drs. Muthen,

We have run several studies with this path model:

empathy ON cond (a1);
anc ON cond (a2);
liking ON empathy (b1)
anc (b2)
cond (c1);
gameinv ON liking (e1)
empathy (d1)
anc (d2)

Gameinv is categorical, and we are using bootstrapping. Our reviewers were interested in an alternative theoretical model, leading us to test these path models:

empathy ON cond (a1);
liking ON cond (a2);
anc ON empathy (b1)
liking (b2)
cond (a3);
gameinv ON anc (c1)
empathy (d1)
liking (d2)
cond (a4);

empathy ON cond (a1)
liking (b1);
anc ON cond (a2)
liking (b2);
liking ON cond (c1);
gameinv ON empathy (e1)
anc (e2)
liking (f1)

The relevant indirect paths are significant in all three. Is there a way to argue statistically that our original proposed model is better (such as comparing goodness of fit, although the only fit statistic reported with bootstrapping has been WRMR), or should we make a theoretical argument?

Any advice would be much appreciated.

 Linda K. Muthen posted on Saturday, September 21, 2013 - 11:53 am
You can compare the fit of the models. With the BOOTSTRAP option, only standard errors are bootstrapped so we don't give fit statistics. You can run the three models without the BOOTSTRAP option to obtain the fit statistics.
 Stephanie Vezich posted on Sunday, September 22, 2013 - 11:11 pm
Thanks for your quick response! Which fit statistic would you recommend comparing across models?

The fit statistics reported when I eliminate bootstrapping are Chi-square, RMSEA, CFI, and WRMR, but it's my understanding that these cannot be used to compare non-nested models.
 Linda K. Muthen posted on Monday, September 23, 2013 - 2:12 pm
There is no way of testing which model is best compared to another statistically unless the models are nested. You can use any of the fit statistics listed above for comparison purposes. I would not use WRMR as it is an experimental fit statistic.
 Stephanie Vezich posted on Monday, September 23, 2013 - 6:17 pm
Great, thanks so much for your feedback.
 Melissa Kull posted on Saturday, November 09, 2013 - 10:50 am
Using raw data with a small amount of missing data, I've been running basic path models with one exogenous predictor, three mediators (all continuous, mostly normally distributed), and one outcome. I've been trying to run these models using a sampling weight (to adjust for non-response), although I'm not interested in stratification or clustering, so I have not identified the data as complex. When I try to estimate bootstrapped SEs in these models, the models will run using ML but will not run using MLR (which is the default for these models when the bootstrap is not applied). Can someone explain why this is happening and suggest some references on selecting the most appropriate estimator? I've looked over some of the MacKinnon articles cited in this thread but am not sure whether my models are correctly specified with the ML estimator and bootstrapped SEs. Many thanks.
 Bengt O. Muthen posted on Saturday, November 09, 2013 - 6:24 pm
We don't do bootstrap with sampling weights. It is not clear how that should be done.
 Melissa Kull posted on Wednesday, November 13, 2013 - 8:06 pm
Dr. Muthen, thanks for your response. It seems peculiar because the models are converging and providing estimates similar to the results from other iterations of these models that I've been running. I thought maybe the syntax was just ignoring the sampling weight, but when I took the weight out, the results were different. Despite this, I suppose these estimates are not to be trusted? Can you explain why, or suggest a reading that indicates why bootstrapping should not work with sampling weights? This would be tremendously helpful as I move forward with trying to appropriately specify these models. Thanks very much for your assistance.
 Linda K. Muthen posted on Thursday, November 14, 2013 - 9:56 am
I think the issue here is that the BOOTSTRAP option is not available with TYPE=COMPLEX. It is available with the WEIGHT option. I think this explains what you are seeing.
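A minimal sketch of the combination that does run - a sampling weight supplied through the WEIGHT option together with bootstrapped SEs (all variable names here are placeholders, not from this thread):

```
VARIABLE:
  NAMES = x m1 m2 m3 y w;
  USEVARIABLES = x m1 m2 m3 y;
  WEIGHT = w;              ! non-response adjustment weight
ANALYSIS:
  BOOTSTRAP = 1000;        ! allowed with WEIGHT, not with TYPE = COMPLEX
MODEL:
  m1-m3 ON x;
  y ON m1 m2 m3 x;
```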
 Patrícia Costa posted on Monday, February 24, 2014 - 6:33 am
Dear Drs Muthen,

I have run a simple mediation model as follows:

Perf2 ON Res;
Perf2 ON TWE;

Model indirect:
Perf2 IND TWE;

From the output below, I conclude that my model is saturated, and the paths are nonsignificant.

I have two questions:

(1) Why is the model saturated? I am unable to see how it is possible that I have 80+ parameters to estimate...

(2) Based on this, do I conclude that I have no evidence to support the mediation hypothesis?

Thank you in advance.

Chi-Square Test of Model Fit

Value 0.000
Degrees of Freedom 0
P-Value 0.0000

RMSEA (Root Mean Square Error Of Approximation)

Estimate 0.000
90 Percent C.I. 0.000 0.000
Probability RMSEA <= .05 0.000

CFI/TLI

CFI 1.000
TLI 1.000

Chi-Square Test of Model Fit for the Baseline Model

Value 4.964
Degrees of Freedom 2
P-Value 0.0836

SRMR (Standardized Root Mean Square Residual)

Value 0.000
 Linda K. Muthen posted on Monday, February 24, 2014 - 10:29 am
Your model is not a mediation model. It should be

Perf2 ON Res;

if Res is the mediator.
 Patrícia Costa posted on Wednesday, February 26, 2014 - 5:05 am
Dear Dr. Muthén,

Thank you for your answer. My model is:

X = Res
Mediator = TWE
Y = Perf2

Is there anything wrong? My questions remain.
 Linda K. Muthen posted on Wednesday, February 26, 2014 - 6:35 am
Then the model should be

twe ON x;
perf2 ON twe;
 Betsy Lehman posted on Sunday, March 23, 2014 - 4:39 pm
Dear Drs. Muthen,

I am hoping to look at indirect effects in a path analysis. I know that I can just request indirect effects using the IND option; however, per the Preacher and Hayes (2008) article, it appears that the bootstrapping method is recommended particularly for data that are not normally distributed (as mine are not). With that said, though, I have used the MLR estimator to help me manage my missing data and non-normality. When I tried to use bootstrapping as a way to identify indirect effects, I received the error message saying that bootstrapping can't be used with MLR.

I'm wondering what you might suggest in a situation like this. I imagine that I could either report the indirect effects that are provided without bootstrapping (e.g., just multiplying direct effects between mediators), or I could not use MLR and run the bootstrapping procedure to get the bootstrapped indirect effects.

Do you have thoughts about how to best proceed? Thank you so much!
 Linda K. Muthen posted on Monday, March 24, 2014 - 8:15 am
Use ML, not MLR, and you can do bootstrapping. All of the maximum likelihood estimators give the same parameter estimates. Bootstrapped standard errors are implemented for ML.

MLR and bootstrapped standard errors are usually very close.
 Betsy Lehman posted on Monday, March 24, 2014 - 9:43 am
Thanks for your help- will do!
 RuoShui posted on Sunday, April 06, 2014 - 5:58 pm
Dear Drs. Muthen,

I used ML and bootstrapping in my SEM and asked for STANDARDIZED in the output. However, there are no standard errors or p-values for the standardized parameter estimates. Is this normal?
Is there any way I can obtain them?

Thank you very much!
 Linda K. Muthen posted on Monday, April 07, 2014 - 6:19 am
Standardized estimates are not available using the BOOTSTRAP option.
 Rianne van Dijk posted on Wednesday, July 30, 2014 - 8:02 am
Dear Drs. Muthen,

If I understand correctly, you mention in the discussion above that it is preferred to use bootstrapping (in combination with the ML estimator) instead of the MLR estimator when indirect effects are of interest and the data are non-normal? Also, the two methods should produce comparable SEs.

However, when I run my model (a longitudinal mediation model based on skewed data), I get significant results when I use MLR but not when bootstrapping is used (i.e., concerning estimates of both direct and indirect effects). I'm wondering how this could be (perhaps sample size? N = 172) and why bootstrapping is preferred over MLR?

Thank you in advance for your reply.
 Bengt O. Muthen posted on Wednesday, July 30, 2014 - 3:58 pm
Bootstrapping allows the indirect effect to have a non-normal sampling distribution (this is quite apart from the non-normality of the outcomes) so that a non-symmetric confidence interval can be used. Even though MLR gives good SEs, it leads to using symmetric confidence intervals.

You can also check by using Bayes, which also gives non-symmetric confidence intervals.
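Because MODEL INDIRECT is not available with ESTIMATOR = BAYES, the indirect effect has to be formed in MODEL CONSTRAINT from labeled paths. A sketch with placeholder names x, m, and y:

```
ANALYSIS:
  ESTIMATOR = BAYES;
  BITERATIONS = (2000);
MODEL:
  m ON x (a);
  y ON m (b)
       x;
MODEL CONSTRAINT:
  NEW(ind);
  ind = a*b;    ! Bayes gives a non-symmetric credibility interval for ind
```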
 Katherine Winham posted on Friday, September 12, 2014 - 4:22 am

Forgive me if this is a repeat question.

If I have a mediation where each path is significant, but the total indirect effect is zero, do I conclude that there is no mediation? I bootstrapped the standard errors, and the CI did not include zero.
 Bengt O. Muthen posted on Friday, September 12, 2014 - 6:04 pm
If the CI does not include zero, the effect is significant.
 Elizabeth Munoz posted on Thursday, October 09, 2014 - 12:11 pm
I was wondering if I could get some clarification on the bootstrapping method. Is the standard "Bootstrap = " statement in Mplus v.6 a parametric bootstrap or does it do case resampling?
Thank you.
 Tihomir Asparouhov posted on Thursday, October 09, 2014 - 2:43 pm
Bootstrap = 500; will give you the standard bootstrapping method.

Mplus can also do residual bootstrapping using the command

Bootstrap = 500 (RESIDUAL);

See page 620 in the User's Guide for more details.
 Bengt O. Muthen posted on Thursday, October 09, 2014 - 4:31 pm
So our standard bootstrapping is the case resampling.
 Margarita  posted on Thursday, March 12, 2015 - 3:47 am
Dear Dr. Muthén,

I ran a mediation model with 5000 bootstrap draws, and while some indirect effects were found to be non-significant (in the unstandardised, STDYX, STDY, and STD formats), there were no zeros in the bias-corrected C.I. results, except in the STDY format. Is such a discrepancy normal? In that case, what should be reported?

Thank you for all your help.
 Bengt O. Muthen posted on Thursday, March 12, 2015 - 8:24 am
That happens not infrequently. The MacKinnon book recommends the bootstrap bias-corrected CI approach. Another look at this is obtained by Estimator=Bayes and its CIs.
 Margarita  posted on Thursday, March 12, 2015 - 8:32 am
I see. Thank you for your reply. If you have the time, I would like to ask you some more things.

1. Should the results be interpreted using the 90%, 95%, and 99% CIs?

2. Is it better to look at the unstandardised or standardised results?

3. Should the decision about whether an indirect effect is significant be based on the indirect effect estimate or on the CI?

I really appreciate your help!
 Bengt O. Muthen posted on Thursday, March 12, 2015 - 3:49 pm
You should read the standard literature on mediation using MacKinnon's book as well as the book by Hayes.

1. 95% is the most commonly used.

2. That's a big question and I refer to the literature I mentioned.

3. You want to consider a CI for the indirect effect.
 Margarita  posted on Thursday, March 12, 2015 - 4:04 pm
I know Hayes suggests unstandardised but it is not clear if any of the two is preferable.

Thank you very much Dr. Muthén. Your help is greatly appreciated.
 Bengt O. Muthen posted on Thursday, March 12, 2015 - 6:39 pm
I think both are fine if used properly.
 Margarita  posted on Friday, March 13, 2015 - 6:50 am
Dear Dr. Muthén,

Thank you for your replies.

I apologise for posting so many questions, but you are probably the most appropriate people to ask this. I could not find any references that address problems like the following:

I have a serial mediation model in which 3 paths are shown to be non-significant (p > .05) with bootstrapped ML.

However, the ML BC-bootstrap 95% CIs contain no zeros.

Also, the Bayes 95% CIs contain no zeros.

Should I conclude that the paths are in fact significant? I don't understand why such a discrepancy exists.

Once again, thank you for everything.
 Margarita  posted on Friday, March 13, 2015 - 1:27 pm
Dear Dr. Muthén,

I know I shouldn't be posting a second message, but I thought I should save you time by saying I figured out what went wrong.

So thank you anyway.
 anonymous Z posted on Monday, March 16, 2015 - 2:29 pm
Dear Dr. Muthén,

I was using bootstrapping to test mediation. The model fit well. I noticed that the 95% confidence interval of the standardized indirect effect includes zero (no mediation), whereas that of the unstandardized indirect effect excludes zero (mediation holds). Which one should I use?

Thank you very much!
 Bengt O. Muthen posted on Monday, March 16, 2015 - 3:42 pm
Try Estimator = Bayes and see if you find the same difference (use biter = (2000)). From that run, also check if one of the two posterior distributions is more non-normal looking than the other.
 anonymous Z posted on Tuesday, March 17, 2015 - 8:40 am
Dear Dr. Muthen,

Thank you very much for your prompt response.

I added the syntax as follows but got a warning message saying that "MODEL INDIRECT is not available for analysis with ESTIMATOR=BAYES."

biter = (2000);

Also, you suggested checking the two posterior distributions; what part of the output shows that?

 Bengt O. Muthen posted on Tuesday, March 17, 2015 - 11:06 am
Instead of MODEL INDIRECT, you have to use MODEL CONSTRAINT to express the indirect effect in terms of parameter labels given in the MODEL command.

You also have to standardize in MODEL CONSTRAINT, dividing by the SD of the DV and multiplying by the SD of the IV - again using parameter labels for model parameters.
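A sketch of this standardization, with placeholder names and made-up sample SDs plugged in as constants (SD(x) = 0.9 and SD(y) = 1.2 are illustrative; treating them as known ignores their sampling variability):

```
MODEL:
  m ON x (a);
  y ON m (b)
       x;
MODEL CONSTRAINT:
  NEW(ind stdind);
  ind = a*b;
  stdind = ind*0.9/1.2;   ! multiply by SD(x), divide by SD(y)
```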
 Candy Yang posted on Sunday, March 22, 2015 - 8:04 pm
I ran bootstrapping to test a mediating effect. However, the lower bound is 0.000. Is this a positive or negative number? Does the confidence interval include 0? The output is as follows. Thanks very much!

Specific indirect
Lower .5% Lower 2.5% Lower 5% Estimate Upper 5% Upper 2.5% Upper .5%
ACTCOMM -0.001 0.000 0.000 0.001 0.008 0.009 0.016
 Bengt O. Muthen posted on Monday, March 23, 2015 - 1:57 pm
The lower bound of 0.000 means that your CI includes zero and the effect is therefore insignificant. You want the lower bound to be clearly away from zero; don't base the decision on the third decimal.
 Rianne van Dijk posted on Sunday, April 12, 2015 - 1:06 pm
Dear Drs. Muthen,

Previously I asked a question about bootstrapping, because when running my mediation model all my significant results (both direct and indirect) disappeared when applying the bootstrap method. You suggested using a Bayesian estimation method to check my results, and luckily both my direct and indirect effects were confirmed in these analyses. Now I'm wondering how this difference in results (bootstrap vs. Bayes) could appear, as they both produce non-symmetric confidence intervals? What's the difference? Could it have anything to do with sample size (in my case N = 172, with 48 free parameters)?

Thank you for your response.
 Bengt O. Muthen posted on Sunday, April 12, 2015 - 5:01 pm
Here are the things I would look for:

What does the Bayes posterior distribution of the indirect effect look like? Is it very non-normal looking? Would a symmetric interval give a different significance result? How many bootstrap draws are used? What does the histogram for your DV look like? Does a linear model look reasonable when you plot the individual residuals for the DV against the predictor?
 Samantha posted on Monday, April 20, 2015 - 7:50 am
Is it still the case that bootstrapped confidence intervals for indirect effects are not saved and therefore it is impossible to request more than 3 decimal places? My question is very similar to the above post from "Eric" on June 16, 2008 and I wanted to make sure that there were no recent developments in the available options.

I saw Linda's response and suggestion to rescale the variables using the DEFINE command. However, unless I am missing something, that does not seem like a way to obtain more decimal places for the standardized coefficient confidence intervals.

Thank you!
 Linda K. Muthen posted on Monday, April 20, 2015 - 10:35 am
Yes, this is still the case.

Rescaling will help if all you are seeing in the three decimal points is zero.
 Simon Schus posted on Friday, May 22, 2015 - 9:58 am
Hi there,

Is there a way to produce bootstrapped percentile confidence intervals, as mentioned by Hayes and Scharkow (2013) and various other authors?

I can produce bias-corrected ones, but can't find any option for percentile intervals. Are they not included?

 Bengt O. Muthen posted on Friday, May 22, 2015 - 3:30 pm
Cinterval(bcbootstrap) gives several percentiles for the confidence intervals, such as 2.5% and 97.5%.
 chris mooney posted on Wednesday, June 17, 2015 - 10:13 am
Hello Drs. Muthen -

In using bias-corrected bootstraps to get 95% CI for total, direct, and indirect effects, I notice a few things:

1. There are no standard errors or p-values for standardized estimates.

2. Many p-values for unstandardized estimates are different from when the bootstrap option is not used. Indeed, many now show significance. Some estimates show minor changes, but most are the same.

I am unclear why I am seeing this. Which p-value should I use?


 Linda K. Muthen posted on Wednesday, June 17, 2015 - 11:43 am
1. Standardized results are not available when the BOOTSTRAP option is used.

2. The BOOTSTRAP option produces bootstrapped standard errors which are reported in the second column of the output. It is the ratio of the parameter estimate to its standard error that results in a z-test and p-value.
 Kevin So posted on Sunday, June 21, 2015 - 11:46 pm
Hello Drs Muthen,

I am running a partial mediation model in Mplus (five independent variables, one mediator, and one dependent variable, all measured using multiple-item measures on a seven-point Likert scale. The dependent variable was a second-order factor with three dimensions, each measured using two items. A composite was created for each of these dimensions to be used as a first-order indicator of the dependent variable). I tested the measurement model and the structural model using MLM because it produces the Satorra-Bentler corrected chi-square and adjusted fit statistics, as well as accounts for the non-normality of the data. Both models produced good model fit statistics. In order to assess the indirect effects, I followed the instructions provided in the Mplus manual. I used the BOOTSTRAP, MODEL INDIRECT, and CINTERVAL commands, but it says "BOOTSTRAP is not available for estimators MLM, MLMV, MLF and MLR." When I then changed the estimator to ML, the results could be generated.
1. What is your suggestion for testing the indirect effects in the model described above?
2. Is it appropriate to use MLM for the measurement model and the overall structural model and then use ML only for generating the bootstrapped indirect effects?
3. In the output, I can see the indirect effects but not the direct effects. What might be the problem?

Many thanks in advance!

 Linda K. Muthen posted on Monday, June 22, 2015 - 6:08 am
The parameter estimates are the same for all maximum likelihood estimators. When you use the BOOTSTRAP option, you obtain bootstrap standard errors and maximum likelihood parameter estimates. You can use the CINTERVAL confidence intervals for the indirect effects only.

You get direct effects only when you specify the IND option with one argument on the left-hand side and one argument on the right-hand side. See Page 691 of the user's guide.
 Kevin So posted on Monday, June 22, 2015 - 9:02 am
Thank you Dr Muthen for your response!

I would appreciate it if you could advise whether it is appropriate to use MLM for the measurement model and the overall structural model and then use ML only for generating the bootstrapped indirect effects.

Thank you for your advice.

 Linda K. Muthen posted on Monday, June 22, 2015 - 9:50 am
You can do it either way.
 Trisha Raque-Bogdan posted on Tuesday, June 23, 2015 - 7:33 pm
I am trying to run mediation using bootstrapping and keep getting the following error message, "*** WARNING in MODEL INDIRECT command
There is an indirect effect involving a path between the following
variables, but no indirect or direct path exists in the model."

Does anyone have suggestions for how to correct this? Thank you for your time!

 Linda K. Muthen posted on Wednesday, June 24, 2015 - 5:59 am
This means that you are specifying an indirect effect that is not part of your model. If you have an indirect effect

y IND m x;

check that your model contains

y ON m;
m ON x;
 RuoShui posted on Wednesday, November 04, 2015 - 10:54 pm
Dear Drs. Muthen,

I ran a mediation model with 1 latent predictor, 1 latent mediator, and 2 observed outcomes. I also included a range of demographic variables as covariates. I was asked to provide the effect size of the mediation. I am not quite sure how to apply Preacher & Kelley (2011) to models with latent variables and multiple covariates. Does Mplus provide statistics that help calculate the effect size of mediation?

Thank you very much.
 Bengt O. Muthen posted on Thursday, November 05, 2015 - 4:16 pm
Just give the standardized indirect effect.
 Carl F Falk posted on Wednesday, June 15, 2016 - 1:45 pm
Mplus v7.4 release notes seem to indicate that Bootstrapping works now with Monte Carlo simulation studies. I've tried this with a very simple study and using external data. I can of course post the minimum working example if necessary. It appears to do bootstrapping, but in conjunction with cinterval(bcbootstrap) or cinterval(bootstrap) in the OUTPUT section, I do not see the resulting intervals saved anywhere. I would have expected there to be a SAVEDATA option or other OUTPUT option that would allow this, but RESULTS and ESTIMATES don't appear to contain the intervals.

Thank you very much for any clarification regarding this matter!

I suppose this is the closest related thread for this question, as I see a reply on October 19, 2011 - 11:21 am stating that BOOTSTRAP does not work with Monte Carlo.
 Linda K. Muthen posted on Wednesday, June 15, 2016 - 3:44 pm
Please send the output and your license number to
 Timothy Ihongbe posted on Friday, August 19, 2016 - 12:02 am
Dear Drs. Muthen,

I am running a mediation analysis on Mplus 7 using the bias-corrected bootstraps to get 95% CI for the total, direct, and indirect effects. However, I have 2 questions that stem from this:

1. With the use of bootstraps, I observed that only the RMSEA is available for model fit. Would it be proper to use the model fit indices (chi-square, TLI, CFI) from the output obtained without bootstrapping, even though I will be using the bootstrap results?

2. For the bootstrap CIs, do I also use the bootstrap estimates? Or do I use the estimates from the unstandardized output obtained without bootstrapping and combine them with the bootstrap CIs? Would that be correct practice?

I sincerely do appreciate your kind response.

Thank you.
 Bengt O. Muthen posted on Friday, August 19, 2016 - 12:02 pm
We currently recommend the regular bootstrap, not the bias-corrected version (see e.g. our new book).

1. Yes because the model and parameter estimates are the same.

2. Note that bootstrapping does not influence the parameter estimates, only the SEs and CIs.
 Timothy Ihongbe posted on Saturday, September 03, 2016 - 4:47 am
Dear Drs. Muthen,

Please, I have a follow-up question to my previous ones.

I am running a mediation analysis on Mplus 7.4 using bootstraps to obtain 95% CI.

However, I am unsure which 95% confidence interval to report for the total effect. Do I report the bootstrap 95% CI or the non-bootstrap 95% CI?

Also, for the indirect effect I am reporting the bootstrap 95% CI, and for the direct effect I am reporting the non-bootstrap 95% CI. Is this an acceptable strategy?

Thank you.
 Bengt O. Muthen posted on Saturday, September 03, 2016 - 9:20 am
You should be consistent and report the bootstrap CIs.
 Aurelie Lange posted on Friday, September 30, 2016 - 7:06 am
Dear Dr Muthen,

I was recently advised by a reviewer to report non-symmetric confidence intervals by using bias-corrected bootstrapping or the Bayes estimator in Mplus. However, I have a TYPE = COMPLEX TWOLEVEL model. Bootstrapping is not possible with TYPE = TWOLEVEL, whereas the Bayes estimator is not available for TYPE = COMPLEX.

Are there any other methods of constructing non-symmetric confidence intervals for a complex twolevel model?

 Bengt O. Muthen posted on Friday, September 30, 2016 - 3:38 pm
Not that I am aware of. I think the closest you can get is to skip Complex and just use Twolevel together with Bayes.
 Nassim Tabri posted on Tuesday, October 18, 2016 - 4:15 pm
Dear Mplus Team,

I want to run a Monte Carlo simulation for a simple mediation model to determine the sample size needed to detect the indirect effect with 80% power. I want to test the statistical significance of the indirect effect using the 95% bc-bootstrapped CI.

We have Mplus 7.4, which can handle this. We prepared a syntax, but the output did not include the summaries for the bc bootstrap CI.

We would very much appreciate it if you could look at our syntax below and let us know what’s missing.

Thank you for your time!

MONTECARLO:
Names are x m y;
NREPS = 10; !update to 10000
SEED = 53487;
cutpoints = x(0);

ANALYSIS:
Estimator = ML;
bootstrap = 5; !update to 5000

Model population:
[x @ 0];
x @ .25;
[m - y @ 0];
m @ .90;
y @ .8954;
m on x @ .632456;
y on m @ .316228;
y on x @ .063;

Model:
[m - y * 0];
m * .90;
y * .89;
m on x * .632456;
y on m * .316228;
y on x * .063;

Model indirect:
y IND m x;

Output:
SAMPSTAT RESIDUAL tech1 tech3 tech4 tech10;
 Bengt O. Muthen posted on Tuesday, October 18, 2016 - 5:43 pm
Your output shows this under the heading:


The power as computed using the bcbootstrap CIs is given in the last column.
 Jolien Vleeshouwers posted on Thursday, January 05, 2017 - 5:30 am

Regarding the first few posts in this thread: I am running half-longitudinal mediation models where the default estimator is also WLSMV. I tried changing the estimator to ML but get warnings, and it won't show bootstrapped CIs.

Is there an estimator I can use so that I get Odds ratios instead of probit results, while running half-longitudinal mediation models with ordinal categorical dependent variables?
 Linda K. Muthen posted on Thursday, January 05, 2017 - 6:08 am
In Mplus, only maximum likelihood gives logistic regression and therefore odds ratios. Weighted least squares gives probit regression.
 Jolien Vleeshouwers posted on Tuesday, January 10, 2017 - 5:32 am
Hi again,

I ran mediation analysis using Model Constraint and estimator ML. I get ORs (logistic regression odds ratio results) for all estimates, but not for the interaction effect. Can I exponentiate the estimate for the indirect effect to get the OR? And, for all results, how do I calculate the CI for the ORs? Can I exponentiate the CI for the model results?

Thanks again
 Bengt O. Muthen posted on Tuesday, January 10, 2017 - 3:36 pm
Q1. Yes. But you may want to express it as simple slope at different moderator values (see e.g. our new book).

Q2. See the FAQ on our website:

Odds ratio confidence interval from logOR estimate and SE
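The computation that FAQ describes is simple: build the CI on the log-odds scale, then exponentiate the endpoints. A sketch in Python; the estimate and SE below are made-up illustration values, not numbers from this thread:

```python
import math

# Hypothetical logistic-regression results: log-odds estimate and its SE
log_or = 0.45
se = 0.12
z = 1.96  # 95% normal critical value

# CI on the log-odds scale, then exponentiate the endpoints
lo, hi = log_or - z * se, log_or + z * se
or_est, or_lo, or_hi = math.exp(log_or), math.exp(lo), math.exp(hi)
print(f"OR = {or_est:.3f}, 95% CI = ({or_lo:.3f}, {or_hi:.3f})")
```

Because exp() is monotone, the exponentiated endpoints are a valid CI for the OR, but the resulting interval is no longer symmetric around the OR estimate.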
 Jolien Vleeshouwers posted on Thursday, January 26, 2017 - 4:54 am

I am running half-longitudinal mediation models using MODEL CONSTRAINT and, since I have categorical outcomes, ordinal logistic regression. I calculated odds ratios for the indirect effect (X at t1 -> M at t2 * M at t1 -> Y at t2), and am now wondering if this is correct. May I calculate an indirect effect in this case, since I do not have linear regressions? And may I calculate odds ratios for these?

Hope you can help me!

Thank you!

Kind regards,
 Bengt O. Muthen posted on Thursday, January 26, 2017 - 5:03 pm
No, this is not correct. M at T1 is different from M at T2 so there is no reason to use a product for the indirect effect. Also, with categorical outcomes, indirect effects should not be defined as products (see, e.g., our new book).
 Peter McEvoy posted on Monday, February 13, 2017 - 7:38 pm
Hello there,

My understanding is that for half-longitudinal models (X, M, and Y measured at two timepoints), under the assumption of stationarity...

Path a: X at t1 -> M at t2 (controlling for M at t1)
Path b: M at t1 -> Y at t2 (controlling for Y at t1)

The estimated indirect effect is ab (Cole & Maxwell, 2003; Kline, 2015).

If I use SEM, I will probably use WLSMV as items are rated on a Likert-type scale. If I use path analysis with total scale scores, I will probably use MLR.

My question is, how can a SE/CI be calculated around ab, given that the X->M->Y pathway is not explicitly in the model? Is there a way of calculating this within Mplus?

Many thanks,

 Bengt O. Muthen posted on Tuesday, February 14, 2017 - 6:16 pm
You can use Model Constraint. Just label the a and b parameters in the MODEL command and then create the NEW parameter a*b.

I am not sure I agree with a*b being the right indirect effect using this controlling for M and Y at t1.
 skander esseghaier posted on Friday, March 17, 2017 - 9:39 am
I have a question regarding the indirect effect in a simple mediation model. If the total indirect effect [a' * b'] is significant, does it imply that both a' and b' are significant? Or could it be that the total indirect effect [a' * b'] is significant while one of the paths is not?
 Bengt O. Muthen posted on Friday, March 17, 2017 - 5:55 pm
Q1: No

Q2: Yes
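Both this question and the earlier one (two significant paths, non-significant product) come down to the delta-method (Sobel) SE of a*b. A sketch in Python with hypothetical estimates (none of these values come from the thread), showing that two individually significant paths need not give a significant product:

```python
import math

def sobel_z(a, se_a, b, se_b, cov_ab=0.0):
    """Delta-method z for the product a*b; cov_ab is the (often nonzero)
    covariance of the two estimates mentioned earlier in the thread."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2 + 2 * a * b * cov_ab)
    return (a * b) / se_ab

# Hypothetical estimates: both paths individually significant (|z| > 1.96)...
a, se_a = 0.30, 0.14   # z_a = 2.14
b, se_b = 0.25, 0.12   # z_b = 2.08
print(sobel_z(a, se_a, b, se_b))  # ...but the product's z is about 1.49
```

This normal-theory z test is shown only to make the SE arithmetic concrete; as noted elsewhere in the thread, bootstrap CIs are preferred for the product because its sampling distribution is non-normal.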
 Mike Nelson posted on Wednesday, February 21, 2018 - 8:24 am
Drs Muthen,

I am running a fairly complex mediation model with a large sample (n = 839) and noticed some peculiarities when including/not including bootstrap confidence intervals.

To complete his part of the manuscript, my colleague ran the model with no bootstrap confidence intervals, but I requested them on my end and we found that our p values did not line up. Specifically, for one of our direct effects, the bootstrapped model gave a p value of .07 and the non-bootstrapped model resulted in a p of .036. Additionally, within the bootstrapped model, the p value was different (.062) depending on whether we looked at standardized or unstandardized results. Is there a reason why these may differ so much? Which one is more trustworthy? My thinking is that the bootstrapped estimate will be more accurate.

Thank you!
 Bengt O. Muthen posted on Wednesday, February 21, 2018 - 4:49 pm
Bootstrapped SEs are typically better. But note that in many models the usual symmetric confidence interval (CI), based on the assumption of normality of the parameter estimate distribution, is not a good choice - instead, a non-symmetric interval allowing for the non-normality should be used. This is obtained in Mplus by requesting bootstrapping and CINTERVAL(BOOTSTRAP) in the OUTPUT command. CIs are therefore better than p-values because the latter are based on the normality assumption and the former are not.

Standardized p-values can be different than unstandardized ones because the two have different degrees of approximation to the normal distribution assumption. Again, CIs are better.

You can read about all this in our Regression and Mediation book.
 Yue Yin posted on Friday, February 23, 2018 - 8:24 am
Hi, I want to ask whether Mplus can use Monte Carlo CIs to analyze mediation at the single level? If it can, where can I find the code? Thank you!
 Bengt O. Muthen posted on Friday, February 23, 2018 - 4:33 pm
Monte Carlo CIs are not available - we find that bootstrapping works very well.
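For context, the Monte Carlo CI being asked about (popularized in David MacKinnon's work) is easy to sketch outside Mplus: draw the two path coefficients from normal distributions with their estimated SEs and take percentiles of the simulated products. All numbers below are hypothetical assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical path estimates and SEs (not values from the thread)
a, se_a = 0.40, 0.10
b, se_b = 0.35, 0.09

# Monte Carlo CI: simulate the product's sampling distribution,
# then take the 2.5th and 97.5th percentiles
draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"a*b = {a * b:.3f}, MC 95% CI = ({lo:.3f}, {hi:.3f})")
```

Like the bootstrap, this yields a non-symmetric interval for the product, but it resamples parameter estimates rather than cases.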
 Yue Yin posted on Sunday, February 25, 2018 - 2:50 pm
But I checked some studies: bias-corrected bootstrapping has elevated Type I error rates when the sample size is small. In our study the sample size is about 170; is it still OK to use the bootstrap?

 Bengt O. Muthen posted on Sunday, February 25, 2018 - 5:13 pm
Mplus offers both bias-corrected bootstrap and regular bootstrap. The regular bootstrap does not have the elevated Type I error that the bias-corrected version has. We discuss this in our book Regression and Mediation Analysis using Mplus.
 Jennie Jester posted on Thursday, April 05, 2018 - 8:13 am
One of the reviewers of a paper told us to use bootstrap standard error to estimate indirect effects in our SEM model. Our SEM model is estimated using TSL (Type = Complex) to account for the cluster effect of siblings. We don't have a weight variable. We looked through the past threads and it seems that Mplus doesn't support bootstrapping with Type = Complex model, unless it is weighted.

Below is the error:

*** ERROR in ANALYSIS command The BOOTSTRAP option with TYPE=COMPLEX and REPSE=BOOTSTRAP requires a weight variable. Specify a weight variable using the WEIGHT option in the VARIABLE command. If weights are not present, create a new variable (weight) for the WEIGHT option. Assign weight=1 in the DEFINE command.

1) How should we deal with this situation? Should we create a weight variable with value of 1? Is this the way to go?

In another thread, we also saw that it is recommended to use ESTIMATOR = BAYES and MODEL CONSTRAINT instead of bootstrapping for 2-level model (Type = TWOLEVEL).

2) Does ESTIMATOR = BAYES apply to Type = Complex model as well? If yes, what would the syntax be for this?
 Tihomir Asparouhov posted on Thursday, April 05, 2018 - 3:19 pm
1) Yes - just add a variable W to the USEVARIABLES list and add these two lines:

weight = w; ! in the VARIABLE command
w = 1; ! in the DEFINE command

ESTIMATOR = BAYES and MODEL CONSTRAINT can be used instead of bootstrapping if the goal is to obtain a distribution for a model parameter such as the indirect effect.

2) ESTIMATOR = BAYES does not apply to Type = Complex. If you want to use ESTIMATOR = BAYES you would need to change the model to a two-level model (Type = TWOLEVEL) and model explicitly the cluster effect of siblings.
 nakyoung kim posted on Thursday, January 03, 2019 - 11:27 pm
Dear Muthen,
I have two questions regarding moderated mediation using latent variables.

I did a path analysis for mediation while using "ANALYSIS: BOOTSTRAP = 5000;".
Strangely, when I used "BOOTSTRAP = 5000;", the p-value for the interaction changed (the estimate was still the same, but the S.E. changed).
Also, the p-value for the interaction differs according to the value of BOOTSTRAP (BOOTSTRAP = 1000 vs. BOOTSTRAP = 3000).
I wonder why the p-value for the interaction changes, since I thought the BOOTSTRAP option was only related to the mediation path (this change repeats when I use path analysis).

I can get a p-value during path analysis for mediation using the MODEL CONSTRAINT option, but not using the BOOTSTRAP option.
How is this value calculated? Can I use this p-value in an article?
I mean, must I use the BOOTSTRAP option to calculate the p-value of a mediation or moderated mediation path analysis?

Thank you,
 Bengt O. Muthen posted on Friday, January 04, 2019 - 5:36 pm
Bootstrap = affects only the SEs and therefore the p-values and the CIs (confidence intervals). If you also ask for Cinterval(Bootstrap) in the Output command you get bootstrapped CIs. For indirect effects, bootstrapped CIs are recommended.

Using a higher bootstrap value gives more precise SEs and CIs.

You can get bootstrap SEs and CIs also for NEW parameters in Model Constraint.
 Mariko SH posted on Wednesday, November 20, 2019 - 8:23 am
Hi I am trying to do a causal mediation analysis using SEM.

Using the same data and command,
my results for the direct effect differ between
1) when I model it alone (y on x), and
2) when I list it together with the other paths used to calculate the indirect effect in the MODEL command (e.g., y on x; y on m;).

The intercept for y on x is the same but the coefficients are very different.

Is there a way to get similar results for 1 and 2?
Thank you in advance for your guidance.
 Bengt O. Muthen posted on Wednesday, November 20, 2019 - 4:28 pm
That difference is to be expected. If M has an effect on Y, the model Y ON X is misspecified.
 Mariko SH posted on Thursday, November 21, 2019 - 5:22 am
Thank you very much Dr Muthen.

I realize I had some misunderstanding, so can I just confirm that,
instead of stating
y on x m; with y on m; m on x;,

y on x; with y on m; m on x;
would give me the direct effect in Mplus?

Sorry, I am very confused by this.
Thank you.
 Bengt O. Muthen posted on Thursday, November 21, 2019 - 5:31 pm
You should study up on the "path analysis" section of our Short course Topic 1 video and handout on our website.
 Joanna Davies posted on Friday, January 17, 2020 - 10:29 am

I want bootstrap CIs for my multiple mediator model using the Preacher and Hayes (2008) approach. When I use observed variables (factor scores) it works, but when I model the latent variables and paths in one step, the point estimates are given but the CIs are 0.000 or 999.000. I don't get an error message - any idea what the problem is?

Thank you.
 Bengt O. Muthen posted on Friday, January 17, 2020 - 11:47 am
We need to see your full output - send to Support along with your license number.
 Susan South posted on Monday, March 02, 2020 - 8:38 am

I am running a latent variable path model with an indirect effect. When I include the cinterval(bcbootstrap) option, one of the direct paths and a factor loading in the mediation variable are no longer significant. Are the standard errors for all parameters now being bootstrapped?

Thank you.
 Bengt O. Muthen posted on Monday, March 02, 2020 - 12:03 pm
Yes. And typically, bootstrap SEs are more trustworthy than ML SEs.
 Susan South posted on Tuesday, March 03, 2020 - 6:52 am
Thank you for the quick response!

I'm actually using a WLSMV estimator for the model.

I'm not as worried about the direct path from the mediator to the DV, because the overall indirect effect does not include 0 in the CI. But I'm still wondering why the bootstrap SE now makes one of the factor loadings non-significant. Would you trust that over the factor loading from the model without bootstrap SEs?
 Bengt O. Muthen posted on Tuesday, March 03, 2020 - 3:17 pm
Yes - at least if all the bootstraps converged (see the output). As another arbiter, you can use Estimator = Bayes which like bootstrapping gives non-symmetric CIs.