Message/Author 

Daniel posted on Friday, June 18, 2004  5:16 am



I am running several mediation models in which my dependent variable is ordered categorical. I am using the bootstrap method to estimate standard errors for the indirect effects, with the bootstrap analysis command. I asked for confidence intervals and am given the appropriate intervals. Can I use these intervals along with the effects to estimate odds ratios, or is this incorrect if my mediator is continuous? 

bmuthen posted on Friday, June 18, 2004  8:42 am



Which estimator are you using, WLSMV or ML? 

Daniel posted on Friday, June 18, 2004  9:02 am



I was using the default estimator. I believe it is WLSMV since I am modeling with categorical dependent variables, although I may be wrong. 

bmuthen posted on Friday, June 18, 2004  9:20 am



The default WLSMV works with probit regressions so the estimates are not directly in odds ratio metric. The indirect effects are with respect to a continuous y* variable behind the dependent observed categorical variable, where y* is the response propensity. I think this idea has been discussed in David MacKinnon's work. 

Daniel posted on Saturday, June 19, 2004  9:00 am



I have a question regarding the indirect effect. If the two arcs (paths) in a specific indirect effect (a to b [path a'] and b to c [path b']) are each significant (i.e., a' and b'), shouldn't the specific indirect effect [a' * b'] also be significant? Or is it possible for a' and b' to be significant without the specific indirect effect (a' * b') being significant? 

bmuthen posted on Saturday, June 19, 2004  11:59 am



Seems like this is possible because the indirect effect is a product of the two estimates and the SE of this product is a function not only of each of the two SEs, but also of the covariance between the two estimates, which might be positive and therefore make the denominator of the test larger. 
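This point can be made explicit with the standard first-order delta-method variance of a product of two estimates (a textbook result, added here for reference, not from the original post):

```latex
\widehat{\mathrm{Var}}(\hat{a}\hat{b}) \approx
  \hat{b}^{2}\,\widehat{\mathrm{Var}}(\hat{a})
+ \hat{a}^{2}\,\widehat{\mathrm{Var}}(\hat{b})
+ 2\,\hat{a}\hat{b}\,\widehat{\mathrm{Cov}}(\hat{a},\hat{b})
```

When the covariance term is positive (and the product a*b is positive), the SE of the product grows, so the ratio estimate/SE can fall below significance even though each path is significant on its own.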

Daniel posted on Monday, June 21, 2004  8:04 am



Ok, if I have the case where each path is significant, but the total indirect effect is not significant, what could I conclude about mediation? 

bmuthen posted on Monday, June 21, 2004  4:53 pm



I would say there is no significant mediation. 

Daniel posted on Tuesday, June 22, 2004  6:12 am



Thanks. My sample size is 913 for the study, and I am modeling mediation in an associative process model between two LGMs, each with two random effects (trend and intercept), and about 5 covariates. The observed measures are ordered categorical. What would you suggest I set the bootstrap to (i.e., bootstrap=?) in the analysis command? 


There is no rule for this. You should experiment. Start with 250. Then try 500. Compare the standard errors to see if there is much difference. 
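A minimal sketch of this suggestion (hypothetical variable names; the BOOTSTRAP option goes in the ANALYSIS command, and bootstrap confidence intervals are requested in OUTPUT):

```
ANALYSIS:
  BOOTSTRAP = 250;        ! then rerun with 500 and compare the SEs
MODEL:
  m ON x;
  y ON m x;
MODEL INDIRECT:
  y IND m x;
OUTPUT:
  CINTERVAL (BOOTSTRAP);
```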

Daniel posted on Tuesday, June 22, 2004  10:49 am



I ran the bootstrap at 250, 300, 350, 400, and 450, and it ran fine each time, with each increase resulting in a proportional increase in run time. However, as soon as I run the bootstrap at 500, it runs for hours without end. Last night I tried to run it with 1000, and left the program running all night, after leaving work at about 4 PM. I returned to work the next morning, and it was still running. Why do you believe I cannot get a solution with values greater than or equal to 500? Does it have something to do with the associative processes or categorical outcome variables? 


Why don't you send the 400 run output, the 500 run input, and the data to support@statmodel.com so I can take a look at it. 


Using a WLSMV estimator, why would a chi-square test not be calculated when using Dr. MacKinnon's bias-corrected bootstrap method of estimating SEs and confidence intervals in a path analysis with multiple mediational pathways? 


There is no reason. We have so far only implemented bootstrap for standard errors. 


Thank you very much for your quick response. Would it be appropriate, then, to report the chi-square goodness-of-fit test calculated when not using the bootstrap function, as long as the WLSMV estimator is used? 


Yes, but you should make it clear that although the standard errors are bootstrapped, the chi-square is not. 


I am currently running a mediational structural equation model dealing with domestic violence. The observed measures are primarily indices derived from self-report scales which have ranges as broad as 0 to 177 (the item endorsements are ordinal values, each of which represents a frequency range for specific behaviors (0 = 0-10, 1 = 10-20, etc.); these items are subsequently summed to produce the indices of interest for the current model). I decided to model the data as continuous censored, but received the following error message:

INPUT READING TERMINATED NORMALLY
*** FATAL ERROR
Internal Error Code: GH1006.
An internal error has occurred. Please contact us about the error, providing both the input and data files if possible.

I am forwarding the requested information to you. In the meantime, however, I am trying to resolve two questions regarding the model:
1) Regarding overall fit indices, I am contemplating the use of the MLR estimator, treating the data as continuous. a) Given non-normal data and censoring from below (at zero), to what degree might this yield misleading results? b) Does applying a Bollen-Stine bootstrap procedure provide a means to address this more effectively?
2) Regarding the standard errors of the parameter estimates in the model, I would prefer to use a bootstrap procedure since this will provide me with confidence intervals for the indirect effects. a) Do you detect anything problematic with using the MLR approach for the overall model fit indices, followed by reporting confidence intervals for the parameter estimates derived from a bootstrap procedure?
Any guidance you might offer would be greatly valued. 

bmuthen posted on Monday, February 20, 2006  6:52 pm



1. a. With a high degree of censoring (say > 25-50%), the SEs and chi-square-based fit indices may be off. The basic problem is that the linear model assumed is wrong under strong censoring, so non-normality robustness in SEs and chi-square doesn't help. Overall fit indices are perhaps less important than getting the right parameter estimates and checking fit by -2*LL difference testing for nested, neighbouring models. b. I don't think so.
2. That's fine. a. Not in principle. 


Thank you so much for your prompt answer. If I might clarify this: indeed, I do have proportions of censored observations that are above 25%. The error message I reported evidently occurs in version 3.01 but has since been corrected in version 3.14. If I run the analysis using the updated (v. 3.14) program, specifying which variables are censored, I would obtain appropriate loglikelihoods from which -2*LL differences could be used for tests of nested models. Am I correct in saying that the loglikelihoods obtained without the censoring specification would be misleading? Having obtained the -2*LL, I can use the Baron and Kenny (1986) approach to evaluating mediation. However, would there be a problem with removing the censoring specification and bootstrapping the parameter estimates so that I can use the MacKinnon (2004) approach to obtaining indirect effects and confidence intervals? Finally, is there some reference you would recommend where I might find a primer on bootstrapping, specifically regarding how to choose among the different bootstrap confidence intervals? Many thanks. 

bmuthen posted on Tuesday, February 21, 2006  3:23 pm



Yes, you are correct that the loglikelihood would be misleading if censoring is not taken into account, as it is in the censored approach. You should use the same model for parameter estimation and testing as for the bootstrapping.

Efron, B. & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
MacKinnon, D.P., Lockwood, C.M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128. 


Hello, I am back on the mediation analysis trail. This time, unlike most of the data I analyze, my sample size is not that large (n=376) and my ultimate outcome (smoking) is ordered categorical with four levels. I am running a SEM with measured variables only (no factors). Is it better to calculate standard errors with bootstrapping or Delta method in this case, due to the relatively small sample size? 


I would use the default standard errors for the estimator you choose. I don't think that you would benefit from bootstrapping. 


Thanks 


Hello Linda and Bengt. I am asked by a reviewer to estimate the size of an effect in my model. I actually sent you this data before. My finding is that the significant indirect effect with 95% confidence interval is .054(.008,.101). You mentioned that this is a small effect. How should I word this in the results/discussion section to indicate the strength of this effect? I'd appreciate any clues if you have them. By the way, this was calculated with the delta method. 


To know how small it is, wouldn't you want to evaluate it in terms of the SD of the independent and dependent variables, so using a standardized value? 


Ok, I see. Thank you very much. DR 

Yifu Chen posted on Friday, July 21, 2006  7:13 am



Hi, Dr. Muthen, I am working on a model to test mediation effects. I have two predictors, four mediators, and two outcomes. The outcomes are all continuous. I've tried to use MODEL INDIRECT with BOOTSTRAP to estimate the standard errors of the indirect effects. My question is this: when I ran a recursive model in which outcome 1 predicted outcome 2, the MODEL INDIRECT output showed the standard errors of the indirect effects for the predictors via each mediator. However, when I estimated the reciprocal relationship between the two outcomes, the output showed only the total indirect effect for each predictor, with no printout for the contribution of each mediator. I don't know if what I got is right for Mplus when reciprocal models are estimated. Is there any way that I can get more detailed indirect effect information for this kind of model? I am using Mplus 3.0. Thanks! 


I don't think this is possible. See the Bollen SEM book to check. 


Dear Mplus team, I have read in an article by MacKinnon and colleagues that there are different ways to calculate SEs for indirect effects using the delta method (e.g., Freedman & Schatzkin, 1992, or Olkin & Finn, 1995). I would be interested to know which one is implemented in Mplus? MacKinnon, D.P., Lockwood, C.M., Hoffman, J.M., West, S.G. & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7(1), 83-104. 


We calculate standard errors for indirect effects using both the Delta method and bootstrap as described in the MacKinnon et al. article. I am not aware that there are different Delta methods. 

Claire Hofer posted on Thursday, December 07, 2006  8:02 am



Could you tell me about the differences in the bootstrap method between Mplus version 3 and version 4? I am getting very different results: my model run in version 4 with the Bollen-Stine bootstrap method closely matches what I get in the regular model using ML or MLR estimation, but when I run the model in version 3 with the bootstrap method there, I get completely different results. We do have missing data. Could you tell me a little bit about why the results might be so different? Thank you. 


I don't know of any reason offhand there would be a difference. If you send your input, data, output, and license number to support@statmodel.com, I can take a look at it. 


Dear Drs. Muthen, We are running a mediation model with three exogenous variables (one continuous and two indicators for race/ethnicity), two mediating variables (one continuous and one dichotomous), and one outcome variable (dichotomous). For different paths we are calculating the mediation proportion, defined as the indirect effect divided by the total effect (indirect + direct effect). We would like to be able to calculate confidence intervals for the mediation proportion by using the estimates from each individual bootstrapped dataset. The question is: can Mplus output into a separate dataset the individual bootstrapped estimates of the direct and indirect effects for a given model? 


Mplus does not save indirect effects and does not save results from each bootstrap replication. 

Emily Blood posted on Tuesday, October 09, 2007  6:15 pm



Within the MC facility, is there a way to output indirect effect values and standard errors of indirect effects for each MC replication? I am currently outputting the parameters from each MC replication, but am not able to output the indirect effects and their standard errors from each replication, only the mean and se of all indirect effects from all MC replications. Is this possible in Mplus? Thanks. 


No, results from MODEL INDIRECT are not saved. The only way to obtain them would be to save all of the data sets and analyze them one at a time. 

Eric posted on Monday, June 16, 2008  10:20 pm



I am using cinterval(bcbootstrap) to get confidence intervals for indirect effects in a path analysis model with 4 mediators. Though I get confidence intervals for the specific indirect effects, the confidence intervals for the rest of the path estimates are all zeros. Does this mean that I should not trust the CIs for the specific indirect effects? 


It sounds like you are using an old version of the program. I think there may have been a problem some time ago. I suggest using Version 5.1. 

Eric posted on Tuesday, June 17, 2008  9:56 am



Is it possible to get more decimal places for the confidence intervals when using cinterval(bcbootstrap). One of my confidence intervals ranges from 0.000 to 0.050 and I would like to be able to say that the effect is significant. I have tried using the savedata command, but I am not sure what to ask for, since the results option does not seem to include the confidence intervals. Thanks for your help. 


Confidence intervals are not saved. You can rescale your variables by dividing them by a constant using the DEFINE command. 
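A sketch of that rescaling (hypothetical variable name x for the predictor): dividing the predictor by a constant multiplies its indirect effect, and hence the CI limits, by that constant, so meaningful digits move into the three decimals that are printed.

```
DEFINE:
  x = x/100;    ! a CI of 0.000-0.050 would then print as roughly 0.0-5.0
```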


Dear Drs. Muthen, We're running a mediational SEM model in which we have an X, a Y, and two mediators, Ma and Mb. When we run two separate models, both Ma and Mb fully mediate the X -> Y relationship (using bootstrapped standard errors). But when we model both mediators in the same SEM model, the Ma mediator is no longer significantly related to Y. All other paths are significant, including X -> Ma. We also ran a regression analysis and found that Ma predicts unique variance in Y after controlling for both X and Mb. 1. What can we conclude about Ma as a mediator of X -> Y? 2. Could it be that the finding that only Mb (and not Ma) mediates X -> Y when tested in the same model is a statistical artifact? And if that is the case, how does that happen? 3. Alternatively, if it is not an artifact, can we conclude that Mb is a more important mediator than Ma when compared in the same model? How should one then report that Ma functioned as a mediator when tested alone and, when tested in a regression model, was found to predict unique variance in Y? I thank you in advance and for a great discussion board. 


If Ma and Mb are highly correlated, there may not be anything left in y to predict beyond what one of the mediators predicts. 


Thank you so much for your prompt reply. That's probably right, that the high correlation between Ma and Mb messes this up. My question is still, though, what can we conclude about Mb as a mediator? Is it an artifact, that is, could it just as easily have been Ma that ended up with the significant path, or neither? If Ma and Mb are so highly correlated that the Ma -> Y path becomes nonsignificant, that doesn't explain why Mb -> Y is significant, nor why we found that the unique contribution of Ma was significant after controlling for X and Mb in a regression model. Or does it? 


This topic (without the mediation angle) is discussed in the linear regression literature under the heading of multicollinearity. You may want to take a look at that. I don't think it is possible to draw conclusions about the joint role of Ma and Mb in such a situation, only that each entered separately is a mediator. You may also want to consult the new mediation book by David MacKinnon to see if he has some wisdom on this topic. 


I have a question regarding Mplus output. I used bootstrapping to test a mediation effect. In the output for the MODEL INDIRECT command, I have columns for "Estimates, S.E., Est./S.E., StdYX, StdYX SE, StdYX/SE." Can you please explain to me what StdYX, StdYX SE, and StdYX/SE refer to? Which one is the test of the indirect effect? Thanks. Metin 


The test is the ratio of the estimate to the standard error of the estimate. Please see Chapter 17 for a description of the columns of the Mplus output and information about the various standardizations. 


Hello, can I use FIML to test mediation (with bootstrapping) or to model interactions (both for latent variables)? And in the case of using multiple imputation, how do I treat the fit values and the indirect and direct coefficients? Can I just use the so-called Rubin formula (which would be like a mean)? Thanks for your help, Miriam 


I would like to explain my post above a little more, since maybe it is not so clear what I am trying to ask; please excuse that. I am trying to model mediation (with bootstrapping) and moderation (with interaction), but I have missing values which I would like to impute. Now I have 5 datasets, and I am doing my analysis with each of those data sets separately, because I cannot read in the 5 data sets at once; the aforementioned models won't allow that. But I don't know how to handle the coefficients or fit values of those 5 analyses; could you give me advice on how to handle this? (Is that done with the Rubin formula?) Further, I read that FIML is an appropriate way to handle missing data, but as far as I read here it's more often used in multilevel or group analysis, so I thought this could have been a way to address my issue. Thanks for your help, Miriam 


You can use the IMPUTATION option of the DATA command to analyze a set of imputed datasets. Correct parameter estimates and standard errors are calculated. Fit statistics are provided. 


Maybe I did something wrong, but I had problems using this command for modeling interaction or mediation (bootstrapping). 


Maybe I did something wrong, but I had problems using the IMPUTATION command. This is the error I get: *** ERROR MODEL INDIRECT is not allowed with TYPE=IMPUTATION. The same error shows up when I try to model an interaction. So that's why I am modeling it with each of the five data sets and would like to ask whether the fit values can be integrated by calculating their mean. 


You cannot use MODEL INDIRECT with TYPE=IMPUTATION, but you should be able to use XWITH. I would use MODEL CONSTRAINT with TYPE=IMPUTATION to define the indirect effects. Although the parameter estimates are simply an average across imputed data sets, the standard errors and chi-square are not and cannot be computed by hand. If you have further problems along this line, please send them along with your license number to support@statmodel.com. 
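A minimal sketch of the MODEL CONSTRAINT workaround (hypothetical variable names and imputation file list; the indirect effect is defined as a labeled-parameter product rather than via MODEL INDIRECT):

```
DATA:
  FILE = implist.dat;     ! list of the imputed data sets
  TYPE = IMPUTATION;
MODEL:
  m ON x (a);
  y ON m (b);
  y ON x;
MODEL CONSTRAINT:
  NEW(ind);
  ind = a*b;              ! indirect effect, pooled across imputations
```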


Thank you so much for your help. I will first try to model it with the commands you recommended. If this will not work out  I will come back to you later and send you my data. 


Hello, I'm running a fairly basic mediation with a dichotomous IV, a dichotomous mediator, and a non-normal continuous DV (count data). I'm using the bootstrapping command and requesting the indirect effect. However, I can only seem to do this with the WLSMV estimator and was not able to specify a negative binomial distribution for the DV. 1. Is the skewness of the DV a problem, given that I'm bootstrapping? If so, is there anything to be done, since I'm unable to execute the (NB) command? 2. The WLSMV estimates are quite different from those with ML. Is the interpretation of the estimates the same? Can I exponentiate them to get odds ratios for the IV -> M relationship? Thanks for any help! 


1. In Mplus, indirect effects can be computed when mediators are categorical only using weighted least squares estimation. 2. WLSMV estimates are in a probit metric. ML estimates are in a logit metric. WLSMV estimates should not be exponentiated. 

yan liu posted on Sunday, August 28, 2011  9:34 am



Hi, Linda and Bengt, I am running a multilevel SEM mediation model: mediator1 = b*predictor; mediator2 = b1*predictor + b2*mediator1; outcome = b1*predictor + b2*mediator1 + b3*mediator2. I am trying to calculate the indirect effects and test whether they are significant, using the formula provided by Hayes (2009). I found for the between level that although none of the individual mediation effects was significant, the sum (total indirect effects) turned out to be significant, which does not make sense to me. Is the way to test "indtotw" and "indtotb" correct? Thanks!

%WITHIN%
PNS ON teach (a1w);
movat ON teach (a2w);
movat ON PNS (a3w);
engage ON PNS (b1w);
engage ON movat (b2w);
engage ON teach;
%BETWEEN%
PNS ON teach (a1b);
movat ON teach (a2b);
movat ON PNS (a3b);
engage ON PNS (b1b);
engage ON movat (b2b);
engage ON teach;
MODEL CONSTRAINT:
NEW(ind1w ind2w ind3w ind1b ind2b ind3b indtotw indtotb);
ind1w = a1w*b1w;
ind2w = a2w*b2w;
ind3w = a1w*a3w*b2w;
ind1b = a1b*b1b;
ind2b = a2b*b2b;
ind3b = a1b*a3b*b2b;
indtotw = ind1w + ind2w + ind3w;
indtotb = ind1b + ind2b + ind3b; 

yan liu posted on Sunday, August 28, 2011  11:59 am



Just to follow up on the question I just posted: the equation for computing the total effect of several mediation pathways can be found in Hayes (2009). Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76(4), 408-420. http://www.tandfonline.com/doi/pdf/10.1080/03637750903310360 Thanks a lot! 


It looks correct to me. If your sample size is small, you may want to try Bayesian analysis, which allows indirect effects to have a non-normal distribution. 

yan liu posted on Sunday, September 11, 2011  11:33 am



Hi, Bengt, thank you so much for your reply. Following up on your suggestion to my question (posted above, Aug. 28), I tried Bayes estimation. I added the following code to my original Mplus syntax:

ANALYSIS:
TYPE = TWOLEVEL;
estimator = bayes;
process = 2;
fbiter = 10000;

However, I got the following error message: "Unrestricted x-variables for analysis with TYPE=TWOLEVEL and ESTIMATOR=BAYES must be specified as either a WITHIN or BETWEEN variable. The following variable cannot exist on both levels: TEACH" (TEACH = predictor, PNS = mediator, movat = outcome). Is something wrong with my code? Or can I not use Bayes estimation for Preacher et al.'s multilevel SEM mediation approach because Bayes estimation doesn't allow a predictor to be at both levels? Thanks. 


Bayes does not do the latent variable decomposition of the predictor variable, but uses the usual MLM approach. This means that you would have to specify TEACH as a WITHIN variable. If you want it on the between level as well, you have to create the cluster-mean version of the variable yourself (there is an Mplus option for this) and enter it as a BETWEEN variable. 
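A sketch of that setup, assuming the CLUSTER_MEAN function of the DEFINE command is available in your Mplus version (teachb is a hypothetical name for the cluster-mean variable):

```
DEFINE:
  teachb = CLUSTER_MEAN(teach);
VARIABLE:
  WITHIN  = teach;
  BETWEEN = teachb;
```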


Hello, I am estimating bootstrap CIs for testing indirect effects hypotheses. Is it possible to have contradictory results across "Total indirect effects" and "Standardized total indirect effects"? In the Mplus output, using the first one as reference, the indirect effects are significant, but using the second one as reference, they are not. Thanks for your response. MF 


Standardized and raw parameters have different sampling distributions and can therefore have different significance levels. 


Hello. I have a multiple mediation path model with 1 IV, 2 mediators, and 1 outcome. I want the specific indirect effect for each mediator and to contrast them to see if one is larger than the other. I also would like to do a simulation to determine the sample size needed to power this study. Here is a program I am working from:

TITLE: 2 mediator example with contrast
DATA: FILE IS data.dat;
VARIABLE: NAMES ARE x m1 m2 y;
MODEL:
m1 ON x (a1);
m2 ON x (a2);
y ON m1 (b1);
y ON m2 (b2);
y ON x;
m1 WITH m2;
MODEL INDIRECT:
y IND m1 x;
y IND m2 x;
MODEL CONSTRAINT:
NEW(a1b1 a2b2 con);
a1b1 = a1*b1;
a2b2 = a2*b2;
con = a1b1 - a2b2;
OUTPUT: CINTERVAL (BCBOOTSTRAP);

It is similar to Mplus example program 3.16, but the latter doesn't include contrasts of specific indirect effects, and it specifies bootstrap in the ANALYSIS section rather than in the OUTPUT section as in the code listed above. Do these programs otherwise do the same thing? Also, I didn't see an MC counterpart to Example 3.16 in the Mplus example programs folder; does one exist, or perhaps I accidentally deleted it at some point? Can you include that code or provide other assistance that might help with determining the sample size for this analysis? Thank you. 


The data for Example 3.16 comes from Example 3.11. The BOOTSTRAP option is not available with the MONTECARLO command. If you want to test two indirect effects, define them in MODEL CONSTRAINT and use MODEL TEST to see if they are different. 


Thank you. Is there a way to perform a simulation using MPlus to calculate the sample size necessary to be able to detect a certain size specific indirect effect in a multiple mediation path model? 


Yes. Use mcex3.11.inp as a starting point. 
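A hedged sketch of what such a power simulation might look like, adapted from the structure of mcex3.11.inp; all variable names and population values below are illustrative assumptions, not from the thread. Power for each indirect effect is read from the proportion of significant replications reported for the NEW parameters; the candidate sample size is varied via NOBSERVATIONS.

```
MONTECARLO:
  NAMES = x m1 m2 y;
  NOBSERVATIONS = 200;    ! candidate sample size to evaluate
  NREPS = 1000;
MODEL POPULATION:
  x@1;
  m1 ON x*.3;  m1*.91;
  m2 ON x*.3;  m2*.91;
  m1 WITH m2*.1;
  y ON m1*.3 m2*.3 x*.1;  y*.8;
MODEL:
  m1 ON x*.3 (a1);
  m2 ON x*.3 (a2);
  y ON m1*.3 (b1);
  y ON m2*.3 (b2);
  y ON x*.1;
  m1 WITH m2*.1;
MODEL CONSTRAINT:
  NEW(a1b1*.09 a2b2*.09 con*0);
  a1b1 = a1*b1;
  a2b2 = a2*b2;
  con  = a1b1 - a2b2;
```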


Is there a way to label the indirect effect (of say x > m > y) in a mediation in order to test the equality of the indirect effect across multiple groups using the Model Test command? 


You would have to label the components of the indirect effect in the groupspecific MODEL commands and define the indirect effects in MODEL CONSTRAINT. 


Thank you. Do you mean calculate the product of the coefficients of the paths in MODEL CONSTRAINT? 


Yes. 
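The exchange above might be sketched as follows for a two-group case (hypothetical grouping and variable names; the path labels are given in the group-specific MODEL commands, the products are defined in MODEL CONSTRAINT, and their equality is tested with MODEL TEST):

```
! assumes GROUPING = g (1=g1 2=g2); in the VARIABLE command
MODEL:
  m ON x;
  y ON m x;
MODEL g1:
  m ON x (a1);
  y ON m (b1);
MODEL g2:
  m ON x (a2);
  y ON m (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2);
  ind1 = a1*b1;
  ind2 = a2*b2;
MODEL TEST:
  0 = ind1 - ind2;      ! Wald test of equal indirect effects
```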

Heike B. posted on Thursday, December 15, 2011  4:06 am



I am using WLSMV to estimate a manifest model with categorical endogenous variables (4 levels each). My sample is small (360 cases). Linda recommended in a similar case here in the thread to use the default standard errors rather than bootstrap standard errors. With respect to the p-values: 1.) Should I use the p-values of the default estimation to decide on significance, or the confidence intervals / p-values from the bootstrap? 2.) What would be the rationale behind the recommendation? 3.) If I can use the bootstrap confidence intervals, is there a possibility of deriving one-sided intervals from the two-sided intervals? 4.) If not, is there another way to determine one-sided confidence intervals in Mplus? Many thanks in advance. Heike 


It's really up to you to decide which p-values to use. You would need to investigate how to compute one-sided confidence intervals. Mplus does not compute them. 

Heike B. posted on Thursday, December 15, 2011  12:18 pm



Thank you, Linda. Does this mean that both the default estimation and the bootstrap work similarly well under my circumstances? I mean, are there some guidelines for when one or the other approach produces better results? Many thanks in advance. Heike 


It's difficult to say. All circumstances differ in many respects. You would need to do a Monte Carlo study that reflects your situation to answer that question. 


Dear Drs. Muthen, We are running a mediational model with bootstrapped standard errors in which variable X predicts variable M, which in turn predicts Y. This model looks fine, with good fit indices, all paths and indicators significant, and the total indirect effects also significant. Because the data are cross-sectional and because it could be argued that the direction of effects is actually X to Y to M, we wanted to see if this alternative model would also fit the data. Output for this model showed that the fit indices were very good, but some of the indicators were no longer significant, and the Y to M path was no longer significant (even though in the original X-M-Y model the M to Y path was significant, and the bivariate correlations among all the indicators are significant).
1. Does this mean that our original model is in fact better?
2. Why would the Y-M path no longer be significant in the alternative model?
3. Why is the number of bootstrap draws less than what we specified in the input?
4. We sometimes get the 'THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE' message and see that the residual variance of one of the indicators is negative; what is the appropriate action to deal with this?
5. Finally, why is the two-tailed p-value for the unstandardized estimates often different from the two-tailed p-value for StdY and StdYX? Thank you. 


1-2. The two models are different and fit differently to the covariance matrix of all the variables. In your case, you should not base your choice of model on fit but on substantive reasoning. 3. Send the output to Support. 4. This indicates that the model needs to be modified. 5. Unstandardized and standardized coefficients have different sampling distributions, and the assumption of a normal distribution may be more or less well approximated in the two cases. If they differ, it may be better to use the unstandardized results. 


Ok. Thank you. Regarding question 1, the theory isn't all that clear here, and while we have reason to believe that the X-M-Y model is the better one substantively, it would be nice to test the alternative model as well; but since the two models are not nested, we figured we could look at the fit indices and path coefficients as an indication of which model best fits the data. 


Dear Drs. Muthen, I'm running a mediation model (using the delta method). There are some significant indirect effects, but they are really small (e.g., standardized b = .04, p < .05). Would you report such small effects? Is there a rule of thumb? It's clear that the indirect effects are small, but is there a cut point? Thanks, Christoph Weber 


I would report small effects as well. The size of effects can be discussed in Cohen's terms. 

Xu, Man posted on Thursday, March 22, 2012  8:15 am



Could I just follow up on this thread: since the regular significance testing of a mediation effect might be biased, I try to get confidence intervals from bootstrapping (I created the mediation effect using NEW and MODEL CONSTRAINT). 1. There are apparently two bootstrapping options, CINTERVAL (BCBOOTSTRAP) and CINTERVAL (BOOTSTRAP); which one is more suitable? 2. With a sample size of around 3000 to 4000, what would be an appropriate number of bootstrap draws? I could not request indirect results because the analysis uses TYPE=RANDOM. Thanks! Kate 

Xu, Man posted on Thursday, March 22, 2012  10:57 am



Oh, actually, BOOTSTRAP cannot be used with TYPE=RANDOM. But I had to have TYPE=RANDOM because I used TSCORES to adjust for time at data collection; there is an embedded second-order growth curve model, and I am looking at mediators of the growth intercept and slopes. Is there any way around this, to get good standard errors for the mediation effect? 


I would use BCBOOTSTRAP with 500-1000 draws. 

Xu, Man posted on Thursday, March 22, 2012  3:51 pm



Thank you. But it seems BCBOOTSTRAP cannot be used together with TYPE=RANDOM? In this situation, is there any way to get bootstrapped standard errors for parameters created using NEW and MODEL CONSTRAINT (the mediation effect in my case)? Thank you! 


No, there is not. 

Xu, Man posted on Thursday, March 22, 2012  4:04 pm



I see. I will stick to the given output then. Thanks for letting me know. 


I was wondering what to report when examining indirect effects and their significance. Do you report the unstandardized or the standardized coefficients? Because my direct effects are only displayed in a path model with betas (standardized estimates), I thought it best to report the Sobel/delta method test statistic and p-value from the standardized section of the 'indirect' output, but is this correct? 


Whether to report unstandardized or standardized coefficients should be guided by the journal you plan to publish in. Whichever you report, you should report their standard errors and p-values. You should not use unstandardized p-values with standardized coefficients. 


OK, thank you! 


When estimating bootstrapped standard errors to test for mediation using the Theta parameterization, I am getting confidence intervals indicating a significant indirect effect for the unstandardized estimates but nonsignificant indirect effects for the standardized estimates. I am curious why this is occurring and how I should handle it in terms of reporting results. I have historically reported unstandardized coefficients. 


Raw and standardized coefficients have different sampling distributions so can have different significance levels. If you usually report raw coefficients, I would do that. I would not decide what to report based on significance. 

Jo Brown posted on Thursday, June 14, 2012  3:22 am



How many bootstrap cycles do you normally need to obtain accurate standard errors for the indirect effect? 


This can differ depending on the data and model. I would experiment with different numbers until the results stabilize. 

Jo Brown posted on Friday, June 15, 2012  3:03 am



Thanks! I tried 1000, 5000, and 10000, and I must say there is not much difference between these numbers of cycles. Could this be an argument in favour of using 1000 cycles? 


You might need only 500. Try that. 


Hello, I have a question about using bias-corrected bootstrapping in mediation. I ran a mediation model (multigroup, 1 latent IV, 2 latent mediators, 2 latent outcomes + 1 covariate) without bootstrapping and found several moderate to large direct effects that were significant (p < .05 to p < .001) in one group, as well as significant indirect effects in the same group. I used both ML and MLR and found this result. When I ran the same model with bootstrapping, some of those direct effects dropped to ns, yet some corresponding indirect effects are significant according to the bootstrapped result. I can't get a sense from reading whether it is customary to report the significance of direct effects from the bootstrap results or from an analysis without bootstrapping. Does anyone know a reference on this point? It seems odd to have ns direct effects plus significant indirect effects for the same path... Not sure how to explain that in my results. Any help would be appreciated. Thank you. 


Bootstrap SEs are often bigger, so the ns direct effects are natural. I would report the bootstrapped SEs for all effects. I don't see why it would be odd for a variable to have an ns direct effect and a significant indirect effect, if that is what you are asking; that represents complete mediation. 


Thanks for the fats reply. I should have pointed out that it was the b effect linking the mediator to the DV that was ns, thus the concern. I am accustomed to finding joint effects significant when the indirect effects are. Thanks for your help, ML 


Sorry, thanks for the "fast" reply 


See also our FAQ: 11/18/11: Indirect effect insignificant while both paths significant 


Hello, I have two questions pertaining to a peer review for an article. 1. I bootstrapped the CIs and SEs of direct/indirect effects for a mediation model with latent variables. I therefore couldn't use MLR. Is bootstrapping robust to violations of multivariate normality? I am reluctant to use bias-corrected bootstrapping because of my sample size and the size of my effects (per the Fritz and MacKinnon recommendation). 2. I did multiple-group mediation and compared unstandardized effects across groups. A reviewer asked about effect sizes. The standardized effects are not comparable across groups, so I don't want to report them. What is the best thing to report in this situation for an effect size, particularly for an indirect effect? Any help would be appreciated, Michelle 


1. Yes. 2. By effect size in this context, I assume you mean: as X increases 1 SD (or changes from control to tx), Y changes ? SD. To compute this, I would take the unstandardized model coefficient estimates (a, b, c) for each group and use the X and Y SDs to compute the group-specific effect sizes, which won't be the same since the SDs aren't the same in the different groups even if the model coefficient estimates are the same. 
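The computation described above can be sketched in a few lines of plain Python. All numbers below are invented for illustration (this is not Mplus output); the point is only that the same unstandardized indirect effect a*b yields different group-specific effect sizes when the groups' SDs differ.

```python
def indirect_effect_size(a, b, sd_x, sd_y):
    """SD change in Y per 1-SD change in X, via the indirect path a*b."""
    return a * b * sd_x / sd_y

# Same unstandardized coefficients, different group SDs ->
# different group-specific effect sizes (all numbers hypothetical).
group1 = indirect_effect_size(a=0.4, b=0.5, sd_x=1.0, sd_y=2.0)  # 0.4*0.5*1.0/2.0 = 0.10
group2 = indirect_effect_size(a=0.4, b=0.5, sd_x=1.5, sd_y=1.0)  # 0.4*0.5*1.5/1.0 = 0.30
print(f"group 1: {group1:.2f}, group 2: {group2:.2f}")
```

The same rescaling applies to the direct a and b paths individually, using the SDs of the variables each path connects.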


Thanks so much for your quick reply. I just had one additional follow-up: the reviewer did not specify a particular effect size measure... I was thinking I could report the R-squared and standardized betas for each group. But, given that these effect sizes are all group-specific and my focus is entirely on group differences, I am reluctant to do this. In lieu of this, I could calculate a standard effect based on the pooled SD of X/Y for each group? And for the indirect effect it would be a*b*(pooled SDx / pooled SDy). MacKinnon suggests a*b*(SDx/SDy), but this is for a single-group model. Also, I think I could use the same method for the direct effects (the a effect and the b effect). Does this sound ok? 


This seems reasonable. I don't think there is only one acceptable way to do this. You may want to ask this question on a general discussion forum like SEMNET. Bengt 


Hello, I am running a path analysis with my predictor and 3 mediator variables at time 1 and two outcome variables at time 2, with a sample of 499. In my ANALYSIS command, I indicated BOOTSTRAP = 10000; and in my OUTPUT command, I asked for STANDARDIZED MODINDICES(3.84) SAMPSTAT TECH1 CINTERVAL(BOOTSTRAP); I want to make sure this is the correct syntax, and also, I'm unsure if the command is for percentile bootstrapping or bias-corrected bootstrapping. Given my sample size and that the predictor does NOT lead to the outcomes, which might be better to use in this case? If the CIs do not contain 0, can I assume mediation even though the predictor does not lead to the outcomes? 


If you say CINTERVAL(BCBOOTSTRAP); you get the biascorrected version. Before trusting the CIs, you want to make sure that your model fits, that is, that the direct effects are zero. 

Dexin Shi posted on Sunday, September 08, 2013  5:45 pm



Hello, I am running a mediation analysis with a categorical mediator (no latent variables involved). I used both WLSMV and BC bootstrap. However, for one path (from the independent variable (x) to the categorical mediator), the two methods did not agree with each other on the significance test. WLSMV gave p = 0.819, whereas the BC bootstrap CI was [0.399, 2.496]. To report the results, which one is recommended? Thank you for your help. 


WLSMV gives a symmetric confidence interval around a bootstrapped standard error. BCBOOTSTRAP gives a nonsymmetric confidence interval around a bootstrapped standard error. This is why they may not agree. 
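For intuition, here is a rough sketch in plain Python (not Mplus internals) of how a bias-corrected (BC) bootstrap interval differs from a simple percentile interval. The bootstrap replicates of the indirect effect a*b are simulated here, and all numbers are hypothetical; the BC interval shifts the percentiles by z0, a correction based on the share of replicates falling below the original estimate.

```python
import random
from statistics import NormalDist

random.seed(7)
nd = NormalDist()

# Simulated bootstrap replicates of an indirect effect a*b (illustrative values).
point_estimate = 0.06  # pretend original-sample estimate of a*b
boots = sorted(random.gauss(0.2, 0.1) * random.gauss(0.3, 0.1)
               for _ in range(2000))

def quantile(sorted_xs, p):
    """Nearest-rank empirical quantile of an already-sorted list."""
    i = min(int(p * len(sorted_xs)), len(sorted_xs) - 1)
    return sorted_xs[i]

# Percentile CI: plain 2.5% and 97.5% quantiles of the replicates.
pct_lo, pct_hi = quantile(boots, 0.025), quantile(boots, 0.975)

# Bias-corrected CI: shift the percentile points by z0, the normal quantile
# of the proportion of replicates below the original estimate.
z0 = nd.inv_cdf(sum(b < point_estimate for b in boots) / len(boots))
bc_lo = quantile(boots, nd.cdf(2 * z0 + nd.inv_cdf(0.025)))
bc_hi = quantile(boots, nd.cdf(2 * z0 + nd.inv_cdf(0.975)))

print(f"percentile CI: [{pct_lo:.3f}, {pct_hi:.3f}]")
print(f"BC CI:         [{bc_lo:.3f}, {bc_hi:.3f}]")
```

Neither interval needs to be symmetric around the estimate, whereas a symmetric interval is by construction estimate ± 1.96 × SE, extending the same distance on both sides.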


Dear Drs. Muthen, We have run several studies with this path model: MODEL: empathy ON cond (a1); anc ON cond (a2); liking ON empathy (b1) anc (b2) cond (c1); gameinv ON liking (e1) empathy (d1) anc (d2) cond (f1); Gameinv is categorical, and we are using bootstrapping. Our reviewers were interested in an alternative theoretical model, leading us to test these path models: MODEL: empathy ON cond (a1); liking ON cond (a2); anc ON empathy (b1) liking (b2) cond (a3); gameinv ON anc (c1) empathy (d1) liking (d2) cond (a4); MODEL: empathy ON cond (a1) liking (b1); anc ON cond (a2) liking (b2); liking ON cond (c1); gameinv ON empathy (e1) anc (e2) liking (f1) cond (d1); The relevant indirect paths are significant in all three. Is there a way to argue statistically that our original proposed model is better (such as by comparing goodness of fit, although the only fit statistic reported with bootstrapping has been WRMR), or should we make a theoretical argument? Any advice would be much appreciated. Stephanie 


You can compare the fit of the models. With the BOOTSTRAP option, only standard errors are bootstrapped so we don't give fit statistics. You can run the three models without the BOOTSTRAP option to obtain the fit statistics. 


Thanks for your quick response! Which fit statistic would you recommend comparing across models? The fit statistics reported when I eliminate bootstrapping are chi-square, RMSEA, CFI, and WRMR, but it's my understanding that these cannot be used to compare non-nested models. 


There is no way of testing which model is best compared to another statistically unless the models are nested. You can use any of the fit statistics listed above for comparison purposes. I would not use WRMR as it is an experimental fit statistic. 


Great, thanks so much for your feedback. 

Melissa Kull posted on Saturday, November 09, 2013  10:50 am



Using raw data with a small amount of missing data, I've been running basic path models with one exogenous predictor, three mediators (all continuous, mostly normally distributed), and one outcome. I've been trying to run these models using a sampling weight (to adjust for nonresponse), although I'm not interested in stratification or clustering, so I have not identified these data as complex. When I try to estimate bootstrapped SEs in these models, the models will run using ML but will not run using MLR (which is the default for these models when the bootstrap is not applied). Can someone explain why this is happening and suggest some references with information on selecting the most appropriate estimator? I've looked over some of the MacKinnon articles cited in this thread but am not sure whether my models are correctly specified with the ML estimator and bootstrapped SEs. Many thanks. 


We don't do bootstrap with sampling weights. It is not clear how that should be done. 

Melissa Kull posted on Wednesday, November 13, 2013  8:06 pm



Dr. Muthen, Thanks for your response. It seems peculiar that the models are converging and providing estimates that have been similar to results from other iterations of these models that I've been running. I thought maybe the syntax was just ignoring the population weight, but when I took the weight out, the results were different. Despite this, I suppose these estimates are not to be trusted? Can you explain why or suggest a reading that indicates why bootstrapping should not work with population weights? This would be tremendously helpful as I move forward with trying to appropriately specify these models. Thanks very much for your assistance. 


I think the issue here is that the BOOTSTRAP option is not available with TYPE=COMPLEX. It is available with the WEIGHT option. I think this explains what you are seeing. 


Dear Drs Muthen, I have run a simple mediation as follows: MODEL: Perf2 ON Res; Perf2 ON TWE; Model indirect: Perf2 IND TWE; From the output below, I conclude that my model is saturated and the paths are nonsignificant. I have two questions: (1) Why is the model saturated? I am unable to see how it is possible that I have 80+ parameters to estimate... (2) Based on this, do I conclude that I have no evidence to support the mediation hypothesis? Thank you in advance.

Chi-Square Test of Model Fit: Value 0.000, Degrees of Freedom 0, P-Value 0.0000
RMSEA: Estimate 0.000, 90 Percent C.I. 0.000 0.000, Probability RMSEA <= .05 0.000
CFI 1.000, TLI 1.000
Chi-Square Test of Model Fit for the Baseline Model: Value 4.964, Degrees of Freedom 2, P-Value 0.0836
SRMR: Value 0.000 


Your model is not a mediation model. It should be MODEL: Perf2 ON Res; Res ON TWE; if Res is the mediator. 


Dear Dr. Muthén, Thank you for your answer. My model is: X = Res Mediator = TWE Y = Perf2 Is there anything wrong? My questions remain. 


Then the model should be twe ON x; perf2 ON twe; 


Dear Drs. Muthen, I am hoping to look at indirect effects in a path analysis. I know that I can just request indirect effects using the IND command; however, per the Preacher and Hayes (2008) article, it appears that the bootstrapping method is recommended particularly for data that are not normally distributed (as mine are not). With that said, though, I have used the MLR estimator to help me manage my missing data and nonnormality. When I tried to use bootstrapping as a way to identify indirect effects, I received the error message saying that bootstrapping can't be used with MLR. I'm wondering what you might suggest in a situation like this. I'm imagining that I could either report the indirect effects that are provided without bootstrapping (e.g., just multiplying the direct effects between mediators), or I could not use MLR and run the bootstrapping procedure to get the bootstrapped indirect effects. Do you have thoughts about how best to proceed? Thank you so much! 


Use ML not MLR and you can do bootstrapping. All of the maximum likelihood estimators give the same parameter estimates. Bootstrapped standard errors are implemented in ML. MLR and bootstrapped standard errors are usually very close. 


Thanks for your help will do! 

RuoShui posted on Sunday, April 06, 2014  5:58 pm



Dear Drs. Muthen, I used ML and bootstrapping in my SEM and asked for STANDARDIZED in the output. However, there are no standard errors or p-values for the standardized parameter estimates. Is this normal? Is there any way I can obtain them? Thank you very much! 


Standardized estimates are not available using the BOOTSTRAP option. 


Dear Drs. Muthen, if I understand correctly, you mention in the discussion above that it is preferable to use bootstrapping (in combination with the ML estimator) rather than the MLR estimator when indirect effects are of interest and the data are nonnormal? Also, the two methods should produce comparable SEs. However, when I run my model (a longitudinal mediation model based on skewed data), I get significant results when I use MLR but not when bootstrapping is used (i.e., for estimates of both direct and indirect effects). I'm wondering how this could be (perhaps sample size? N = 172) and why bootstrapping is preferred over MLR? Thank you in advance for your reply. 


Bootstrapping allows the indirect effect to have a nonnormal sampling distribution (this is quite apart from the nonnormality of the outcomes) so that a nonsymmetric confidence interval can be used. Even though MLR gives good SEs, it leads to using symmetric confidence intervals. You can also check by using Bayes, which also gives nonsymmetric confidence intervals. 
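The nonsymmetry point can be seen in a small simulation (plain Python, with hypothetical path estimates and SEs): even when the estimates of the two paths are each normally distributed, their product — the indirect effect — has a skewed sampling distribution.

```python
import random
from statistics import fmean, median

random.seed(1)

a_hat, se_a = 0.30, 0.10   # assumed estimate and SE for path a
b_hat, se_b = 0.25, 0.10   # assumed estimate and SE for path b

# Draw a and b from their (normal) sampling distributions and form a*b.
products = [random.gauss(a_hat, se_a) * random.gauss(b_hat, se_b)
            for _ in range(100_000)]

# Positive skew: the mean of the products sits above their median, so a
# symmetric interval centered on the estimate misplaces its tails.
print(f"mean = {fmean(products):.4f}, median = {median(products):.4f}")
```

A nonsymmetric (bootstrap or Bayes) interval follows the actual shape of this distribution instead of forcing equal tails on both sides.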


Professors, Forgive me if this is a repeat question. If I have a mediation where each path is significant, but the total indirect effect is zero, do I conclude that there is no mediation? I bootstrapped the standard errors, and the CI did not include zero. 


If the CI does not include zero, the effect is significant. 


Hi I was wondering if I could get some clarification on the bootstrapping method. Is the standard "Bootstrap = " statement in Mplus v.6 a parametric bootstrap or does it do case resampling? Thank you. 


Bootstrap = 500; will give you the standard bootstrapping method. Mplus can also do residual bootstrapping using the command BOOTSTRAP = 500 (RESIDUAL); See page 620 in the User's Guide for more details. 


So our standard bootstrapping is the case resampling. 
