I am running several mediation models in which my dependent variable is ordered categorical. I am using the bootstrap method to estimate standard errors for the indirect effects, with the bootstrap analysis command. I asked for confidence intervals and am given the appropriate intervals. Can I use these intervals along with the effects to estimate odds ratios, or is this incorrect if my mediator is continuous?
The default WLSMV works with probit regressions so the estimates are not directly in odds ratio metric. The indirect effects are with respect to a continuous y* variable behind the dependent observed categorical variable, where y* is the response propensity. I think this idea has been discussed in David MacKinnon's work.
Daniel posted on Saturday, June 19, 2004 - 9:00 am
I have a question regarding the indirect effect. If the two paths in the specific indirect effect (a to b [path a'] and b to c [path b']) are each significant, shouldn't the specific indirect effect [a' * b'] also be significant? Or is it possible for a' and b' to be significant without the specific indirect effect (a' * b') being significant?
bmuthen posted on Saturday, June 19, 2004 - 11:59 am
Seems like this is possible because the indirect effect is a product of the two estimates and the SE of this product is a function not only of each of the two SEs, but also the covariance between the two estimates - which might be positive and therefore make the denominator of the test larger.
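Bengt's point about the covariance term can be made concrete with a small delta-method sketch (all numbers below are hypothetical):

```python
import math

def product_se(a, b, se_a, se_b, cov_ab=0.0):
    """Delta-method (Sobel-type) SE of the indirect effect a*b:
    Var(a*b) ~ b^2*Var(a) + a^2*Var(b) + 2*a*b*Cov(a,b)."""
    return math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2 + 2 * a * b * cov_ab)

# Each path is clearly significant on its own (z = 5.0 and 4.0)...
a, b, se_a, se_b = 0.50, 0.40, 0.10, 0.10
z_no_cov = a * b / product_se(a, b, se_a, se_b)           # covariance ignored
z_pos_cov = a * b / product_se(a, b, se_a, se_b, 0.008)   # positive covariance
# ...but a positive covariance inflates the SE of the product and
# shrinks the z statistic for the indirect effect.
```

With these made-up numbers, z drops from about 3.1 to about 2.3; with weaker paths, the same mechanism can push the product below significance even when each path is significant.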
Thanks. My population is 913 for the study, and I am modeling mediation in an associative process model between two LGM, each with two random effects (trend and intercept), and about 5 covariates. The observed measures are ordered categorical. What would you suggest I set the bootstrap to (i.e., bootstrap=?) in the analysis command?
There is no rule for this. You should experiment. Start with 250. Then try 500. Compare the standard errors to see if there is much difference.
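The suggestion to compare SEs across draw counts is easy to emulate outside Mplus; a toy pure-Python sketch with made-up data:

```python
import random
import statistics

def bootstrap_se(data, stat, draws, seed=12345):
    """Bootstrap SE: the SD of `stat` across `draws` resamples."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(draws):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    return statistics.stdev(reps)

# Made-up data; in practice you would compare the SEs Mplus reports.
rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(200)]
se_250 = bootstrap_se(data, statistics.fmean, 250)
se_500 = bootstrap_se(data, statistics.fmean, 500)
# If se_250 and se_500 agree closely, 250 draws is probably enough.
```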
Daniel posted on Tuesday, June 22, 2004 - 10:49 am
I ran the bootstrap at 250, 300, 350, 400, and 450, and it ran fine each time, with each increase resulting in a proportional increase in run time. However, as soon as I run the bootstrap at 500, it runs for hours without end. Last night I tried to run it with 1000 and left the program running all night after leaving work at about 4 PM. I returned the next morning, and it was still running. Why might I not be able to get a solution with values greater than or equal to 500? Does it have something to do with the associative processes or the categorical outcome variables?
Using a WLSMV estimator, why would a chi-square test not be calculated using Dr. MacKinnon's bias-corrected bootstrap method of estimating SE and confidence intervals in a path analysis with multiple mediational pathways?
I am currently running a mediational structural equation model dealing with domestic violence. The observed measures are primarily indices derived from self-report scales with ranges as broad as 0 to 177 (the item endorsements are ordinal values, each representing a frequency range for specific behaviors, e.g., 0 = 0-10, 1 = 10-20; these items are then summed to produce the indices of interest for the current model). I decided to model the data as continuous censored, but received the following error message:
INPUT READING TERMINATED NORMALLY
*** FATAL ERROR Internal Error Code: GH1006. An internal error has occurred. Please contact us about the error, providing both the input and data files if possible.
I am forwarding the requested information to you.
In the meantime however, I am trying to resolve two questions regarding the model:
1) Regarding overall fit indices, I am contemplating the use of the MLR estimator, treating the data as continuous. a) Given non-normal data and censoring from below (at zero), to what degree might this yield misleading results? b) Does applying a Bollen-Stine bootstrap procedure provide a means to address this more effectively?
2) Regarding the standard errors of the parameter estimates in the model, I would prefer to use a bootstrap procedure, since this will provide me with confidence intervals for the indirect effects. a) Do you detect anything problematic with using the MLR approach for the overall model fit indices, followed by reporting confidence intervals for the parameter estimates derived from a bootstrap procedure?
Any guidance you might offer would be greatly valued.
bmuthen posted on Monday, February 20, 2006 - 6:52 pm
1. a. With a high degree of censoring (say > 25-50%), the SEs and chi-square based fit indices may be off. The basic problem is that the linear model assumed is wrong under strong censoring, so non-normality robustness in SEs and chi-square doesn't help. Overall fit indices are perhaps less important than getting the right parameter estimates and checking fit by -2*LL differences for nested, neighboring models.
Thank you so much for your prompt answer. If I might clarify this: indeed, I do have proportions of censored observations that are above 25%.
The error message I reported evidently occurs in version 3.01 but has since been corrected in version 3.14. If I run the analysis using the updated (v. 3.14) program, specifying which variables are censored I would obtain appropriate log-likelihoods from which -2*LL could be used for tests of nested models.
Am I correct in saying that the log-likelihoods obtained without the censor specification would be misleading?
Having obtained the -2*LL, I can use the Baron and Kenny (1986) approach to evaluating mediation. However, would there be a problem with removing the censor specification and bootstrapping the parameter estimates so that I can use the MacKinnon (2004) approach to obtaining indirect effects and confidence intervals?
Finally, is there some reference you would recommend where I might find a primer on bootstrapping specifically regarding how to choose among the different bootstrap confidence intervals?
bmuthen posted on Tuesday, February 21, 2006 - 3:23 pm
Yes, you are correct: the loglikelihood would be misleading if censoring is not taken into account, that is, if the censored approach is not used.
You should use the same model for parameter estimation and testing as for the bootstrapping.
Efron, B. & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
MacKinnon, D.P., Lockwood, C.M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128.
Hello, I am back on the mediation analysis trail. This time, unlike most of the data I analyze, my sample size is not that large (n=376) and my ultimate outcome (smoking) is ordered categorical with four levels. I am running a SEM with measured variables only (no factors). Is it better to calculate standard errors with bootstrapping or Delta method in this case, due to the relatively small sample size?
Hello Linda and Bengt. I am asked by a reviewer to estimate the size of an effect in my model. I actually sent you this data before. My finding is that the significant indirect effect, with its 95% confidence interval, is .054 (.008, .101). You mentioned that this is a small effect. How should I word this in the results/discussion section to indicate the strength of this effect? I'd appreciate any clues if you have them. By the way, this was calculated with the delta method.
Yi-fu Chen posted on Friday, July 21, 2006 - 7:13 am
Hi, Dr. Muthen,
I am working on a model to test mediation effects. I have two predictors, four mediators and two outcomes. The outcomes are all continuous. I've tried to use MODEL INDIRECT with BOOTSTRAP to estimate the standard errors of the indirect effect.
The question I have is this: when I ran a recursive model in which outcome 1 predicted outcome 2, the MODEL INDIRECT output showed the standard errors of the indirect effects for the predictors via each mediator. However, when I estimated the reciprocal relationship between the two outcomes, the output showed only the total indirect effect for each predictor, with no printout of the contribution of each mediator.
I don't know whether this is what Mplus is expected to produce when a reciprocal model is estimated. Is there any way I can get more detailed indirect effect information for this kind of model?
Dear Mplus team, I have read in an article by MacKinnon and colleagues that there are different ways to calculate SEs for indirect effects using the delta method (e.g., Freedman & Schatzkin, 1992, or Olkin & Finn, 1995). I would be interested in which one is implemented in Mplus.
MacKinnon, D.P., Lockwood, C.M., Hoffman, J.M., West, S.G. & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7 (1), 83-104.
We calculate standard errors for indirect effects using both the Delta method and the bootstrap as described in the MacKinnon et al. article. I am not aware that there are different Delta methods.
Claire Hofer posted on Thursday, December 07, 2006 - 8:02 am
Could you tell me about the differences in the bootstrap method between Mplus version 3 and version 4? I am getting very different results: my model run in version 4 with the Bollen-Stine bootstrap method matches closely what I get in the regular model using ML or MLR estimation, but when I run the model in version 3 with the bootstrap method there, I get completely different results. We do have missing data. Could you tell me a little about why the results might be so different? Thank you.
We are running a mediation model with three exogenous variables (one continuous and two indicators for race/ethnicity), two mediating variables (one continuous and one dichotomous), and one outcome variable (dichotomous). For different paths we are calculating the mediation proportion, defined as the indirect effect divided by the total effect (indirect + direct effect). We would like to calculate confidence intervals for the mediation proportion using the estimates from each individual bootstrapped dataset. The question is: can Mplus output into a separate dataset the individual bootstrapped estimates of the direct and indirect effects for a given model?
Mplus does not save indirect effects and does not save results from each bootstrap replication.
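Since per-replication results aren't saved, one workaround is to bootstrap the mediation proportion outside Mplus. A rough pure-Python sketch for one continuous mediator, with toy data; it uses the Frisch-Waugh residualization trick for the M-to-Y path controlling for X, and is an illustration rather than Mplus's algorithm:

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def resid(x, y):
    """Residuals of y after regressing out x."""
    b = slope(x, y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def mediation_proportion(x, m, y):
    a = slope(x, m)                       # X -> M
    b = slope(resid(x, m), resid(x, y))   # M -> Y controlling for X
    c = slope(x, y)                       # total effect of X on Y
    return a * b / c                      # indirect / total

def boot_ci(x, m, y, reps=1000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for the mediation proportion."""
    rng = random.Random(seed)
    n = len(x)
    props = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        props.append(mediation_proportion([x[i] for i in idx],
                                          [m[i] for i in idx],
                                          [y[i] for i in idx]))
    props.sort()
    return props[int(alpha / 2 * reps)], props[int((1 - alpha / 2) * reps) - 1]

# Toy data: full mediation, so the proportion mediated is 1.
rng = random.Random(2)
x = [float(i) for i in range(30)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = list(m)
lo, hi = boot_ci(x, m, y, reps=200)
```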
Emily Blood posted on Tuesday, October 09, 2007 - 6:15 pm
Within the MC facility, is there a way to output indirect effect values and standard errors of indirect effects for each MC replication? I am currently outputting the parameters from each MC replication, but am not able to output the indirect effects and their standard errors from each replication, only the mean and se of all indirect effects from all MC replications. Is this possible in Mplus? Thanks.
I am using cinterval(bcbootstrap) to get confidence intervals for indirect effects in a path analysis model with 4 mediators. Though I get confidence intervals for the specific indirect effects, the confidence intervals for the rest of the path estimates are all zeros. Does this mean that I should not trust the CIs for the specific indirect effects?
Is it possible to get more decimal places for the confidence intervals when using CINTERVAL(BCBOOTSTRAP)? One of my confidence intervals ranges from 0.000 to 0.050, and I would like to be able to say whether the effect is significant. I have tried using the SAVEDATA command, but I am not sure what to ask for, since the RESULTS option does not seem to include the confidence intervals. Thanks for your help.
Dear Drs. Muthen, We're running a mediational SEM model in which we have an X, a Y, and two mediators, Ma and Mb. When we run two separate models, both Ma and Mb fully mediate the X-->Y relationship (using bootstrapped standard errors). But when we model both mediators in the same SEM model, the Ma mediator is no longer significantly related to Y. All other paths are significant, including X-->Ma. We also ran a regression analysis and found that Ma predicts unique variance in Y after controlling for both X and Mb. 1. What can we conclude about Ma as a mediator of X-->Y? 2. Could the finding that only Mb (and not Ma) mediates X-->Y when tested in the same model be a statistical artifact? And if so, how does that happen? 3. Alternatively, if it is not an artifact, can we conclude that Mb is a more important mediator than Ma when compared in the same model? How should one then report that Ma functioned as a mediator when tested alone and was found to predict unique variance in Y in a regression model?
I thank you in advance and for a great discussion board.
Thank you so much for your prompt reply. That's probably right, that the high correlation between Ma and Mb messes this up. My question is still, though, what can we conclude about Mb as a mediator? Is it an artifact, that is, could it just as easily have been Ma that ended up with the significant path, or neither? If Ma and Mb are so highly correlated that the Ma --> Y path becomes non-significant, that doesn't explain why the Mb --> Y path is significant, nor why we found that the unique contribution of Ma was significant after controlling for X and Mb in a regression model. Or does it?
This topic - without the mediation angle - is discussed in the linear regression literature under the heading multicollinearity. You may want to take a look at that. I don't think it is possible to conclude about the joint role of Ma and Mb in such a situation, only that each entered separately is a mediator. You may also want to consult the new mediation book by David MacKinnon to see if he has some wisdom on this topic.
The test is the ratio of the estimate to the standard error of the estimate. Please see Chapter 17 for a description of the columns of the Mplus output and information about the various standardizations.
I would like to explain my post above a little more, since maybe it is not so clear what I am trying to ask; please excuse that. I am trying to model mediation (with bootstrapping) and moderation (with an interaction), but I have missing values that I would like to impute. I now have 5 imputed datasets and am running my analysis on each of them separately, because the models mentioned above won't let me read in the 5 datasets at once. But I don't know how to combine the coefficients or fit values from those 5 analyses; could you give me advice on how to handle this? (Is that done with the Rubin formula?) Furthermore, I have read that FIML is an appropriate way to handle missing data, but as far as I have read here it is used more in multilevel or group analysis. So my thought was that this could be a way to address my issue. Thanks for your help, Miriam
Maybe I did something wrong, but I had problems using the command IMPUTATION. This is the error I get:
*** ERROR MODEL INDIRECT is not allowed with TYPE=IMPUTATION. The same error shows up when I try to model the interaction. So that's why I am modeling each of the five datasets separately, and I would like to ask whether the fit values can be combined by calculating their mean.
You cannot use MODEL INDIRECT with TYPE=IMPUTATION but you should be able to use XWITH. I would use MODEL CONSTRAINT with TYPE=IMPUTATION to define the indirect effects. Although the parameter estimates are simply an average across imputed data sets, the standard errors and chi-square are not and cannot be computed by hand. If you have further problems along this line, please send them along with your license number to firstname.lastname@example.org.
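For the parts that can be pooled by hand, the point estimates average across imputations and the SEs follow Rubin's rules; a small sketch with hypothetical per-imputation results (this does not replace what Mplus does for chi-square):

```python
import statistics

def rubin_pool(estimates, ses):
    """Rubin's rules: pooled estimate and SE across m imputations."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)               # pooled point estimate
    ubar = statistics.fmean(se ** 2 for se in ses)   # within-imputation variance
    b = statistics.variance(estimates)               # between-imputation variance
    total = ubar + (1 + 1 / m) * b
    return qbar, total ** 0.5

# Hypothetical results for one indirect effect across 5 imputed datasets:
est, se = rubin_pool([0.50, 0.52, 0.48, 0.51, 0.49], [0.10] * 5)
```

Note that the pooled SE is larger than the average per-imputation SE because of the between-imputation variance term.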
I'm running a fairly basic mediation with a dichotomous IV, a dichotomous mediator, and a non-normal continuous DV (count data). I'm using the bootstrapping command and requesting the indirect effect. However, I can only seem to do this with the WLSMV estimator and was not able to specify a negative binomial distribution for the DV.
1. Is the skewness of the DV a problem, given that I'm bootstrapping? If so, is there anything to be done, since I'm unable to execute the (NB) command?
2. The WLSMV estimates are very different from the ML estimates. Is the interpretation of the estimates the same? Can I exponentiate them to get odds ratios for the IV --> M relationship?
1. In Mplus, when mediators are categorical, indirect effects can be computed only with weighted least squares estimation.
2. WLSMV estimates are in a probit metric. ML estimates are in a logit metric. WLSMV estimates should not be exponentiated.
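A numeric illustration of the logit/probit point, with made-up coefficients (the 1.7 scaling factor is only a rough rule of thumb):

```python
import math

# Hypothetical logit (ML) coefficient for the IV -> M path:
b_logit = 0.693
odds_ratio = math.exp(b_logit)  # about 2.0: the odds of M roughly double per unit of IV

# A probit (WLSMV) coefficient is on the standard-normal y* scale, so
# exponentiating it does NOT give an odds ratio. A rough rule of thumb
# says logit coefficients are about 1.6-1.8 times probit coefficients:
b_probit = 0.41                       # hypothetical probit estimate
approx_or = math.exp(1.7 * b_probit)  # rough OR implied by the probit estimate
```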
yan liu posted on Sunday, August 28, 2011 - 9:34 am
Hi, Linda and Bengt
I am running a multilevel SEM mediation model: mediator1=b*predictor; mediator2=b1*predictor+b2*mediator1; outcome=b1*predictor+b2*mediator1+b3*mediator2;
I am trying to calculate the indirect effects and test whether they are significant, using the formula provided by Hayes (2009). I found that at the between level, although none of the individual mediation effects was significant, the sum (total indirect effect) turned out to be significant, which does not make sense to me. Is the way to test "indtotw" and "indtotb" correct? Thanks!
%WITHIN%
PNS ON teach (a1w);
movat ON teach (a2w);
movat ON PNS (a3w);
engage ON PNS (b1w);
engage ON movat (b2w);
engage ON teach;
%BETWEEN%
PNS ON teach (a1b);
movat ON teach (a2b);
movat ON PNS (a3b);
engage ON PNS (b1b);
engage ON movat (b2b);
engage ON teach;
It looks correct to me. If your sample size is small you may want to try Bayesian analysis which allows indirect effects to have a non-normal distribution.
yan liu posted on Sunday, September 11, 2011 - 11:33 am
Thank you so much for your reply. Following up your suggestion to my question (posted above, Aug.28), I tried Bayes estimation. I added the following code to my original Mplus syntax
ANALYSIS: TYPE = TWOLEVEL; ESTIMATOR = BAYES; PROCESSORS = 2; FBITER = 10000;
However, I got the following error message: "Unrestricted x-variables for analysis with TYPE=TWOLEVEL and ESTIMATOR=BAYES must be specified as either a WITHIN or BETWEEN variable. The following variable cannot exist on both levels: TEACH"
(TEACH=predictor, PNS=mediator, movat=outcome)
Is something wrong with my code? Or can I not use Bayes estimation for Preacher et al.'s multilevel SEM mediation approach because Bayes estimation doesn't allow a predictor to be at both levels? Thanks.
Bayes does not do the latent variable decomposition of the predictor variable, but uses the usual MLM approach. This means that you would have to specify TEACH as a Within variable. If you want it on Between as well, you have to create the cluster-mean version of the variable yourself (there is an Mplus option for this) and enter it as a Between variable.
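Creating the cluster-mean version of a predictor is simple to do outside Mplus as well; a pure-Python sketch with toy cluster IDs (inside Mplus, I believe the DEFINE command's CLUSTER_MEAN function serves the same purpose):

```python
from collections import defaultdict

def cluster_means(cluster_ids, values):
    """For each observation, return the mean of its cluster; this is the
    Between-level version of a Within predictor."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, v in zip(cluster_ids, values):
        sums[c] += v
        counts[c] += 1
    means = {c: sums[c] / counts[c] for c in sums}
    return [means[c] for c in cluster_ids]

# Toy example: two clusters (e.g., classrooms) with two observations each.
teach_between = cluster_means([1, 1, 2, 2], [0.0, 2.0, 4.0, 6.0])
```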
Hello, I am estimating bootstrap CIs for testing indirect effects hypotheses. Is it possible to have contradictory results across "Total indirect effects" and "Standardized Total indirect effects"? In the Mplus output, using the first as the reference, the indirect effects are significant, but using the second, they are not. Thanks for your response. MF
Hello. I have a multiple mediation path model with 1 IV, 2 meds, and 1 outcome. I want the specific indirect effect for each mediator and to contrast them to see if one is larger than the other. I also would like to do a simulation to determine the sample size needed to power this study.
Here is a program I am working from.
TITLE: 2 mediator example with contrast
DATA: FILE IS data.dat;
VARIABLE: NAMES ARE x m1 m2 y;
MODEL:
  m1 ON x (a1);
  m2 ON x (a2);
  y ON m1 (b1);
  y ON m2 (b2);
  y ON x;
  m1 WITH m2;
MODEL INDIRECT:
  y IND m1 x;
  y IND m2 x;
MODEL CONSTRAINT:
  NEW(a1b1 a2b2 con);
  a1b1 = a1*b1;
  a2b2 = a2*b2;
  con = a1b1 - a2b2;
OUTPUT: CINTERVAL (BCBOOTSTRAP);
It is similar to Mplus example program 3.16, but the latter doesn't include contrasts of specific indirect effects, and it specifies bootstrap in the ANALYSIS section rather than in the OUTPUT section as in the code listed above. Do these programs otherwise do the same thing?
Also, I didn't see a Monte Carlo counterpart to example 3.16 in the Mplus example programs folder - does one exist? Or perhaps I accidentally deleted it at some point. Can you include that code or provide other assistance that might help with determining the sample size for this analysis? Thank you.
Dear Drs. Muthen, We are running a mediational model with bootstrapped standard errors in which variable X predicts variable M, which in turn predicts Y. This model looks fine, with good fit indices, all paths and indicators significant, and the total indirect effects also significant. Because the data are cross-sectional and because it could be argued that the direction of effects is actually X to Y to M, we wanted to see if this alternative model would also fit the data. Output for this model showed that the fit indices were very good, but some of the indicators were no longer significant, and the Y to M path was no longer significant (even though in the original X-M-Y model the M to Y path was significant and the bivariate correlations among all the indicators are significant).
1. Does this mean that our original model is in fact better?
2. Why would the Y-M path no longer be significant in the alternative model?
3. Why is the number of bootstrap draws less than what we specified in the input?
4. We sometimes get the 'THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE' message and see that the residual variance of one of the indicators is negative; what is the appropriate action to deal with this?
5. Finally, why is the 2-tailed p-value for the unstandardized estimates often different from the 2-tailed p-value for StdY and StdYX? Thank you.
1-2. The two models are different and fit differently to the covariance matrix of all variables. In your case, you should not base your choice of model on fit but on substantive reasoning.
3. Send output to Support
4. This indicates that the model needs to be modified.
5. Unstandardized and standardized coefficients have different sampling distributions, and the normality assumption may be better approximated in one case than in the other. If they differ, it may be better to use the unstandardized results.
Ok. Thank you. Regarding question 1, the theory isn't all that clear here, and while we have reason to believe that the X-M-Y model is the better one substantively, it would be nice to test the alternative model as well. Since the two models are not nested, we figured we could look at the fit indices and path coefficients as an indication of which model best fits the data.
Dear Drs. Muthen, I'm running a mediation model (using the delta method). There are some significant indirect effects, but they are really small (e.g., standardized b = .04, p < .05). Would you report such small effects? Is there a rule of thumb? It's clear that these indirect effects are small, but is there a cutoff?
I would report small effects as well. The size of effects can be discussed in Cohen's terms.
Xu, Man posted on Thursday, March 22, 2012 - 8:15 am
Could I just follow up on this thread:
Since the regular significance testing of the mediation effect might be biased, I am trying to get confidence intervals from bootstrapping (I created the mediation effect using NEW and MODEL CONSTRAINT).
1. There are apparently two bootstrapping options, CINTERVAL (BCBOOTSTRAP) and CINTERVAL (BOOTSTRAP); which one is more suitable?
2. With a sample size of around 3000 to 4000, what would be an appropriate number of bootstrap draws?
I could not request indirect results because the analysis uses TYPE=RANDOM.
Xu, Man posted on Thursday, March 22, 2012 - 10:57 am
Oh, actually BOOTSTRAP cannot be used with TYPE=RANDOM. But I had to have TYPE=RANDOM because I used TSCORES to adjust for time at data collection; there is an embedded second-order growth curve model, and I am looking at mediators of the growth intercept and slopes.
Is there any way around this, to get good standard errors for the mediation effect?
Xu, Man posted on Thursday, March 22, 2012 - 3:51 pm
Thank you. But it seems BCBOOTSTRAP cannot be used together with TYPE=RANDOM either? In this situation, is there any way to get bootstrapped standard errors for parameters created using NEW and MODEL CONSTRAINT (the mediation effect in my case)?
I was wondering what to report when examining indirect effects and their significance: the unstandardized or the standardized coefficients? Because my direct effects are displayed in a path model with betas (standardized estimates), I thought it best to report the Sobel/delta-method test statistic and p-value from the standardized section of the MODEL INDIRECT output, but is this correct?
Whether to report unstandardized or standardized coefficients should be guided by the journal you plan to publish in. Whichever you report, you should report their standard errors and p-values. You should not use unstandardized p-values with standardized coefficients.
When running bootstrapped standard errors to test for mediation using the theta parameterization, I get confidence intervals indicating a significant indirect effect for the unstandardized estimates but non-significant indirect effects for the standardized estimates. I am curious why this occurs and how I should handle it when reporting results. I have historically reported unstandardized coefficients.
Raw and standardized coefficients have different sampling distributions so can have different significance levels. If you usually report raw coefficients, I would do that. I would not decide what to report based on significance.
Jo Brown posted on Thursday, June 14, 2012 - 3:22 am
How many bootstrap draws do you normally need to obtain accurate standard errors for the indirect effect?
I have a question about using bias-corrected bootstrapping in mediation.
I ran a mediation model (multi-group, 1 latent IV, 2 latent mediators, 2 latent outcomes + 1 covariate) without bootstrapping and found several moderate to large direct effects that were significant (p < .05 to p < .001) in one group, as well as significant indirect effects in the same group. I found this result with both ML and MLR. When I ran the same model with bootstrapping, some of those direct effects dropped to non-significance, yet some corresponding indirect effects are significant according to the bootstrapped results. I can't tell from my reading whether it is customary to report the significance of direct effects from the bootstrap results or from an analysis without bootstrapping. Does anyone know a reference on this point? It seems odd to have non-significant direct effects plus significant indirect effects for the same path; I'm not sure how to explain that in my results.
Thanks for the fast reply. I should have pointed out that it was the b effect linking the mediator to the DV that was non-significant, thus the concern. I am accustomed to finding the joint effects significant when the indirect effects are.
I have two questions pertaining to a peer review of an article.
1. I bootstrapped the CIs and SEs of the direct/indirect effects for a mediation model with latent variables, and therefore couldn't use MLR. Is bootstrapping robust to violations of multivariate normality? I am reluctant to use bias-corrected bootstrapping because of my sample size and the size of the effects (per Fritz and MacKinnon's recommendation).
2. I did a multiple-group mediation and compared unstandardized effects across groups. A reviewer asked about effect sizes. The standardized effects are not comparable across groups, so I don't want to report them. What is the best thing to report in this situation for an effect size, particularly for an indirect effect?
2. By effect size in this context, I assume you mean: As X increases 1 SD (or changes from control to tx), Y changes ? SD. To compute this, I would take the unstandardized model coefficient estimates (a, b, c) for each group and use the X and Y SDs to compute the group-specific effect sizes which won't be the same since the SDs aren't the same in the different groups even if the model coeff estimates are the same.
I just had one additional follow-up: the reviewer did not specify a particular effect size measure. I was thinking I could report the R-squared and standardized betas for each group. But given that these effect sizes are all group-specific and my focus is entirely on group differences, I am reluctant to do this. In lieu of this, could I calculate a standardized effect based on the pooled SDs of X and Y across groups? For the indirect effect this would be ab*(SDx,pooled/SDy,pooled). MacKinnon suggests ab*(SDx/SDy), but that is for a single-group model. I think I could use the same method for the direct effects (the a effect and the b effect). Does this sound OK?
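The group-specific effect-size computation described in this exchange can be sketched with hypothetical numbers:

```python
def indirect_effect_size(a, b, sd_x, sd_y):
    """As X changes 1 SD, Y changes a*b*(SDx/SDy) SDs via the mediator
    (MacKinnon-style scaling, applied per group with group SDs)."""
    return a * b * (sd_x / sd_y)

# Hypothetical unstandardized paths, equal in both groups:
a, b = 0.50, 0.40
# Group-specific SDs yield group-specific effect sizes:
es_group1 = indirect_effect_size(a, b, sd_x=1.0, sd_y=2.0)
es_group2 = indirect_effect_size(a, b, sd_x=1.5, sd_y=2.0)
```

Even with identical unstandardized coefficients, the two groups get different effect sizes (0.10 vs. 0.15 here) because their SDs differ, which is exactly the point made above.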
I am running a path analysis with my predictor and 3 mediator variables at time 1 and two outcome variables at time 2, with a sample of 499. In my ANALYSIS command, I indicated BOOTSTRAP = 10000; and in my OUTPUT command, I asked for STANDARDIZED MODINDICES (3.84) SAMPSTAT TECH1; CINTERVAL(BOOTSTRAP);
I want to make sure this is the correct syntax, and I'm also unsure whether this command requests percentile bootstrapping or bias-corrected bootstrapping. Given my sample size and that the predictor does NOT lead to the outcomes, which might be better to use in this case? If the CIs do not contain 0, can I assume mediation even though the predictor does not lead to the outcomes?
Before trusting the CIs, you want to make sure that your model fits, that is, that the direct effects are zero.
Dexin Shi posted on Sunday, September 08, 2013 - 5:45 pm
I am running a mediation analysis with a categorical mediator (no latent variable involved). I used both WLSMV and the BC bootstrap. However, for one path (from the independent variable (x) to the categorical mediator), the two methods did not agree on the significance test: WLSMV gave p = 0.819, whereas the BC bootstrap CI was [0.399, 2.496]. Which one is recommended for reporting the results? Thank you for your help.
WLSMV gives a symmetric confidence interval, the estimate plus or minus a multiple of its bootstrapped standard error. BCBOOTSTRAP gives a non-symmetric, bias-corrected interval based on the percentiles of the bootstrap distribution. This is why they may not agree.
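The two interval types can be compared from the same set of bootstrap draws; a rough pure-Python sketch with simulated draws (not Mplus's exact algorithm):

```python
import random
from statistics import NormalDist, stdev

def symmetric_ci(boot_vals, est, alpha=0.05):
    """Estimate +/- z * (bootstrapped SE): a symmetric interval."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = stdev(boot_vals)
    return est - z * se, est + z * se

def bc_ci(boot_vals, est, alpha=0.05):
    """Bias-corrected percentile interval: percentile cutoffs shifted by
    z0, the normal quantile of the share of draws below the estimate."""
    nd = NormalDist()
    s = sorted(boot_vals)
    n = len(s)
    frac = sum(v < est for v in s) / n
    z0 = nd.inv_cdf(min(max(frac, 1 / n), 1 - 1 / n))
    lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
    hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    return s[int(lo_p * n)], s[min(int(hi_p * n), n - 1)]

# Simulated, skewed bootstrap draws for a product of coefficients:
rng = random.Random(3)
draws = [rng.gauss(0.3, 0.1) * rng.gauss(0.4, 0.1) for _ in range(2000)]
est = 0.3 * 0.4
sym = symmetric_ci(draws, est)
bc = bc_ci(draws, est)
# The BC interval need not be symmetric around est, so the two methods
# can disagree about whether a value such as 0 is excluded.
```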
MODEL:
empathy ON cond (a1);
anc ON cond (a2);
liking ON empathy (b1) anc (b2) cond (c1);
gameinv ON liking (e1) empathy (d1) anc (d2) cond (f1);
Gameinv is categorical, and we are using bootstrapping. Our reviewers were interested in an alternative theoretical model, leading us to test these path models:
MODEL:
empathy ON cond (a1);
liking ON cond (a2);
anc ON empathy (b1) liking (b2) cond (a3);
gameinv ON anc (c1) empathy (d1) liking (d2) cond (a4);

MODEL:
empathy ON cond (a1) liking (b1);
anc ON cond (a2) liking (b2);
liking ON cond (c1);
gameinv ON empathy (e1) anc (e2) liking (f1) cond (d1);
The relevant indirect paths are significant in all three. Is there a way to argue statistically that our originally proposed model is better (such as by comparing goodness of fit, although the only fit statistic reported under bootstrapping has been WRMR), or should we make a theoretical argument?
You can compare the fit of the models. With the BOOTSTRAP option, only standard errors are bootstrapped so we don't give fit statistics. You can run the three models without the BOOTSTRAP option to obtain the fit statistics.
There is no way of statistically testing which model is best unless the models are nested. You can use any of the fit statistics listed above for comparison purposes. I would not use WRMR, as it is an experimental fit statistic.
Melissa Kull posted on Saturday, November 09, 2013 - 10:50 am
Using raw data with a small amount of missingness, I've been running basic path models with one exogenous predictor, three mediators (all continuous, mostly normally distributed), and one outcome. I've been trying to run these models with a sampling weight (to adjust for non-response), although I'm not interested in stratification or clustering, so I have not identified these data as complex. When I try to estimate bootstrapped SEs in these models, the models will run using ML but not using MLR (which is the default for these models when the bootstrap is not applied). Can someone explain why this is happening and suggest some references on selecting the most appropriate estimator? I've looked over some of the MacKinnon articles cited in this thread but am not sure whether my models are correctly specified with the ML estimator and bootstrapped SEs. Many thanks.
We don't do bootstrap with sampling weights. It is not clear how that should be done.
Melissa Kull posted on Wednesday, November 13, 2013 - 8:06 pm
Dr. Muthen, thanks for your response. It seems peculiar that the models are converging and providing estimates similar to the results from other iterations of these models that I've been running. I thought maybe the syntax was just ignoring the sampling weight, but when I took the weight out, the results were different. Despite this, I suppose these estimates are not to be trusted? Can you explain why, or suggest a reading that indicates why bootstrapping should not work with sampling weights? This would be tremendously helpful as I move forward with trying to appropriately specify these models. Thanks very much for your assistance.
I am hoping to look at indirect effects in a path analysis. I know that I can just request indirect effects using the IND command; however, per the Preacher and Hayes (2008) article, it appears that the bootstrapping method is recommended particularly for samples that are not normally distributed (as mine is not). With that said, though, I have used the MLR estimator to help me manage my missing data and non-normality. When I tried to use bootstrapping to obtain indirect effects, I received the error message saying that bootstrapping can't be used with MLR.
I'm wondering what you might suggest in a situation like this. I imagine I could either report the indirect effects that are provided without bootstrapping (e.g., just multiplying the direct effects between mediators), or not use MLR and run the bootstrapping procedure to get bootstrapped indirect effects.
Do you have thoughts about how to best proceed? Thank you so much!
RuoShui posted on Sunday, April 06, 2014 - 5:58 pm
Dear Drs. Muthen,
I used ML and bootstrapping in my SEM and asked for STANDARDIZED in the output. However, there are no standard errors or p-values for the standardized parameter estimates. Is this normal? Is there any way I can obtain them?