I am running a path analysis and I have requested estimates of specific indirect effects. For one of my indirect pathways, the unstandardized estimate is significant (the 95% confidence interval does not contain zero), but the standardized estimate is NOT significant (the 95% confidence interval DOES contain zero). I obtained the standardized confidence interval from the section of output labelled "Confidence intervals of standardized total, total indirect, specific indirect, and direct effects." I was under the impression that standardized estimates are merely in a different metric than unstandardized estimates and that standardization should not change the level of significance of the estimate. Could you please advise me as to why my results differ when I look at the standardized CIs? Thank you very much for your time.
The ratio of the parameter estimate to its standard error, and therefore the confidence interval, can differ slightly between raw and standardized coefficients. The standardized standard errors are not simply rescaled raw standard errors. See
Standardized Coefficients and Their Standard Errors
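In symbols (a sketch, with b the raw slope and s_x, s_y the sample standard deviations of x and y):

    beta_std = b * (s_x / s_y)

Because s_x and s_y are themselves sample estimates, the delta method gives approximately

    Var(beta_std) ~ (s_x/s_y)^2 Var(b) + b^2 Var(s_x/s_y) + 2 b (s_x/s_y) Cov(b, s_x/s_y)

whereas simply rescaling the raw standard error would keep only the first term. That is why the two z-ratios, and hence the two CIs, need not agree.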
Thank you Dr. Muthen for your prompt response, which was very helpful. I am now curious whether you would recommend reporting the unstandardized or the standardized solution, given that they provide a different pattern of results.
I would go with the unstandardized SEs and CIs (but you can still report the standardized parameter point estimates). But I would assume that the CIs are not very different so that significance in the unstandardized case means that the CI barely excludes zero - in which case I wouldn't make a big deal out of that significance.
Shiny posted on Saturday, September 20, 2014 - 9:41 am
I regressed a binary y on a latent continuous x. The unstandardized estimate is significant, yet the standardized one is not. Which estimate should I report? I wonder why the standardized estimate is not significant. Is it because my categorical data are skewed and the SE is larger?
Unstandardized and standardized versions of an estimated coefficient have different sampling distributions. This means that the normality assumption behind the usual z-score test can be better approximated in one version than in the other. Significance testing using z-scores can therefore have different outcomes for the two versions.
One way to check the distribution of the estimated coefficient is to do Bayesian estimation and look at the posterior distribution. The version that best approximates normality presumably has the most trustworthy z-score. With Bayes estimation, the normality assumption is not needed, and the 95% credibility interval is trustworthy for both versions.
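A minimal sketch of the relevant Mplus commands for this check (assuming the MODEL itself is already specified) would be:

    ANALYSIS:
      ESTIMATOR = BAYES;
    PLOT:
      TYPE = PLOT2;

The posterior distribution of each parameter, with its 95% credibility limits, can then be inspected through the plot facilities.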
Shiny posted on Saturday, September 20, 2014 - 2:49 pm
Thank you very much for the explanation! I did Bayesian estimation in the one-group analysis and it worked. I am also interested in a multiple-group analysis to check the standardized coefficient there. The program suggested that I specify KNOWNCLASS and TYPE = MIXTURE. I tried but got an error:
This analysis is only available with the Mixture or Combination Add-On.
Would you kindly give me a hint as to how this can be done in a multiple-group analysis? I do not have much experience with mixture models or Bayes, so I just checked the Mplus material on Bayes (the 2011 short course) and gave it a try. Thank you!
Syntax is attached:
    VARIABLE:
      USEVARIABLES = x med mod y;
      CATEGORICAL = y;
      CLASSES = cg(2);                    ! class variable required by KNOWNCLASS
      KNOWNCLASS = cg (mod = 1 mod = 2);  ! group 1 = js, group 2 = ijs
    ANALYSIS:
      TYPE = MIXTURE;
      ESTIMATOR = BAYES;
      PROCESSORS = 2;
It sounds like you don't have the mixture or combination add-on. You would need that to do this.
Margarita posted on Tuesday, March 17, 2015 - 5:40 am
I am using Mplus for Mac, and since plots are not currently available there, I was wondering whether there is another way of checking the distribution of the parameters in Bayesian estimation? I get different results for the standardised and unstandardised bootstrap ML estimates, so I am trying to base my decision on the Bayes results.
Margarita posted on Tuesday, March 17, 2015 - 9:25 am
Thank you, I already managed to get the distribution of an indirect effect by using the following function: mplus.plot.bayesian.distribution (I hope that's the correct one). However, I am not sure how to interpret the results. How would the plot help me decide whether the parameter (the indirect path) is significant or not?
The plot shows you the 95% limits (vertical lines in the distribution). If they cover zero, it is not significant.
Margarita posted on Tuesday, March 17, 2015 - 12:19 pm
From what I've read in "Bayesian Analysis in Mplus: A Brief Introduction", when the distribution of the parameter is skewed, the normality assumption made by bootstrapped ML is not suitable for the indirect effect. Is that correct?
Even with non-normality of the estimate, you can use ML and the bootstrap if you use the percentiles (2.5% and 97.5%) of the bootstrap distribution for the CI. You get that by requesting

BOOTSTRAP = 10000;

in the ANALYSIS command and

CINTERVAL(BOOTSTRAP);

in the OUTPUT command.
The bootstrap CI and the Bayes CI are often quite close.
What is wrong to use with a non-normal estimate distribution is a symmetric CI, that is, one where the lower limit is the same distance from the estimate as the upper limit.
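As a sketch, a bootstrap percentile CI for an indirect effect could be requested along these lines (the variable names x, med, y are placeholders for your own):

    MODEL:
      med ON x;
      y ON med x;
    MODEL INDIRECT:
      y IND med x;
    ANALYSIS:
      BOOTSTRAP = 10000;
    OUTPUT:
      CINTERVAL(BOOTSTRAP);

CINTERVAL(BCBOOTSTRAP) would give the bias-corrected bootstrap version instead.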
Margarita posted on Tuesday, March 17, 2015 - 3:25 pm
Okay, I see, because the BC bootstrap also accounts for non-normality. I already used that; I only turned to Bayes because I had different results for the unstandardized and standardized 95% BC bootstrap CIs, and I thought Bayes would provide more information, as I am not sure which one to follow. However, you are right, the unstandardised ML results are very close to the ones I get from Bayes.
It is a bit clearer now, so thank you for your help. I really appreciate it.
Dear Drs. Muthen, is it appropriate to interpret the standardized estimate and confidence interval of an interaction term (moderation)? What if bootstrapping is being used? I also tried standardizing my X and W variables before creating the interaction term under the DEFINE command, as I understand it is not correct to standardize the interaction term afterwards; however, I got the following warning message: "A variable that appears as an argument for the STANDARDIZE function is being used in other DEFINE transformations. Note that the original values not the standardized values will be used in these transformations." Is there a way around this? Thank you kindly, Ann
It's a longish story. You may want to look at section 1.6.2 of our RMA book, where we discuss three quantities that may be of interest to standardize when the model has an interaction/moderation: the simple slope, the moderator function, and the moderated effect. This is the case with or without bootstrapping. If you don't have the book, you may want to ask on SEMNET.
Regarding your Define question, send your output to Support along with your license number.