

Treatment of Variances in Bayesian ML... 


Shi Yu posted on Tuesday, November 26, 2019  5:35 am



Hello, I am running a multilevel SEM (MLSEM) with Bayesian estimation. According to Muthen and Asparouhov (e.g., 2012), in Bayesian SEM the observed variables should be standardized and the latent variables constrained to have variances equal to 1, in order to bring the variables to a uniform metric and make the model scale-free, so that scale issues do not interfere with the prior settings. In my multilevel SEM, the model on each level has a similar structure, say X > M > Y. I am using Bayesian estimation such that the paths X > M and M > Y are given uninformative priors (with infinite variance), while X > Y is given an informative prior with a mean of 0 and a small variance. Like Muthen and colleagues, I standardize X, M, and Y before any model specification. 

My question is: in addition to standardizing the variables, do the variances of X, M, and Y need any special treatment on each level (e.g., constraining them to 1)? According to Marsh et al. (2009), X, M, and Y in a multilevel SEM are latent aggregations of the observed variables. I am not sure whether the latent X, M, and Y on each level would automatically have unit variance. If not, might this interfere with the prior settings in the Bayesian estimation? Thank you for your attention and assistance! 
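[For concreteness, the standardization step described in the post can be sketched in Python with numpy; the simulated columns standing in for X, M, and Y are purely hypothetical, not Mplus data:]

```python
import numpy as np

rng = np.random.default_rng(1)
# Three columns standing in for hypothetical observed X, M, Y on arbitrary scales
raw = rng.normal(50.0, 12.0, size=(200, 3))

# Full standardization: subtract each column's mean and divide by its SD,
# giving every observed variable mean 0 and variance 1 (a uniform metric)
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

print(z.mean(axis=0).round(6))  # each ~0
print(z.std(axis=0).round(6))   # each ~1
```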


The variables don't need to be standardized, just divided by some constant so that their variances are approximately 1 (or approximately of the same magnitude, adjusting the prior variances instead). The latent X, M, Y don't have unit variances, but their variances are probably close enough in magnitude that the prior variances don't need to be adjusted. 

Shi Yu posted on Tuesday, November 26, 2019  6:37 pm



Dear Dr. Muthen, thank you so much for your explanation. You said that "the variables don't need to be standardized", but I also wonder whether it would do any harm to standardize them (compared to simply dividing them by constants)? As far as I can tell, the only difference is whether the means are subtracted from these variables. I suppose this will not affect the coefficient estimates? Thanks! 


Q1: There is no harm in standardizing them as long as the model is scale-free (for instance, there are no equality constraints across parameters for different variables). Q2: Right. 
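[The answer to Q2, that subtracting means does not affect the coefficient estimates, can be verified numerically: centering shifts only the intercept, while the least-squares slope is unchanged. Again a generic illustration, not Mplus output:]

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, 500)
y = 0.5 * x + rng.normal(0.0, 1.0, 500)

slope_raw = np.polyfit(x, y, 1)[0]
# Mean-center both variables before fitting
slope_centered = np.polyfit(x - x.mean(), y - y.mean(), 1)[0]

# Subtracting means changes only the intercept; the slope is identical,
# since the OLS slope is cov(x, y) / var(x), which is location-invariant.
print(slope_raw, slope_centered)
```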


