

Set Bayesian Priors in LGM 



Suppose that we know the values of the population parameters and so can consider informative prior distributions. Then we would set Bayesian priors for all these parameters like this:

I S | y1@0 y2@1 y3@2 y4@3;
[I] (a); [S] (b);
I (c); S (d);
I WITH S (e);

model priors:
a ~ N(100, 10);
b ~ N(10, 3);
c ~ N(500, 200);
d ~ N(50, 25);
e ~ N(18, 15);

Am I on the right track?


If you know the population parameter values, why are the variances not zero, for example, a ~ N(100, 0); 


Thank you, Linda. One more question! As you note, the population parameter values cannot easily be known in real circumstances. So suppose one wants to set (weakly?) informative priors based on the past literature, following these steps: 1) From a meta-analysis, one finds that the mean and variance of the intercept factor are about 100 and 10, respectively. 2) One then sets the prior as a ~ N(100, 10). Does this approach seem sound?


To use these priors you would need to be very certain that your study is the same as the studies in the meta-analysis.


Dear Linda, I received a reviewer's comment related to the choice of priors. As is well known, a normal prior is often used for mean (fixed-effect) parameters, while inverse-Gamma and inverse-Wishart priors are commonly used for variances/covariances. I set normal priors for the variances/covariances, but the reviewer pointed out that the prior distribution of a variance should not be normal. First, can I not use normal priors for variances at all? Second, if so, I need to set IW or IG priors for these parameters. How can I do that in the above example?

I (c); S (d); I WITH S (e);
model priors: c ~ N(500, 200); d ~ N(50, 25); e ~ N(18, 15);

OR:

model priors: c ~ IW(S, degrees of freedom);

S here is a positive definite matrix, right? What value of S would work in this example?


You don't want to use a normal prior for variances because the normal prior says that it is possible to get negative variances. In your example you can use a mildly informative inverse-Wishart prior:

c ~ IW(1, 3);
d ~ IW(1, 3);
e ~ IW(0, 3);

See the Muthén & Asparouhov (2012) Psychological Methods article for a description of IW priors. Also see: Muthén, B. (2010). Bayesian analysis in Mplus: A brief introduction. Technical Report, Version 3. Both papers are on our website.


Prof. Muthen, I am slightly confused about the IW prior as to how its mean and variance are calculated, because the first parameter seems to be a matrix. 1) For the IW(S, df) prior, since the first parameter S is a matrix, I understand that setting it to 1 (as in the above example for parameters c and d) implies an identity matrix; but what does 0 (in the above example for parameter e) stand for? So when I set the first parameter S to 1, does that imply the mean is 1, and if I set S to 0, does that imply the mean is 0? 2) And if I set S to 0, given that S appears in the numerator of the IW variance formulas in both papers, does that mean the variance is 0 as well? My apologies if I am missing something very obvious here. Please advise.


The IW prior is indeed a bit complex. Two facts are useful here: a) with increasing df (the second argument), the prior is stronger (given more weight relative to the data); b) the mode is the first argument divided by (df + p + 1), where p is the number of variables. To answer your questions: 1) Your "e" parameter is a covariance, for which zero is a neutral point; hence the zero as the first argument for IW. So the mode is zero. 2) Your variance parameters c and d are given first argument 1. You have p = 2. The mode will then be 1/(3 + 2 + 1).
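[Editor's note: as a quick sanity check outside Mplus, the mode formula above can be computed directly. This is a minimal sketch that just encodes mode = S / (df + p + 1) for a diagonal element; the function name is mine, not an Mplus or library call.]

```python
def iw_mode(s, df, p):
    """Mode of a diagonal element of an inverse-Wishart IW(S, df) prior
    with p variables: S / (df + p + 1)."""
    return s / (df + p + 1)

# Variance priors c ~ IW(1, 3) and d ~ IW(1, 3) in a model with p = 2
# growth factors (I and S):
print(round(iw_mode(1.0, 3, 2), 4))  # 1 / (3 + 2 + 1) = 1/6, i.e. 0.1667

# Covariance prior e ~ IW(0, 3): first argument 0, so the mode is 0.
print(iw_mode(0.0, 3, 2))  # 0.0
```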


Prof. Muthen, thank you very much for a clear, simple explanation; it is becoming clearer now. In a similar manner, is there a simple way to know/calculate the variance as well? For my current analysis, what I did was create a separate Mplus script in which I input different values for the IW(S, df) parameters and read off the mean and variance of the IW prior from the output. I then used the appropriate S and df in my main analysis. Is this workaround OK? I am asking because the Mplus output for the same S and df gives slightly different variance values. For example, for IW(0.9, 52):

Parameter 30 ~ IW(3.000, 52)   0.0789   0.0003   0.0186
Parameter 31 ~ IW(0.900, 52)   0.0237   0.0002   0.0155
Parameter 32 ~ IW(4.000, 52)   0.1053   0.0006   0.0248
Parameter 33 ~ IW(0.900, 52)   0.0237   0.0003   0.0172
Parameter 34 ~ IW(0.900, 52)   0.0237   0.0004   0.0198
Parameter 35 ~ IW(5.000, 52)   0.1316   0.0010   0.0310
Parameter 36 ~ IW(0.900, 52)   0.0237   0.0003   0.0172


The variance of IW priors is discussed in the Appendix of the 2012 Muthén & Asparouhov article in Psychological Methods. You can get estimates of the prior means and variances as you did. They are based on random draws and can therefore vary somewhat.
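[Editor's note: the run-to-run wobble can be reproduced with a small simulation sketch (not Mplus). It relies on the standard result that the marginal of a diagonal element of an IW(S, df) prior with p variables and diagonal S is inverse-gamma with shape (df - p + 1)/2 and rate S_ii/2. The values s_ii = 0.9, df = 52, p = 13 are illustrative choices only, not taken from any Mplus internals.]

```python
import random

def sample_iw_diagonal(s_ii, df, p, n, seed):
    """Draw n Monte Carlo samples of one diagonal element of an
    inverse-Wishart IW(S, df) prior with p variables and diagonal scale S.
    The marginal is inverse-gamma with shape alpha = (df - p + 1)/2 and
    rate beta = s_ii / 2 (a standard IW result)."""
    rng = random.Random(seed)
    alpha = (df - p + 1) / 2.0
    beta = s_ii / 2.0
    # If Y ~ Gamma(shape=alpha, rate=beta), then 1/Y ~ InvGamma(alpha, beta);
    # random.gammavariate takes a scale parameter, so pass 1/beta.
    return [1.0 / rng.gammavariate(alpha, 1.0 / beta) for _ in range(n)]

# Two independent runs with the same prior, mimicking repeated Mplus runs.
draws_a = sample_iw_diagonal(0.9, 52, 13, 50_000, seed=1)
draws_b = sample_iw_diagonal(0.9, 52, 13, 50_000, seed=2)

mean_a = sum(draws_a) / len(draws_a)
mean_b = sum(draws_b) / len(draws_b)
analytic_mean = 0.9 / (52 - 13 - 1)  # inverse-gamma mean beta / (alpha - 1)

# Both runs sit close to the analytic mean but differ slightly from each
# other -- the same simulation noise seen in the output above.
print(round(mean_a, 4), round(mean_b, 4), round(analytic_mean, 4))
```

This mirrors the workaround in the question: simulated summaries of the prior agree with the closed-form values up to Monte Carlo error, but not exactly across runs.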


Dear Prof. Muthen, Thank you so very much !! :o) Sincerely Arun 

M Hamd posted on Monday, December 09, 2013  4:45 pm



Dear Professor, We conducted two studies. We want to incorporate the findings of Study 1 into Study 2 using informative Bayesian priors. In this case, should the priors be the standardized or the unstandardized regression coefficients from the previous study? If unstandardized, how does that address the issue of a different scale range for one of our variables (i.e., a 1-9 Likert scale in Study 1 vs. a 1-7 Likert scale in Study 2)?


You should use the unstandardized coefficients. Use a different prior for each variable. 


I am estimating an LGM of alcohol use across 7 waves of data. At each wave, alcohol use is the average number of drinks consumed per day, which can take on positive non-integer values (e.g., 0, 0.5, 3.2). As such, I would like to constrain the estimates to positive values (because I believe negative binomial is not an option unless I round to the nearest integer). It seems this would require using Bayesian priors (specifying an inverse-gamma prior?), but I am not sure if there is an alternative way to accomplish this in Mplus.


How about using the censored-normal model with censoring from below at 0?


Hi, Can the estimates of a regression analysis be used as model priors for the same regression analysis? Or would that be considered "cheating"? Specifically, running Monte Carlo simulations, I notice that Bayes produces accurate estimates even with a small sample size and small effect size. However, the power is low. Thanks, Andrea


I wouldn't recommend this. 


