Message/Author 


Dear Linda and Bengt, Congratulations on the new Version 6. I have a model involving 16 observed variables (3 latents & 4 categorical measures) and 75 freely estimated parameters. I am playing around with the BAYES estimator to try to get a better understanding of this alternative estimation method. The posterior predictive p-value for this model is < .001, and the 95% credible interval for the difference in chi-square values is (37.790, 149.462), each of which indicates that the hypothesized model does not fit the data well. I was wondering whether it is meaningful under the Bayesian approach to consider something like a standardized root mean square residual as a supplementary measure of global fit. Thanks and best wishes, Paul Dudgeon


We don't do that and have not seen it done in the Bayesian literature. It would be cumbersome with Bayes. 


Thanks, Linda, for the reply and advice. Paul Dudgeon


In "Bayesian Analysis In Mplus: A Brief Introduction" Version 3, for a CFA with the BAYES estimator, is it still appropriate to use ML for categorical DVs? For instance, should the syntax still look like the following: ANALYSIS: ESTIMATOR = BAYES; PROCESS = 2; FBITER = 20000; STVAL = ML; Thank you, Drs. Muthen


You can use STVAL = ML for both categorical and continuous latent variables. This option is not required.
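For readers following along, a cleaned-up sketch of that ANALYSIS block (note the full option name for the number of processors is PROCESSORS, not PROCESS):

```
ANALYSIS:
  ESTIMATOR = BAYES;
  PROCESSORS = 2;    ! number of processors used for the MCMC chains
  FBITER = 20000;    ! fixed number of Bayes iterations
  STVAL = ML;        ! optional: use ML estimates as starting values
```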


Bengt, I just attended your workshop at UConn. Very informative. The Bayesian estimation works nicely and is quite fast. One question: Is there some way to save the chains of values for the posterior distributions of the parameters? I would like to be able to use them to make some statements regarding the probability that an effect of a specified magnitude, or greater, exists. Thanks, Chuck Green 


Let me check to see what we have in the way of hidden options on that. 
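As a follow-up for readers: in later Mplus versions the posterior draws can be saved to a file with the SAVEDATA command's BPARAMETERS option (a sketch; availability and file layout depend on your Mplus version):

```
ANALYSIS:
  ESTIMATOR = BAYES;
  FBITER = 20000;
SAVEDATA:
  BPARAMETERS = draws.dat;   ! one row per recorded MCMC iteration,
                             ! one column per parameter
```

The proportion of saved draws for a given parameter that exceed a threshold c then estimates the posterior probability that the effect is of magnitude c or greater.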


I am running a conditional latent growth model using Bayes as the estimator. I have asked for tech4 to get the mean intercept and slope. Is there a way to request or calculate a credibility interval for the mean intercept and slope? Thanks! 


You need to define the mean of the intercept and growth factors in MODEL CONSTRAINT using the NEW option. Then you will get credibility intervals for them. 


Thank you, that is very helpful. One follow-up question: Shouldn't the estimate I get from MODEL CONSTRAINT be the same as the one from TECH4? Mine is not. I have both continuous and dichotomous predictors, and I am wondering if there is something wrong with how I am asking for the intercept estimate...

i s | PPVT1@0 PPVT2@.4 PPVT3@1;
i ON age_mo (c1);
i ON MomEdYrs (c2);
i ON Boy (c3);
i ON fEHome (c4);
[i] (m1);
[age_mo] (m3);
[MomEdYrs] (m4);
[Boy$1] (m7);
[fEHome$1] (m8);
MODEL CONSTRAINT:
NEW (Int);
Int = m1 + (c1*m3) + (c2*m4) + (c3*m7) + (c4*m8);


I think the problem is that you are putting the binary covariates on the CATEGORICAL list and referring to their thresholds. The CATEGORICAL list is for dependent variables. You should remove them from the CATEGORICAL list and refer to their means not their thresholds. 
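Applied to the syntax in the question, the corrected setup might look like this (a sketch; it assumes Boy and fEHome have been taken off the CATEGORICAL list, so their means rather than thresholds are referenced):

```
MODEL:
  i s | PPVT1@0 PPVT2@.4 PPVT3@1;
  i ON age_mo (c1)
       MomEdYrs (c2)
       Boy (c3)
       fEHome (c4);
  [i] (m1);
  [age_mo] (m3);
  [MomEdYrs] (m4);
  [Boy] (m7);       ! mean, not the threshold [Boy$1]
  [fEHome] (m8);    ! mean, not the threshold [fEHome$1]
MODEL CONSTRAINT:
  NEW (Int);
  Int = m1 + c1*m3 + c2*m4 + c3*m7 + c4*m8;
```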


I am running a small-sample (N=79) SEM using the Bayes estimator. What kinds of fit statistics are customary in the literature for Bayes models? I noticed that the AIC and BIC are not generated with the ESTIMATOR = BAYES command. I should note that I have no missing data. Thanks!


See the following paper which is available on the website: Muthén, B. (2010). Bayesian analysis in Mplus: A brief introduction. Technical Report. Version 3. 

Jan Zirk posted on Wednesday, April 25, 2012  8:36 am



Dear Linda or Bengt, I have three questions concerning the Bayes estimator. 1) The rule of thumb says that under 'traditional' maximum-likelihood estimation of SEM models we need at least about 10-20 cases per variable to provide appropriate stability of a model. As Bayesian estimation does not rely on large-sample normality theory, does this mean that when a model has more variables than the above rule of thumb under ML allows for, the Bayesian approach is more suitable? And do you know of any rule of thumb concerning sample size for Bayesian estimation? 2) As the Bayesian estimator does not require normal distributions, would it be appropriate not to define a binary dependent (or mediating) variable with the 'CATEGORICAL ARE' command in the Bayesian input, in order to obtain the DIC index? 3) Is there an equivalent of the chi-square difference test for testing nested models under Bayesian estimation?


1) I don't know of a rule of thumb; even a rule for ML is debatable and highly dependent on the context. But in general, Bayes can work better than ML for smaller samples. 2) No, you still have to use the proper model, in this case logistic/probit. 3) There are "Bayes factors".

Jan Zirk posted on Wednesday, April 25, 2012  10:13 am



Dear Bengt, Thanks very much for your quick response. As to 3): Is there any literature with an example showing how to use Bayes factors for nested model comparison in Mplus? I did not find it in the User's Guide.


We do not currently have nested model testing in Bayes. 

Jan Zirk posted on Thursday, April 26, 2012  2:14 pm



I see. Thanks very much. 

Jan Zirk posted on Wednesday, May 02, 2012  8:16 am



Dear Linda or Bengt, I would like to ask about the parameterization under the Bayes estimator. According to "Bayesian Analysis of Latent Variable Models using Mplus" (http://www.statmodel.com/download/BayesAdvantages18.pdf), the PX parameterization outperforms V and L. Is PX the default for BAYES? Is it possible to change the parameterization under Bayes (as it is for ML, Delta vs. Theta)? Best wishes,


Yes, PX is the default for Bayes. It is not possible to change the parameterization under Bayes; it is probit with Theta. In ML, you can choose logistic or probit; Delta and Theta are not ML choices.

Jan Zirk posted on Wednesday, May 02, 2012  12:05 pm



Thank you Linda for prompt reply. 


Hi Dr. Muthen, On the webpage http://www.statmodel.com/examples/penn.shtml#baysem I try to run "run15.inp" for the paper Muthen, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335. I have one question regarding the understanding of the syntax:

DEFINE: STANDARDIZE y1-y15;
MODEL: y1 ON y2@0; ! to get StdY

Why do you standardize y1-y15? Why can you get StdY by adding "y1 ON y2@0"? Thanks, Mark


I standardize because I want the priors to work in a standardized metric. A certain prior variance has different implications for observed variables with different variances. I am allowed to standardize here because the model is "scale-free". The y1 ON y2@0 statement is just a trick; ignore it.


Thanks for your reply, Dr. Muthen. So I don't need to put the statement "y1 on y2@0" in my syntax, right? Best, Mark 


No, but it also doesn't hurt. 

Jan Zirk posted on Thursday, October 17, 2013  4:04 pm



I have a problem with XY standardized coefficients in my objective Bayesian SEM. They are larger than 1, so the reviewers may be critical. What could be a good solution to get rid of this problem? How would you set the priors or model constraints to help with this? Only 2 standardized coefficients are larger than 1; the rest have usual values.


We have a FAQ on "Standardized coefficient greater than 1". This usually has to do with highly correlated predictors; so that's the real issue. I'm not sure you want to use priors to get rid of it, but perhaps instead reformulate the model. 

Jan Zirk posted on Friday, October 18, 2013  1:47 am



Thanks very much. 

Jan Zirk posted on Friday, October 18, 2013  10:58 am



When including the variances and means of predictors under ML, it is possible to avoid listwise deletion thanks to FIML. What is the 'Bayesian FIML'? Including variances and means in a Bayesian regression also avoids listwise deletion; may I ask you for a reference for this FIML-like solution under the Bayesian approach?


Making the covariates part of the model is possible also in Bayes. I don't know that there is a reference for this; it goes back to basic principles that any variable whose parameters are part of the model is handled by MAR-type missing data theory.
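To make this concrete, a minimal sketch with hypothetical variable names: mentioning the variances and means of the covariates in the MODEL command brings them into the likelihood, so cases with missing covariate values are retained under MAR rather than deleted listwise.

```
MODEL:
  y ON x1 x2;
  x1 x2;        ! free the covariate variances: x1 and x2 are now
                ! part of the model, so their missingness is handled
                ! by MAR-type missing data theory
  [x1 x2];      ! free the covariate means as well
```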

Jan Zirk posted on Friday, October 18, 2013  8:51 pm



Ok, thanks very much. Will refer to MAR. 


Hi Dr. Muthen, After reading your paper "Muthen, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335.", I am trying to modify "run15.inp" for my data set. The indicators for the CFA are categorical variables, so I add a "CATEGORICAL =" statement to my syntax. I wonder whether there are any references or examples I can read/follow regarding the specification of priors for cross-loadings and residual correlations. Thanks, Mark


I recommend reading the papers at http://www.statmodel.com/BSEM.shtml 


Dear Dr. Muthen, Thanks for your prompt reply. In order to run a Bayesian CFA with categorical indicators (6-point Likert scale), I read your paper "Bayesian Analysis Using Mplus: Technical Implementation" and have a few questions. Q1. On p. 10, you provide a matrix (11) that is partly a correlation matrix and partly a covariance matrix. What is the covariance part about (the covariances between which parameters)? Q2. Do I have to give priors for the thresholds of each categorical variable? Q3. Do you have any suggestions regarding the priors for thresholds? A normal distribution with mean zero and variance 6? Q4. Can I give the prior N(0, .01) for cross-loadings? Q5. Do I need to standardize the categorical indicators? (My thought is NO, because my priors are in a standardized metric.) I appreciate all your help! Best, Mark


Q1. It is a matrix for categorical variables (diagonal = 1) and continuous variables (diagonal not 1). Q2. No. Q3. Use the default (see the User's Guide). Q4. Yes. Q5. No.
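For readers wanting the mechanics: small-variance priors on cross-loadings are set with the MODEL PRIORS command. A hedged sketch with hypothetical names (two factors, six indicators), along the lines of the BSEM papers:

```
MODEL:
  f1 BY y1-y3;              ! major loadings
  f1 BY y4-y6* (xl1-xl3);   ! cross-loadings, freed and labeled
  f2 BY y4-y6;
  f2 BY y1-y3* (xl4-xl6);
MODEL PRIORS:
  xl1-xl6 ~ N(0, .01);      ! small-variance normal priors centered
                            ! at zero, holding cross-loadings near 0
```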


Dear Bengt and Linda, where can I find the data and input for Figure 3 and Tables 16-18 in "Muthen, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335."? This is the Bayesian SEM reanalysis of Kaplan's (2009) model using the National Educational Longitudinal Study of 1988.

DavidBoyda posted on Wednesday, March 19, 2014  3:29 am



Dear Dr. Muthen, I have a question regarding the Bayes estimator. If I were to compare a model using the Bayes estimator to the same model using MLR, would I expect wildly differing results between these two estimators, in the same way perhaps that ML might give slightly different results from MLR?


The results for ML versus Bayes with noninformative priors should be very close. 

db40 posted on Monday, March 31, 2014  5:08 am



Dear Dr. Muthen, I have encountered a scenario where I have significant path estimates from IV to mediator and from mediator to DV; however, the indirect effect is not significant. I have searched and found that this is not uncommon (ref: http://goo.gl/PRkFmy ). I have a question regarding credibility intervals. I see in the manual there are two options for Bayes, EQTAIL or HPD. Which of these should I use if the distribution of my product ab is asymmetrical?


Either one is appropriate. I would use the default of EQTAIL. 
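For reference, the interval type is requested in the OUTPUT command. A sketch of a simple mediation setup (hypothetical variable names x, m, y), assuming the CINTERVAL option as documented in the User's Guide:

```
ANALYSIS:
  ESTIMATOR = BAYES;
MODEL:
  m ON x (a);
  y ON m (b)
       x;
MODEL CONSTRAINT:
  NEW (ab);
  ab = a*b;             ! indirect effect; its credibility interval
                        ! comes directly from the posterior draws
OUTPUT:
  CINTERVAL (EQTAIL);   ! or CINTERVAL (HPD) for highest posterior
                        ! density intervals
```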
