

Dear Team, I really appreciate the new Bayesian features in Mplus 6.1. However, I have a quick query: I am struggling to achieve convergence (PSR <= 1) when using Bayesian estimation with informative priors on a CFA. As with ML, would adding plausible starting values to the model help achieve convergence? Many thanks, Paul 


Yes, it can. You can also try using STVALUES = ML in the ANALYSIS command. I typically don't see many convergence problems with Bayes. An exception, of course, is if the model is not identified, or identified only with informative priors whose variances are too large. 
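As a minimal sketch of that option (an input fragment; the BITERATIONS value is purely illustrative, not a recommendation):

```
ANALYSIS:
ESTIMATOR = BAYES;
STVALUES = ML;          ! use the ML estimates as starting values for the Bayes chains
BITERATIONS = (1000);   ! minimum number of iterations; PSR convergence is still checked
```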

Mike Zyphur posted on Tuesday, March 13, 2012  8:30 pm



Hi Bengt, have you noticed shorter estimation times in very complex models using such empirical priors? They would seem to reduce the need for a substantial burn-in phase. Thanks, Mike 


In my current experience, I don't see much benefit from using intelligent starting values, so I don't use that approach. 


Hello Dr. Muthen, the syntax below is based on your example 9.19. Could you tell me which statements I need to add to check measurement invariance, following the third example in 'Bayesian SEM: A more flexible representation'? I found the file 'run23' from that paper, but have no idea how to implement it in this multilevel analysis. Any suggestions are welcome. Many thanks, Luk

ANALYSIS:
TYPE = TWOLEVEL RANDOM;
ESTIMATOR = BAYES;
BITERATIONS = (1000);
MODEL:
%WITHIN%
OPSw BY A_1_4 A_1_5 A_1_18;
RNMDw BY A_1_2 A_1_7 A_1_13 A_1_17 A_1_21 A_1_26 A_1_30;
STAFw BY A_1_8 A_1_9 A_1_12;
PARTw BY A_1_3 A_1_6 A_1_11 A_1_23 A_1_25 A_1_29;
QUALw BY A_1_19 A_1_20 A_1_24 A_1_27 A_1_28 A_1_31 A_1_32;
s1 | OPSw ON func;
s2 | RNMDw ON func;
s3 | STAFw ON func;
s4 | PARTw ON func;
s5 | QUALw ON func;
%BETWEEN%
F1 BY A_1_4 A_1_5 A_1_18 A_1_3 A_1_6 A_1_11 A_1_23 A_1_25 A_1_29 A_1_15 A_1_16;
F2 BY A_1_2 A_1_7 A_1_13 A_1_17 A_1_21 A_1_26 A_1_30;
F3 BY A_1_1 A_1_8 A_1_9 A_1_12 A_1_10 A_1_22 A_1_19 A_1_20 A_1_24 A_1_27 A_1_28 A_1_31 A_1_32;
s1 s2 s3 s4 s5 F1 F2 F3 ON exp lan degree; 


Not sure what you mean here. What is the "third example"? What type of measurement invariance are you interested in: across Within versus Between? Run23 is a MIMIC model with cross-loadings and direct effects, which doesn't look like what you are after. 


Thanks for your prompt reply. The main covariate of interest is 'func'. This is coded as 0 for staff and 1 for their chiefs (with exactly one chief in each unit). I see that the factor means of the chiefs are significantly higher for all but one factor on the within level. Now I want to know that I'm not comparing apples and oranges, as Linda says in the web video. The 'third example' I use for reference is 'Study 3: Direct effects in MIMIC modeling' from the BSEM paper, from which you concluded: 'This illustrates the importance of allowing for all possible direct effects using informative, small-variance priors.' How can I implement this in my model? Also, since I use RANDOM, is there no way to reproduce Table 23 from the BSEM paper? Should I then stick to EFA for the factor loadings, as in Table 22 of your paper? Thanks in advance, Luk 


BSEM MIMIC modeling can be done in line with UG ex 5.32. Because func is your covariate of interest, you would apply this input on the Within level. Even though you use RANDOM in your setup above, you would still get estimated loadings on each level, because the random aspect concerns only the slopes of the regressions of the factors on func. 
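For readers following along, here is a rough sketch of what such a within-level BSEM MIMIC specification might look like. The factor and item names (fw, fb, y1-y3) are hypothetical, and the direct-effect labels and the prior variance of 0.01 are illustrative choices modeled on UG ex 5.32, not prescriptions:

```
MODEL:
%WITHIN%
fw BY y1-y3;              ! within-level factor
fw ON func;               ! covariate effect on the factor
y1-y3 ON func (d1-d3);    ! direct effects of the covariate on the items
%BETWEEN%
fb BY y1-y3;              ! between-level factor
MODEL PRIORS:
d1-d3 ~ N(0, 0.01);       ! zero-mean, small-variance priors on the direct effects
```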


Thank you, that helped a lot. My final model is a two-level Bayesian MIMIC model with cross-loadings and direct effects with zero-mean, small-variance priors. I have three questions related to this:

1. In my output I don't get a PPC. What could be the reason?

2. Apologies, but I'm still confused about what I can conclude from a MIMIC model regarding the factor structure across groups. In your 1989 Psychometrika paper (Latent variable modeling in...) I read on page 563 that it allows an investigation of hypotheses of construct validity and invariance across subpopulations. I get the invariance part, but how does this relate to construct validity exactly? Is that because the direct effects of covariates on items are tested over and above the indirect effect via the factors? Also, the Topic 1 slides (slide 175) say that the model ASSUMES the same factor loadings, observed residual variances/covariances, and factor variances and covariances for all levels of the covariates. Should I thus first do an MGCFA?

3. If I have multiple covariates I want to test, would you recommend first testing them all separately, or immediately all together?

All the best, Luk 


1. We have not yet implemented PPC for multilevel models.
2. I would forget about the construct validity issue here; you are testing for invariance. MIMIC assumes what you mention, but that doesn't mean you have to do an MGCFA to check it. Just look at the MIMIC approach as an approximate way to discover major non-invariance in the intercepts.
3. In MIMIC I would do them all together, which is the strength of MIMIC. 


I'm having difficulty obtaining a posterior predictive p-value (PPP) for my CFA model.

Variable:
Names are crisis people resource future disaster cntrysde control priority species id;
Missing are all (9999);
USEV = crisis people resource future disaster cntrysde control priority species;
DEFINE:
STANDARDIZE crisis people resource future disaster cntrysde control priority species;
ANALYSIS:
ESTIMATOR = bayes;
PROCESSORS = 2;
CHAINS = 4;
FBITER = 50000;
POINT = mean;
Model:
f1 BY crisis* future control priority;
f2 BY disaster* resource people;
f3 BY cntrysde* species;
f1-f3@1;
!CROSSLOADINGS:
f1 BY disaster resource people cntrysde species*0 (a5-a9);
f2 BY crisis future control priority*0 (b1-b4);
f2 BY cntrysde species*0 (b8-b9);
f3 BY crisis future control priority disaster resource people*0 (c1-c7);
MODEL PRIORS:
a5-a9 ~ N(0,.01);
b1-b4 ~ N(0,.01);
b8-b9 ~ N(0,.01);
c1-c7 ~ N(0,.01);

The PPP came out at 0.000. 


I then ran a basic version of the model without cross-loadings using the syntax below and had the same problem.

Variable:
Names are crisis people resource future disaster cntrysde control priority species id;
Missing are all (9999);
USEV = crisis people resource future disaster control priority cntrysde species;
CATEGORICAL are crisis people resource future disaster control priority cntrysde species;
DEFINE:
STANDARDIZE crisis people resource future disaster cntrysde control priority species;
ANALYSIS:
ESTIMATOR = bayes;
PROCESSORS = 2;
Model:
f1 BY crisis future control priority;
f2 BY disaster resource people;
f3 BY cntrysde species;

I have run each factor individually, and each produces good PPP values. I also ran the entire model with ML estimation, which appears to run fine. Goodness-of-fit statistics indicate the model is good. I have rechecked the data in Stata and cannot see a problem. I'm completely stuck at this point. Any help would be much appreciated. 


Please send the ML and Bayes outputs and your license number to support@statmodel.com. You should check the data in Mplus, not Stata; the issue is how Mplus is reading the data, and you could be reading it incorrectly in Mplus. In the future, please do not post questions that cannot fit in one window. 


Hi Dr. Muthén, I completed a study with a Bayesian two-level MIMIC model. Covariates are at the lowest level. My findings are very interesting in that I find significant effects of the main covariate of interest on several latent variables. More concretely, I find that nurse managers, compared to staff nurses, consistently give higher ratings of several aspects of nurses' work environment. One could say that nurse managers don't seem to grasp what is going on. I hypothesize that this would lead to decreased well-being among nurses. So is there any way I could model this (that is, add an outcome variable to this model)? 


You should ask this on a general discussion forum like SEMNET or Multilevelnet. 

antsu chen posted on Thursday, November 28, 2013  1:55 am



Dear Team, when doing Bayesian estimation, we often choose the inverse-Wishart distribution when modeling the normal covariance matrix. But what if some of the variables are categorical? Does this change the way we assign parameters to the IW distribution? I have already referred to the User's Guide but did not see any information addressing this, so I came here to ask. Thank you!! Andrew 


With categorical variables you can still use IW priors for, say, the factor covariance matrix Psi and the residual covariance matrix Theta (e.g., off-diagonal elements can be set to have zero-mean, small-variance priors). See also our technical papers on the Bayes implementation in Mplus, which are on our website. 


Dear Bengt and Linda, I am trying to run a Bayesian CFA model similar to the one shown in Table 15 of Muthén & Asparouhov 2012 (Bayesian structural equation modeling: A more flexible representation of substantive theory. Psychological Methods, 17(3): 313-335); that is, one where the residual covariances have inverse-Wishart priors. I am basing my code on the code in run15.inp on your website. I have noticed in the code the following line, which also assigns inverse-Wishart priors with an estimate of 1 to the residual variances:

p1-p15 ~ IW(1,21);

Could you explain why this is necessary? It is not explained in the article. I notice that when I leave it out, I get the following error:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY. THE PSI MATRIX IS NOT POSITIVE DEFINITE. PROBLEM INVOLVING VARIABLES Q17 AND Q27.

Thanks for your help and this very useful site! 


You want to give priors for the whole covariance matrix, so not just the off-diagonal covariances but also the diagonal variances. Regarding the settings for the inverse Wishart, we recently posted the following on the variances, implying that the choice of IW(1,21) is not always the best.

Here is one way to think about the IW(Sigma, df) specification, where Sigma is the covariance matrix and df is the degrees of freedom. Denote the number of variables in the covariance matrix by p. Our BSEM article appendix and Wikipedia point out that df > p+1 makes the mean exist, so a weak prior may have df = p+2. A larger df makes the prior stronger. Furthermore, the mode is mode = Sigma/(df+p+1). So say that you want the mode to correspond to a variance of, say, v. Together with df = p+2, you can then solve for Sigma in IW(Sigma, df) as Sigma = v*(2p+3). For instance, p=10 and v=50 gives df=12 and Sigma = 50*23 = 1150, so IW(1150,12). 
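The reasoning above can be written compactly (p is the dimension of the covariance matrix, v the desired modal variance):

```latex
\operatorname{mode}\left[\mathrm{IW}(\Sigma,\,df)\right] = \frac{\Sigma}{df + p + 1},
\qquad
df = p + 2 \;\Longrightarrow\; \Sigma = v\,(2p + 3)
```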


Dear Bengt, thanks for the thorough explanation. Three follow-up questions:

1. Do you still recommend IW(0, p+2) for the covariances?

2. Is it correct to conclude from your post that if I give priors to any element in the variance-covariance matrix I must give priors to all elements? Suppose I have an SEM with 50 observed variables and know that the measurement model of 2 LVs (each having 5 indicators) is problematic, so I want to use your method of assigning IW priors to the 90 residual covariances between, and 10 variances of, their indicators. Do I have to assign IW priors to all 1225 covariances and 50 variances? Or is there a way to do so only for the part of the model that I believe to have problems?

3. Are cross-loadings with zero-mean normal priors required to make this work, or can I give IW priors to the covariance matrix without also giving priors to cross-loadings?

Thanks for the help. 'Alim 


1. Yes, but the df parameter is usually varied for sensitivity analysis.
2. You can assign priors to only those residual covariances (though to be accurate, with 10 indicators there are 45 covariances, not 90, and 10 variances, so you will be assigning priors to 55 parameters).
3. You need priors for the cross-loadings. 
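A sketch of how one might label just that block in Mplus, in the style of the BSEM paper inputs. The item names y1-y10, the label ranges, and the df of 12 (= p + 2 with p = 10 indicators, per the earlier post) are all hypothetical/illustrative:

```
MODEL:
f BY y1-y10;
y1-y10 WITH y1-y10 (p1-p45);   ! labels the 45 residual covariances among the 10 indicators
y1-y10 (v1-v10);               ! labels their 10 residual variances
MODEL PRIORS:
p1-p45 ~ IW(0, 12);            ! zero-mean IW priors on the covariances
v1-v10 ~ IW(1, 12);            ! IW priors on the variances
```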


This is a clarification of 3. You need priors for unidentified cross-loadings, such as when all cross-loadings are included. If you are using cross-loadings that are identifiable (for example, when only some cross-loadings are included in the model), you don't need priors. 


And the use of cross-loading priors is not tied to the use of IW priors. 


Thanks for all the guidance. A clarification question: does a smaller df value for an IW prior mean a stricter (more informative) prior (as with a normal prior's variance), or does it mean a less informative prior? Thanks, 'Alim 


Please ignore my above question. I see you already answered it in an earlier post: "a weak prior may have df=p+2. A larger df makes the prior stronger." 


Dear Bengt,

1. A follow-up question to your above post about an updated approach to setting IW priors for residual variances. I am trying a Bayesian CFA model with 27 observed variables (all continuous) and 7 LVs. When I don't estimate residual covariances at all, I get a very poor fit. If I estimate all possible residual covariances following the example in your Muthén & Asparouhov 2012 Psychological Methods paper (residual variances have priors c1-c27 ~ IW(1,29) and residual covariances have priors p1-p351 ~ IW(0,29)), I get good fit, and in addition the loadings of most indicators on almost all LVs improve noticeably. In addition, 18 residual covariances are significant.

Then I tried the new recommendation. Given that when I do not add priors the residual variances are in the range .2-.8, with most being around .5, I set the Sigma for the residual variance priors to achieve a mode of .5: .5*(27*2+3) = 28.5. However, I get the following error for any Sigma > 3:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY. THE PSI MATRIX IS NOT POSITIVE DEFINITE. THE PROBLEM OCCURRED IN CHAIN 2. PROBLEM INVOLVING VARIABLES Q28_O AND JINS.

For Sigma = 2 or 3 the results are very similar to Sigma = 1. Can you explain why the error occurs for Sigma > 3? Thanks, 'Alim 


This did not fit in the above given the size limits of posts:

2. In your paper you write, "An excellent-fitting model is expected to have a posterior predictive p value around .5 and an f statistic difference of zero falling close to the middle of the confidence interval." When I use Sigma = 1 for the residual variances I get the following fit:

Chi-square difference 95% CI: (-99.122, 56.210), p = 0.726

How should I interpret this, given that the CI is skewed towards the negative and that p is greater than .5 rather than less? Thanks, 'Alim 


1. Send the output and data to Support.
2. p = 0.726 still indicates good fit. I don't think it is important that it is greater than .5. 

Ted Fong posted on Sunday, April 20, 2014  9:09 am



Dear Dr. Muthen, I am evaluating the factor structure of a 9-item scale by specifying 3 Bayes CFA models (a one-factor, a three-factor, and a bifactor model with 1 general and 2 specific factors) using informative priors for the residual covariances. All 3 models show good fit, with PPP around .5.

1) My intention is to compare them via the information criteria shown in the output. However, the DIC and BIC appear to give conflicting results:

1-factor model: DIC = 21017, BIC = 21344
3-factor model: DIC = 21008, BIC = 21494
bifactor model: DIC = 20714, BIC = 21684

The 1-factor model has the highest DIC but the lowest BIC. The bifactor model, on the other hand, has the lowest DIC but the highest BIC. I know that BIC is favored over AIC in mixture modeling. Do you have a preference for using the DIC or BIC in comparing Bayesian CFA models?

2) The Estimated Number of Parameters (pD) is positive in the 1- and 3-factor CFA models but is negative (-248.970) in the bifactor model. Does this negative pD signify a problem for the bifactor model?

Many thanks and regards, Ted 


1) DIC tends to select overfitted models (it is very similar to AIC). 2) Usually a large negative pD value signifies a problem. Try to diagnose it with the trace plots for the parameters. Perhaps some kind of sign switching is happening. You can read Section 3 in http://statmodel.com/download/HighDimension.pdf for some ideas. 
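To obtain the trace plots, you need to request plotting in the input, for example (a minimal fragment; I believe the Bayesian posterior parameter trace plots are available under PLOT2 or PLOT3, but check your Mplus version's User's Guide):

```
PLOT:
TYPE = PLOT2;   ! requests plots, including Bayesian posterior parameter trace plots
```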

Ted Fong posted on Sunday, April 27, 2014  9:02 am



Dear Dr. Asparouhov, thanks for your reply and expert opinions. Following your advice, I checked the trace plots for the parameters in the bifactor model. Yes, the major loadings on the 2 specific factors showed some sign switching and bimodal posterior distributions with wide 95% CIs, e.g. (-0.384, 0.877). I have read Section 3.1 of your paper but am still not clear on the cause and proper treatment of such an identification problem.

1) I read on page 6 of your paper: "the only statistic that is not essentially distorted by the multiple modes is the mode of the posterior". Does this mean using POINT = MODE may help with this problem?

2) I tried constraining these major loadings to be greater than zero using MODEL CONSTRAINT. However, the model estimation did not terminate normally:

UNABLE TO GENERATE PARAMETERS SATISFYING THE CONSTRAINTS. CHECK THE STARTING VALUES. THE PROBLEM OCCURRED IN CHAIN 2.

3) I also tried specifying informative priors, e.g. N(0.3, 0.01), for these major loadings. This results in a pD of 42.938 for the model, with no apparent sign switching. However, in Ex 5.31 of the UG, informative priors are specified only for cross-loadings, not major loadings. I am not sure adding informative priors is a correct way to tackle the issue. I would be grateful for any comments. 
