

I am running a multilevel analysis with assessments (90) nested within individuals (n=25), with type=twolevel random and estimator=Bayes. The model converges after 800 iterations with a final PSR of 1.101. If I use biterations=10000 and biterations=20000, the PSR rises slightly again to a maximum of 1.26, then drops to 1.03/1.04 and seems to remain stable at this level between 15,000 and 20,000 iterations. My questions are: (1) Is it safe to stop here, and should I use the estimates produced by the model based on 20,000 iterations? What PSR is acceptable? (2) What criteria does Mplus use to stop after 800 iterations? And (3) under 'simulated prior distributions' behind all my parameters it says 'improper prior'. I read somewhere else on the forum that this is not a problem, but is this never a problem? Many thanks!


(1) It sounds like 20,000 is a good number to stop at, given the long sequence of low PSR values less than 1.1. (2) See Section 2.5 of our paper under Papers, Bayesian Analysis: Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report, Version 3. (3) Ignore this part.
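For readers wondering what the PSR actually measures: it is the Gelman-Rubin potential scale reduction, which compares between-chain and within-chain variance of the MCMC draws. A minimal sketch (in Python rather than Mplus syntax; the `psr` function and the simulated chains are ours for illustration, and Mplus's exact stopping criterion, described in Section 2.5 of the technical report cited above, differs slightly):

```python
import numpy as np

def psr(chains):
    """Potential scale reduction (Gelman-Rubin) for one parameter.

    chains: 2-D array, shape (n_chains, n_iterations), post burn-in draws.
    Values near 1 mean the chains have mixed; PSR < 1.1 is a common
    rule of thumb for convergence.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Two well-mixed chains drawn from the same distribution -> PSR near 1.
good = rng.normal(0.0, 1.0, size=(2, 5000))
print(psr(good))  # approximately 1.0
# Two chains stuck at different locations -> PSR well above 1.1.
bad = np.vstack([rng.normal(0.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000)])
print(psr(bad))
```

The intuition behind the "slight rise then stable drop" pattern described above is that PSR is recomputed as iterations accumulate, so it can fluctuate before the chains settle.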


Thanks! I have two additional questions about the analysis I described above. (1) I used the syntax posted below; is this correct for a multivariate twolevel model in which each variable is predicted by the lagged version of itself and all other variables? (2) How are missings treated in this model? If I understand correctly, if I do not specify 'Listwise=ON' (which I did not), Mplus will use all available data to estimate the information matrix and SEs. Is this correct? Is a specific method used that should be reported in publications? The model as I defined it:

DATA: File is data for Mplus.dat;
VARIABLE: Names are short_ID INT JOY SAD IRR WOR POS NEG;
    Usevariables = INT JOY SAD IRR WOR POS NEG;
    Missing are ALL (999);
    Within = ;
    Lagged = INT JOY SAD IRR WOR POS NEG (1);
    Cluster = short_ID;
ANALYSIS: Type = twolevel random;
    Estimator = Bayes;
    Biterations = (20000);
    Processors = 2;
MODEL: %within%
    sINTINT | INT on INT&1;
    sJOYINT | INT on JOY&1;
    sSADINT | INT on SAD&1;
    sIRRINT | INT on IRR&1;
    sWORINT | INT on WOR&1;
    sPOSINT | INT on POS&1;
    sNEGINT | INT on NEG&1;
    sINTJOY | JOY on INT&1;
    sJOYJOY | JOY on JOY&1;

and so on, in order to estimate a full multivariate model.


Yes, on all your questions. 


Perhaps you can refer to Joe Schafer's book on missing data. Bayes is a full-information estimator and does the same job as ML under MAR, that is, using all available data.
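A toy contrast of listwise deletion versus using all available data (Python, made-up numbers; this illustrates only which observations enter the analysis, not the actual Bayesian estimation machinery):

```python
import numpy as np

# Toy data: 5 cases on two variables; case 4 is missing y (NaN).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, np.nan, 9.8])

# Listwise deletion: drop the whole case when any variable is missing,
# so case 4's observed x value is discarded too.
keep = ~np.isnan(y)
listwise_mean_x = x[keep].mean()

# Full-information / available-data approach: every observed value
# contributes, even from partially missing cases.
full_mean_x = x.mean()

print(listwise_mean_x, full_mean_x)  # 2.75 vs 3.0
```

Under MAR, the available-data approach retains information that listwise deletion throws away, which is why the two estimates differ here.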


I’m running a simple multilevel CFA with Bayesian estimation consisting of categorical indicators (4-point Likert items) from about 500 teachers in 27 schools (see model below). The model converged at 11,600 iterations, and when I increased to 25,000 FBITERATIONS, the PSR had a maximum of 1.3 after 11,600 and ended at 1.08 (it becomes pretty stable around this value at around 16,000 iterations). However, I did notice from checking the trace plots that some parameters varied quite a bit, especially the BETWEEN parameters. For example, one seemed to vary from about .3 to .7 even after the burn-in. How am I to interpret a stable PSR but such wide variability in the parameters, and how problematic is this? Is this likely due to the small sample size, and therefore a lack of precision at Level 2?

VARIABLE: CATEGORICAL ARE x1 x2 x3;
    CLUSTER = School;
    WITHIN;
    BETWEEN;
ANALYSIS: TYPE IS TWOLEVEL;
    PROCESSORS = 2;
    ESTIMATOR = BAYES;
MODEL: %WITHIN%
    FW BY x1 x2 x3;
    %BETWEEN%
    FB BY x1 x2 x3;
OUTPUT: STDYX TECH1 TECH8;
PLOT: TYPE = PLOT2;


The variability across iterations is what creates the posterior distribution of the estimated parameter. With only 27 schools you would expect the posterior distributions for between-level parameters to have large variance, that is, large SEs.
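To connect the trace-plot range to the reported output: the spread of the post burn-in draws is the posterior, so a parameter whose draws wander from roughly .3 to .7 is consistent with a posterior SD around 0.1. A toy sketch (Python, simulated draws with made-up values, not the actual Mplus output):

```python
import numpy as np

# Hypothetical post burn-in draws for a between-level loading,
# simulated here as normal with mean 0.5 and SD 0.1.
rng = np.random.default_rng(1)
draws = rng.normal(0.5, 0.1, size=10000)

est = np.median(draws)                  # point estimate from the posterior
sd = draws.std(ddof=1)                  # posterior SD, reported like an SE
ci = np.percentile(draws, [2.5, 97.5])  # 95% credibility interval
print(f"estimate {est:.3f}, SD {sd:.3f}, "
      f"95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

With SD 0.1, draws routinely range about two SDs either side of the center, i.e. roughly .3 to .7, so the trace-plot variability and a stable PSR are entirely compatible: the chain has converged to a wide posterior.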


Thank you. Another question regarding this model... If I am primarily interested in generating factor scores for the BETWEEN factors to be used as predictors in a model, is there any advantage in also modeling the WITHIN factors? And would doing so affect the BETWEEN scores if the variables are not declared as WITHIN or BETWEEN? 


If the Within factor structure fits the data well, there should be a small advantage to using such a parsimonious model. My prior is that the Between scores may only be affected to a small degree, however.
