

I am running a multilevel analysis with assessments (90) nested within individuals (n=25), using type=twolevel random and estimator=Bayes. The model converges after 800 iterations with a final PSR of 1.101. If I use biterations=10000 or biterations=20000, the PSR rises again to a maximum of 1.26, then drops to 1.03/1.04 and seems to remain stable at that level between 15,000 and 20,000 iterations. My questions are: (1) Is it safe to stop here, and should I use the estimates produced by the model based on 20,000 iterations? What PSR is acceptable? (2) What criterion does Mplus use to stop after 800 iterations? (3) Under 'simulated prior distributions', behind all my parameters it says 'improper prior'. I read elsewhere on the forum that this is not a problem, but is it never a problem? Many thanks!


(1) It sounds like 20,000 is a good number to stop at, given the long sequence of PSR values below 1.1. (2) See Section 2.5 of our paper under Papers, Bayesian Analysis: Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report, Version 3. (3) Ignore this part.
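For intuition on what the PSR tracks (the exact Mplus formula and stopping rule are given in Section 2.5 of the paper cited above), here is a sketch of the standard Gelman-Rubin potential scale reduction in Python; the chain arrays are made-up illustration data, not Mplus output:

```python
import numpy as np

def psr(chains):
    """Gelman-Rubin potential scale reduction for one parameter.

    chains: 2-D array, shape (n_chains, n_iterations) -- the retained
    (post burn-in) draws of a single parameter from each MCMC chain.
    Values near 1 indicate the chains have mixed; a common rule of
    thumb is PSR below roughly 1.05-1.1.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()     # average within-chain variance
    B = n * chain_means.var(ddof=1)           # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)

# Two well-mixed chains drawn from the same distribution: PSR close to 1
good = rng.normal(0.0, 1.0, size=(2, 5000))
print(round(psr(good), 3))

# Two chains stuck in different regions: PSR well above 1
bad = np.vstack([rng.normal(0.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000)])
print(round(psr(bad), 3))
```

The statistic compares between-chain and within-chain variability, which is why it can temporarily rise again (as you observed at 10,000 iterations) before settling near 1 once all chains explore the same region.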


Thanks! I have two additional questions about the analysis I described above. (1) I used the syntax posted below; is this correct for a multivariate two-level model in which each variable is predicted by the lagged version of itself and all other variables? (2) How is missing data treated in this model? If I understand correctly, if I do not specify 'Listwise=ON' (which I did not), Mplus will use all available data to estimate the information matrix and SEs. Is this correct? Is a specific method used that should be reported in publications? The model as I defined it:

Data: File is data for Mplus.dat;
Variable: Names are short_ID INT JOY SAD IRR WOR POS NEG;
  Usevariables = INT JOY SAD IRR WOR POS NEG;
  Missing are ALL (999);
  Within = ;
  lagged = INT JOY SAD IRR WOR POS NEG (1);
  Cluster = short_ID;
ANALYSIS: type = twolevel random;
  estimator = Bayes;
  biterations = (20000);
  PROCESSORS = 2;
Model:
%within%
sINTINT | INT on INT&1;
sJOYINT | INT on JOY&1;
sSADINT | INT on SAD&1;
sIRRINT | INT on IRR&1;
sWORINT | INT on WOR&1;
sPOSINT | INT on POS&1;
sNEGINT | INT on NEG&1;
sINTJOY | JOY on INT&1;
sJOYJOY | JOY on JOY&1;
and so on....in order to estimate a full multivariate model.
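For readers unfamiliar with the lagged= option: it creates within-person lag-1 copies of each variable (INT&1 and so on), and the lag never crosses cluster boundaries, so each person's first assessment has a missing lag. A minimal sketch of that bookkeeping in Python (the IDs and data values are invented for illustration):

```python
def add_lag1(rows, cluster_key, var_keys):
    """Append a lag-1 copy of each variable in var_keys to every row.

    rows: list of dicts, one per assessment, ordered in time within person.
    The first assessment of each cluster gets None (missing) for the
    lagged variables, since a lag cannot cross individuals.
    """
    prev = None
    for row in rows:
        for v in var_keys:
            if prev is not None and prev[cluster_key] == row[cluster_key]:
                row[v + "&1"] = prev[v]
            else:
                row[v + "&1"] = None  # start of a new person: no lag available
        prev = row
    return rows

data = [
    {"id": 1, "INT": 4, "JOY": 2},
    {"id": 1, "INT": 5, "JOY": 3},
    {"id": 2, "INT": 1, "JOY": 6},
]
add_lag1(data, "id", ["INT", "JOY"])
print(data[1]["INT&1"])  # 4: person 1's previous INT
print(data[2]["INT&1"])  # None: first row of person 2
```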


Yes, on all your questions. 


Perhaps you can refer to Joe Schafer's book on missing data. Bayes is a full-information estimator and does the same job as ML under MAR, that is, it uses all available data.
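To make the contrast concrete, here is a toy illustration (made-up records, with 999 as the missing-data code as in the input above) of how listwise deletion discards rows that full-information, all-available-data approaches still use:

```python
MISS = 999  # missing-data code, matching "Missing are ALL (999)"

records = [
    {"JOY": 4, "SAD": 2},
    {"JOY": 5, "SAD": MISS},  # SAD missing: listwise drops the whole row
    {"JOY": MISS, "SAD": 3},
]

def listwise(records, keys):
    """Keep only rows that are complete on every variable."""
    return [r for r in records if all(r[k] != MISS for k in keys)]

def available(records, key):
    """All observed values of one variable, regardless of the others."""
    return [r[key] for r in records if r[key] != MISS]

print(len(listwise(records, ["JOY", "SAD"])))  # 1: only one complete row
print(available(records, "JOY"))               # [4, 5]: two JOY values usable
```

This is only a counting illustration; the full-information estimators in Mplus integrate over the missing values rather than computing per-variable summaries.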
