Convergence for estimator=Bayes
Mplus Discussion > Multilevel Data/Complex Sample
 Charlotte Vrijen posted on Wednesday, July 26, 2017 - 2:10 am
I am running a multilevel analysis with assessments (90) nested within individuals (n=25) with type=twolevel random and estimator=Bayes. The model converges after 800 iterations with a final PSR of 1.101. If I use biterations=10000 and biterations=20000, the PSR rises slightly again to a maximum of 1.26 and then drops to 1.03/1.04, and it seems to remain stable at this level between 15,000 and 20,000 iterations. My questions are: (1) Is it safe to stop here, and should I use the estimates produced by the model based on 20,000 iterations? What PSR is acceptable? (2) What criteria does Mplus use to stop after 800 iterations? And (3) under 'simulated prior distributions', behind all my parameters it says 'improper prior'. I read somewhere else on the forum that this is not a problem, but is this never a problem?

Many thanks!
 Bengt O. Muthen posted on Wednesday, July 26, 2017 - 3:55 pm
(1) It sounds like 20,000 is a good number to stop at, given the long sequence of low PSR values less than 1.1.

(2) See Section 2.5 of our paper under Papers, Bayesian Analysis:

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3.

(3) Ignore this part.
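For readers unfamiliar with the convergence criterion discussed here: the PSR is the Gelman-Rubin potential scale reduction, which compares between-chain and within-chain variance. Below is a generic textbook sketch in Python (not Mplus's exact implementation, which is described in the Asparouhov & Muthén technical report cited above); values near 1 indicate the chains agree.

```python
import numpy as np

def psr(chains):
    """Potential scale reduction (Gelman-Rubin) for one parameter.

    chains: 2-D array of posterior draws, shape (n_chains, n_iterations).
    Returns the PSR; values near 1 suggest the chains have mixed.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return float(np.sqrt(var_plus / W))

# Two chains drawn from the same distribution should give PSR near 1.
rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(2, 5000))
print(psr(good))  # close to 1.0
```

A PSR that drops, rises, and drops again across iterations (as in the question above) reflects exactly this statistic being recomputed as the chains evolve, which is why a long stable run of low values matters more than the first crossing of a threshold.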
 Charlotte Vrijen posted on Friday, July 28, 2017 - 2:59 am
Thanks! I have two additional questions about the analysis I described above.

(1) I used the syntax I posted below, is this correct for a multivariate twolevel model in which each variable is estimated by the lagged version of itself and all other variables?

(2) How are missings treated in this model? If I understand it correctly, if I do not specify 'Listwise=ON' (which I did not), Mplus will use all available data to estimate the information matrix and SEs. Is this correct? Is a specific method used that should be reported in publications?

The model as I defined it:

Data:
File is data for Mplus.dat;

Variable:
Names are short_ID INT JOY SAD IRR WOR POS NEG;

Usevariables = INT JOY SAD IRR WOR POS NEG;
Missing are ALL (-999);
Within = ;
lagged= INT JOY SAD IRR WOR POS NEG (1);
Cluster = short_ID;

ANALYSIS:
type = twolevel random;
estimator = Bayes;
biterations=(20000);
PROCESSORS=2;

Model:
%within%
sINTINT| INT on INT&1;
sJOYINT| INT on JOY&1;
sSADINT| INT on SAD&1;
sIRRINT| INT on IRR&1;
sWORINT| INT on WOR&1;
sPOSINT| INT on POS&1;
sNEGINT| INT on NEG&1;

sINTJOY| JOY on INT&1;
sJOYJOY| JOY on JOY&1;

and so on....in order to estimate a full multivariate model.
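The full multivariate lag-1 structure in the syntax above, with each variable regressed on the lagged values of itself and all other variables, is a VAR(1) process at the within level. As a rough illustration of that structure only (plain NumPy with made-up numbers, three variables instead of seven, single-level least squares rather than Mplus's Bayesian two-level estimation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative VAR(1): y_t = A @ y_{t-1} + e_t, with a hypothetical
# 3x3 lag matrix (the thread's model has 7 variables; same structure).
A_true = np.array([[0.4, 0.1, 0.0],
                   [0.0, 0.3, 0.2],
                   [0.1, 0.0, 0.5]])
T, k = 2000, 3
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 1.0, k)

# Least-squares estimate of the lag matrix: each variable regressed on
# the lagged values of ALL variables, like the sXXYY| slopes above.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))  # recovers something close to A_true
```

In the Mplus model the slopes are additionally random across individuals (type=twolevel random), which this single-series sketch does not capture.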
 Bengt O. Muthen posted on Friday, July 28, 2017 - 5:38 pm
Yes, on all your questions.
 Bengt O. Muthen posted on Friday, July 28, 2017 - 5:40 pm
Perhaps you can refer to Joe Schafer's book on missing data. Bayes is a full-information estimator and does the same job as ML under MAR - that is, using all available data.
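The contrast between listwise deletion and using all available data can be shown with a tiny hypothetical data set coded with -999, as in the 'Missing are ALL (-999);' statement above. This is only a means-by-variable illustration of "all available data"; full-information Bayes/ML estimation goes further, but shares the property of not discarding partially observed rows.

```python
import numpy as np

# Hypothetical 4x2 data set with a -999 missing-data code.
data = np.array([[1.0,    2.0],
                 [3.0, -999.0],
                 [5.0,    6.0],
                 [-999.0, 8.0]])
x = np.where(data == -999.0, np.nan, data)

# Listwise deletion: drop every row with any missing value (2 rows left).
complete = x[~np.isnan(x).any(axis=1)]
mean_listwise = complete.mean(axis=0)

# "All available data": each variable uses every value observed on it
# (3 values per variable here).
mean_available = np.nanmean(x, axis=0)

print(mean_listwise)   # [3. 4.]
print(mean_available)  # [3.         5.33333333]
```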
 Michael Strambler posted on Sunday, February 04, 2018 - 10:25 am
I’m running a simple multilevel CFA with Bayesian estimation consisting of categorical indicators (4-point Likert items) from about 500 teachers in 27 schools (see model below).

The model converged at 11,600 iterations, and when I increased FBITERATIONS to 25,000, the PSR had a maximum of 1.3 after 11,600 and ended at 1.08 (becoming fairly stable around this value at about 16,000 iterations). However, I did notice from checking the trace plots that some parameters varied quite a bit, especially the BETWEEN parameters. For example, one seemed to vary from about -.3 to .7 even after the burn-in. How am I to interpret a stable PSR but such wide variability in the parameters, and how problematic is this? Is this likely due to the small sample size and, therefore, lack of precision at Level 2?

CATEGORICAL ARE x1 x2 x3;
CLUSTER=School;
WITHIN = ;
BETWEEN = ;
ANALYSIS: TYPE IS TWOLEVEL;
PROCESSORS = 2;
ESTIMATOR = BAYES;
MODEL:
%WITHIN%
FW BY x1 x2 x3;
%BETWEEN%
FB BY x1 x2 x3;
OUTPUT:
STDYX TECH1 TECH8;
PLOT:
TYPE= PLOT2;
 Bengt O. Muthen posted on Monday, February 05, 2018 - 9:17 am
The variability across iterations is what creates the posterior distribution of the estimated parameter. With only 27 schools you would expect the posterior distributions for between-level parameters to have large variance, that is, large SEs.
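The point about few clusters can be made concrete with a toy conjugate-normal sketch (hypothetical numbers, not the thread's actual model): with a flat prior and unit-variance data, the posterior for a between-level mean has standard deviation roughly 1/sqrt(n_clusters), so 27 level-2 units necessarily give a wide posterior, and the trace plot wanders accordingly even after convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_sd(n_clusters, n_draws=20000):
    """SD of posterior draws for a mean under a normal-normal model
    with flat prior and known unit variance: posterior is N(ybar, 1/n)."""
    data = rng.normal(0.0, 1.0, n_clusters)
    draws = rng.normal(data.mean(), 1.0 / np.sqrt(n_clusters), n_draws)
    return draws.std()

print(posterior_sd(27))    # wide: roughly 1/sqrt(27) ~ 0.19
print(posterior_sd(500))   # narrow: roughly 1/sqrt(500) ~ 0.045
```

A stable PSR and a wide posterior are therefore not in conflict: PSR measures whether the chains agree with each other, not how concentrated the posterior is.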
 Michael Strambler posted on Tuesday, February 06, 2018 - 7:37 am
Thank you. Another question regarding this model... If I am primarily interested in generating factor scores for the BETWEEN factors to be used as predictors in a model, is there any advantage in also modeling the WITHIN factors? And would doing so affect the BETWEEN scores if the variables are not declared as WITHIN or BETWEEN?
 Bengt O. Muthen posted on Tuesday, February 06, 2018 - 3:18 pm
If the Within factor structure fits the data well, there should be a small advantage to using such a parsimonious model. My prior is that the Between scores may be affected only to a small degree, however.