Hello! I am using DSEM (ESTIMATOR=BAYES) in Mplus version 8.1 to estimate a two-level fixed-effects model based on intensive longitudinal data from an EMA study. Variables are mostly categorical at the within-person level, with low endorsement rates, so we have our work cut out for us in terms of getting the models to converge!
I have been using the BITER command to set the minimum and maximum number of iterations, plus THIN=10, and following your guidance about doubling the number of iterations after convergence to ensure the PSR value does not increase.
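For reference, the relevant part of my ANALYSIS command looks roughly like this (the iteration limits shown here are placeholders, not the exact values from any one run):

```
ANALYSIS:
  TYPE = TWOLEVEL;
  ESTIMATOR = BAYES;
  BITER = 50000 (5000);  ! maximum (minimum) number of MCMC iterations
  THIN = 10;             ! retain every 10th iteration of each chain
```

With BITER specified this way, estimation stops at the first PSR check after the minimum once the convergence criterion is met, up to the maximum.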
In one set of models testing relatively simple fixed lagged effects, the number of iterations required ranged from 5,600 to 13,000; for tests of within-level (1-1-1) mediation, it ranged from 16,000 to 455,600. The runs generate no errors, final PSRs are under 1.1, and the proportion of covariance coverage for my variables ranges from .38 to .98.
My question: if the model does eventually converge, should I be concerned about the huge number of iterations required for it to do so (other than the fact that the analyses take forever)? Is there a general iteration threshold for Bayesian estimation that indicates a poorly fitting model, regardless of whether the model will eventually converge?
A very large number of iterations might be an indication that the parameter estimates are hard to find - which in turn suggests that there isn't enough information in the data for the model to be estimated. Simplifying the model can help.
If a very large number of iterations is required, but the PSR threshold *does* indicate convergence at some point, is it reasonable to consider the parameter estimates trustworthy? And if not, what order of magnitude of iterations should be concerning - 50,000? 100,000? 500,000?
Unfortunately, the models are not especially complex to begin with - our lagged variables are lagged only once, we have no random effects, we are not estimating any latent variables, and our continuous measures are reasonably normal. It's just that our categorical outcomes are highly zero-inflated, so no matter what we do with the models, the data are sparse. I just want to make sure the high number of iterations is primarily a problem of computation time and resources, rather than a substantive problem with the model results.
There are several considerations. One is how many iterations the low, stable PSR has been maintained for, after potentially bouncing up and down in earlier iterations. Estimation difficulties may also arise if your autoregressive coefficients are close to 1, for instance due to a trend that is not modeled. If you request FACTORS = ALL in the PLOT command and FSCOMPARISON in the OUTPUT command, you get more insight into the values of the autoregressive coefficients. If you like, you can send your output (and data) to Support along with your license number and we can take a closer look at it. Otherwise, it is hard to give specific guidance.
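In the input file, those requests would be added along these lines (a sketch; FSCOMPARISON is available for DSEM as of Mplus version 8.1):

```
PLOT:
  TYPE = PLOT3;    ! enables trace, autocorrelation, and posterior plots
  FACTORS = ALL;   ! plots for all latent variable (factor) scores
OUTPUT:
  FSCOMPARISON;    ! factor score comparison output for DSEM
```

The PLOT3 trace plots are also useful for judging whether the chains mix well, which speaks directly to the question of whether a slowly converging run can be trusted.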