Sarahpsychology
 Sarah Victor posted on Friday, July 13, 2018 - 12:08 pm
Hello! I am using DSEM (ESTIMATOR=BAYES) in version 8.1 to estimate a two-level fixed effects model based on intensive longitudinal data from an EMA study. Variables are mostly categorical at the within-person level, with low endorsement rates, so we have our work cut out for us in terms of getting the models to work!

I have been using the BITER command to set the minimum and maximum number of iterations, plus THIN=10, and following your guidance about doubling the number of iterations after convergence to ensure the PSR value does not increase.
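For reference, the relevant part of my ANALYSIS command looks roughly like this (the numbers shown here are just placeholders for the values I actually use):

ANALYSIS:
  TYPE = TWOLEVEL;
  ESTIMATOR = BAYES;
  BITERATIONS = 50000 (5000);  ! maximum (minimum) number of Bayes iterations
  THIN = 10;                   ! keep every 10th draw
! for the doubling check, I rerun with FBITERATIONS set to roughly twice
! the number of iterations used at the original point of convergence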

In one set of models testing relatively simple fixed lagged effects, the number of iterations required ranged from 5,600 to 13,000; for tests of within-level mediation (1-1-1), they ranged from 16,000 to 455,600. The results generate no errors, final PSRs are under 1.1, and the proportion of covariance coverage for my variables ranges from .38 to .98.

My question - if the model does, eventually, converge, should I be concerned about the huge number of iterations required for it to do so (other than the fact that the analyses take forever)? Is there a general threshold of iterations for Bayesian estimation that indicates a poor fitting model, regardless of whether the model will eventually converge?
 Sarah Victor posted on Friday, July 13, 2018 - 12:11 pm
Oh dear me, I am mortified that my password manager substituted my username for a title and I didn't notice until I hit post...and I can't even delete it!

Title should have been "Max iterations for DSEM with complex models?"
 Bengt O. Muthen posted on Friday, July 13, 2018 - 1:11 pm
A very large number of iterations might be an indication that it is hard to find the parameter estimates - which in turn suggests that there isn't enough information in the data for the model to be estimated. Simplifying the model can help.
 Sarah Victor posted on Friday, July 13, 2018 - 4:13 pm
Thank you for the quick reply!

If a very large number of iterations is required, but the PSR threshold *does* indicate convergence at some point, is it reasonable to consider the parameter estimates trustworthy? And if not, I'm wondering what order of magnitude # of iterations should be concerning - 50,000? 100,000? 500,000?

Unfortunately, the models are not especially complex to begin with - our lagged variables are only lagged once, we have no random effects, we are not estimating any latent variables, and our continuous measures are reasonably normal - it's just that our categorical outcomes are highly zero-inflated, so no matter what we do with the models, the data are sparse. I just want to make sure the high # of iterations is primarily a problem in terms of computation time/resources, rather than a substantive problem with the model results.
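For context, the within-level part of these models is essentially of this form (variable names are placeholders, and the categorical specification and the rest of the input are omitted):

VARIABLE:
  CLUSTER = id;
  LAGGED = y(1);
  WITHIN = x;
MODEL:
  %WITHIN%
  y ON y&1;   ! fixed first-order lagged effect (no random slopes)
  y ON x;
  %BETWEEN%
  y;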
 Bengt O. Muthen posted on Friday, July 13, 2018 - 5:43 pm
There are several considerations. One is for how many iterations the low, stable PSR has been observed after potentially bouncing up and down in earlier iterations. Estimation difficulties may arise if your auto-regression coefficients are close to 1, for instance due to a trend that is not modeled. If you request Factors=All in the Plot command and FSCOMPARISON in the Output command, you get more insight into the values of the auto-regression coefficients. If you like, you can send your output (and data) to Support along with your license number and we can take a closer look at it. Otherwise, it is hard to give specific guidance.
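In the input, those requests would look roughly like this (only the relevant commands are shown):

PLOT:
  TYPE = PLOT3;
  FACTORS = ALL;
OUTPUT:
  FSCOMPARISON;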