Similar to the Mplus user above, I am trying to use TYPE = IMPUTATION (to read in 20 plausible values that I already have for my dependent variable). But I also want to use ESTIMATOR = BAYES, and I am getting the message indicated by the user above.
Given that I am using multiple imputation for my dependent rather than my independent variable, is there anything I can do in order to use both TYPE = IMPUTATION and ESTIMATOR = BAYES?
A Bayes estimator is specified to impute the missing values. Now I am wondering whether that is the algorithm described in your paper "Bayesian Analysis Using Mplus"? According to that paper, it is a Gibbs sampler with conditional univariate distributions. Is H0 imputation conceptually equivalent to Fully Conditional Specification, with the difference that with H0 I have to specify the model directly?
Asparouhov, T. & Muthén, B. (2010). Multiple imputation with Mplus. Technical Report. Version 2.
Fred posted on Thursday, September 28, 2017 - 12:09 am
Thank you for your answer.
In your paper, you write: "[...] that can be estimated in Mplus with the Bayesian estimator, which we call H0 imputation." (p. 2). Unfortunately, there is no indication of what exactly the Bayes estimator is and how the imputations are generated. My question therefore remains whether the Bayes estimator used for imputation is the same one used for estimation, as described in your "Bayesian Analysis Using Mplus" paper.
In example 11.7 you can remove the DATA IMPUTATION: and SAVEDATA: sections and run the example. You will notice that the estimated model is exactly the same (with or without those commands). That is, we do one type of Bayesian estimation; adding DATA IMPUTATION: does not change anything about the model estimation.
The H0 imputation works like this (specified as in example 11.7, with ESTIMATOR = BAYES and the DATA IMPUTATION command): we first estimate the model until convergence. After the model has converged, we continue chain 1 of the MCMC for 100*NDATASETS more iterations. All missing values are imputed during the MCMC, and one copy of the imputed data is stored every 100 iterations.
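The save schedule described above can be sketched in Python. This is only an illustration of *when* imputed data sets are stored along the chain, not Mplus's actual implementation; the function name, the `burn_in` parameter, and the default thinning of 100 are assumptions taken from the description above.

```python
def imputation_save_iterations(burn_in: int, ndatasets: int, thin: int = 100) -> list[int]:
    """Return the chain-1 MCMC iteration numbers at which an imputed data
    set would be saved under the H0 imputation schedule: after convergence
    at `burn_in`, the chain runs thin * ndatasets more iterations, and one
    copy of the imputed data is stored every `thin` iterations."""
    return [burn_in + thin * k for k in range(1, ndatasets + 1)]

# Example: convergence after 500 iterations, 20 imputed data sets requested.
saves = imputation_save_iterations(burn_in=500, ndatasets=20)
print(saves[0], saves[-1], len(saves))  # prints: 600 2500 20
```

Because consecutive draws are 100 iterations apart, the stored data sets are approximately independent draws from the posterior predictive distribution of the missing values.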
Yes, as with FIML, MAR is assumed and a full-information approach is used. With noninformative priors and increasing sample size, Bayes and FIML results become more and more similar. One is as good as the other. See also the J. Schafer missing data book.