BAYES estimator
 Paul Dudgeon posted on Tuesday, August 24, 2010 - 3:20 pm
Dear Linda and Bengt,

Congratulations on the new version 6.

I have a model involving 16 observed variables (3 latents & 4 categorical measures) and 75 freely estimated parameters.

I am playing around with the BAYES estimator to try to get a better understanding of this alternative estimation method.

The posterior predictive p-value for this model is < .001 and the 95% credible interval for the difference in chi-square values is (37.790, 149.462), each of which indicates that the hypothesized model does not fit the data well.

I was wondering whether it is meaningful under the Bayesian approach to consider something like a standardized root mean square residual as a supplementary measure of global fit.

Thanks and best wishes,

Paul Dudgeon
 Linda K. Muthen posted on Tuesday, August 24, 2010 - 4:48 pm
We don't do that and have not seen it done in the Bayesian literature. It would be cumbersome with Bayes.
 Paul Dudgeon posted on Tuesday, August 24, 2010 - 8:34 pm
Thanks, Linda, for the reply and advice.

Paul Dudgeon
 Keivn Linares posted on Wednesday, October 20, 2010 - 3:38 pm
In "Bayesian Analysis In Mplus:
A Brief Introduction" version 3, for a CFA with BAYES estimator is is still appropriate to use ML for categorical DVs? For instance, should it the syntax still look like the following:
ANALYSIS:
ESTIMATOR = BAYES;
PROCESS = 2;
FBITER = 20000;
STVAL = ML;

Thank you Drs. Muthen
 Linda K. Muthen posted on Thursday, October 21, 2010 - 2:33 pm
You can use STVALUES = ML for both categorical and continuous latent variables. This option is not required.
 Charles Green posted on Friday, May 27, 2011 - 6:33 pm
Bengt,

I just attended your workshop at UConn. Very informative. The Bayesian estimation works nicely and is quite fast. One question: Is there some way to save the chains of values for the posterior distributions of the parameters? I would like to be able to use them to make some statements regarding the probability that an effect of a specified magnitude, or greater, exists.
Thanks,
Chuck Green
 Bengt O. Muthen posted on Saturday, May 28, 2011 - 5:47 am
Let me check to see what we have in the way of hidden options on that.
 Michelle Maier posted on Thursday, March 29, 2012 - 2:17 pm
I am running a conditional latent growth model using Bayes as the estimator. I have asked for tech4 to get the mean intercept and slope. Is there a way to request or calculate a credibility interval for the mean intercept and slope? Thanks!
 Linda K. Muthen posted on Thursday, March 29, 2012 - 4:59 pm
You need to define the mean of the intercept and growth factors in MODEL CONSTRAINT using the NEW option. Then you will get credibility intervals for them.
 Michelle Maier posted on Friday, March 30, 2012 - 10:44 am
Thank you - that is very helpful. One follow-up question: Shouldn't the estimate I get from MODEL CONSTRAINT be the same as the one from TECH4? Mine is not. I have both continuous and dichotomous predictors, and I am wondering if there is something wrong with how I am asking for the intercept estimate...

i s | PPVT1@0 PPVT2@.4 PPVT3@1;
i on age_mo (c1);
i on MomEdYrs (c2);
i on Boy (c3);
i on fEHome (c4);

[i] (m1);
[age_mo] (m3);
[MomEdYrs] (m4);
[Boy$1] (m7);
[fEHome$1] (m8);

MODEL CONSTRAINT:
new (Int);
Int = m1 + (c1*m3) + (c2*m4) + (c3*m7) + (c4*m8);
 Linda K. Muthen posted on Saturday, March 31, 2012 - 9:57 am
I think the problem is that you are putting the binary covariates on the CATEGORICAL list and referring to their thresholds. The CATEGORICAL list is for dependent variables. You should remove them from the CATEGORICAL list and refer to their means not their thresholds.
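A sketch of what the corrected specification might look like, reusing the labels above and assuming Boy and fEHome are removed from the CATEGORICAL list so that their means, not thresholds, are referenced:

i s | PPVT1@0 PPVT2@.4 PPVT3@1;
i on age_mo (c1);
i on MomEdYrs (c2);
i on Boy (c3);
i on fEHome (c4);

[i] (m1);
[age_mo] (m3);
[MomEdYrs] (m4);
[Boy] (m7);     ! mean of the binary covariate, not a threshold
[fEHome] (m8);  ! mean of the binary covariate, not a threshold

MODEL CONSTRAINT:
new (Int);
Int = m1 + (c1*m3) + (c2*m4) + (c3*m7) + (c4*m8);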
 Justin D. Smith, Ph.D. posted on Thursday, April 05, 2012 - 4:17 pm
I am running a small-sample (N=79) SEM model using the Bayes estimator. What kinds of fit statistics are customary in the literature for Bayes models? I noticed that the AIC and BIC are not generated with the ESTIMATOR = BAYES command. I should note that I have no missing data.
Thanks!
 Linda K. Muthen posted on Friday, April 06, 2012 - 9:42 am
See the following paper which is available on the website:

Muthén, B. (2010). Bayesian analysis in Mplus: A brief introduction. Technical Report. Version 3.
 Jan Zirk posted on Wednesday, April 25, 2012 - 8:36 am
Dear Linda or Bengt,
I have three questions concerning the Bayes estimator.
1) The rule of thumb says that under 'traditional' maximum-likelihood estimation of SEM models we need at least about 10-20 cases per variable for appropriate stability of a model. Since Bayesian estimation does not rely on large-sample normality theory, does this mean that the Bayesian approach is more suitable when a model contains more variables than this ML rule of thumb allows for? And do you know of any rule of thumb concerning sample size for Bayesian estimation?
2) As the Bayesian estimator does not require normal distributions, would it be appropriate not to declare a binary dependent (or mediating) variable with the CATEGORICAL ARE command in the Bayesian input, in order to obtain the DIC index?
3) Is there an equivalent of the chi-square difference test for testing nested models under Bayesian estimation?
 Bengt O. Muthen posted on Wednesday, April 25, 2012 - 9:40 am
1) I don't know of a rule of thumb - even a rule for ML is debatable and highly dependent on the context. But in general, Bayes could work better than ML for smaller samples.

2) No, you still have to use the proper model, in this case logistic/probit.

3) There are "Bayes factors".
 Jan Zirk posted on Wednesday, April 25, 2012 - 10:13 am
Dear Bengt,
Thanks very much for your quick response. As to 3), is there any literature with an example showing how to use Bayes factors for nested model comparison in Mplus? I did not find it in the User's Guide.
 Linda K. Muthen posted on Thursday, April 26, 2012 - 1:51 pm
We do not currently have nested model testing in Bayes.
 Jan Zirk posted on Thursday, April 26, 2012 - 2:14 pm
I see. Thanks very much.
 Jan Zirk posted on Wednesday, May 02, 2012 - 8:16 am
Dear Linda or Bengt,
I would like to ask about the parameterization under Bayes estimator.

According to "Bayesian Analysis of Latent Variable Models using Mplus"
(http://www.statmodel.com/download/BayesAdvantages18.pdf)
parameterization PX outperforms V and L.
Is PX the default for BAYES? Is it possible to change the parameterization under Bayes (as it is for ML - Delta vs. Theta)?

Best wishes,
 Linda K. Muthen posted on Wednesday, May 02, 2012 - 11:58 am
Yes, PX is the default for Bayes.

It is not possible to change the parameterization under Bayes. It is probit with Theta. In ML, you can choose logistic or probit. Delta and Theta are not ML choices.
 Jan Zirk posted on Wednesday, May 02, 2012 - 12:05 pm
Thank you, Linda, for the prompt reply.
 Hsien-Yuan Hsu posted on Wednesday, September 18, 2013 - 12:20 am
Hi, Dr. Muthen,

In webpage
http://www.statmodel.com/examples/penn.shtml#baysem

I tried to run "run15.inp" for the paper
Muthén, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335.

I have one question regarding the understanding of the syntax:
----------------------------------
define:
standardize y1-y15;
MODEL:
y1 on y2@0; ! to get stdy
----------------------------------
Why do you standardize y1-y15?
Why can you get stdy by adding "y1 on y2@0"?

Thanks, Mark
 Bengt O. Muthen posted on Wednesday, September 18, 2013 - 11:25 am
I standardize because I want the priors to work in a standardized metric. A certain prior variance has different implications for observed variables with different variances. I am allowed to standardize here because the model is "scale-free".

The y1 on y2@0 statement is just a trick - ignore it.
 Hsien-Yuan Hsu posted on Wednesday, September 25, 2013 - 8:26 pm
Thanks for your reply, Dr. Muthen.

So I don't need to put the statement "y1 on y2@0" in my syntax, right?



Best,
Mark
 Bengt O. Muthen posted on Thursday, September 26, 2013 - 8:22 am
No, but it also doesn't hurt.
 Jan Zirk posted on Thursday, October 17, 2013 - 4:04 pm
I have a problem with XY-standardized coefficients in my objective Bayesian SEM. They are larger than 1, so the reviewers may be critical. What could be a good solution to get rid of this problem? How would you set the priors or model constraints to help with these? Only 2 standardized coefficients are larger than 1; the rest have usual values.
 Bengt O. Muthen posted on Thursday, October 17, 2013 - 8:39 pm
We have a FAQ on "Standardized coefficient greater than 1". This usually has to do with highly correlated predictors; so that's the real issue. I'm not sure you want to use priors to get rid of it, but perhaps instead re-formulate the model.
 Jan Zirk posted on Friday, October 18, 2013 - 1:47 am
Thanks very much.
 Jan Zirk posted on Friday, October 18, 2013 - 10:58 am
When including the variances and means of predictors under ML, it is possible to avoid listwise deletion thanks to FIML. What is the 'Bayesian FIML'? Including variances and means in a Bayesian regression also avoids listwise deletion; may I ask you for a reference for this FIML-like solution under the Bayesian approach?
 Bengt O. Muthen posted on Friday, October 18, 2013 - 8:42 pm
Making the covariates part of the model is possible also in Bayes. I don't know that there is a reference for this; it goes back to basic principles that any variable, the parameters of which are part of the model, is handled by MAR-type missing data theory.
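A minimal sketch of what this looks like in the MODEL command, with a hypothetical outcome y and covariates x1 and x2; mentioning the covariates' means and variances makes their parameters part of the model, so cases with missing covariate values are retained under MAR:

MODEL:
y on x1 x2;
[x1 x2];   ! covariate means brought into the model
x1 x2;     ! covariate variances brought into the model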
 Jan Zirk posted on Friday, October 18, 2013 - 8:51 pm
Ok, thanks very much. Will refer to MAR.
 Hsien-Yuan Hsu posted on Sunday, November 17, 2013 - 10:01 pm
Hi, Dr. Muthen,

After reading your paper "Muthén, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335," I tried to modify "run15.inp" for my data set.

The indicators for the CFA are categorical variables. Therefore, I added a "CATEGORICAL =" statement to my syntax.

I wonder whether there are any references or examples I can read/follow regarding the specification of priors for cross-loadings and residual correlations.

Best,
Mark
 Bengt O. Muthen posted on Monday, November 18, 2013 - 8:26 am
I recommend reading the papers at

http://www.statmodel.com/BSEM.shtml
 Hsien-Yuan Hsu posted on Tuesday, November 19, 2013 - 1:25 am
Dear Dr. Muthen,

Thanks for your prompt reply.

In order to run a Bayesian CFA with categorical indicators (6-point Likert scale), I read your paper "Bayesian Analysis Using Mplus: Technical Implementation" and have a few questions.

Q1. On p. 10, you provide a matrix (11) that is partly a correlation matrix and partly a covariance matrix. What is the covariance part about? (the covariances between which parameters?)

Q2. Do I have to give priors for the thresholds of each categorical variable?

Q3. Do you have any suggestions regarding the priors for thresholds? A normal distribution with mean zero and variance 6?

Q4. Can I give a prior of N(0, .01) for the cross-loadings?

Q5. Do I need to standardize the categorical indicators? (My thought is NO because my priors are in a standardized metric.)

I appreciate all your help!

Best,
Mark
 Bengt O. Muthen posted on Tuesday, November 19, 2013 - 8:39 am
q1. It is a matrix for categorical variables (diag = 1) and continuous variables (diag not 1).

q2. No.

q3. Use the default (see the UG).

q4. Yes.

q5. No.
 'Alim Beveridge posted on Monday, January 27, 2014 - 1:31 am
Dear Bengt and Linda,
Where can I find the data and input for Figure 3 and Tables 16-18 in "Muthen, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335"? This is the Bayesian SEM reanalysis of Kaplan's (2009) model using the National Educational Longitudinal Study of 1988.
 DavidBoyda posted on Wednesday, March 19, 2014 - 3:29 am
Dear Dr. Muthen,

I have a question regarding the Bayes estimator. If I were to compare a model using the Bayes estimator to the same model using MLR, would I expect wildly differing results between these two estimators, in the same way perhaps that ML might give slightly different results from MLR?
 Linda K. Muthen posted on Wednesday, March 19, 2014 - 10:38 am
The results for ML versus Bayes with non-informative priors should be very close.
 db40 posted on Monday, March 31, 2014 - 5:08 am
Dear Dr Muthen,

I have encountered a scenario where I have significant path estimates from the IV to the Mediator and from the Mediator to the DV. However, the indirect effect is not significant.

I have searched and found information that this is not uncommon (ref: http://goo.gl/PRkFmy ).

I have a question regarding credibility intervals. I see in the manual there are two options for Bayes, EQTAIL or HPD. Which of these should I use if the distribution of my ab product is asymmetric?
 Linda K. Muthen posted on Monday, March 31, 2014 - 8:03 am
Either one is appropriate. I would use the default of EQTAIL.
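For reference, the type of Bayesian credibility interval is requested with the CINTERVAL option of the OUTPUT command; a minimal sketch:

OUTPUT:
CINTERVAL(EQTAIL);   ! equal-tail credibility intervals (the Bayes default)
! use CINTERVAL(HPD); instead for highest posterior density intervals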
 db40 posted on Wednesday, May 07, 2014 - 7:00 am
Dear Dr. Muthen,

I have a question regarding the Bayes estimator: it outputs a one-tailed p-value yet also outputs a 95% CI. I'm under the impression that the CI covers both tails, so I don't really understand the one-tailed approach.

Apologies if this question makes no sense.
 Linda K. Muthen posted on Wednesday, May 07, 2014 - 11:01 am
You should use the confidence interval to assess significance. If you have a positive estimate, the p-value is the probability of the estimate being negative.
 Dr George Chryssochoidis posted on Thursday, August 14, 2014 - 6:51 am
Hello Linda,

I want to restrict bayesian priors to positive values.

The corresponding command to use the half-normal priors in WinBUGS for instance is: ~dnorm(0,0.0001)I(0,).


I cannot figure out how to do this in Mplus. Is there a way?
 Bengt O. Muthen posted on Monday, August 18, 2014 - 10:36 am
You can use inequalities with MODEL CONSTRAINT, but a better way is to use a prior like N(2,1), or some version of that, that is weakly informative and will essentially keep the parameter positive.
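A minimal sketch of the weakly informative prior approach, with a hypothetical outcome y, predictor x, and parameter label b1:

ANALYSIS:
ESTIMATOR = BAYES;

MODEL:
y on x (b1);

MODEL PRIORS:
b1 ~ N(2,1);   ! weakly informative prior that keeps b1 essentially positive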
 anonymous Z posted on Friday, July 01, 2016 - 7:09 am
Dear Drs. Muthen,

How is missing data dealt with under Bayes estimation? Is FIML used?

Thanks so much,
 anonymous Z posted on Friday, July 01, 2016 - 8:11 am
Hi Drs. Muthen,

To add to my previous post: according to a conversation we had a while ago, can I say

"Bayes is a full-information estimator, and it produces results similar to maximum likelihood estimation with missing data."

Is there a citation for this?

Thanks,
 Bengt O. Muthen posted on Friday, July 01, 2016 - 11:53 am
You can say what you have in the quote. I am not sure about a good source - check the Schafer book we refer to in our UG. Perhaps you can also refer to our Bayes implementation papers:

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4.

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3.
 anonymous Z posted on Friday, July 22, 2016 - 7:23 am
Dear Drs. Muthen,

Can I do multiple group comparison with Bayes estimation?

Thanks so much,
 Linda K. Muthen posted on Friday, July 22, 2016 - 10:03 am
Yes. You must do this via TYPE=MIXTURE with the KNOWNCLASS and CLASSES options.
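A minimal sketch of that setup, with hypothetical variables y1-y6 and a grouping variable named group coded 1 and 2:

VARIABLE:
NAMES = y1-y6 group;
USEVARIABLES = y1-y6;
CLASSES = cg(2);
KNOWNCLASS = cg (group = 1 group = 2);

ANALYSIS:
TYPE = MIXTURE;
ESTIMATOR = BAYES;

MODEL:
%OVERALL%
f by y1-y6;   ! hypothetical measurement model shared across groups
! add %cg#1% and %cg#2% sections for group-specific parameters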
 Tyler Moore posted on Thursday, August 25, 2016 - 1:54 pm
Hi Bengt and Linda, I'm not sure whether this is the appropriate place for this question, but are there any papers out there presenting strong arguments in favor of the BAYES estimator over others? I'm anticipating some reviewer questions re: why I decided to use BAYES rather than more common estimators that output conventional fit indices (CFI, RMSEA, etc.). The answer is that BAYES was the only estimator that didn't result in a non-positive-definite residual covariance matrix, but I'd like to have more support for that decision besides "the others didn't work." Any suggestions or references you'd point me to? Thanks!
 Bengt O. Muthen posted on Thursday, August 25, 2016 - 4:10 pm
You might find this paper on our website useful:

Muthén, B. (2010). Bayesian analysis in Mplus: A brief introduction. Technical Report. Version 3.

Bayes priors are such that negative residual variances are not possible, so I am not sure that avoiding the non-pos-def residual covariance matrix is a strong argument in favor of Bayes. Perhaps the model instead needs to be modified in some way.
 Cristina Ramirez posted on Thursday, October 12, 2017 - 8:18 pm
Hello, I have been reading the "Prior-Posterior Predictive P-values" paper and I have a question.

In the paper, when the PPPP does not reject, that means that the minor parameters are approximately 0. I take it that the PPPP can also be used with priors whose means are not 0? For instance, if one uses a prior of ~N(-0.3, 0.01) for a slope, the likelihood suggests a zero-mean slope, and the posterior lies at some point in between, would a PPPP value of 0.4 suggest that the obtained slope is consistent with the prior?

Thank you.
 Tihomir Asparouhov posted on Friday, October 13, 2017 - 2:15 pm
Yes, it works with non-zero prior means as well.
 Cristina Ramirez posted on Friday, October 13, 2017 - 10:00 pm
Ok, thank you very much.
 Fred  posted on Wednesday, October 18, 2017 - 11:04 pm
Dear Drs. Muthen,

On page 16 of your paper "Bayesian Analysis Using Mplus: Technical Implementation" (Version 3), there is a statement regarding missing values on categorical variables. I am a little confused by this and have two questions:

1. Are the missing values in a categorical variable (say X) handled directly when the continuous variable X* underlying X is generated? If so, how does this work exactly?
2. Or are the missing values on X handled through X*, where first the missing values are handled with conditional normal distributions (as on page 17) and then the threshold parameters are used to account for the categorical nature of the variable?

Thank you so much for your help
Fred
 Tihomir Asparouhov posted on Friday, October 20, 2017 - 9:05 am
X* is generated at each MCMC iteration from the conditional distribution of X* on all observed data and parameters. X* and the thresholds uniquely determine X. On page 16 we simply make a note on how the conditional distribution of X* changes in the case when X is missing. The missing data handling is likelihood based and guarantees consistent estimation as long as the missing data is missing at random (MAR).
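In the usual latent response variable formulation, for an ordered categorical X with categories c = 0, 1, ..., C and thresholds tau_1 < ... < tau_C, X = c exactly when tau_c < X* <= tau_(c+1), with tau_0 = -infinity and tau_(C+1) = +infinity; this is the sense in which X* and the thresholds uniquely determine X.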
 Alaine Garmendia posted on Monday, April 23, 2018 - 4:15 am
Dear Drs. Muthen and Asparouhov,

I am trying to run a bivariate longitudinal cross-lagged panel model with three waves of data. I have tried the MLR estimator and the model does not work. The message is the following:

THE STANDARD ERRORS FOR H1 ESTIMATED SAMPLE STATISTICS COULD NOT BE COMPUTED. THIS MAY BE DUE TO LOW COVARIANCE COVERAGE.
THE ROBUST CHI-SQUARE COULD NOT BE COMPUTED.

THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE
TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE
FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING
VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE
CONDITION NUMBER IS -0.521D-16. PROBLEM INVOLVING THE FOLLOWING PARAMETER:
THIS IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE SAMPLE SIZE.


I tried the same model with Bayes and it works. Having that error with MLR, should I trust the results from the Bayes estimator? The PPPs that I get are higher than 0.05.

Thank you very much
 Bengt O. Muthen posted on Monday, April 23, 2018 - 4:42 pm
You should investigate the MLR problem. If you can't figure it out, send to Support along with your license number.
 Alaine Garmendia posted on Monday, April 23, 2018 - 11:20 pm
Thank you very much for your quick response Dr. Muthen!

I have tried with several variables and realized that the problem only arises with the variables that have the most missing data in the dataset. Does that make sense? I have read that the Bayes estimator is better for missing data. Otherwise I am not able to figure out the problem.

Thank you again
 Bengt O. Muthen posted on Tuesday, April 24, 2018 - 2:39 pm
I don't think missing data has to do with this issue. ML is as good as Bayes in dealing with missing data.

If you can't figure it out, send to Support along with your license number.
 Katie Gelman posted on Monday, July 16, 2018 - 8:22 pm
Dear Drs. Muthen,
Upon a reviewer's suggestion, I am re-running an MLR mediation model using the Bayes estimator to address concerns about small sample size. When I do this, I get similar patterns of significance for the parameters of interest; however, the posterior predictive p-value indicates poor model fit (.000/.004 with noninformative/informative priors). The PSR in TECH8 is close to 1. Do you have suggestions for why this might be? My data are nested within schools (9), which I had dummy coded in my MLR model since there are fewer than ~20 clusters. I kept them dummy coded in the Bayes model, as TYPE=COMPLEX does not work with Bayes, but I wonder if this is causing some of the model fit issue?
Many thanks.
 Bengt O. Muthen posted on Wednesday, July 18, 2018 - 6:53 am
Check what your left-out arrows (paths) are in your model. Check whether your MLR run has non-zero degrees of freedom. Perhaps df = 0, as is often the case with mediation models, so that no overall test of fit is obtained.
 Makoto Kyougoku posted on Monday, August 13, 2018 - 11:04 pm
I would like to know how to get the log-likelihood in order to calculate the widely applicable information criterion (WAIC) for Mplus models.

https://arxiv.org/abs/1507.04544

Please tell me how to extract the log-likelihood from Mplus.
 Tihomir Asparouhov posted on Tuesday, August 14, 2018 - 1:44 pm
It is not available, but you can get the parameter estimates at each iteration if you want to compute it outside of Mplus (the BPARAMETERS option). These criteria are asymptotically equivalent to DIC.
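A minimal sketch of saving the posterior parameter draws, using the BPARAMETERS option of the SAVEDATA command; the file name is hypothetical:

ANALYSIS:
ESTIMATOR = BAYES;

SAVEDATA:
BPARAMETERS = draws.dat;   ! saves the posterior draws for all parameters across MCMC iterations

The saved file can then be read into external software to compute quantities such as WAIC from the draws.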
 Makoto Kyougoku posted on Tuesday, August 14, 2018 - 4:32 pm
Thank you.
I understand that there is no way to extract the log-likelihood from Mplus.

How can I calculate the value that is asymptotically equivalent to DIC from the iteration output?
 Tihomir Asparouhov posted on Wednesday, August 15, 2018 - 8:53 am
You can get the log-likelihood value from the plots: in the Bayesian posterior predictive checking scatter plots, use the observed chi-square, then divide it by 2 and subtract it from the H1 model log-likelihood. I would recommend that you use DIC, however, and not do any of these computations.
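In other words, at a given draw, logL(H0) = logL(H1) - (observed chi-square)/2, which follows from the likelihood-ratio relation chi-square = 2[logL(H1) - logL(H0)].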
 Friedrich Platz posted on Thursday, March 28, 2019 - 2:35 am
Dear Drs. Muthen,

how is it possible to get a posterior predictive check for variables defined being new in the model command (like the test score as sum of dichotomous items)?

Best wishes
Friedrich
 Bengt O. Muthen posted on Thursday, March 28, 2019 - 5:44 pm
What do you mean by

"variables defined being new in the model command "

Do you mean new parameters in the Model Constraint command?
 Friedrich Platz posted on Thursday, March 28, 2019 - 10:51 pm
Yes, I mean new parameters in the Model Command.
 Bengt O. Muthen posted on Friday, March 29, 2019 - 3:57 pm
I don't know what you mean here. The posterior predictive checking that Mplus offers refers to testing the overall fit of the model.

A posterior distribution is provided for each parameter, including parameters defined in Model Constraint.
 Daniel Lee posted on Friday, October 11, 2019 - 10:20 am
Hi Dr. Muthen,

I am using Mplus to conduct Bayesian multiple imputation. After conducting the imputations, I checked the convergence plot and the autocorrelations, and all looked good. However, the posterior predictive checking p-value was < .01.

When doing Bayesian multiple imputation in Mplus, does this mean there was a problem with the imputations and that they shouldn't be used? How would one remedy this situation (e.g., add more variables to the imputation)?

Thank you!
Dan
 Tihomir Asparouhov posted on Friday, October 11, 2019 - 2:39 pm
The PPP value is like a chi-square. What this means is that the model that you estimated (and used for the data imputation) doesn't fit the data very well. That also means that the imputed values you obtained are not good enough. Try to modify the model so that you get a good PPP value, or use TYPE=BASIC for the data imputation (and no model); see User's Guide example 11.5. With TYPE=BASIC we use the unrestricted variance-covariance model for the data imputation, so this issue is avoided altogether.
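A minimal sketch of unrestricted-model imputation along the lines of User's Guide example 11.5; the file name, the variable names y1-y10 and x1-x5, and the missing-value flag are hypothetical:

DATA:
FILE = data.dat;

VARIABLE:
NAMES = y1-y10 x1-x5;
MISSING = ALL(999);

DATA IMPUTATION:
IMPUTE = y1-y10 x1-x5;
NDATASETS = 20;
SAVE = imp*.dat;

ANALYSIS:
TYPE = BASIC;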
 Daniel Lee posted on Friday, October 11, 2019 - 8:03 pm
Hi Tihomir,

Thank you so much for the response.
I have a follow-up question.

If I use type=basic (the unrestricted variance/covariance model for the data imputation), are there any quality checks I can implement to show that my imputation is acceptable? For example, in the bayesian multiple imputation approach, I was able to produce plots of autocorrelation, convergence plot, etc. With the proposed approach (using type=basic), I wonder if there are quality checks I can do to show that the imputation is acceptable.

Again, I appreciate your help!
 Tihomir Asparouhov posted on Monday, October 14, 2019 - 4:47 pm
To do that you will have to replace
ANALYSIS: TYPE = BASIC;
with
ANALYSIS: ESTIMATOR = BAYES;
MODEL: Y1-Y10 WITH Y1-Y10;
       Y1-Y10 ON X1-X5;
assuming you have 10 dependent variables and 5 covariates. Essentially you would write the H1 model manually.

Take a look at the User's Guide, page 576 (as well as the entire section on data imputation). The diagram summarizes the different ways to do imputations in Mplus. You would have to move back to the box with example 11.7.