Bayes multilevel analysis
 Mathias Höglund posted on Thursday, August 05, 2010 - 2:20 am
Hello Linda & Bengt,

Inspired by the new features in Mplus version 6, I am trying to estimate a multilevel latent covariate model using Bayes estimation in a sample of 192 individuals in 32 teams. I have a couple of questions regarding this analysis.

a) Is there a way to obtain any fit statistics for this model using Bayes?

b) In a multilevel CFA with three factor indicators I get abnormally large variance estimates for the between level factor under Bayes but not under MLR when centering (grandmean) the factor indicators. The input I use is:

Centering = GRANDMEAN(Factor1 Factor2 Factor3);


ANALYSIS: Type = twolevel;
Estimator = bayes;
Process = 2;
STVAL = ML;
MODEL:

%within%
HRMw by Factor1 Factor2 Factor3;

%between%
HRMb by Factor1 Factor2 Factor3;
Factor1-Factor3@0;
 Mathias Höglund posted on Thursday, August 05, 2010 - 6:29 am
Sorry,

Is there anything to be done about this error message:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
THE POSTERIOR COVARIANCE MATRIX FOR THE PARAMETERS
IS NOT POSITIVE DEFINITE, AS IT SHOULD BE.

THE PROBLEM OCCURRED IN CHAIN 1.

I get the model to converge under MLR (with biased estimates) but not under Bayes. Is there a way around it?

My input is:

ANALYSIS: Type = twolevel;
Estimator = bayes;
Process = 2;
BITER = 100000;

MODEL:

%within%
HRMw by Factor1 Factor2 Factor3;
RAw by RA1 RA2 RA5;
ACw by Ac1 Ac2 Ac3rev;
ac1@0;
ACw on RAw HRMw;
RAw on HRMw;


%between%
HRMb by Factor1 Factor2 Factor3;
Factor1-Factor3@0;
RAb by RA1 RA2 RA5;
RA1-RA5@0;
ACb by Ac1 Ac2 Ac3rev;
Ac1-Ac3rev@0;

ACb on RAb HRMb;
RAb on HRMb;
 Linda K. Muthen posted on Thursday, August 05, 2010 - 10:41 am
With maximum likelihood estimation, fixing small variances to zero helps in model estimation. With Bayesian analysis it does not. Remove all of the statements where you have variances fixed at zero.
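For example, the between-level part of the first input above would then read (with the residual variances of the indicators simply left at their free defaults):

%between%
HRMb by Factor1 Factor2 Factor3;
! the line Factor1-Factor3@0; is removed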
 Mathias Höglund posted on Thursday, August 05, 2010 - 11:14 am
Thanks Linda!

I tried removing all the variances fixed at 0. But how can I make sure that I use the same factor at both levels if I remove the variances fixed at 0 from the between level?

Also, even after this I get the error message. From the results that are given, it seems that the estimates for the indicators of the HRMb factor are not correct:

RESULTS:
Between Level

HRMB BY
FACTOR1 1.000
FACTOR2 74828.477
FACTOR3 *********

RAB BY
RA1 1.000
RA2 72.132
RA5 -1361.438

ACB BY
AC1 1.000
AC2 1.189
AC3REV 0.561

ACB ON
RAB -2999.736
HRMB *********

RAB ON
HRMB -249.932

Intercepts
RA1 -0.056
RA2 -0.091
RA5 -0.283
AC1 0.098
AC2 -0.036
AC3REV -0.016
FACTOR1 -0.112
FACTOR2 -0.024
FACTOR3 -0.026

Variances
HRMB 0.000

Also, the residual variance of RAB is 0.
 Linda K. Muthen posted on Thursday, August 05, 2010 - 12:23 pm
Please send the full output and your license number to support@statmodel.com.
 Mathias Höglund posted on Monday, September 06, 2010 - 2:30 pm
Hello again,

I have two short questions with regard to a two-level analysis using ESTIMATOR = BAYES.

1. Is there a way to get fit statistics for a twolevel bayes analysis?
2. I'm freely estimating the first factor loading of a factor and fixing the variance of the factor at 1. Is it ok to do this for only one factor in the model or do I have to use the same parametrization for all factors?
 Bengt O. Muthen posted on Monday, September 06, 2010 - 3:52 pm
1. In principle, yes. In current Mplus, no. Compare neighboring models instead.

2. It should be ok in principle, although I don't offhand recall whether it violates the current Bayes restrictions on which types of Psi matrices can be handled - try it.
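For reference, a sketch of that parametrization for one factor, using names from the inputs above (the * frees the first loading, which Mplus otherwise fixes at one by default):

HRMw by Factor1* Factor2 Factor3;
HRMw@1; ! factor variance fixed at 1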
 ywang posted on Thursday, September 23, 2010 - 1:49 pm
Dear Drs. Muthen:
A parallel process growth model with the Bayesian method and categorical indicator variables (fixed time scores) does not converge, even when I increase the number of integration points to 5000. Here are the error message and input. Any advice? Thanks in advance.

THE CONVERGENCE CRITERION IS NOT SATISFIED.
INCREASE THE MAXIMUM NUMBER OF ITERATIONS OR INCREASE
THE CONVERGENCE CRITERION.


categorical are bmi1 bmi2 bmi3 stnbdi_b stnbdi_6 stnbdi_8;

Analysis:
integration = 5000;
ESTIMATOR=BAYES;
process = 4;
Model:
i1 s1 | stnbdi_b@0 stnbdi_6@0.8 stnbdi_8@2;
i2 s2| bmi1@0 bmi2@0.8 bmi3@2;
s2 on i1 ;
s1 on i2 ;
i1 with i2;
s1 s2 i1 i2 on male;
s2 on intervention;

output: tech1 tech8;
 Linda K. Muthen posted on Thursday, September 23, 2010 - 2:17 pm
The INTEGRATION option should not be used with ESTIMATOR=BAYES; I suspect that you are not actually getting BAYES but MLR.

I would suggest running each process separately as a first step. Then put the processes together. After success with this, you can add the regression among the growth factors and the covariates. If you continue to have problems, send the files and your license number to support@statmodel.com.
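For example, the first step for the BMI process alone would be (a sketch based on the input above, before adding the covariates):

Model:
i2 s2 | bmi1@0 bmi2@0.8 bmi3@2;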
 Ewan Carr posted on Thursday, December 16, 2010 - 8:11 am
I'm interested in a two-level analysis using the Bayes estimator. It was mentioned above that fit statistics are not currently available for such models in Mplus, and that we should "compare nested models" instead.

1. Is this still the case?

2. How would I compare nested models in this case (with a two level model, using the Bayes estimator)?

Many thanks,

Ewan
 Linda K. Muthen posted on Thursday, December 16, 2010 - 2:59 pm
1. Yes.
2. We aren't able to do that at this time.
 Ewan Carr posted on Wednesday, March 16, 2011 - 12:35 pm
I'm getting slightly confused about the "THIN" option for Bayesian estimation.

I'm running a two-level factor model, using estimator = BAYES. I want to run 50,000 or 100,000 iterations, and then thin the output by 50, giving 1000 or 2000 samples, respectively.

I started out using:

FBITER = 100000;
THIN = 50;

Thinking this would achieve the above. However, the model is taking a very (very) long time to run. It then occurred to me that these settings might be causing the model to run for 5 million (50 x 100,000) iterations, which would explain the slow progress.

If that is the case, then to achieve the above I should use:

FBITER = 2000;
THIN = 50;

Some clarification about what the FBITER and THIN settings achieve would be much appreciated.

Thanks in advance,

Ewan
 Tihomir Asparouhov posted on Wednesday, March 16, 2011 - 4:41 pm
That's correct, Ewan. FBITER refers to the actual number of iterations that are recorded, i.e., it does not include those that are thinned out. So

FBITER = 2000;
THIN = 50;

would result in 100,000 computed MCMC iterations, of which every 50th is recorded for estimation purposes.
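(In general, the total number of computed iterations is FBITER x THIN, with every THINth iteration retained.)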
 Ewan Carr posted on Thursday, March 17, 2011 - 5:03 am
Thanks Tihomir.

That would explain why things were taking so long.
 Unai Elorza posted on Friday, June 17, 2011 - 8:20 am
Dear Drs. Muthén,

I am running a multilevel (random intercept and random slope) analysis with ESTIMATOR = BAYES. I get a result that is confusing me a little bit:

The point estimate of the SLOPE ON W is 0.118 (SD 0.077), with a p-value of 0.040 and a 95% confidence interval of (-0.013, 0.27).

The p-value is lower than 0.05 (the significance level), yet the confidence interval does not exclude zero (lower limit -0.013, upper limit 0.27).

This looks like a contradiction to me. Could you help me interpret these results? Thank you very much in advance.

Unai
 Bengt O. Muthen posted on Friday, June 17, 2011 - 9:13 am
For a positive estimate as you have, the p-value is the proportion of the posterior that is less than zero, so it is not a contradiction. From a frequentist point of view you can think of that as the chance that the true value is of opposite sign.
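In symbols: for a positive estimate, p = P(parameter < 0 | data), and for a negative estimate, p = P(parameter > 0 | data) - in both cases the posterior probability that the true value has the opposite sign.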
 Unai Elorza posted on Sunday, June 19, 2011 - 11:34 pm
Thank you Dr. Muthén.
 Rob Dvorak posted on Saturday, August 20, 2011 - 6:06 am
First, let me apologize for my ignorance, but it seems a lot of us are running into this issue, so I thought I'd post on it. I'm trying to wrap my head around the use of a one-tailed test with ESTIMATOR = BAYES. I would like to use the Bayes estimator for the reasons you mention in your introduction-to-Bayes paper; however, my working knowledge of Bayes is weak. Can you recommend a reading for a scientist (not a statistician) that explains the logic of the one-tailed test in Bayes for those of us who are used to two-tailed tests because we're trained as frequentists? I'm sure I will need to justify the use of a one-tailed test in any papers I publish, so a reference (and rationale) for this would be great. Thanks.
 Bengt O. Muthen posted on Saturday, August 20, 2011 - 9:05 am
I quote from the paper on our web site

Muthén, B. (2010). Bayesian analysis in Mplus: A brief introduction. Technical Report. Version 3.

"The third column gives a one-tailed p-value based on the posterior distribution. For a positive estimate, the p-value is the proportion of the posterior distribution that is below zero. For a negative estimate, the p-value is the proportion of the posterior distribution that is above zero."

So if you get a positive effect estimate, this one-tailed p-value can be seen as the probability that the effect is negative, that is, that it's not the effect you had expected.

However, I would instead report the more common 95% Bayesian credibility interval (CI) that we show and note whether or not it covers zero. At a first glance, instead of looking for CIs covering zero, the one-tailed p-value is a quick way to scan the results for near-zero p-values - which imply that the CIs likely don't cover zero.
 Rob Dvorak posted on Sunday, August 21, 2011 - 4:00 am
Thanks for the explanation.
 Sarah Depaoli posted on Wednesday, January 25, 2012 - 3:25 pm
Is it currently possible to estimate a model with "estimator=BAYES" and "type = twolevel mixture"?
 Bengt O. Muthen posted on Wednesday, January 25, 2012 - 3:36 pm
No, not yet.
 Lindsay Bell posted on Friday, June 01, 2012 - 10:09 am
Hi -

I am doing a two-level Bayesian analysis using this syntax:

ANALYSIS:
TYPE=TWOLEVEL;
ESTIMATOR=BAYES;
POINT=MODE;

I have discovered that in my output, the "estimate" reported, which I had assumed was the mode of the posterior parameter distribution, is slightly different from the mode of the distribution that is shown when I view the posterior distribution graphs (i.e., the estimate in the output is .27, the mode in the graph is .31). The 95% CCI bounds, however, are identical in the output and the graphs. Can you help me understand this discrepancy and advise me on which point estimate to report?

Thank you,
Lindsay
 Bengt O. Muthen posted on Friday, June 01, 2012 - 11:15 am
Our regular output gives the multivariate mode, while the plot gives the univariate mode. The mode (and CI) in the plot is deduced from the values used in the plot. The multivariate mode should be reported.
 Lindsay Bell posted on Sunday, June 03, 2012 - 8:50 pm
Hi -

Thanks for your answer. I have come across another issue this weekend. When I run the analysis as a standard two level model with the default estimator, I get an error message about a non-positive definite matrix because the model has more parameters than there are clusters. However, when I run the analysis with the Bayesian estimator, I don't get that error message. Is that because it's ok to have more parameters than clusters in a Bayesian two-level analysis? Can I feel comfortable with those results or do I need to reduce the number of parameters?

Thank you,
Lindsay
 Tihomir Asparouhov posted on Monday, June 04, 2012 - 11:11 am
The missing error message should not be interpreted as an endorsement of one method over the other. ML is entirely based on asymptotic assumptions driven by the number of clusters converging to infinity. Bayes is not, but if the number of clusters is smaller than 10, the estimates will be sensitive to the priors.

The error message is based on a technical threshold point: we use the MLF matrix for standard errors, but that matrix is singular if and only if the number of parameters is more than the number of clusters. The model thus behaves somewhat like an unidentified model, and our ability to confirm model identification is limited. Bayes doesn't have this threshold, as it uses different methods for identification (with Bayes, though, the number of clusters should be bigger than the number of random effects).
 Lindsay Bell posted on Monday, June 04, 2012 - 11:17 am
Ok, thank you, that helps. I have 35 clusters so I'm not concerned about sensitivity to priors.

I have also noticed that the error message doesn't appear when I use multiply imputed data, even when using an ML estimator with more parameters than clusters. Is there a reason that more parameters than clusters would be ok with multiply imputed data?

Thanks,
Lindsay
 Tihomir Asparouhov posted on Tuesday, June 05, 2012 - 9:30 am
No Lindsay ... but I didn't say it was not ok. We have conducted simulation studies that show that ML works fine even when the number of clusters is smaller than the number of parameters. The error message says that it is not possible to confirm that the model is identified. If you already know the model is identified you should just ignore that warning.
 Lindsay Bell posted on Wednesday, June 06, 2012 - 6:39 pm
Thanks very much, Tihomir. I've been trying to figure out how to get quantiles of the posterior parameter distributions so that I can determine, for example, what percentage of two parameters' posterior distributions overlap, or where the cutoff is for 80% of the distribution. Can you tell me how I can get quantiles for the posterior distributions? I haven't been able to figure it out from the manual.

Lindsay
 Lindsay Bell posted on Thursday, June 07, 2012 - 6:58 am
So sorry for the multiple posts. Additionally, I am trying to determine whether the model fits better when all the data are analyzed together vs. separate models for subgroups. However, because the GROUPING option is not available with ESTIMATOR=BAYES, I can't figure out how to allow parameters to be estimated separately for groups in the same model so that I can compare the fit to the model with all parameters forced to be equal. Do you have any advice on how I can compare the fit of the subgroup model to the full model?

Thank you,
Lindsay
 Linda K. Muthen posted on Thursday, June 07, 2012 - 10:50 am
I would use MODEL CONSTRAINT to create a NEW parameter that is the difference between the two parameters.

With Bayes, multiple group analysis is done with MIXTURE and KNOWNCLASS.
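A minimal sketch of that setup, assuming an observed grouping variable named group coded 1 and 2 (the variable name is hypothetical):

VARIABLE:
CLASSES = c(2);
KNOWNCLASS = c (group = 1 group = 2);

ANALYSIS:
TYPE = MIXTURE;
ESTIMATOR = BAYES;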
 Lindsay Bell posted on Thursday, June 07, 2012 - 4:34 pm
Linda -

From my understanding, I don't think I can use MIXTURE because I don't have a latent class. I also don't know how to create a new parameter from the subgroup parameters because, as of now, the only way I can think of to analyze the subgroups separately is to use USEOBSERVATIONS and select each group one at a time. Can you elaborate on how I could do the subgroup analysis in one model?

Also, a separate question: I've been trying to figure out how to get quantiles of the posterior parameter distributions so that I can determine, for example, what percentage of two parameters' posterior distributions overlap, or where the cutoff is for 80% of the distribution. Can you tell me how I can get quantiles for the posterior distributions? I haven't been able to figure it out from the manual.

Thanks so much for your help!

Lindsay
 Linda K. Muthen posted on Friday, June 08, 2012 - 11:19 am
When all classes are known in mixture, it is the same as multiple group analysis. This is simply how we do multiple group analysis with Bayes -- via MIXTURE KNOWNCLASS.

The suggestion to use MODEL CONSTRAINT to create a NEW parameter that is the difference between the two parameters is the answer to your second paragraph.
 Lindsay Bell posted on Friday, June 08, 2012 - 2:48 pm
Thank you, I understand now. I apologize for my confusion.

I am still struggling with how to assess model fit. I was hoping to be able to compare the Deviance Information Criterion from the unconstrained model to the model with all parameters constrained to be equal across groups, but with the MIXTURE analysis, the DIC is not appearing in the output. Is there a way for me to get the DIC with TYPE=MIXTURE?

If not, is there another index of fit I could use? The posterior predictive p-value is not showing up in the output, which I am assuming is because I am using TYPE=MIXTURE COMPLEX, but I'm not sure about that.

Thank you,
Lindsay
 Bengt O. Muthen posted on Friday, June 08, 2012 - 3:32 pm
One simple way is to use

MODEL CONSTRAINT:
new(d);
d = p1 - p2;

where p1 is a parameter label in one group and p2 the same parameter in the other group. If d is significant, the unconstrained model fits better.
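In the KNOWNCLASS setup, p1 and p2 are labels attached to the class-specific versions of the parameter, e.g. (a sketch with a hypothetical regression of y on x):

MODEL:
%OVERALL%
y ON x;
%c#1%
y ON x (p1);
%c#2%
y ON x (p2);

MODEL CONSTRAINT:
new(d);
d = p1 - p2;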
 Lindsay Bell posted on Friday, June 15, 2012 - 2:48 pm
Linda -

I apologize, but I am still having trouble understanding your recommendation for determining what percentage of two parameters' posterior distributions are overlapping.

For example, in Group A, the 95% CCI for parameter 1 is (-.12, .29); in Group B, it is (.23, .83), so these intervals overlap between .23 and .29. I would like to know what percentage of the posterior distribution for Group A is >.23 and for Group B is <.29.

I created a new parameter that is the difference of the two groups' parameters. This new difference parameter has a CCI of (.06, .49). However, I don't know how to translate this information into the answer I'm looking for.

Thank you,
Lindsay
 Bengt O. Muthen posted on Saturday, June 16, 2012 - 9:32 am
How about a normal approximation? You have the mean and SD of A's posterior distribution and can then compute Prob(A >.23).
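That is, with mean_A and SD_A taken from the output for A's posterior,

P(A > .23) = 1 - Phi((.23 - mean_A)/SD_A),

where Phi is the standard normal distribution function.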
 Tihomir Asparouhov posted on Monday, June 18, 2012 - 10:44 am
Lindsay

If you are interested in the proportion of Group A > Group B, this is given in the column "One-Tailed P-Value" for the difference of the two groups' parameters. I think this is what you want.

All other probabilities can be computed by external means after getting the posterior distributions of the parameters from Mplus (using SAVEDATA: BPARAMETERS = 1.dat;).
 M Hamd posted on Saturday, June 30, 2012 - 5:06 pm
Hello Mplus team

I am running a multilevel mediation model with the Bayesian estimator. The full model terminates normally.

However, when I test the measurement models separately, e.g.,

%between%
TC by f1-f6;
%within%
ic by f1-f6;

I get this message:

" THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
THE POSTERIOR COVARIANCE MATRIX FOR THE PARAMETERS
IS NOT POSITIVE DEFINITE, AS IT SHOULD BE.

PROBLEM INVOLVING VARIABLES F6 AND TC."

Please note that I am able to run this analysis and the full model with the MLR estimator as well.


I would appreciate some help.
 Linda K. Muthen posted on Sunday, July 01, 2012 - 10:50 am
Please send the MLR and Bayes outputs and your license number to support@statmodel.com.
 M Hamd posted on Thursday, July 05, 2012 - 2:53 pm
Just in case someone else faces the problem: my communication with the Mplus team resolved the issue. It seems that the following factor parametrization works better for Bayesian estimation:

TC by f1-f6*;
TC@1;

i.e., instead of fixing the first indicator loading at one (the default), free all loadings and fix the factor variance at 1. The * is used to override the default parametrization (first indicator loading fixed at one).
 Linda K. Muthen posted on Thursday, July 05, 2012 - 3:53 pm
This is true in some but not most cases.
 Ewan Carr posted on Friday, July 13, 2012 - 2:48 pm
I'm trying to fit a two-level model with the Bayes estimator.

I have a binary mediator (x3 is binary; everything else is continuous):

%WITHIN%

x3 ON x1 x2;
y ON x1 x2 x3;

%BETWEEN%

y ON w1 w2;
x3 ON w1 w2;
y ON x3;


Everything works fine, except I'm having a problem with the between-level threshold of the binary mediator (i.e. [x3$1]).

The mixing of the chains for this threshold is really, really bad (traceplot), and the parameters — for both the threshold and the residual variances — tend to infinity (they increase with the number of iterations). Diagnostics for other parameters seem fine.

Is there anything (obvious) I can do about this? Specifically:

  • Are there any alternative priors that might improve mixing for the threshold?
  • How important is it to set a binary mediator as categorical?


Thanks!
 Tihomir Asparouhov posted on Friday, July 13, 2012 - 3:28 pm
I would say try this alternative parameterization that eliminates the threshold and instead estimates a mean for X3 via regression on ONE. I can't quite see where the poor mixing comes from, but it could be due to a small number of between-level clusters. You can also run this model with the WLSMV estimator and ML, but for ML it is a bit harder to set up. If you want to try changing priors, the place to look would be the variances on the between level - IG(1,1) often improves the situation.

Here are the commands you need for the alternative parameterization (X3 on ONE is minus the threshold)

data: ...
variances = nocheck;

define: ONE = 1;

variable:
usevar = y x1-x3 w1-w2 ONE;
between = ONE;

model:

%WITHIN%
x3 ON x1 x2;
y ON x1 x2 x3;

%BETWEEN%
[x3$1@0];
y ON w1 w2;
x3 ON w1 w2 ONE;
y ON x3;
 Tihomir Asparouhov posted on Friday, July 13, 2012 - 3:47 pm
Just realized where the poor mixing comes from -- the threshold parameter is highly correlated with X3_between, which is particularly poor when the size of the clusters is large. The above parameterization should resolve your problem - if not, send it to support@statmodel.com.
 Ewan Carr posted on Friday, July 13, 2012 - 3:48 pm
Thank you!!

That is amazing: the chains are mixing very well now, and the parameters don't tend off to infinity.

See, e.g., the traceplot for the "x3 ON ONE" parameter.

I'll test it fully tomorrow, with all the control variables etc., but that alternative parameterization seems to have done it.

Thanks, for such a speedy and useful response. It's much appreciated.
 Ewan Carr posted on Friday, July 13, 2012 - 3:49 pm

quote: "Just realized where the poor mixing comes from..."

Thanks, that's good to know.

Ewan
 Keivn Linares posted on Monday, January 14, 2013 - 1:08 pm
Hello Mplus team,

While fitting a 3-level multilevel model to my data I encountered this problem:

WARNING: A VARIABLE HAS A FREE MEAN AND VARIANCE FIXED TO ZERO. THIS MAY LEAD TO POOR CONVERGENCE AND MIXING. TO AVOID THIS, ESTIMATE THE MEAN ON A LEVEL WHERE THE VARIANCE IS FREE.

How would I go about this in my model?

CLUSTER = Level3 Level2;
WITHIN = x1 x2;
BETWEEN = (Level2) z1 z2
(Level3) w1 w2 w3 w4 w5 w6 w7-w8; ! w7 and w8 are covariates

ANALYSIS:
ESTIMATOR = bayes;
FBITERATIONS = 40000;
TYPE = THREELEVEL RANDOM;

MODEL:
%WITHIN%
Y ON x1 x2;

%BETWEEN level2%
Y ON z1;
beta1 | y ON z2;

%BETWEEN level3%
Y ON w1 w2 w3 w4 w5 w6;
w1 w2 w3 w4 w5 w6 ON w7 w8;
beta1 ON W7;
beta1 @0;

!yij = beta0 + beta1j xij + rij;
!beta1j = gamma10 + gamma11 wj + u1j;

Thank you,

Kevin
 Linda K. Muthen posted on Monday, January 14, 2013 - 5:22 pm
You put the residual variance at zero: beta1@0. This may hurt the computations of the beta1 intercept. Why not free that residual variance - an R-square of 1 does not seem realistic.
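That is, in the level-3 part of the model above (a sketch showing only the beta1 lines):

%BETWEEN level3%
beta1 ON W7;
beta1; ! residual variance now free; the beta1@0 line is removed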
 Keivn Linares posted on Sunday, January 27, 2013 - 8:26 pm
When I run the model after using Bayes imputation, the program runs for about 5 seconds and stops, but does not show any error. Is it possible to use type = imputation and type = threelevel random together?

Kevin

TITLE: ml project model 1

DATA:
FILE IS "C:\Users\kevin\Dropbox\ . . . . \SLMLM.5implist.dat";
type = imputation;


ANALYSIS:
ESTIMATOR = MLR; ! same problem with Bayes
ITERATIONS = 1000;
CONVERGENCE = 0.00005;
COVERAGE = 0.10;
proc=2;
TYPE = THREELEVEL RANDOM;
 Linda K. Muthen posted on Monday, January 28, 2013 - 12:31 pm
You can use TYPE=IMPUTATION with TYPE=THREELEVEL RANDOM. Try running this on the first data set.
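A sketch of that check (imp1.dat is a hypothetical name; use the first file listed in your implist):

DATA:
FILE IS "imp1.dat"; ! the first imputed data set, run without TYPE = IMPUTATION;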
 Tanja Ka posted on Wednesday, March 06, 2013 - 11:03 am
Hello,
I'm estimating a multilevel model with a random slope in Mplus 7. When using the default Gibbs algorithm, the model converges and everything looks fine. But when I switch to the GIBBS(RW) algorithm, I don't get an output for the model. The model apparently converges (as I can see from the PSR during the iterations), but the output is just a reproduction of the input file. It happens only in models with random slopes. Could this possibly be a minor bug? I'd like to switch to the GIBBS(RW) algorithm to estimate two random slopes in one model.
Thanks a lot!
 Linda K. Muthen posted on Wednesday, March 06, 2013 - 11:42 am
Please send the input, data, output, and your license number to support@statmodel.com.
 Alexandru Cernat posted on Thursday, August 08, 2013 - 1:38 pm
I fitted a two-level model (binary dependent variable) using ESTIMATOR = BAYES. I used the default priors, but the Bayesian estimates did not resemble the maximum likelihood results (the model was fitted the same way), as they were supposed to. Any idea what could be the reason?

Thank you,
Alex
 Bengt O. Muthen posted on Thursday, August 08, 2013 - 1:45 pm
Make sure you used link = probit for ML. If you did, send files to support.
 Ewan Carr posted on Thursday, August 22, 2013 - 9:16 am
MODELS WITH RANDOM SLOPES FOR CATEGORICAL VARIABLES CAN NOT BE ESTIMATED WITH THE BAYES ESTIMATOR.

Is this error message new to Version 7?

I have some models which include a random slope for a binary variable. These models ran OK (with the Bayes estimator) in Mplus 6.

Is there any way around this error?

Many thanks,

Ewan
 Linda K. Muthen posted on Thursday, August 22, 2013 - 10:51 am
Please send the two outputs, 6 and 7, and your license number to support@statmodel.com. I don't think this has changed.
 Ben Kelcey posted on Tuesday, September 17, 2013 - 12:02 pm
Good afternoon,
What options are there to compare the relative and absolute fit of two-level or three-level factor models estimated with Bayes in Mplus, or using output from Mplus?
Thanks
 Bengt O. Muthen posted on Tuesday, September 17, 2013 - 1:30 pm
There isn't anything automatically produced. One approach is to use "neighboring" models that are less restrictive. For instance, BSEM can be used to allow residual covariances that can be checked for significance. See the article:

Muthén, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335.
 Linda Guo posted on Sunday, December 01, 2013 - 10:21 am
Hi Linda and Bengt:

I am trying to confirm the results of a two-level model with a dichotomized outcome by comparing estimates from MLwiN with those from Mplus. I used Bayes estimation in Mplus and MCMC estimation in MLwiN. However, the estimates from the two programs appear to be quite different. Below are the settings I specified in the two software packages.

For MLwiN, I specified mcmc(burnin(2000) chain(20000)) and set the starting values to the estimates from the ML method.

For Mplus, I specified Type = twolevel; Estimator = BAYES; Biterations = 2000; Fbiterations = 20000; and also manually set the starting values to the estimates from the ML method.

The results of the default ML methods from the two programs were the same, by the way. I just started on Mplus and am certainly not familiar with the algorithm it uses. Is there a difference between the Bayes estimation and the MCMC estimation in the two software packages? Any suggestions on why the results are quite different? Thank you.
 Bengt O. Muthen posted on Sunday, December 01, 2013 - 11:19 am
Have you checked that the programs use the same model, for instance the same number of parameters and logit/probit link?

MCMC is a technique to get Bayesian estimates that is used by both programs and the two programs should get the same results if set up to estimate the same model.
 Linda Guo posted on Monday, December 02, 2013 - 10:07 am
Hi Bengt:

I used logistic regression in MLwiN; for Mplus, I specified in the VARIABLE section: Categorical = my outcome. I first ran the ML algorithm and got the same results from both programs. However, after I turned on MCMC in MLwiN and Bayes in Mplus, the results started to differ. In both programs I have the same number of observations used in the regression and the same number of parameters. I'm not sure how to specify the link in Mplus, but I'm assuming that after I specify Categorical = my outcome, it should run the model as a logistic regression?

Here are the commands I used to set up the model in Mplus:

Variable:
Names ARE VAR1 VAR2 VAR3 VAR4 VAR5 VAR6;
Missing ARE all (-9999);
Categorical = VAR5;
Cluster = VAR6;
Within = VAR1 VAR2;
Between = VAR3 VAR4;
Analysis:
Type = twolevel;
Estimator = BAYES;
Biterations = 20000;
Fbiterations = 200000;
Model:
%Within%
VAR5 ON VAR1*-0.050 VAR2*-0.175;
%Between%
VAR5 ON VAR3*0.048 VAR4*0.167;

Regardless of whether or not I specify the starting values to be the ones from the ML estimation, the results from Mplus are different from those I got from MLwiN. Is there a problem in the code I use? Or where else do you think it could have gone wrong?

Thank you.
 Bengt O. Muthen posted on Monday, December 02, 2013 - 11:02 am
Bayes in Mplus uses probit link; see the Bayes papers we have posted. Mplus does not have logit link for Bayes. So request probit link for your MLWiN run.

ML starting values are not needed for Bayes in Mplus. No starting values need to be given. This avoids high-dimensional integration with ML for models with many latent variables.
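For reference, the Mplus ML comparison run can also be asked explicitly for the probit link, so that both programs use probit throughout (a sketch):

ANALYSIS:
TYPE = TWOLEVEL;
ESTIMATOR = MLR;
LINK = PROBIT;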
 Linda Guo posted on Monday, December 02, 2013 - 12:04 pm
Hi Bengt:

I requested the probit link in MLwiN and didn't change anything else (the distribution is binomial, and the number of parameters stays the same), but the results are still different from those from Mplus. I removed the starting values in Mplus and still didn't get the same results. What else could go wrong here?

Thanks.
 Bengt O. Muthen posted on Monday, December 02, 2013 - 5:15 pm
Please send the data, input, and output using TECH8 and TECH1 to Support. Also send a PDF of the MLwiN output.
 'Alim Beveridge posted on Sunday, February 09, 2014 - 7:24 pm
Hello Bengt, Linda & Tihomir,

This is a question to make sure my conceptual understanding is correct. In models of clustered data estimated using ML or least squares, standard errors tend to be incorrect (underestimated in most cases) due to the non-independence of observations, so the standard errors are 'corrected' using a sandwich estimator. Am I correct in my understanding that this is not the case in Bayesian estimation, because the standard errors and p-values are derived from the posterior distribution of the parameters, which is generated using Markov chain Monte Carlo (MCMC)? This procedure does not assume that observations are independent. Please correct me if I am wrong.
Thanks, 'Alim
 Bengt O. Muthen posted on Monday, February 10, 2014 - 3:27 pm
Unless you use Type=Twolevel, Bayes does not correct for non-independence among observations.
 Stefan Schneider posted on Monday, March 10, 2014 - 6:57 pm
Hi,
Is it possible to estimate between-level differences in within-level variances using a "random factor loadings" approach in cases where there is only a single outcome/indicator variable? I would like both the intercept and the variance of the random loading to be freely estimated on the between level (in order to enter between-level predictors).

Below is what I tried (without person covariates). To avoid poor mixing, I fixed the residual within-level variance at an arbitrary value >0 (but smaller than the within-variance from a simple multilevel "null" model), and estimated the mean within-person variance (mwvar) as this residual plus the squared estimated mean loading.

The resulting model estimates do not appear completely unreasonable. However, (a) the value for "mwvar" always seems lower when using random loadings than when using a fixed loading, and (b) "mwvar" differs depending on the selected value of the residual variance. Especially (b) makes me believe I am doing something wrong. Your input would be greatly appreciated!

ANALYSIS:
estimator = bayes;
MODEL:
%within%
sigma | f by y;
f@1; y@0.1;
%between%
[sigma] (p1); sigma; y;
MODEL CONSTRAINT:
new (mwvar);
mwvar = 0.1 + p1**2;
 Tihomir Asparouhov posted on Tuesday, March 11, 2014 - 5:03 pm
Our favorite model for this purpose has been

MODEL:
%within%
sigma | f by y;
f; y@0;
%between%
[sigma@1]; sigma; y;

See equations (24) and (25) in http://statmodel.com/download/NCME_revision2.pdf. The entire Section 5 in that paper discusses this issue, but for a latent variable. Your model also seems fine, but I think the above model mixes better. Here

Var(y) = (sigma_j)^2.

As a measure of the stability of the model, Var(sigma_j) should be small ... smaller than 0.25, so that sigma_j > 0 (otherwise interpretation would be hard). Now you can easily add predictors both for the random intercept and for the random variance.

For your model, the MODEL CONSTRAINT inherently makes this mistake: if X and Y are independent, Var(XY) is not E(X)*E(X)*Var(Y); it is

Var(X)*Var(Y) + E(X)*E(X)*Var(Y) + E(Y)*E(Y)*Var(X).

Applying this to your case,

mwvar = 0.1 + p1**2 + v;

where v = Var(sigma), labeled on the between level as

%between%
sigma (v);
 Stefan Schneider posted on Wednesday, March 12, 2014 - 6:28 pm
This works great, thank you very much!
 Jonathan L. Helm posted on Wednesday, April 30, 2014 - 2:26 pm
I am running a large simulation study and trying to diagnose conditions that lead to failed convergence of a particular model.

I am wondering if there is any difference between these two error messages:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
THE POSTERIOR COVARIANCE MATRIX FOR PSI IS NOT POSITIVE DEFINITE, AS IT SHOULD BE.

and

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
THE PSI MATRIX IS NOT POSITIVE DEFINITE.


Any insight is greatly appreciated.

-Jon
 Tihomir Asparouhov posted on Wednesday, April 30, 2014 - 9:39 pm
Yes they are different matrices.
 Jonathan L. Helm posted on Thursday, May 01, 2014 - 12:06 pm
Is the former the PSI matrix formed from the estimates at the last update, and the latter the PSI matrix based on the median of each of the respective chains after the PSR has converged?

Or are these referring to something else?
 Tihomir Asparouhov posted on Thursday, May 01, 2014 - 4:03 pm
The former is the PSI matrix formed from the estimates at the last update.

The latter is the matrix from the posterior distribution of PSI that is used to generate PSI: it is E + Omega from formula (8) on page 7 of http://statmodel.com/download/Bayes3.pdf.
 Ronny Scherer posted on Sunday, May 04, 2014 - 3:39 am
Dear Mplus team,

I do have a question concerning a Multilevel CFA model that uses the Bayesian estimator. The manifest indicators were not restricted to the within level by using the within = ... command.

The model is specified as follows:

...
CLUSTER IS country;
...

ANALYSIS:

TYPE = TWOLEVEL;
ESTIMATOR = BAYES;
PROCESSORS = 4;
Bconvergence = 0.01;
Biterations = (30000);
Chains = 8;
Bseed = 511;

MODEL:

%within% ! students
f1w BY y1-y6;
f2w BY y7-y10;
f3w BY y11-y20;
f4w BY y21-y25;

%between% ! countries
f1b BY y1-y6;
f2b BY y7-y10;
f3b BY y11-y20;
f4b BY y21-y25;

OUTPUT:
tech1 tech8 stdyx cinterval(hpd);
...

The output looks quite nice and the estimation terminated normally. Checking the plots suggests convergence.

However, I do not get the DIC and the pD value. I also looked into your Web Note 18 and the supplementary files (26 countries modeled as clusters). In there, the output of the Bayesian multilevel model also did not show the DIC and pD. Is there a general problem with these values in multilevel models? Do I need to tell Mplus to give me these values by using an additional command?

Many thanks for your comments in advance.
 Linda K. Muthen posted on Sunday, May 04, 2014 - 6:25 am
These values are not yet available for multilevel models.