Categorical Outcomes
Mplus Discussion > Multilevel Data/Complex Sample >
 Anonymous posted on Friday, January 07, 2000 - 12:20 pm
Some multilevel packages have begun to include methods for multilevel models with binary and other categorical outcomes.

Does Mplus plan to develop procedures for multilevel models with categorical outcomes?

If yes, when might we look for those to become available?
 Linda K. Muthen posted on Saturday, January 08, 2000 - 3:45 pm
This is an important topic. We continue to expand Mplus modeling opportunities in many ways. To both not disappoint people and not place undue pressure on ourselves, we try to avoid preannouncements of our developments.
 David Klein posted on Tuesday, April 10, 2001 - 11:16 am
I have data with probability weights and a categorical outcome. I'd like to run a weighted analysis (structural equation modeling) but I want to make sure that simply adding the WEIGHT statement will adjust the standard errors properly.

A read of the manual gives me the feeling that I should use TYPE=COMPLEX analysis, but that requires a continuous outcome, which I don't have. The manual also says that the WEIGHT statement "identifies the variable that contains case or sampling weight information," but does not say how to differentiate between case and sampling weights or whether Mplus in fact treats them differently.

Any insight would be greatly appreciated. Thanks!
 Linda K. Muthen posted on Monday, April 16, 2001 - 9:03 am
Yes, just adding the WEIGHT statement is sufficient. TYPE=COMPLEX requires that you have a cluster variable and that the outcomes be continuous. If you do not, just use the WEIGHT statement alone. Mplus does not treat case or sampling weights differently.
 Yeong Chang posted on Monday, September 10, 2001 - 1:15 pm
I hope the person who started this topic would check this out:

In your message, you said:

Some multilevel packages have begun to include methods for multilevel models with binary and other categorical outcomes.

I hope you could name one or two of these packages because I want to use them for my analysis. Thanks a lot.

Another question is for Muthen & Muthen: are we closer to the stage where Mplus can do multilevel modeling with binary outcomes?
 Linda K. Muthen posted on Tuesday, September 11, 2001 - 8:49 am
Packages that can handle categorical outcomes are MIXOR, MLwiN, and HLM as far as I know. We are currently working on adding categorical outcomes to the multilevel modeling in Mplus.
 Yeong Chang  posted on Thursday, September 13, 2001 - 2:27 pm
Thank you, Linda. Among the three packages you mentioned (MIXOR, MLwiN, and HLM), can any of them do multilevel factor analysis (like CFA with categorical outcomes)? I guess HLM cannot, but I don't know much about MIXOR or MLwiN. Thanks again.
 Linda K. Muthen posted on Friday, September 14, 2001 - 11:18 am
Not to my knowledge.
 Allison Tracy posted on Wednesday, February 13, 2002 - 10:44 am
What are the risks and implications of modeling dichotomous indicators of latent continuous variables as if they were continuous, non-normal indicators under the complex sampling paradigm?
 Linda K. Muthen posted on Thursday, February 14, 2002 - 8:11 am
The issues of modeling dichotomous indicators as though they are continuous are the same for clustered data as for non-clustered data. As you get away from a 50/50 split, correlations become attenuated.
 Allison Tracy posted on Friday, February 15, 2002 - 12:19 pm
But since the COMPLEX modeling procedure allows for non-normally distributed variables, isn't the effect of unequally distributed dichotomous indicators also accounted for?
 Linda K. Muthen posted on Sunday, February 17, 2002 - 10:54 am
Parameter estimates are not adjusted for non-normality when a robust estimator is used. Only standard errors and chi-square test statistics are adjusted. With dichotomous indicators that are not 50/50, the sample statistics (correlations) will be attenuated. This means that the parameter estimates will be distorted. So although the standard errors may be correct for the parameter estimates, the parameter estimates themselves will not be correct.
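Linda's point can be illustrated with a small simulation (a sketch, not part of the original reply; the latent correlation of 0.5 and the split proportions are arbitrary choices): the further the dichotomization moves from a 50/50 split, the more the observed correlation between the binary items is attenuated relative to the latent correlation.

```python
import numpy as np

def phi_after_split(rho: float, split: float, n: int = 200_000, seed: int = 0) -> float:
    """Simulate two standard normals with latent correlation rho,
    dichotomize each so a fraction `split` scores 0, and return the
    observed (phi) correlation between the binary items."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    a = (x > np.quantile(x, split)).astype(float)
    b = (y > np.quantile(y, split)).astype(float)
    return float(np.corrcoef(a, b)[0, 1])

for split in (0.5, 0.8, 0.95):
    print(f"split {split:.2f}: phi = {phi_after_split(0.5, split):.3f}")
```

The printed phi values shrink steadily below the latent 0.5 as the split grows more extreme.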
 Ronny Klaeboe posted on Wednesday, January 05, 2005 - 4:16 pm
Help wanted for some basic questions:

I want to explore alternative descriptions of the same model:

The data consist of 508 persons, each making 9 responses to stated choice experiments. If we initially ignore the fact that each person gave 9 responses, we can view the data as 4572 cases.

The model is then simply:

categorical ch1;
ch1 on pd1 td1 wd1;

The (dichotomous) choice is dependent on a price difference, time difference and waiting time difference.

A multilevel approach could be applied to check the size of the within-subject variation and possibly to improve the estimates. However, how do we specify that the 9 responses within each cluster are correlated?

Type=complex; and cluster=id; and what more??


------------
The data could alternatively be dealt with by applying a multivariate framework where the 9 choices are represented as 9 variables. The number of cases is then 508.

The simple model then looks like this:

ch1 ON pd1 (1);
ch2 ON pd2 (1);
ch3 ON pd3 (1);
ch4 ON pd4 (1);
ch5 ON pd5 (1);
ch6 ON pd6 (1);
ch7 ON pd7 (1);
ch8 ON pd8 (1);
ch9 ON pd9 (1);
ch1 ON Td1 (2);
ch2 ON Td2 (2);
ch3 ON Td3 (2);
ch4 ON Td4 (2);
ch5 ON Td5 (2);
ch6 ON Td6 (2);
ch7 ON Td7 (2);
ch8 ON Td8 (2);
ch9 ON Td9 (2);
ch1 ON wd1 (3);
ch2 ON wd2 (3);
ch3 ON wd3 (3);
ch4 ON wd4 (3);
ch5 ON wd5 (3);
ch6 ON wd6 (3);
ch7 ON wd7 (3);
ch8 ON wd8 (3);
ch9 ON wd9 (3);


Again I wonder how to specify that the 9 choices are correlated (since the dependent variables are categorical, the WITH statement will not work). Could one introduce a factor f BY ch1-ch9@1 to signify that the 9 choices share a common source of error?

Or should one introduce 9 continuous latent variables, each regressed on the respective pd, td, and wd inputs, and let each of them have the respective dichotomous choice as its sole indicator?

f1 by ch1;
...
f9 by ch9;

[ch1-ch9](4); !constrain intercepts to be equal

f1 on pd1(1);
f1 on td1(2);
f1 on wd1(3);
.....

f1-f9 with f1-f9(5);

I am thankful for any answers
 Linda K. Muthen posted on Friday, January 07, 2005 - 6:23 am
To account for clustering in the univariate approach you showed, where the cluster variable is id, specifying CLUSTER = id; and TYPE = COMPLEX; is all you need to do to account for the nonindependence of observations.

Regarding the multivariate framework, you could do one of the following:

Weighted least squares estimator

MODEL: ch1-ch9 ON pd1-pd9 td1-td9 wd1-wd9;

Maximum likelihood estimator

f BY ch1-ch9;
f ON pd1-pd9 td1-td9 wd1-wd9;
 CMW posted on Sunday, June 26, 2005 - 12:20 pm
Greetings,

In example 9.2 in the Mplus manual on page 213 of two-level regression analysis for a categorical dependent variable, is that probit or logistic regression?

Also, assuming a model like example 9.2 with a random slope, how can I get the actual slope estimate for each person?

CMW
 BMuthen posted on Tuesday, June 28, 2005 - 8:13 am
It is a logistic regression.

You can get this by requesting factor scores in the SAVEDATA command.
 Jonathan Brown posted on Sunday, September 04, 2005 - 1:37 pm
I have data in which nearly 1000 children were recruited through 60 doctors. I want to include child self-reported symptoms and doctor-reported child symptoms in the same latent class model. My only goal is to find latent classes combining the doctor and child reports of child symptoms. The symptoms are binary.

1) Is there anything theoretically or conceptually violated by including data from multiple levels in a single latent class analysis, i.e., including responses from doctors and children about the child in the same model? If they are measures of the same underlying latent construct and are conditionally independent, then it seems acceptable.

2) Can Mplus take into account the clustering of children within doctors when the items are binary? There is no regression outcome, just latent classes based on the binary symptoms. How do I know if it's even necessary to take the clustering into account?

3) If yes to the second question, could you please direct me to where I would find the syntax for clustering?

4) I have not found any 'multilevel' latent class analysis. Could you provide any references or examples of this, in which, for example, latent classes are found using community-level, school-level, teacher-level, and child-level data, not from the same reporter?

Thank you so much for having an excellent site and for all your help.
 BMuthen posted on Monday, September 05, 2005 - 2:34 pm
The doctor and the child measures are not independent but can be analyzed in a single model by using TYPE=TWOLEVEL MIXTURE; or TYPE=COMPLEX MIXTURE; when symptoms are binary. See Chapter 10 Example 10.3 for multilevel latent class analysis.
 CMW posted on Thursday, September 15, 2005 - 12:15 pm
Greetings,

In the context of two-level logistic regression with a random slope, does Mplus compute empirical bayes estimates of the slope for each cluster?

CMW
 Linda K. Muthen posted on Thursday, September 15, 2005 - 2:34 pm
These can be obtained by requesting factor scores using the FSCORES option of the SAVEDATA command.
 Carol M. Woods posted on Monday, March 06, 2006 - 6:38 pm
Thank you for the reply in September. Now I am doing a two-level logistic regression with a random intercept as well as a random slope and would like to get EB estimates of the intercepts. When I use this code:

"savedata:
file is slopesall.out;
save = fscores;"

I get EB estimates of slopes only. Is there a way to get estimates of the intercepts too?

CW
 Linda K. Muthen posted on Tuesday, March 07, 2006 - 7:41 am
It sounds like you are using an old version of Mplus. You should get factor scores for both. If this is not the case, please send your input, data, output, and license number to support@statmodel.com.
 chennel huang posted on Monday, July 03, 2006 - 1:40 am
Can Mplus 3.0 do multilevel SEM with two continuous latent variables (CFA) that both have categorical indicators?
 chennel huang posted on Monday, July 03, 2006 - 10:44 am
Following up on that question: when I ran the two-level model as an exploration, the output stated:

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN THE PROGRAM ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. ANOTHER SUGGESTION IS CLEANING UP YOUR HARD DRIVE BY DELETING UNNECESSARY FILES.

What are the basic hardware requirements? How do I overcome this problem? Thanks for the help.

Following is the syntax:

data:
file is ag.prn;
variable:
names are cl edu inc gen bmi year at1-at7 ba1-ba5 em im;
usevariables are at1-at7 ba1-ba5;
categorical are at1-at7 ba1-ba5;
cluster is cl;
analysis:
type is twolevel random;
model:
%within%
aw1 by at1-at3;
aw2 by at4-at7;
bw1 by ba1-ba3 ba5;
bw2 by ba4;
s1 | bw1 on aw1;
s2 | bw1 on aw2;
s3 | bw2 on aw1;
s4 | bw2 on aw2;
%between%
ab1 by at1-at3;
ab2 by at4-at7;
bb1 by ba1-ba3 ba5;
bb2 by ba4;
output:
tech1 tech8;
 Thuy Nguyen posted on Monday, July 03, 2006 - 11:45 am
The model you are estimating requires numerical integration. There is a section on numerical integration and suggestions for using numerical integration in Chapter 13 of the User's Guide.

If you are still having problems, please send your input and data to support@statmodel.com.
 Boliang Guo posted on Friday, January 12, 2007 - 10:11 am
When conducting a 2-level logistic regression of y on x, I find the result from MLwiN is quite different from Mplus with the same data, but the results are virtually the same when I run a 1-level logistic regression with the same data. I tested this with the ex9.2 data, regressing u on x in both MLwiN and Mplus: the results are almost the same for a non-random-slope regression, but if I set the slope to be random, the results differ. Is the difference due to the algorithm used in each package?
 Bengt O. Muthen posted on Friday, January 12, 2007 - 10:48 am
The results should be the same if you use the ML estimator in MLwiN (I think they also have a PQL estimator). If you use ML, please send input, output (from both programs), data and license number to support@statmodel.com.
 Scott C. Roesch posted on Sunday, October 21, 2007 - 1:19 pm
I have just finished running a multilevel analysis (using TWOLEVEL RANDOM) with a binary outcome (smoking). As part of the output, I received a threshold value of -.086 at the between level for the binary outcome. How is this value interpreted? Can it be transformed into something more easily interpreted (e.g., as logits can be transformed into odds ratios)? Thanks in advance!
 Linda K. Muthen posted on Sunday, October 21, 2007 - 5:34 pm
It is on the logit scale. See the section in Chapter 13, Calculating Probability Coefficients From Logistic Regression Coefficients, to see how the threshold is used. In the example, intercepts are used rather than thresholds. Note that an intercept is the threshold with the opposite sign.
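As a sketch of that Chapter 13 conversion (using the -.086 threshold from the post above; note this gives the probability at the zero point of any covariates and, in a two-level model, at the mean of the random intercept, not the marginal probability):

```python
import math

def logit_threshold_to_prob(tau: float) -> float:
    """P(u = 1) from a logit threshold tau: the intercept is -tau,
    so P = 1 / (1 + exp(tau))."""
    return 1.0 / (1.0 + math.exp(tau))

tau = -0.086  # threshold reported in the post above
p = logit_threshold_to_prob(tau)
print(round(p, 3), round(p / (1 - p), 3))  # probability and odds; odds = exp(-tau)
```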
 Oliver Arranz-Becker posted on Thursday, June 23, 2011 - 12:14 am
In a two-level multinomial logistic regression, Mplus v6 gives odds ratios only for the within but not the between part of the model. Why is that?
 Linda K. Muthen posted on Thursday, June 23, 2011 - 11:33 am
The variable in the between-part of the model is a continuous random intercept.
 Susan Haws posted on Wednesday, November 30, 2011 - 4:35 pm
I am estimating multilevel models using type=complex twolevel, with binary outcome smoking (0/1). Slope coefficients are fixed. Are the R-squared within and R-squared between estimates provided in the MPlus output legitimate ways to convey proportion of variance explained at each level? Secondly, is it safe to compare them as one builds models in steps, for example, comparing the R2 between before and after adding level 2 predictors?
 Bengt O. Muthen posted on Wednesday, November 30, 2011 - 5:01 pm
Q1. These R-squares have been proposed in, for example, the multilevel book by Snijders & Bosker.

Q2. R-square can change in unexpected ways due to the two levels interacting. See the multilevel literature, such as the Raudenbush & Bryk book.
 Christoph Weber posted on Wednesday, September 19, 2012 - 4:53 am
Dear Dr. Muthen,
I have the following problem:

I want to transform the results of a twolevel logit model into probabilities.
But apparently I am making a mistake, because I get values that don't seem correct.

Now I tried a two-level model using only the binary outcome Y (fixed intercept).
According to the univariate proportions, Y = 1 accounts for 36%.
The threshold is 0.581. I calculated p(Y=1) as 1/(1+exp(-0.581)) = 0.64, which is actually p(Y=0).

If I treat Y as continuous, the intercept = 0.36.

Shouldn't p also be 0.36?

Some more info: I use within-weights and
WTSCALE IS UNSCALED;

Thanks
Christoph Weber
 Christoph Weber posted on Wednesday, September 19, 2012 - 5:00 am
Sorry, I already found the mistake.
p(Y=1) = 1/(1+exp(-a)), with a = -t. Isn't that right?
 Linda K. Muthen posted on Wednesday, September 19, 2012 - 11:44 am
Getting this probability requires taking into account the distribution of the random effect which can be done only using numerical integration.
 Christoph Weber posted on Wednesday, September 19, 2012 - 2:57 pm
Could you explain the required steps, or give some references?

I use the following input (Now with random intercept):
Is this correct?

weight = wgtstud;
WTSCALE IS UNSCALED;
cluster = idclass;

USEVARIABLES = ahs ;
MISSING = all (-99);
categorical = ahs ;

Analysis:
type = twolevel;
algorithm = integration;
MODEL:

%within%
%between%
ahs;

I get a threshold = 0.769, thus p = 0.32?

Thanks
 Bengt O. Muthen posted on Wednesday, September 19, 2012 - 3:15 pm
See

Larsen & Merlo (2005). Appropriate assessment of neighborhood effects on individual health: Integrating random and fixed effects in multilevel logistic regression. American Journal of Epidemiology, 161, 81-88.

and also our teaching on slides 60-66 of the Topic 7 handout and video on our web site.
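The integration over the random intercept that this exchange refers to can be sketched numerically with Gauss-Hermite quadrature (an illustration, not from the original reply: the threshold 0.769 is taken from the post above, but the between-level variance of 0.5 is an assumed value, since the post does not report it):

```python
import numpy as np

def marginal_prob(tau: float, var_u: float, n_nodes: int = 30) -> float:
    """Marginal P(Y=1) in a random-intercept logistic model:
    average the cluster-specific probability 1/(1+exp(tau - u))
    over u ~ N(0, var_u) via Gauss-Hermite quadrature."""
    # probabilists' Hermite nodes/weights (weight function exp(-u^2/2))
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    u = nodes * np.sqrt(var_u)
    p = 1.0 / (1.0 + np.exp(tau - u))
    return float(np.sum(weights * p) / np.sqrt(2.0 * np.pi))

print(round(marginal_prob(tau=0.769, var_u=0.5), 3))  # var_u = 0.5 is assumed
```

With var_u = 0 this reduces to the single-level conversion 1/(1+exp(tau)); a positive variance pulls the marginal probability toward 0.5.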
 Jan Super posted on Friday, September 28, 2012 - 8:56 am
Hello,

We are trying to run a multilevel analysis with a 2-2-1-2 model. Our primary IV is categorical (pay type) and our DV is categorical (ranking). Mplus won't allow us to use a within-level predictor for a between-level DV that has been declared categorical. We're getting error messages such as the following:

*** ERROR in MODEL command
Variances for between-only categorical variables are not currently defined on the BETWEEN level. Variance given for: GP_E_RNK


A sample of our code is as follows:
MISSING ARE ALL (-999999);
CLUSTER IS Grp_N;
CATEGORICAL = pay_t gp_E_rnk;
BETWEEN ARE pay_t disc_t gp_E_rnk;
ANALYSIS:
TYPE IS TWOLEVEL ;
MODEL:
%WITHIN%
tot_Ed;

%BETWEEN%
pay_t disc_t gp_E_rnk;

disc_t on pay_t (a1);
tot_Ed on disc_t (b1);
gp_E_rnk on tot_Ed (c1);
gp_E_rnk on disc_t (d1);
tot_Ed on pay_t;

Thanks in advance for your help.
 Linda K. Muthen posted on Friday, September 28, 2012 - 12:57 pm
You mention the variances of categorical variables. Their variances are not model parameters. You should remove the variances. Also, observed exogenous variables should not be on the CATEGORICAL list. The CATEGORICAL list is for dependent variables only.
 Jan Super posted on Friday, September 28, 2012 - 2:36 pm
Thank you for your reply. I'd like to ask a follow-up question. Could you elaborate on removing variances from dependent categorical variables? We aren't sure how to go about that.

Thanks, Jan
 Linda K. Muthen posted on Friday, September 28, 2012 - 3:54 pm
Remove pay_t from the CATEGORICAL list. I find it only on the right-hand side of ON. Remove the variances:

pay_t disc_t gp_E_rnk;

gp_E_rnk is categorical so no variances are estimated. You should not mention the variance of pay_t because it is an observed exogenous variable. The other residual variance is estimated as the default.

If you run into other errors, send the full output and your license number to support@statmodel.com.
 Kate Song posted on Friday, April 12, 2013 - 1:43 pm
Hi Linda.
I am using a 2-level ordered logit model.
For Mplus, the code I use is as follows.

VARIABLE: NAMES ARE a3_le a3_eff id_yr health a3_p;
CATEGORICAL = health;
BETWEEN = a3_le a3_p a3_eff;
CLUSTER IS id_yr;
ANALYSIS: TYPE = TWOLEVEL;
ESTIMATOR = WLSM;
MODEL:
%BETWEEN%
health on a3_p a3_le a3_eff;

Health is the categorical variable and only one within-variable.

However, I've compared the results with a Stata ordered logit model, and they look quite different.

Could you please explain the reason?

FYI, I used the following Stata code for the same model.

gllamm health a3_p a3_le a3_eff, i(id_yr) link(ologit)

Thank you.
 Linda K. Muthen posted on Friday, April 12, 2013 - 1:49 pm
You would need to send the Mplus and STATA outputs and your license number to support@statmodel.com. Be sure you are using the same estimator in both analyses and that your sample sizes are the same.
 Andreas Wihler posted on Monday, June 24, 2013 - 2:22 am
Dear Drs. Muthén,

We are trying to analyze diary data with Mplus.

We use emotion as the independent variable and a dichotomous dependent variable (defined as categorical in the syntax) at level 1.
Since the daily emotions depend on the individual (level 2), we tried to model emotion as a random effect. However, Mplus can't compute the model, showing the following error message:
Unrestricted x-variables for analysis with TYPE=TWOLEVEL and ALGORITHM=INTEGRATION must be specified as either a WITHIN or BETWEEN variable.

So my question is: Is there a way to compute a multilevel analysis with a categorical dependent variable with a random effect? And if so, how can I perform the analysis?

Thank you in advance.
Andreas
 Linda K. Muthen posted on Monday, June 24, 2013 - 5:42 am
Please send the output with the error and your license number to support@statmodel.com so I can see the full context.
 Adel Pwell posted on Monday, August 26, 2013 - 9:09 am
When modeling multilevel categorical data with a sample size of 20, I am getting an error message:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE
CONDITION NUMBER IS 0.646D-17. PROBLEM INVOLVING PARAMETER 20.


Parameters 15-20 are under Tau, and I inputted the cut points when I generated the data. So what could this error point to as the reason the standard errors could not be estimated?
 Linda K. Muthen posted on Monday, August 26, 2013 - 11:36 am
Please send the output and your license number to support@statmodel.com.
 Darin Erickson posted on Monday, October 28, 2013 - 9:49 am
I have a hierarchical dataset (people nested within states), a dichotomous outcome, a number of within-level covariates, and a single between-level dichotomous predictor. Here is some of my code:

CATEGORICAL = drunkdr;
BETWEEN = spsobch;
CLUSTER = state;
STRATIFICATION = strata;
WEIGHT = weight;
WTSCALE = ECLUSTER;

Analysis: TYPE = TWOLEVEL COMPLEX;


Model:

%BETWEEN%
drunkdr ON spsobch;

I want to interpret the effect of spsobch (a dichotomous L2 predictor) on my dichotomous outcome (drunkdr). I understand that the within part of the model is regressing the L2 random intercept on the predictor. Here is the output for this part of the model:

MODEL RESULTS


Between Level

DRUNKDR ON
SPSOBCH -0.280 0.058 -4.803 0.000

Thresholds
DRUNKDR$1 6.028 0.088 68.774 0.000

Residual Variances
DRUNKDR 0.134 0.028 4.744 0.000

I'm struggling with how to interpret the estimate of -0.28. Given that my outcome variable is dichotomous, does this mean clusters with a 1 on the predictor have an average proportion .28 lower than those with a 0? Is there a way to output the random intercepts for each of the clusters? Thanks, Darin
 Linda K. Muthen posted on Monday, October 28, 2013 - 10:17 am
The variable drunkdr is a random intercept on the between level. -0.28 is a linear regression coefficient.
 Lindsey M. King posted on Wednesday, July 02, 2014 - 11:07 am
I am trying to calculate predicted probabilities for certain values of a higher-level covariate using a multilevel model with a 4-category ordinal dependent variable and a cross-level interaction term. From what I've read on the forum, I must calculate this by hand. I have tried to write out the equation so I can input covariate values, but I cannot visualize how the pieces fit together in a single equation. The inclusion of the cross-level interaction term is confusing me. Any help you can offer in interpreting the output would be greatly appreciated.

MODEL:
%WITHIN%
worry ON jobsec easefind union private univ abovehi higher age spwrkft;
%BETWEEN%
worry ON plmp unemp;
 Lindsey M. King posted on Wednesday, July 02, 2014 - 11:18 am
Apologies for posting the wrong equation. The correct equation is below:
MODEL:
%WITHIN%
worry ON jobsec easefind union private univ abovehi higher;
%BETWEEN%
easefind worry ON almp ;
worry ON unemp ;
 Bengt O. Muthen posted on Wednesday, July 02, 2014 - 5:14 pm
I assume "worry" is your categorical DV. With categorical DVs, the multilevel model involves numerical integration to get the probabilities. Also, I don't see a cross-level interaction term. That would appear as e.g.

%within%

s | worry on jobsec;

%between%

s on w;

so that w*jobsec is the interaction.
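Substituting the between equation into the within equation shows where the product term comes from (a sketch with generic gamma coefficients, written for the binary logit case):

```latex
\begin{aligned}
\text{Within: } & \operatorname{logit} P(\text{worry}_{ij}=1) = \beta_{0j} + s_j \,\text{jobsec}_{ij}\\
\text{Between: } & s_j = \gamma_{10} + \gamma_{11} w_j + u_{1j}\\
\text{Combined: } & \operatorname{logit} P(\text{worry}_{ij}=1)
  = \beta_{0j} + \gamma_{10}\,\text{jobsec}_{ij}
  + \gamma_{11}\, w_j \,\text{jobsec}_{ij}
  + u_{1j}\,\text{jobsec}_{ij}
\end{aligned}
```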
 Lindsey M. King posted on Wednesday, July 02, 2014 - 7:32 pm
Perhaps I used the wrong terminology (maybe the term is cross-level effect rather than interaction?). I only want random intercepts. Here is what I am trying to simultaneously achieve: (1) Categorical DV worry is regressed on individual-level IVs (jobsec easefind union private univ abovehi higher). (2) DV worry is regressed on country-level IVs (almp unemp). (3) easefind, an IV at the individual level, becomes a DV at the country level and is regressed on a country-level IV (almp). In short, I'm trying to create a multilevel mediation model where X is measured at a higher level (country) than the mediator or Y, both of which are measured at the individual level.

It sounds like calculating predicted probabilities for such a model might not be possible, at least not by hand, if their computation requires numerical integration. If not, how can it be done?
 Bengt O. Muthen posted on Thursday, July 03, 2014 - 4:29 pm
See the multilevel teaching we do on categorical DVs in our short course website handout and video from Johns Hopkins - Topic 7.
 Jana Holtmann posted on Wednesday, July 09, 2014 - 8:31 am
Hi,

I am trying to fit a two-level model with ordered categorical dependent variables and two measurement time points. I tried to impose measurement invariance by constraining the thresholds to be equal over time. Doing this, I get an error message saying

Internal Error Code: PARAMETER EQUALITIES ARE NOT SUPPORTED.

Is it not possible to constrain thresholds in this kind of model?
Even if I just try to assign labels to the thresholds, I get the same error message...

Thanks!
 Bengt O. Muthen posted on Wednesday, July 09, 2014 - 10:46 am
Please send your output and license number to support@statmodel.com.
 Hoda Vaziri posted on Thursday, September 18, 2014 - 1:12 pm
Does the two-level multinomial regression provide any chi-square test for fit?

It does give me the loglikelihood and AIC, but I can't run a null model to get a null-model loglikelihood to compare against.

What about R^2? How can I compute that?
 Bengt O. Muthen posted on Thursday, September 18, 2014 - 4:53 pm
Q1. No.

You can run a null model by fixing slopes at zero.

Q2. Which R-square reference do you have in mind?
 Hoda Vaziri posted on Friday, September 19, 2014 - 7:34 am
Thank you!

I mean Cox and Snell's R-square, Nagelkerke's R-square, or Hosmer & Lemeshow's R-square.
 Bengt O. Muthen posted on Saturday, September 20, 2014 - 2:18 pm
Those can be obtained by getting the likelihood for the full model and for the model where all covariates have slopes fixed at zero. But I thought those measures were used more for binary responses than for multinomial ones. And with 2-level modeling they would be relevant mostly for level 1, I presume, whereas for level 2 the usual R-square applies since the DV is continuous.
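Given the two loglikelihoods Bengt describes (full model and the null model with slopes fixed at zero), the Cox & Snell and Nagelkerke values are simple transformations (a sketch; the loglikelihoods and sample size in the demo call are made up for illustration):

```python
import math

def pseudo_r2(ll_full: float, ll_null: float, n: int) -> dict:
    """Cox & Snell and Nagelkerke R-squares from the loglikelihoods of
    the full model and the null model (all slopes fixed at zero)."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)
    max_cs = 1.0 - math.exp(2.0 * ll_null / n)  # Cox-Snell upper bound
    return {"cox_snell": cox_snell, "nagelkerke": cox_snell / max_cs}

# made-up loglikelihoods and sample size
print(pseudo_r2(ll_full=-1180.0, ll_null=-1250.0, n=2000))
```

Nagelkerke rescales Cox & Snell by its maximum attainable value so the measure can reach 1.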
 Hoda Vaziri posted on Thursday, September 25, 2014 - 12:17 pm
Thanks Bengt.

I also have 2 other questions.

1. Does the two-level multinomial model give a test for each of the predictor variables, so that I can see whether their inclusion significantly improves the model? If not, is there any way to calculate those from the data provided?

2. The output only gives me results relative to the last category. Is there any way to get the results with the other categories as the reference automatically, or should I change the reference category using the DEFINE command?

Thanks
 Bengt O. Muthen posted on Thursday, September 25, 2014 - 4:18 pm
1. You get a z-test for the significance of each predictor. And you can check how BIC changes.

2. You have to use Define in this model.
 Duhita Mahatmya posted on Friday, March 20, 2015 - 7:11 pm
I am running a two-level multinomial model with random intercepts and fixed slopes.

After reviewing the discussion board, user's guide, and training handout for Topic 7, I still have a question about how/whether Mplus (version 7.0) provides output for the estimated variance components of the level-1 and level-2 models in multinomial models. It seems that a variance estimate is provided for two-level logistic models but not for multinomial models?
 Bengt O. Muthen posted on Saturday, March 21, 2015 - 8:53 am
Both give variance estimates on level-2. You may have to mention the variances on level-2 to activate them.
 Yoon Oh posted on Wednesday, June 10, 2015 - 10:17 pm
I was running a multilevel analysis of count outcome variables. The output contains both "unstandardized model results" and "standardized model results". I found that the p-values for each predictor variable differed between the standardized and unstandardized results. I wonder why the p-values differ and which version should be used.

Your help would be greatly appreciated.
 Bengt O. Muthen posted on Thursday, June 11, 2015 - 12:51 pm
Please send input, output and data to support so we can tell.
 Andrew Percy posted on Thursday, December 10, 2015 - 10:02 am
I'm currently analysing data from a clustered RCT (pupils in schools) with a binary primary outcome (binge drinking). The model I'm using is a two-level logistic regression with a random intercept. I have been asked (by Stata users) to calculate an effect size for the treatment effect (a binary between-level covariate), preferably as an odds ratio (along with other estimates they would be familiar with). The Heck and Thomas book indicates that the relevant parameter is a linear regression coefficient for the regression of the underlying latent response variable on the covariate. However, I'm struggling to understand the scale of the parameter. Is it on a log-odds scale?
 Linda K. Muthen posted on Thursday, December 10, 2015 - 4:51 pm
This type of general statistical question should be posted on a general discussion forum like SEMNET or MultilevelNet.
 Bengt O. Muthen posted on Thursday, December 10, 2015 - 6:18 pm
If you use two-level analysis with a binary outcome and ML, you get the same results as in Stata, so ask your Stata colleagues what kinds of effect-size measures they use, and then you can create the same ones using MODEL CONSTRAINT.

On the other hand, if you are talking about the effect of a binary between-level covariate, then the DV is a continuous variable and regular linear regression effect estimates apply (the DV is continuous because you are considering the random intercept of the binary outcome).
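For a treatment effect on the logit scale, the odds ratio and a Wald interval are direct transformations of the raw coefficient, which is the kind of quantity one could define in MODEL CONSTRAINT (a sketch; the coefficient and standard error below are made up):

```python
import math

def odds_ratio_ci(b: float, se: float, z: float = 1.96) -> tuple:
    """Odds ratio with a 95% Wald CI from a log-odds coefficient:
    exponentiate the estimate and the interval endpoints."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# made-up treatment effect on the logit scale
est, lo, hi = odds_ratio_ci(b=-0.45, se=0.18)
print(f"OR = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # → OR = 0.64, 95% CI [0.45, 0.91]
```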
 Jessika Bottiani posted on Tuesday, December 22, 2015 - 9:58 am
I am running a 2-level model (students in schools) using 12 student-reported categorical ordinal (scale 1-4) indicators that are non-normal to create 3 latent outcome variables at the within level and the same 3 latent outcome variables at the between level.

My within independent variables are categorical (gender, SES, grade level), and my between independent variables are continuous (3) and binary (1).

I declared the 12 categorical indicators using CATEGORICAL ARE, specified TYPE = TWOLEVEL; and ESTIMATOR = WLSMV; in ANALYSIS, and requested STANDARDIZED in OUTPUT.

Which output should I use to report standardized results for within and between: STD for both? Or STDYX for continuous and STDY for categorical?

Thanks for your guidance!
 Linda K. Muthen posted on Tuesday, December 22, 2015 - 1:13 pm
For factor loadings use StdY. For binary covariates, use StdY. For continuous covariates, use StdYX.
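The distinction for covariates rests on whether the covariate's standard deviation enters the standardization (a sketch using the standard definitions StdYX = b·SD(X)/SD(Y) and StdY = b/SD(Y); the numbers are invented). For a binary covariate, SD(X) is not meaningful, which is why StdY is used there:

```python
def stdyx(b: float, sd_x: float, sd_y: float) -> float:
    """StdYX: raw coefficient standardized by both X and Y."""
    return b * sd_x / sd_y

def stdy(b: float, sd_y: float) -> float:
    """StdY: raw coefficient standardized by Y only (use with binary X)."""
    return b / sd_y

# invented raw coefficient and standard deviations
print(round(stdyx(0.8, sd_x=1.5, sd_y=2.0), 3), round(stdy(0.8, sd_y=2.0), 3))  # → 0.6 0.4
```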
 L posted on Sunday, April 16, 2017 - 6:45 am
I am testing several hypotheses based on a multi-level mediation model. All of them have continuous predictors, continuous mediators and categorical outcome with 4 categories. Here's the code (modified from Preacher et al., 2010) that I am planning to use.

USEVARIABLES ARE QA FPreD PBF Prior;
CLUSTER IS QA;
Categorical= Prior;
ANALYSIS: TYPE IS TWOLEVEL;
Estimator is WLSMV;
MODEL:
%WITHIN%
PBF on FPreD(aw);
Prior on PBF(bw);
Prior on FPreD(cpw);
%BETWEEN%
FPreD PBF Prior;
PBF on FPreD(ab);
Prior on PBF(bb);
Prior on FPreD(cpb);
MODEL CONSTRAINT:
NEW(indb indw);
indw=(aw*bw);
indb=(ab*bb);
OUTPUT: TECH1 TECH8 Tech3 CINTERVAL;

1. Are my modifications appropriate?

2. Should I be using the MLR estimator?

3. Further, most of the indirect effects at the between-person level (where our interest lies) are not significant; in that case, does it make sense to interpret the probit coefficients?
 Bengt O. Muthen posted on Tuesday, April 18, 2017 - 6:33 pm
1. This looks ok, except that the indirect effect on Within concerns the continuous latent response variable behind your ordinal outcome. For causal effects pertaining to the observed ordinal outcome, a counterfactual approach is needed (but more advanced given the 2-level model).

2. You can use WLSMV, ML, or Bayes.

3. It sounds like between person is your Within level given that you mention probit coefficients. You can interpret probit coefficients if they are significant (even if indirect effects are not).
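For interpreting significant probit coefficients on the probability scale, one option is to plug them into the standard normal CDF. A small Python sketch with a hypothetical threshold and slope (not values from this thread):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF, computed via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical probit threshold and slope, for illustration only.
tau, b = 0.5, 0.4
p0 = phi(-tau)       # P(u = 1) when the covariate x = 0
p1 = phi(-tau + b)   # P(u = 1) when x = 1
effect = p1 - p0     # change in probability for a unit change in x
```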
 L posted on Wednesday, April 19, 2017 - 2:06 pm
Dr. Muthen,

Thank you so much for your input. I have a couple of follow-up questions.

My model consists of 8 continuous IVs, 8 continuous mediators, and 1 categorical outcome variable. I have alternative continuous variables that I could use as outcomes, but their reliability may be somewhat low, since it appears that participants interpreted the question (on how much time they spent on a work or family activity when a conflict occurred) differently. So, I am using a categorical variable (which activity did you prioritize when a conflict arose? 1. work, 2. both, 3. family, 4. neither) as the outcome (a level-1 variable) instead.

While I have done my initial analysis assuming this variable can be treated as ordinal, would it be a mistake to make that assumption?

So, a regular SEM framework would not work given the categorical outcome. Would you have any references for how to adopt a counterfactual approach with multi-level data?
 John C posted on Wednesday, August 02, 2017 - 5:56 pm
Hello,

I would like to check on the correct syntax for a model with a binary outcome and two predictors, one binary and the other continuous.

Neither predictor varies within a cluster; only the outcome variable does. Is the following syntax correct? In particular, does one leave the %within% section empty (even though the outcome is not a between-level variable)?

Variable:
...
CATEGORICAL ARE u1;
USEVARIABLES ARE u1 u2 x1;
CLUSTER = clus;
BETWEEN = u2 x1;

Analysis:
Type is TwoLevel;

Model:

%within%


%between%
u1 on u2 x1;
 Bengt O. Muthen posted on Thursday, August 03, 2017 - 5:20 pm
With a binary outcome you have nothing on Within. On Between you have the threshold and the residual variance for the continuous between part of u:

[u$1];
u;
 John C posted on Monday, August 07, 2017 - 7:43 am
Hello,

I also have a mediation relationship at the between level, where u2 mediates the effect of x1 on u1.

It seems you cannot request estimates for indirect/total effects with multilevel models, but are the indirect estimates obtained by means of the MODEL CONSTRAINT command still valid?
 Bengt O. Muthen posted on Monday, August 07, 2017 - 7:45 am
Mediated effects on Between are formulated just like for single-level models - yes, you can use Model Constraint to express that.
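As an illustration of what MODEL CONSTRAINT computes for the product, here is the delta-method standard error for an indirect effect a*b, sketched in Python with hypothetical path estimates (with zero covariance between a and b this reduces to the classic Sobel formula; Mplus's delta method uses the full covariance matrix of the estimates):

```python
from math import sqrt

def indirect_se(a, se_a, b, se_b, cov_ab=0.0):
    # Delta-method SE for the product a*b; with cov_ab = 0 this is
    # the classic Sobel formula.
    ab = a * b
    se = sqrt(b**2 * se_a**2 + a**2 * se_b**2 + 2 * a * b * cov_ab)
    return ab, se

# Hypothetical between-level path estimates, for illustration only.
ab, se = indirect_se(a=0.30, se_a=0.10, b=0.50, se_b=0.12)
z = ab / se  # z-ratio for the indirect effect
```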
 Kate Barford posted on Sunday, November 12, 2017 - 3:43 pm
Hello,

My question is similar to John C's with one variation.

My DV varies both within and between persons. It is binary at the within-person level, and thus at the between level it is the proportion of 0s/1s. I have one binary (experimental condition) and several continuous predictors, and I am also interested in two- and three-way interactions between the binary and continuous predictors.

My DV is zero-inflated.

I am not sure what the best way to model these data is. Do you suggest using the model that John C is using (leaving the within-person part of the model blank)? If so, could you please explain what exactly Mplus is doing in this model? How is it modelling the DV/what is being predicted?

Alternatively, as I am not interested in predicting anything at the within-person level, would it be better not to model this as multilevel, and instead compute a count variable counting the 1s (vs 0s) at the within-person level and use a negative binomial regression predicting this count variable at the between-person level, to account for the fact that the data are zero-inflated? And is my understanding correct that in the negative binomial model the DV is the probability of non-0 scores? If I create a count variable, the scores range from 0 to 215 and are zero-inflated.


Thank you
 Bengt O. Muthen posted on Sunday, November 12, 2017 - 5:26 pm
I don't know that there is any point in doing two-level analysis when there is no within variation to model (as in this case).

The negbin model concerns outcomes that range from zero and up. Inflated models add a part for the probability of being at zero.
 Kate Barford posted on Sunday, November 12, 2017 - 6:06 pm
Thank you.

Not sure if I was clear about this before, but the DV does vary within. It is a binary variable collected at many moments across time, so participants have scores of 0 or 1 for each moment. But we are not interested in predicting this variable at the within person level.

Is it still not worth modelling it as two-level in your opinion?
 Bengt O. Muthen posted on Monday, November 13, 2017 - 3:59 pm
Right, but there is no variance parameter on Within; the binary variation is taken care of by the threshold parameter, given that for a binary y, V(y) = p*(1-p), where p is the probability, which is a function of the threshold. So maybe not worth modeling as two-level.
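The point that the threshold pins down the binary variance can be made concrete: with a logit link and no covariates, the threshold implies p, and p implies V(y). A quick Python sketch with a made-up threshold:

```python
from math import exp

def p_from_logit_threshold(tau):
    # P(u = 1) implied by a logit threshold tau (no covariates):
    # P(u = 1) = 1 / (1 + exp(tau)).
    return 1 / (1 + exp(tau))

def binary_variance(p):
    # For a binary y, V(y) = p * (1 - p).
    return p * (1 - p)

tau = 1.0                        # hypothetical threshold
p = p_from_logit_threshold(tau)
v = binary_variance(p)           # fully determined by tau
```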
 sunil posted on Thursday, November 30, 2017 - 11:14 am
Hello,

I am trying to estimate the following model. I have an unbalanced panel with observations at the individual and business-unit levels. The individual-level variables are continuous and are nested within the business-unit-level variables, which are categorical. The model is a random slope and intercept model, with a mediator DV measured at the individual level and an ultimate DV at the business-unit level that is categorical.

What I gathered from the manual is that for unbalanced data we can use the MUML estimator, but MUML works for continuous variables only. Is there a workaround I can focus on? Any help will be much appreciated.
 Bengt O. Muthen posted on Thursday, November 30, 2017 - 4:06 pm
Full ML is available in newer versions of Mplus for any type of DV. See examples in the User's Guide on our website, chapter 9.
 Maria Pavlova posted on Wednesday, February 21, 2018 - 2:07 am
Dear Bengt, dear Linda,

we are estimating a twolevel model for categorical outcomes (measured at both levels) with a Bayesian estimator. We wanted to compute predicted probabilities and noticed that the derived probabilities deviate extremely from raw proportions in the data (even in the model where only the DV is included, without any predictors). However, if we use standardized thresholds in the calculation, we do get probabilities that are very close to raw proportions. Is it really the case that we need to use standardized thresholds if we have a DV measured on both levels?
Excerpts from our output:

UNIVARIATE PROPORTIONS AND COUNTS FOR CATEGORICAL VARIABLES

SMOKEA
Category 1 0.665 18313.000
Category 2 0.335 9224.000

Between Level

Thresholds
SMOKEA$1 1.791

Variances
SMOKEA 16.659

STDYX Standardization

Between Level

Thresholds
SMOKEA$1 0.438

Variances
SMOKEA 1.000
 Bengt O. Muthen posted on Wednesday, February 21, 2018 - 4:53 pm
Please send your output to Support along with your license number. Show how you calculate the probabilities. With random effects, it is not straightforward.
 Christoph Weber posted on Thursday, August 02, 2018 - 2:49 am
Dear Mplus-team,
if I use algorithm = GAUSSHERMITE to fit a twolevel logistic model, does this correspond to the approach outlined by Liu and Pierce (1994)?
best Christoph

Liu, Q., & Pierce, D. A. (1994). A note on Gauss-Hermite quadrature. Biometrika, 81(3), 624-629.
 Tihomir Asparouhov posted on Friday, August 03, 2018 - 8:55 am
Not quite, because by default Mplus uses adaptive quadrature. You need both of these commands
algorithm = GAUSSHERMITE; adaptive = off;
to get that method.
 Christoph Weber posted on Friday, August 03, 2018 - 12:24 pm
Thanks. Actually, I'm looking into the performance of the different algorithms (bias of the level-2 variance component in a null model) when there is a small number of clusters. The Liu and Pierce approach, as implemented in glmer (in the R package lme4), was used in a simulation study by Austin (2010).
Do you have internal simulation results or do you know of other studies on the performance of the available algorithms?
Christoph
 Tihomir Asparouhov posted on Friday, August 03, 2018 - 1:42 pm
Our own unpublished simulations, which are used for setting up Mplus defaults and have a slightly different objective (than your objective of optimal performance with a small number of level-2 units), showed that rectangular (evenly spaced integration points) adaptive integration is generally slightly better than the alternatives. You may find this article useful; it also contains references to other related articles:
http://www.statmodel.com/bmuthen/articles/Article_128.pdf
 Christoph Weber posted on Saturday, August 04, 2018 - 3:01 pm
Thanks again. I've conducted a small MC study of the bias of the level-2 variance component. Now I want to compare ML estimation with Bayes estimation, where only a probit link is available. How is this best done? Use a probit population model that is equivalent to the logit parameters?
 Bengt O. Muthen posted on Monday, August 06, 2018 - 11:05 am
You can get only approximate correspondence between logit and probit. I would recommend comparing ML and Bayes by using the link=probit option for ML.
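For setting up roughly equivalent logit and probit populations, the usual approximation divides logit coefficients by about 1.7 (a common refinement of the constant is 15*pi/(16*sqrt(3)) ~ 1.70) and variances by its square. A Python sketch of that conversion; the constant is the standard logistic-normal approximation from the literature, not an Mplus formula:

```python
from math import sqrt, pi

# Scaling constant of the logistic-normal approximation (~1.70):
# a logit coefficient divided by C approximates a probit coefficient.
C = 15 * pi / (16 * sqrt(3))

def logit_to_probit(b_logit):
    return b_logit / C

def logit_var_to_probit_var(v_logit):
    # Variances scale with the square of the constant.
    return v_logit / C**2

tau_probit = logit_to_probit(2.197)       # logit threshold from the thread
v_probit = logit_var_to_probit_var(0.41)  # logit L2 variance from the thread
```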
 Christoph Weber posted on Tuesday, August 07, 2018 - 3:21 am
Is this done by just adding link = probit in the ANALYSIS section? I compared a probit model with a logit model (roughly the same population parameters, based on empirical data), and the probit model (ML) yields a much greater bias in the threshold as well as in the L2 variance than the logit model. What might be the reason for this?
 Bengt O. Muthen posted on Tuesday, August 07, 2018 - 11:13 am
Q1: Yes.

Q2: This should not be - perhaps you have set it up incorrectly.
 Christoph Weber posted on Tuesday, August 07, 2018 - 2:32 pm
I used thresholds of 2.197 (logit) and 1.282 (probit), corresponding to 10% in category 2 of the outcome. The level-2 variances are 0.41 (logit) and 0.12 (probit). Thus, the setup should be roughly equivalent, shouldn't it?

Bias in the thresholds is 0.2% (logit) and -39% (probit), and in the variance estimates -18% (logit) and -73% (probit).


I generated data for 7 clusters of size n = 35.
 Bengt O. Muthen posted on Tuesday, August 07, 2018 - 2:47 pm
Doesn't seem plausible but I would have to see the full outputs to see what's going on. 7 clusters for twolevel analysis is much too few with ML - even with continuous outcomes.
 Christoph Weber posted on Tuesday, August 07, 2018 - 3:56 pm
May I send the outputs?

Regarding the number of clusters: a simulation study by Austin (2010; including also an L2 predictor) shows that the L2 variance component (and this is the only estimate I'm interested in) is biased by about 10% for ML-based estimators with 7 clusters. My model doesn't include predictors, so I was interested in whether ML would work even for 7 clusters.
 Bengt O. Muthen posted on Tuesday, August 07, 2018 - 4:47 pm
Please send them to Support (I think you have an up-to-date license).
 Maria Pavlova posted on Tuesday, December 10, 2019 - 7:57 am
Hello, we would like to test the effects of various predictors on the probability of experiencing a certain event. The data contain many waves, and with these data we usually use twolevel modeling with clustering within participants. My question is: what is the most sensible way to disentangle the within and between variance in the outcome?
1. Within: for each wave, code 1/0 for whether the event was experienced between waves; Between: code 1/0 for whether the event had ever been experienced (the events are not exactly rare but not very frequent either).
2. Within: same as above; Between: use the individual rate of experiencing the event (however, most experience it only once).
3. Use latent variance decomposition and centering (in this case I don't understand the meaning of the within coefficients).
4. Don't use twolevel analysis and switch to type=complex to correct for non-independence of observations within participants (but in this case, interindividual differences in the probability of experiencing the event are confounded with intraindividual factors).
Many thanks in advance!
 Tihomir Asparouhov posted on Wednesday, December 11, 2019 - 11:27 am
I would start with 4 and then move to 3. When you use method 3, you probably need to fix the variance of the binary outcome to 0, given that the event is rare and most experience it only once. Even if you leave it as a free parameter, it will probably be estimated at 0. The reason to treat the binary item as within-between is not latent centering but so that you can include time-invariant (between-level) predictors if you have any. The coefficients on the within level will have exactly the same interpretation as in method 4.

In method 2, having an almost-constant variable is not a great idea. In method 1, adding the additional between-level variable also doesn't sound like a great idea - that observed variable carries no new information (all the information is already contained in the within variable).
 Y.A. posted on Thursday, June 11, 2020 - 12:39 am
Dear Prof. Muthen,

I am practicing with the ex9.3 data on a twolevel logistic regression. I have read the Larsen & Merlo (2005) paper you recommended and succeeded in calculating the MOR and IOR; now I am trying to calculate predicted probabilities for the DV. The paper did not address this issue in particular, so I followed the instructions in the User's Guide. I assume that the logits at different levels of the model can be summed up; is that reasonable? I created some new parameters, but I am not sure if my logic makes sense. Could you give me some advice, please?

MODEL:
%WITHIN%
!s1|u on x1; s2|u on x2;
u ON x1(b1_sub)
x2(b2_sub);
%BETWEEN%
u ON w(b1_clus); u(v);[u$1](a);

model constraint:
new
(b1_pop b2_pop ior_l_w ior_h_w
log1_l_w log1_h_w p1_l_w p1_h_w
log1_x_sub p1_x_sub log1_x_pop p1_x_pop
log_f5 log_f6 log_f7 log_f8
pred_f5 pred_f6 pred_f7 pred_f8
);

b1_pop=b1_sub/(sqrt(1+(16**2*3/(15*3.1415926)**2)*v));
b2_pop=b2_sub/(sqrt(1+(16**2*3/(15*3.1415926)**2)*v));
ior_l_w=exp(b1_clus*1+sqrt(2*v)*(-1.2816)); !lower bound of ior
ior_h_w=exp(b1_clus*1+sqrt(2*v)*(1.2816)); !upper bound of ior
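For reference, the MOR and IOR quantities from Larsen & Merlo (2005) that this syntax mirrors can be checked outside Mplus. A Python sketch using the same formulas (sqrt(2*v) scaling, 10th/90th normal percentiles for the 80% IOR), with made-up parameter values:

```python
from math import exp, sqrt
from statistics import NormalDist

z75 = NormalDist().inv_cdf(0.75)  # ~0.6745, used for the MOR
z90 = NormalDist().inv_cdf(0.90)  # ~1.2816, used for the 80% IOR

def mor(v):
    # Median odds ratio for cluster variance v (Larsen & Merlo, 2005).
    return exp(sqrt(2 * v) * z75)

def ior_80(b_clus, v):
    # 80% interval odds ratio for a unit change in a cluster covariate.
    low = exp(b_clus - sqrt(2 * v) * z90)
    high = exp(b_clus + sqrt(2 * v) * z90)
    return low, high

# Hypothetical cluster variance and cluster-level coefficient.
m = mor(0.5)
lo, hi = ior_80(0.7, 0.5)
```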
 Y.A. posted on Thursday, June 11, 2020 - 12:41 am
syntax follow-up:

! logits and predictive probability of category 1 using w=1
log1_l_w=a+b1_clus*1+sqrt(2*v)*(-1.2816); !lower bound of logit of category 1 predicted by w
log1_h_w=a+b1_clus*1+sqrt(2*v)*(1.2816); !upper bound of logit of category 1 predicted by w
p1_l_w=exp(log1_l_w)/(exp(0)+exp(log1_l_w));
p1_h_w=exp(log1_h_w)/(exp(0)+exp(log1_h_w));

! logits and predictive probability of category 1 using x1=0.3 and x2=0.6
log1_x_sub=a+b1_sub*0.3+b2_sub*0.6;
p1_x_sub=exp(log1_x_sub)/(exp(0)+exp(log1_x_sub));
log1_x_pop=a+b1_pop*0.3+b2_pop*0.6;
p1_x_pop=exp(log1_x_pop)/(exp(0)+exp(log1_x_pop));

! full model logits and predictive probability of category 1 using w=1, x1=0.3, and x2=0.6

log_f5=log1_l_w+log1_x_pop; log_f6=log1_h_w+log1_x_pop;
log_f7=log1_h_w+log1_x_pop; log_f8=log1_h_w+log1_x_pop;
pred_f5=exp(log_f5)/(exp(0)+exp(log_f5));
pred_f6=exp(log_f6)/(exp(0)+exp(log_f6));
pred_f7=exp(log_f7)/(exp(0)+exp(log_f7));
pred_f8=exp(log_f8)/(exp(0)+exp(log_f8));
 Y.A. posted on Thursday, June 11, 2020 - 3:15 am
Sorry, the full-model part should be:

! full model logits and predictive probability of category 1 using w=1, x1=0.3, and x2=0.6
log_f1=log1_l_w+log1_x_pop; log_f2=log1_h_w+log1_x_pop;
log_f3=log1_l_w+log1_x_sub; log_f4=log1_h_w+log1_x_sub;
pred_f1=exp(log_f1)/(exp(0)+exp(log_f1));
pred_f2=exp(log_f2)/(exp(0)+exp(log_f2));
pred_f3=exp(log_f3)/(exp(0)+exp(log_f3));
pred_f4=exp(log_f4)/(exp(0)+exp(log_f4));
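The logit-to-probability step itself is standard and easy to verify; note that summing within- and between-level logits into one overall logit, as above, is the poster's assumption awaiting confirmation. A Python sketch of the transform:

```python
from math import exp

def inv_logit(logit):
    # Probability implied by a logit: exp(L) / (1 + exp(L)).
    return exp(logit) / (1 + exp(logit))

# Under the poster's (unconfirmed) assumption that level-specific
# contributions can be summed into one overall logit:
combined_logit = 2.0          # hypothetical value
p = inv_logit(combined_logit)
p_half = inv_logit(0.0)       # a logit of 0 corresponds to p = 0.5
```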

My questions are:

1. How do I estimate random slopes for x1 and x2 on the binary u?
2. Is the between-level [u$1](a) the right parameter to use for calculating the predicted probability from the within-level x1 and x2?
3. Can the logits calculated using the between-level w and the within-level x1 and x2 be summed into an overall logit for the subsequent calculation of the full-model predicted probability?

I hope I have described my questions clearly. Thank you very much!
 Bengt O. Muthen posted on Thursday, June 11, 2020 - 8:14 am
We need to see your full output - send to Support along with your license number. Also, we request that postings be limited to one window - if it can't fit, send to Support.
 Yvonne Terry-McElrath posted on Wednesday, August 19, 2020 - 12:54 pm
I have a question regarding correct specification of a multilevel model with a categorical L1 outcome. Is it necessary to model the variance of the outcome at L2 to correctly specify the model, and if so, why? Example:

MODEL:
%WITHIN%
cnsq ON day013;
%BETWEEN%
cnsq; !is this line required?
cnsq ON male;

Many thanks.
 Bengt O. Muthen posted on Wednesday, August 19, 2020 - 3:57 pm
I think you get the residual variance of cnsq on Between even without mentioning it, given the regression on male. But you do want it estimated, because cnsq on Between is a random intercept, which is likely to vary, and its variation is unlikely to be fully explained by male.