

LPA (LCA) with negative binomial regression


Alice Frye posted on Thursday, September 25, 2008 - 6:16 am



I've been running some LPAs with continuous and rare-event count variables. I have used NB(i) for the rare-event count variables. I find that within a 3-class model, for example, the estimate for the inflation term is identical (really identical, not just close) across classes within a model. The term representing scores of one or more varies across classes like the other estimates. This also occurs if I use zero-inflated Poisson regression for the count variables. I'd be grateful for any thoughts on why that is and/or what it means.


It is the Mplus default that these parameters are held equal across classes. To relax the equalities, mention the parameter in the class-specific parts of the MODEL command.
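As a hedged illustration of what "mention the parameter in the class-specific parts" looks like (the count variable name u1 and the class labels are hypothetical, not taken from the posts above):

```text
MODEL:
  %OVERALL%
  [u1#1];    ! inflation logit; held equal across classes by default
  %c#1%
  [u1#1];    ! mentioning it here frees the inflation mean in class 1
  %c#2%
  [u1#1];    ! ... and in class 2, and so on for further classes
```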

Alice Frye posted on Wednesday, October 15, 2008 - 1:15 pm



This is another question about using NB(i) with LPA. I have run a model in which the estimates of the inflation terms are allowed to vary across classes (as are all the other point estimates of variables in the model). For example, in one class an inflation term representing having not committed spouse abuse versus having committed spouse abuse is estimated at 1.25. Can anyone tell me how I can express an inflation term as a probability: the probability of having not committed the act or of having committed it? Or what syntax I would use to produce this information along with the other results? Many thanks, Alice


In Mplus, u# is a binary latent inflation variable for a given count outcome u, and u#=1 indicates that the individual is unable to assume any value except 0. An estimate of [u#] is a logit intercept/mean, say m, so that P(u#=1) = 1/(1+exp(m)). For example, m = 15 implies that this probability is zero, so there is no inflation: nobody is unable to assume any value except 0 (prob = 0 for the zero class), i.e. everyone follows the regular NB with counts 0, 1, 2, ....
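As a quick numeric check of the formula in this reply (plain Python, not Mplus output; the function name is mine):

```python
import math

def inflation_prob(m):
    """P(u#=1) = 1/(1 + exp(m)), with m the estimated logit mean [u#]."""
    return 1.0 / (1.0 + math.exp(m))

# m = 15: probability of the "always zero" class is essentially 0 -> no inflation
print(inflation_prob(15))   # about 3.1e-07
# m = 0: half the sample would be in the zero class
print(inflation_prob(0))    # 0.5
```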

Kathleen posted on Saturday, February 22, 2014 - 3:33 pm



A beginner's question on using the negative binomial: Is it possible to implement a negative binomial latent class model in Mplus 7.1? I see the UG examples in which the negative binomial is used for a growth mixture model, but I don't have a hypothesis about the functional form (the slopes). I'd like to compare the NB to the ZIP. If it is possible, how is it specified? Thank you much.


Yes, you can do that. See UG ex 7.11 for ideas. 
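A minimal sketch of such a specification (the variable names are hypothetical; (nb) requests the negative binomial, (nbi) its zero-inflated version):

```text
VARIABLE:
  NAMES = u1-u4;
  COUNT = u1-u4 (nb);   ! use (nbi) instead for zero-inflated NB
  CLASSES = c(2);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %c#1%
  [u1-u4];              ! count intercepts in class 1
  %c#2%
  [u1-u4];              ! ... and in class 2
```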

Kathleen posted on Sunday, February 23, 2014 - 6:03 pm



Thank you for your reply. I examined a zero-inflated negative binomial LCA, allowing the thresholds of the latent class indicator variables to vary across classes. However, I can't find resources on how to interpret the output. The output shows means for the c and c#1 classes, and an intercept for c#1. Are the means the natural log of the count? How should I interpret the intercept for class c?

A different question: I was reviewing your slides from Beijing, October 2012, in which you discussed mixture regression models. In the two-part mixture model for which you provide syntax, are you modeling two classes with two decision processes within those classes, or just two processes? I want to compare a model with two classes and two processes within each class, a model with just two processes, and a model with just two latent classes. Thanks very much!


For paragraph 1, please send output and license number to Support. For paragraph 2, please send the name of the pdf (or link) and the slide numbers you refer to. 


Hi, Mplus team, We are estimating an LPA model to predict the unobserved class membership for a sample of patients (N ~ 100K). Nearly all the measures are counts with excessive zeros. A ZINB model provides the best fit to the data. My question concerns the logit part of the model, where the parameters for many measures are fixed at -/+15 in different classes. I understand why that happens (the probability is ~zero or one), but we have instances where this is not the case.

For example, in one class (with N ~ 11,000) the observed median of a measure is 1.0 and the mean is 218.1. However, Mplus sets the corresponding parameter at -15. Correspondingly, when we compute the expected mean for the measure as [exp(-15)/(1+exp(-15))]*exp(count_estimate), we get a value that is essentially zero. This is very different from the observed mean based on the predicted class assignment. I realize there's a lot that can explain why the parameters get fixed (model complexity; skewness; variances constrained across classes; outliers), but I was wondering about your take on this.

Can we ignore the zero part of the model and focus on the count part instead to compute the expected mean? Should we be reporting observed means instead (which has been our plan)? Should we have any concerns about the model? I should add that the model(s) replicate pretty well on different samples.
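For what it's worth, the computation described in this post can be checked in plain Python (a sketch; the function and argument names are mine, and the convention P(u#=1) = 1/(1+exp(m)) follows the earlier reply in this thread):

```python
import math

def zinb_expected_mean(inflation_logit, log_count_mean):
    """E[u] = (1 - P(zero class)) * exp(count intercept), with
    P(u#=1) = 1/(1 + exp(m)) as in the earlier reply."""
    p_zero_class = 1.0 / (1.0 + math.exp(inflation_logit))
    return (1.0 - p_zero_class) * math.exp(log_count_mean)

# With the inflation logit fixed at -15, nearly everyone falls in the
# "always zero" class, so the expected mean is essentially zero no
# matter how large the count part is:
print(zinb_expected_mean(-15, math.log(218.1)))  # about 6.7e-05
# With the opposite sign, the count part dominates:
print(zinb_expected_mean(15, math.log(218.1)))   # about 218.1
```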


I think we need to see your full output - send it to Support along with your license number.


