Class separation and entropy
 Jon Heron posted on Friday, March 14, 2014 - 10:17 am
Dear Bengt/Linda

This isn't strictly Mplus-related, but I'm at the end of my tether and would really appreciate any insight you may have.

I have in mind a 3-class mixture model, e.g. derived from a single tri-modal Y.

Whilst the overall entropy measures the separation of all three classes, in theory one could also derive three additional entropy measures, one for each pairwise comparison, by working with the appropriate posterior assignment probabilities.

Have you seen this done anywhere? I'm grappling with questions I can't yet answer: should I discard the probabilities for the third class, rescale the two remaining probabilities so they sum to one within person, and drop cases for which the modal class is neither of the two classes of interest?
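For concreteness, a minimal sketch of the kind of pairwise calculation I have in mind (Python; the posterior matrix is made up, and the drop-and-renormalise rule is precisely the choice I am unsure about):

```python
import numpy as np

def pairwise_entropy(post, a, b, eps=1e-12):
    """One candidate pairwise entropy for classes a and b, given an n x K
    matrix of posterior class probabilities: drop people whose modal class
    is neither a nor b, renormalise the two remaining probabilities to sum
    to one within person, then compute the usual relative entropy."""
    keep = np.isin(post.argmax(axis=1), [a, b])
    p = post[keep][:, [a, b]]
    p = p / p.sum(axis=1, keepdims=True)       # rescale within person
    n = p.shape[0]
    ent = -(p * np.log(p + eps)).sum()         # total Shannon entropy
    return 1.0 - ent / (n * np.log(2))         # two classes, so ln K = ln 2

# made-up posteriors for 5 people and 3 classes
post = np.array([[0.80, 0.15, 0.05],
                 [0.10, 0.85, 0.05],
                 [0.05, 0.10, 0.85],
                 [0.45, 0.45, 0.10],
                 [0.70, 0.20, 0.10]])
print(pairwise_entropy(post, 0, 1))
```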

many thanks, Jon
 Bengt O. Muthen posted on Friday, March 14, 2014 - 12:50 pm
Wouldn't you just consider the 3 x 3 classification table where you can see which classes are more clearly formed than others?
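For example, a rough sketch of forming such a table from the posterior probabilities (Python; the posteriors are made up, and this mimics the "average posterior probability by most likely class" table that Mplus prints):

```python
import numpy as np

def classification_table(post):
    """K x K table: row j holds the average posterior probabilities of
    people whose most likely (modal) class is j. Sharp classes put mass
    on the diagonal; fuzzy pairs show up in the off-diagonals."""
    K = post.shape[1]
    modal = post.argmax(axis=1)
    return np.vstack([post[modal == j].mean(axis=0) for j in range(K)])

post = np.array([[0.80, 0.15, 0.05],
                 [0.10, 0.85, 0.05],
                 [0.05, 0.10, 0.85],
                 [0.45, 0.45, 0.10],
                 [0.70, 0.20, 0.10]])
print(classification_table(post).round(3))
```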
 Jon Heron posted on Friday, March 14, 2014 - 12:56 pm
Thanks Bengt.
That was where I started, and to be honest it still looks like the best thing I have done.

For class-1/class-2 fuzziness I took the product of the first and second main-diagonal elements, or alternatively the sum of the [2,1] and [1,2] elements. I feel both capture the same issue.

Both have intuitive appeal but perhaps lack the statistical rigour of a formula.
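In code, assuming a classification table like the one above (the D matrix here is made up):

```python
import numpy as np

# made-up 3 x 3 classification table: rows = modal class,
# columns = average posterior probability
D = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.03, 0.07, 0.90]])

# high product = sharply separated pair; high off-diagonal sum = fuzzy pair
fuzz_product = D[0, 0] * D[1, 1]   # product of 1st and 2nd diagonal elements
fuzz_offdiag = D[1, 0] + D[0, 1]   # sum of the [2,1] and [1,2] elements
print(fuzz_product, fuzz_offdiag)
```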

best, Jon
 Jon Heron posted on Monday, March 17, 2014 - 7:52 am
No question, just an update.

I have stumbled across a paper that provides a formula for the overlap between pairs of clusters in the case of a profile analysis:

http://www.public.iastate.edu/~maitra/papers/SimMix.pdf

See the equation in Section 2.1, page 4.

I have calculated it for a simulated example with a single Y and a 3-class mixture, and the results are very close to the off-diagonal elements of Mplus's second classification matrix (the D-matrix).

Delighted to see that their recommendation - summing both off-diagonal elements - was just what I have been doing :-)
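For anyone wanting to replicate: a sketch of the simulated check, using scikit-learn's GaussianMixture as a stand-in for the Mplus run (the mixture parameters are illustrative, not those from the paper):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# simulate one trimodal Y from a 3-component normal mixture
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-3, 1, 300),
                    rng.normal(0, 1, 300),
                    rng.normal(3, 1, 300)]).reshape(-1, 1)

gm = GaussianMixture(n_components=3, random_state=0).fit(y)
post = gm.predict_proba(y)          # n x 3 posterior probabilities
modal = post.argmax(axis=1)

# classification table: average posteriors by modal class
D = np.vstack([post[modal == j].mean(axis=0) for j in range(3)])

# pairwise overlap between classes 0 and 1: sum of the two off-diagonals
print(D[0, 1] + D[1, 0])
```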
 Yan Wang posted on Friday, December 13, 2019 - 7:45 am
Hello Drs. Muthen,

I am running a nonparametric multilevel latent class analysis and wonder how the entropy in the Mplus output is computed. I found a formula in the article Variable-Specific Entropy Contribution by Asparouhov and Muthen (2018), but I assume that formula applies to single-level latent class analysis (i.e., only one C and one K in the formula). Could you please provide any insight into how entropy is calculated in the multilevel context, where I have latent class variables at both levels?

Thanks very much!

Best,
Yan
 Tihomir Asparouhov posted on Monday, December 16, 2019 - 1:54 pm
We compute one entropy for the whole model, i.e., for all the class variables together. If you have two class variables C1 and C2, each with 2 classes, we consider the joint variable C = (C1, C2), which is essentially a variable with 4 classes. The entropy is computed using the posterior probabilities of C.
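A minimal sketch of that calculation (Python; the joint posterior matrix is made up, and the formula is the usual relative entropy, 1 - sum(-p ln p) / (n ln K)):

```python
import numpy as np

def relative_entropy(post, eps=1e-12):
    """Relative entropy: 1 - sum_i sum_k (-p_ik ln p_ik) / (n ln K),
    computed over the cells of the joint class variable."""
    n, K = post.shape
    ent = -(post * np.log(post + eps)).sum()
    return 1.0 - ent / (n * np.log(K))

# two binary class variables C1, C2 -> joint C with 2 x 2 = 4 cells;
# each row holds hypothetical joint posteriors P(C1=a, C2=b | data)
joint = np.array([[0.70, 0.10, 0.15, 0.05],
                  [0.05, 0.80, 0.10, 0.05],
                  [0.25, 0.25, 0.25, 0.25]])
print(relative_entropy(joint))
```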