What is a good value of entropy?
 Jon Heron posted on Thursday, September 20, 2007 - 5:37 am
Ramaswamy's paper on entropy does not appear to indicate what a good value of entropy would be, merely that 0.62 would indicate 'fuzziness'.

I would like to add a sentence to my methods section with words to the effect of:

"Entropy values over 0.8 indicate good separation of the latent classes (ref)."

Please can you help?


Cheers,

Jon
 Matthew Cole posted on Thursday, September 20, 2007 - 6:02 am
Here's what I use:

Entropy values approaching 1 indicate clear delineation of classes (Celeux & Soromenho, 1996).

Celeux, G., & Soromenho, G. (1996). An entropy criterion for assessing the number of clusters in a mixture model. Journal of Classification, 13, 195-212.
 Jon Heron posted on Thursday, September 20, 2007 - 6:05 am
Cheers Matthew, I was unable to get hold of that paper.

I have an entropy of 0.935; I guess that's approaching one, whichever way you look at it.

J
 Jon Heron posted on Thursday, September 20, 2007 - 6:09 am
Found it after all - hurrah!

Note to self: don't always rely on PubMed.
 Andy Ross posted on Friday, November 21, 2008 - 4:56 am
Dear Prof. Muthen

Following on from Jon's query

In your opinion, what is the general thinking on quality of classification as a criterion for accepting a latent class solution as useful? For example, would you disregard a solution with an entropy lower than .8 as a fairly poor representation of a population, because it cannot distinguish the classes very well?

Also, are there any strategies for improving entropy - i.e., is poor classification often linked to a specific attribute of a model? I ask because I've recently found that models starting with an entropy of approximately .5-.6 when estimating two classes often remain fairly poor at classifying even as the number of estimated classes is increased.


Many thanks

Andy
 Bengt O. Muthen posted on Friday, November 21, 2008 - 7:53 am
The quality of classification as measured by entropy has a different impact in different settings. For example, you could have poor entropy and still be able to distinguish some of the classes very clearly. Or, you could use your LCA to predict a distal outcome from the latent classes and get a significant relationship that is estimated with a small SE even with a low entropy. The use of "most likely class membership" as a variable for further analysis, however, is problematic when the entropy goes much below 0.8.

The best strategy for improving entropy is to add good indicators - indicators that discriminate well between the classes. Given a certain set of indicators, however, you would first find the model that fits the data best and then accept the entropy it gives.
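As an illustration of why "most likely class membership" becomes risky at low entropy, here is a minimal hypothetical sketch (the setup and numbers are invented for illustration, not taken from the thread or from Mplus) showing how error in modal class assignment attenuates an estimated class difference on a distal outcome:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_class = rng.integers(0, 2, n)          # two latent classes, 50/50 mix
y = 1.0 * true_class + rng.normal(size=n)   # distal outcome; true class difference = 1.0

for acc in (0.95, 0.70):                    # high vs. low classification accuracy
    flip = rng.random(n) > acc              # misassign with probability 1 - acc
    modal = np.where(flip, 1 - true_class, true_class)
    diff = y[modal == 1].mean() - y[modal == 0].mean()
    print(f"accuracy {acc:.2f}: estimated class difference = {diff:.2f}")

With 95% correct assignment the estimated difference stays around 0.9; at 70% it drops to roughly 0.4, even though the true class effect is unchanged.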
 Harald Gerber posted on Tuesday, March 17, 2009 - 6:37 am
I've heard that the entropy value partially depends on the number of classes (unfortunately, I can't remember where) and tends to be smaller for fewer classes.
If so, could you - very briefly - explain why? Thank you very much.
 Bengt O. Muthen posted on Friday, March 20, 2009 - 1:17 pm
I have not heard that, but I can imagine that entropy might be lower for a small number of classes when more classes are needed to clearly separate the clusters of people.
 Djuke Brinksma posted on Thursday, June 16, 2016 - 1:21 am
The -LL, BIC, and aLRT all favor the 4-class solution. But my entropy is .75, and on the diagonal the probabilities are .89, .92, .73, and .69. Do I have to stick with the 3-class model although the 4-class model has better fit statistics and the additional group is distinct from the other groups?
 Bengt O. Muthen posted on Thursday, June 16, 2016 - 10:14 am
With such clear support for 4 classes, I would not base the decision on entropy.
 Haema Nilakanta posted on Wednesday, June 29, 2016 - 9:55 am
Hi Professor Muthen,
I have a question about how entropy is calculated for latent growth mixture models. I'm trying to compare results between Mplus and the lcmm package in R; since the R package does not include an entropy output, I'm writing a function for entropy based on the one used in Mplus. I'm referring to this document https://www.statmodel.com/download/relatinglca.pdf, p. 8.

Is this the correct entropy equation for Mplus models (when ANALYSIS: TYPE = MIXTURE)? Thank you for your time.
 Bengt O. Muthen posted on Friday, July 01, 2016 - 9:44 am
We use the formula on the first page of the technical appendix on our website at

http://www.statmodel.com/techappen.shtml

See also the note "Variable-specific entropy contribution" on our website.
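For readers implementing this themselves: the relative-entropy summary commonly cited in the mixture literature (e.g., Ramaswamy's paper mentioned above) is E_K = 1 - [sum_i sum_k (-p_ik ln p_ik)] / (n ln K), where p_ik is person i's estimated posterior probability for class k. Below is a minimal Python sketch under the assumption that this is the formula in the appendix; the posterior probabilities can be exported from Mplus with SAVEDATA: SAVE = CPROBABILITIES;.

import numpy as np

def relative_entropy(post, eps=1e-12):
    """Relative entropy in [0, 1]; 1 = perfectly crisp classification.

    post: n-by-K array of estimated posterior class probabilities
    (rows sum to 1); requires K >= 2.
    """
    post = np.asarray(post, dtype=float)
    n, K = post.shape
    # Clip inside the log only; terms with p_ik = 0 contribute 0 in the limit.
    raw = -np.sum(post * np.log(np.clip(post, eps, 1.0)))
    return 1.0 - raw / (n * np.log(K))

A row like (0.99, 0.01) contributes almost nothing to the numerator, pushing E_K toward 1; uniform rows like (0.5, 0.5) push it toward 0.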
 Paulette posted on Tuesday, December 06, 2016 - 1:15 pm
Hi!
Is looking for an entropy of 0.8 unrealistic for certain fields? I work in education, where we usually can't get an R-squared in regression above 0.3-0.4, and it is a given in the field that it is really unrealistic to get much more than that. So I was wondering if an entropy of 0.45-0.5 might be as much as I can get in my field.
 Bengt O. Muthen posted on Tuesday, December 06, 2016 - 2:13 pm
I think it depends much more on the context/substance and the variables than on the field of study. In some applications it seems easy/common to get over 0.9. Note also that the model can be quite good statistically even with a smallish entropy. And the distinction between some of the classes can be good while for others it is harder to tell them apart (this is seen in the classification table, which carries more information than the single-number entropy summary).
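For example (hypothetical numbers, purely for illustration), a 3-class average posterior probability table might look like:

                 Class 1   Class 2   Class 3
Most likely 1      0.95      0.04      0.01
Most likely 2      0.06      0.90      0.04
Most likely 3      0.02      0.28      0.70

Here classes 1 and 2 are cleanly separated, but class 3 bleeds into class 2; the single entropy number would be pulled down by class 3 alone.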
 lisa Car posted on Saturday, February 18, 2017 - 12:09 pm
Hello

I am trying to consider ways in which I can improve the clinical utility/translation of my LPA model, as some of the indicators (recommended by an expert working group in the field) are not necessarily clinically friendly. I am wondering if I can do a sort of sensitivity analysis by examining individual indicator entropy values, excluding those indicators with low values, and rerunning to see if we get similar classes. I am also going to employ PCA before LPA to try and streamline my model.

I am wondering 1. what your thoughts are on this approach and 2. if you think it is viable, whether you have any recommendations re: cut values for poor individual entropy?
Thanks in advance.
 Bengt O. Muthen posted on Saturday, February 18, 2017 - 2:54 pm
The UG index points to the ENTROPY option on page 749, where it also gives a reference that is on our website under Papers. This option gives variable-specific entropy.

Good entropy is maybe >0.8; bad entropy is hard to specify. Note also that the classification table gives more detailed information than the single-number entropy. You may have certain classes that are easy to distinguish between whereas it is hard for certain other classes.
 Katharine Buek posted on Friday, January 19, 2018 - 7:45 am
I have a latent growth mixture model with classification probabilities over .80, but entropy is .56. The classes make sense and means are a full standard deviation apart. However, I am concerned my advisor will nix it because of the low entropy. I might be able to get it up to .6 with further tinkering, but none of the models I've tested get much above that.

Are you aware of any published studies that validate the use of class models with entropy below .7?
 Bengt O. Muthen posted on Friday, January 19, 2018 - 11:36 am
A low entropy doesn't mean that the model isn't useful. Because you mention .80, it sounds like some of the classes can be distinguished well - but not all. I am sure many published studies have entropy lower than .7. Check our Papers section under Latent Class Analysis.