
Jon Heron posted on Thursday, September 20, 2007 - 5:37 am



Ramaswamy's paper on entropy does not appear to indicate what a good value of entropy would be, merely that 0.62 would indicate 'fuzziness'. I would like to add a sentence to my methods section with words to the effect of "Entropy values over 0.8 indicate a good separation of the latent classes (ref)". Please can you help? Cheers, Jon

Matthew Cole posted on Thursday, September 20, 2007 - 6:02 am



Here's what I use: "Entropy with values approaching 1 indicates clear delineation of classes (Celeux & Soromenho, 1996)." Celeux, G., & Soromenho, G. (1996). An entropy criterion for assessing the number of clusters in a mixture model. Journal of Classification, 13, 195-212.

Jon Heron posted on Thursday, September 20, 2007 - 6:05 am



Cheers Matthew, I was unable to get hold of that paper. I have an entropy of 0.935; I guess that's approaching one, whichever way you look at it. J

Jon Heron posted on Thursday, September 20, 2007 - 6:09 am



Found it after all - hurrah! Note to self - don't always rely on PubMed.

Andy Ross posted on Friday, November 21, 2008 - 4:56 am



Dear Prof. Muthen, Following on from Jon's query: in your opinion, what is the general thinking on quality of classification as a criterion for accepting a latent class solution as useful? For example, would you disregard a solution with an entropy lower than .8 as a fairly poor representation of a population, because it cannot distinguish the classes very well? Also, are there any strategies for improving entropy - i.e. is poor classification often linked to a specific attribute of a model? I ask because I've recently found that models starting with an entropy of approximately .5-.6 when estimating two classes often remain fairly poor at classifying even as the number of estimated classes is increased. Many thanks, Andy


The quality of classification as measured by entropy has a different impact in different settings. For example, you could have poor entropy and still be able to distinguish some of the classes very clearly. Or, you could use your LCA to predict a distal outcome from the latent classes and get a significant relationship that is estimated with a small SE even with low entropy. The use of "most likely class membership" as a variable for further analysis, however, is problematic when the entropy goes much lower than 0.8. The best strategy for improving entropy is to add good indicators - indicators that discriminate well between the classes. Given a certain set of indicators, however, you would first find the model that fits the data best and then accept the entropy it gives.


I've heard that the entropy value partially depends on the number of classes (unfortunately, I can't remember where) and tends to be smaller with fewer classes. If so, could you - very briefly - explain why? Thank you very much.


I have not heard that, but I can imagine that entropy might be lower for smaller numbers of classes due to more classes being needed to clearly separate clusters of people. 


The LL, BIC and aLRT all favor the 4-class solution. But my entropy is .75 and the diagonal probabilities are .89, .92, .73 and .69. Do I have to stick with the 3-class model, although the 4-class model has better fit statistics and the additional group is distinct from the other groups?


With such clear support for 4 classes, I would not base the decision on entropy.


Hi Professor Muthen, I have a question about how entropy is calculated for latent growth mixture models. I'm trying to compare results between Mplus and the lcmm package in R. The R package does not include an entropy output, hence I'm writing a function for entropy based on the one used in Mplus. I'm referring to this document https://www.statmodel.com/download/relatinglca.pdf pg. 8. Is this the correct entropy equation used for Mplus models (when Analysis = Mixture)? Thank you for your time.


We use the formula on the first page of the technical appendix on our website at http://www.statmodel.com/techappen.shtml ("Variable-specific entropy contribution").
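For anyone porting this outside Mplus, the relative entropy statistic discussed throughout this thread is commonly written as E = 1 - [sum over i,k of -p_ik ln(p_ik)] / (n ln K), where p_ik is person i's posterior probability for class k, n is the sample size, and K the number of classes. A minimal NumPy sketch of that formula (the function name and toy matrices are my own, not Mplus code; check against the linked appendix before relying on it):

```python
import numpy as np

def relative_entropy(post):
    """Relative entropy for an n x K matrix of posterior class
    probabilities (rows sum to 1). Returns values near 1 for crisp
    classification and near 0 for uniform (maximally fuzzy) posteriors."""
    n, K = post.shape
    # Clip so that 0 * log(0) is treated as 0 rather than NaN.
    p = np.clip(post, 1e-12, 1.0)
    row_entropy = -np.sum(post * np.log(p), axis=1)
    return 1.0 - row_entropy.sum() / (n * np.log(K))

# Crisp posteriors: each person clearly belongs to one class.
crisp = np.array([[0.98, 0.02], [0.01, 0.99], [0.97, 0.03]])
# Fuzzy posteriors: classes are hard to tell apart.
fuzzy = np.array([[0.55, 0.45], [0.48, 0.52], [0.60, 0.40]])
print(relative_entropy(crisp))   # well above the 0.8 rule of thumb
print(relative_entropy(fuzzy))   # close to 0
```

The toy matrices illustrate the 0.8 rule of thumb mentioned earlier in the thread: only the crisp assignment clears it.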

Paulette posted on Tuesday, December 06, 2016 - 1:15 pm



Hi! Is looking for a 0.8 entropy unrealistic for certain fields? I work in education, where we usually can't get an R2 in regression above 0.3-0.4, and it is a given in the field that it is really unrealistic to get much more than that. So I was wondering if an entropy of 0.45-0.5 might be as much as I can get in my field.


I think it depends much more on the context/substance and the variables than on the field of study. In some applications it seems easy/common to get over 0.9. Note also that the model can be quite good statistically even with a smallish entropy. And the distinction between some of the classes can be good while for others it is harder to tell them apart (this is seen in the classification table, which carries more information than the single-number entropy summary).
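To make the point about the classification table concrete: it groups people by their most likely class and averages the posterior probabilities within each group, so a high diagonal entry flags a well-separated class even when the overall entropy is modest. A rough illustration of how such a table could be computed from a posterior matrix (my own sketch, not Mplus output):

```python
import numpy as np

def classification_table(post):
    """Rows: most likely class; columns: average posterior probability
    for each class among people assigned to that row's class."""
    modal = post.argmax(axis=1)      # most likely class per person
    K = post.shape[1]
    table = np.zeros((K, K))
    for k in range(K):
        members = post[modal == k]
        if len(members):
            table[k] = members.mean(axis=0)
    return table

# Class 0 is cleanly separated; classes 1 and 2 overlap, so the
# overall entropy is modest but class 0's diagonal stays high.
post = np.array([
    [0.97, 0.02, 0.01],
    [0.95, 0.03, 0.02],
    [0.05, 0.60, 0.35],
    [0.03, 0.55, 0.42],
    [0.04, 0.38, 0.58],
])
print(np.round(classification_table(post), 2))
```

Reading the diagonal of the printed table shows class 0 near 0.96 while classes 1 and 2 sit below 0.6, which is exactly the kind of detail a single entropy number hides.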

lisa Car posted on Saturday, February 18, 2017 - 12:09 pm



Hello, I am trying to consider ways in which I can improve the clinical utility/translation of my LPA model, as some of the indicators (recommended by an expert working group in the field) are not necessarily clinically friendly. I am wondering if I can do a sort of sensitivity analysis by examining individual indicator entropy values, excluding those indicators with low values, and rerunning to see if we get similar classes. I am also going to employ PCA before LPA to try and streamline my model. I am wondering: 1. what are your thoughts on this approach, and 2. if you think it viable, do you have any recommendations re: cut values for poor individual entropy? Thanks in advance.


The UG index points to the option Entropy on page 749, where it also gives a reference that is on our website under Papers. This gives variable-specific entropy. Good entropy is maybe >0.8; bad entropy is hard to specify. Note also that the classification table gives more detailed information than the single-number entropy. You may have certain classes that are easy to distinguish between, whereas it is hard for certain other classes.
