
Roger Brown posted on Monday, February 16, 2015  9:44 am



I have been playing around with the new univariate entropy measures in an LCA with a variety of binary items. The sample size is fairly large (>1,500 cases). Once I go beyond 17 binary items, univariate entropy fails to estimate (999.000) and I receive the following:

THE MODEL ESTIMATION TERMINATED NORMALLY
THE CHI-SQUARE TEST IS NOT COMPUTED BECAUSE THE FREQUENCY TABLE FOR THE LATENT CLASS INDICATOR MODEL PART IS TOO LARGE.

I am assuming there is a limitation on the size of the frequency table? Is there a way to override this? Thanks. Roger


Yes, you can add "OUTPUT: TECH10;".


Hi, I am working with a model similar to example 7.20 in the user guide (p. 175), including both a continuous latent variable (f1), a categorical latent variable (c), and a set of indicators (y1-y3). Is it possible to compute the univariate entropy for the continuous latent variable (f1 in example 7.20)? I have no problems obtaining the univariate entropy values for my indicators y1-y3. Thanks in advance, David


No, that is not available. 


Thanks for the quick reply! 


Another question on univariate entropy: my model contains a single categorical latent variable c and a number of indicators y1-y6. Some of these indicators (y1-y4) correspond to single items on a questionnaire and together make up a factor (f), whilst the other indicators (y5-y6) correspond to the domains of a scale and are based on the mean of a number of items. Is it possible to directly compare the univariate entropy of indicator y1 with that of indicator y6? In terms of example 7.20, I would be trying to compare the univariate entropy of indicator y3 with that of a hypothetical indicator y7 which is not part of any factor. Thanks in advance,


I don't see any problem with comparing those.


Hi, another question based on example 7.20 in the user guide. I understand that I cannot directly compute the univariate entropy for the continuous latent variable f1. Is there another method to compute that value, e.g., summing the univariate entropies for the indicators y1-y3? Thanks, David


That sounds reasonable; use the average entropy of the indicators.
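If it helps, the averaging itself is trivial once the univariate entropies have been read off the output; a minimal sketch, with hypothetical values for y1-y3:

```python
# Hypothetical univariate entropies read from the Mplus output for y1-y3.
uni_entropies = [0.41, 0.36, 0.52]

# Summarize the indicators' joint contribution by their average entropy.
avg_entropy = sum(uni_entropies) / len(uni_entropies)
print(f"average univariate entropy: {avg_entropy:.2f}")  # 0.43
```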


Hello, Is it possible to manually compute univariate entropies based on Mplus output? Similar to how model entropy can be calculated based on the cprobs output. Thanks in advance, David 


You have to set up the "univariate" models, each with one dependent variable and all parameters fixed to the parameters from the multivariate model. At that point the univariate model entropy is the univariate entropy, and you can compute it manually using the cprobs from the univariate model.
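The manual computation uses the standard relative-entropy formula that Mplus reports, E_K = 1 - Σ_i Σ_k (-p_ik ln p_ik) / (N ln K), applied to the posterior class probabilities (cprobs) of the given model. A minimal sketch in Python, assuming the cprobs have already been read into an N-by-K array:

```python
import numpy as np

def relative_entropy(cprobs):
    """Mplus-style relative entropy from posterior class probabilities
    (one row per case, one column per class):
        E_K = 1 - sum_i sum_k (-p_ik * ln p_ik) / (N * ln K)
    """
    p = np.asarray(cprobs, dtype=float)
    n, k = p.shape
    # ln(0) is avoided; the 0 * ln 0 terms are taken as 0.
    logp = np.log(np.where(p > 0, p, 1.0))
    h = -(p * logp).sum()
    return 1.0 - h / (n * np.log(k))

# Perfectly separated posteriors give 1; uniform posteriors give 0.
print(relative_entropy([[1.0, 0.0], [0.0, 1.0]]))  # 1.0
print(relative_entropy([[0.5, 0.5], [0.5, 0.5]]))  # 0.0
```

The result lies on the same 0-1 scale as the entropy value Mplus prints.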


Thanks! 


Hi, if you were to specify which univariate entropy values would correspond to low, medium, and high univariate entropy, what would those values be? Thanks in advance, David


Don't quote me, but I think of:
High: 0.8 to 1.0
Medium: 0.6 to 0.8
Low: below 0.6


Hi, I'm wondering whether univariate entropy values can be used to evaluate the quality of indicators for longitudinal mixture models. I'm trying to troubleshoot a 4-class LCGA model for a 5-category ordinal outcome using 7 waves of data that has low entropy (0.53). I had thought that the low entropy was largely because of substantial sample attrition. However, I requested the univariate entropy values and they are all uniformly low (range: 0.270 to 0.295!). Are univariate entropy values valid for these types of models? Does this suggest that we should reconsider our latent class indicators?


Sharon, currently the univariate entropy doesn't take the missing data into account; it sort of implies what the entropy would be if all the data were observed. We will change this in the next Mplus version to account properly for the missing data. Thanks for pointing this out. Tihomir


How do I get the univariate entropy for each item, so that I can select the items with higher univariate entropy values to represent the latent variable?


Variable-specific entropy is given in the output since Mplus version 7.3.


How do I get the univariate entropy in the output? What is the command?


Our online UG says that you specify ENTROPY in the OUTPUT command.
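For reference, the option goes in the OUTPUT command of the input file. A minimal sketch (the data file and variable names here are hypothetical):

```
TITLE:     LCA requesting variable-specific entropy
DATA:      FILE = mydata.dat;       ! hypothetical file name
VARIABLE:  NAMES = u1-u6;
           CATEGORICAL = u1-u6;
           CLASSES = c(3);
ANALYSIS:  TYPE = MIXTURE;
OUTPUT:    ENTROPY;                 ! prints the univariate entropy per indicator
```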


Dear all, I am trying to compute the univariate entropy for 11 continuous indicators of my LPA, and I am having trouble reading the output. The value for the first variable is 999.000, and the output is not as neat as the one in the Asparouhov and Muthen (2018) paper; it seems shifted. In fact, the final row ends with a value that is not in the same column as the corresponding variable, roughly:

Univariate Entropy
RT_GO              SW
_______            _____
999.000    0.202   0.194

I am wondering, therefore, if I should disregard the 999.000 and shift the reading of my univariate entropy values. Also, only two of my indicators make a substantial contribution to my overall entropy; the remaining values range between .191 and .205. Would you get rid of all of them, or remove only those below .200? Any suggestion or resource on this matter would be greatly appreciated. Kind regards, Silvia


UPDATE: I have just realized that the reason my univariate entropy output is shifted is that my censored variable is not fully reported. That is, I have 10 variable names (instead of 11) but 11 univariate entropy values. I reckon that the 999.000 value, although it is matched with another indicator, actually refers to my censored variable, but I am not entirely sure if this is the proper way to read the output. Kind regards, Silvia


Please send your output to Support along with your license number. 


Dear Dr Muthen, thank you for your reply. I have checked my code again and have solved the issue. Thank you for your help. Best wishes, Silvia

Mplushope posted on Saturday, December 21, 2019  2:57 pm



I am running a latent class analysis with categorical indicators. The probabilities for the most likely latent class membership are high (above 0.9), the overall entropy is 0.7, and other fit indices support the model too (for example, TECH10). But the univariate entropy for all indicators is low (ranging from 0.1 to 0.35). How can I interpret that issue? Thank you so much for your answer.


I don't see any problems. The idea is that univariate entropy can be thought of as being additive, though of course not directly. It measures how much each indicator contributes to the class identification. When you add all the information together to get the overall entropy (discounting overlapping information), you are essentially adding the entropy in some loose sense. If you have 20 or 50 indicators, each with entropy 0.1 or 0.2, it would not be surprising for the overall entropy to be close to 1. The univariate entropy is primarily meant to be compared within a model to determine which indicators contribute the most to the latent class identification, although it is revealing in an absolute sense as well.
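This additivity argument can be made concrete with a toy calculation (not Mplus output; a hypothetical two-class model with 20 conditionally independent binary items, item probabilities 0.6 vs. 0.4, and equal class sizes). Each item alone has a very low univariate entropy, yet the exact overall entropy of the full response pattern is many times higher:

```python
import math

def entropy_term(p):
    """Shannon entropy (in nats) of a two-class posterior (p, 1-p)."""
    h = 0.0
    for q in (p, 1.0 - p):
        if q > 0:
            h -= q * math.log(q)
    return h

# Hypothetical 2-class model: J conditionally independent binary items,
# P(y=1 | class 1) = 0.6, P(y=1 | class 2) = 0.4, equal class sizes.
p1, p2, J = 0.6, 0.4, 20

# Univariate entropy: one item alone yields a 0.6 / 0.4 posterior.
e_uni = 1.0 - entropy_term(0.6) / math.log(2)

# Overall entropy with all J items, computed exactly: the posterior for
# a response pattern with k ones depends only on k, and k is binomial
# within each class.
expected_h = 0.0
for k in range(J + 1):
    like1 = math.comb(J, k) * p1**k * (1 - p1)**(J - k)
    like2 = math.comb(J, k) * p2**k * (1 - p2)**(J - k)
    post1 = like1 / (like1 + like2)        # posterior for class 1
    prob_k = 0.5 * like1 + 0.5 * like2     # marginal probability of k
    expected_h += prob_k * entropy_term(post1)
e_overall = 1.0 - expected_h / math.log(2)

print(f"univariate entropy per item: {e_uni:.3f}")
print(f"overall entropy, {J} items:  {e_overall:.3f}")
```

The point is only qualitative: individually weak indicators can jointly identify the classes quite well, which is why low univariate entropies alongside a decent overall entropy are not contradictory.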
