Univariate entropy
 Roger Brown posted on Monday, February 16, 2015 - 9:44 am
I have been playing around with the new univariate entropy measures with an LCA using a variety of binary items. Sample size is fairly large >1500 cases. Once I seem to get larger than 17 binary items univariate entropy fails to estimate (999.000) and receive the following:

THE MODEL ESTIMATION TERMINATED NORMALLY

THE CHI-SQUARE TEST IS NOT COMPUTED BECAUSE THE FREQUENCY TABLE FOR THE LATENT CLASS INDICATOR MODEL PART IS TOO LARGE.

I am assuming this is a limitation on the size of the frequency table? Is there a way to override this? Thanks.

Roger
 Tihomir Asparouhov posted on Tuesday, February 17, 2015 - 11:09 am
Yes - you can add "output:tech10".
 David Buitenweg posted on Monday, March 16, 2015 - 8:24 am
Hi,
I am working with a model similar to Example 7.20 in the User's Guide (p. 175), which includes both a continuous latent variable (f1) and a categorical latent variable (c), along with a set of indicators (y1-y3). Is it possible to compute the univariate entropy for the continuous latent variable (f1 in Example 7.20)? I have no problems obtaining the univariate entropy values for my indicators y1-y3.
Thanks in advance,
David
 Bengt O. Muthen posted on Monday, March 16, 2015 - 9:17 am
No, that is not available.
 David Buitenweg posted on Monday, March 16, 2015 - 9:23 am
Thanks for the quick reply!
 David Buitenweg posted on Tuesday, March 17, 2015 - 4:31 am
Another question on univariate entropy:
My model contains a single categorical latent variable c and a number of indicators y1-y6. Some of these indicators (y1-y4) correspond to single items on a questionnaire and together make up a factor (f), while the others (y5-y6) correspond to the domains of a scale and are based on the mean of a number of items. Is it possible to directly compare the univariate entropy of indicator y1 with that of indicator y6?

In Example 7.20, I would be trying to compare the univariate entropy of indicator y3 with that of a hypothetical indicator y7 that is not part of any of the factors.

Thanks in advance,
 Tihomir Asparouhov posted on Tuesday, March 17, 2015 - 9:41 am
I don't see any problem with comparing those.
 David Buitenweg posted on Tuesday, March 24, 2015 - 1:34 am
Hi,

Another question based on Example 7.20 in the User's Guide. I understand that I cannot directly compute the univariate entropy for the continuous latent variable f1. Is there another way to obtain that value, e.g., summing the univariate entropies of the indicators y1-y3?

Thanks,

David
 Tihomir Asparouhov posted on Tuesday, March 24, 2015 - 9:09 am
That sounds reasonable - use the average entropy of the indicators.
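As a trivial sketch of that averaging in Python (the entropy values below are hypothetical placeholders, not taken from any real output):

```python
# hypothetical univariate entropies read from the Mplus output for y1-y3
univariate = {"y1": 0.42, "y2": 0.35, "y3": 0.51}

# average univariate entropy as a rough summary for f1's indicators
avg_entropy = sum(univariate.values()) / len(univariate)
print(round(avg_entropy, 3))  # 0.427
```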
 David Buitenweg posted on Friday, May 29, 2015 - 3:01 am
Hello,

Is it possible to manually compute univariate entropies from the Mplus output, similar to how the model entropy can be calculated from the cprobs output?

Thanks in advance,

David
 Tihomir Asparouhov posted on Friday, May 29, 2015 - 8:29 am
You have to set up the "univariate" models, each with one dependent variable and all parameters fixed to the estimates from the multivariate model. At that point, the univariate model's entropy is the univariate entropy, and you can compute it manually using the cprobs from the univariate model.
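For the last step, the entropy Mplus reports is the relative entropy, 1 - sum_ik(-p_ik * ln p_ik) / (n * ln K), computed from the n-by-K matrix of posterior class probabilities. A minimal Python sketch of that calculation (the function name is made up; in practice the probabilities would be read from the SAVEDATA cprobs file):

```python
import numpy as np

def relative_entropy(cprobs):
    """Relative entropy from an n-by-K matrix of posterior class
    probabilities (rows sum to 1)."""
    p = np.clip(np.asarray(cprobs, dtype=float), 1e-12, 1.0)
    n, k = p.shape
    # 1 - (total posterior uncertainty) / (maximum uncertainty n * ln K)
    return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))
```

Perfect classification (posteriors of 0 and 1) gives 1.0; completely uncertain posteriors (all equal to 1/K) give 0.0.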
 David Buitenweg posted on Tuesday, June 02, 2015 - 4:56 am
Thanks!
 David Buitenweg posted on Monday, August 24, 2015 - 12:43 pm
Hi,

If you were to specify which univariate entropy values would correspond with a low, medium and high univariate entropy, what would those values be?

Thanks in advance,
David
 Bengt O. Muthen posted on Monday, August 24, 2015 - 3:28 pm
Don't quote me, but I think of

High: 0.8 - 1.0

Medium: 0.6 - 0.8

Low: Below 0.6
 Sharon Simonton posted on Friday, September 04, 2015 - 9:24 am
Hi,

I'm wondering whether univariate entropy values can be used to evaluate the quality of indicators for longitudinal mixture models. I'm trying to troubleshoot a 4-class LCGA model for a 5-category ordinal outcome, using 7 waves of data, that has low entropy (0.53). I had thought the low entropy was largely due to substantial sample attrition. However, I requested the univariate entropy values and they are uniformly low (range: 0.270-0.295!). Are univariate entropy values valid for these types of models? Does this suggest that we should reconsider our latent class indicators?
 Tihomir Asparouhov posted on Friday, September 04, 2015 - 4:35 pm
Sharon

Currently the univariate entropy doesn't take missing data into account; it essentially reflects what the entropy would be if all the data were observed. We will change this in the next Mplus version to account properly for the missing data. Thanks for pointing this out.

Tihomir
 samah Zakaria Ahmed posted on Sunday, January 29, 2017 - 5:53 pm
How can I get the univariate entropy for each item, so that I can select the items with higher univariate entropy values to represent the latent variable?
 Bengt O. Muthen posted on Monday, January 30, 2017 - 3:15 pm
Variable-specific entropy is given in the output since version 7.3.
 samah Zakaria Ahmed posted on Wednesday, February 01, 2017 - 6:11 am
How do I get the univariate entropy in the output? What is the command?
 Bengt O. Muthen posted on Wednesday, February 01, 2017 - 1:52 pm
Our online UG says that you say

Entropy

in the Output command.
 Silvia Colonna posted on Monday, December 17, 2018 - 1:57 am
Dear all,

I am trying to compute the univariate entropy for 11 continuous indicators of my LPA.

I am having trouble reading the output. The value for the first variable is 999.000, and the output is not as neat as the one in the Asparouhov and Muthén (2018) paper; it seems shifted. In fact, the final row ends with a value that is not in the column of the corresponding variable (see below).

Univariate Entropy
RT_GO [] SW
_______ [] _____
999.000 [] 0.202 [] 0.194

(NB: the square brackets are just for formatting purposes)

I am wondering therefore if I should disregard the 999.000 and shift the reading of my univariate entropy values.

Also, only two of my indicators make a substantial contribution to my overall entropy. The remaining values range between .191 and .205. Would you get rid of all of them? Would you remove only those below .200? Any suggestion or resource on this matter would be greatly appreciated.

Kind regards
Silvia
 Silvia Colonna posted on Monday, December 17, 2018 - 2:14 am
UPDATE: I have just realized that the reason my univariate entropy output is shifted is that my censored variable is not fully reported. That is, I have 10 variable names (instead of 11) but 11 univariate entropy values. I reckon that the 999.000 value, although it is matched with another indicator, actually refers to my censored variable, but I am not entirely sure if this is the proper way to read the output.

Kind regards
Silvia
 Bengt O. Muthen posted on Monday, December 17, 2018 - 4:41 pm
Please send your output to Support along with your license number.
 Silvia Colonna posted on Thursday, December 20, 2018 - 5:05 am
Dear Dr Muthen,
thank you for your reply, I have checked my code again and I have solved the issue.
Thank you for your help.

Best wishes
Silvia
 Mplushope posted on Saturday, December 21, 2019 - 2:57 pm
I am running a latent class analysis with categorical indicators:

Probabilities for the Most Likely Latent Class Membership are high (above 0.9)

Overall Entropy is 0.7

other fit indices support the model too (for example tech10)


But the univariate entropy for all indicators is low (ranging from 0.1 to 0.35).

How can I interpret that issue?
Thank you so much for your answer.
 Tihomir Asparouhov posted on Monday, December 23, 2019 - 2:35 pm
I don't see any problem. The idea is that univariate entropy can be thought of as roughly additive, though not directly so, of course. It reflects how much each indicator contributes to the class identification. When you add all the information together to get the overall entropy (apart from overlapping information), you are essentially adding the entropies in a loose sense.

If you have 20 or 50 indicators with entropy 0.1 or 0.2 it would not be surprising to have the overall entropy close to 1.

The univariate entropy is primarily meant to be compared within a model, to determine which indicators contribute the most to the latent class identification, although it is revealing in an absolute sense as well.
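To put a number on that intuition, here is a small Python simulation (all parameter values are hypothetical): a two-class model with equal class sizes, where each binary indicator is endorsed with probability 0.7 in class 1 and 0.3 in class 2. A single such indicator yields a relative entropy of only about 0.12, yet 20 of them together push the overall entropy close to 0.9.

```python
import numpy as np

rng = np.random.default_rng(0)

def overall_entropy(n_items, n=20000):
    """Relative entropy of a simulated two-class mixture with n_items
    weak binary indicators (hypothetical item probabilities)."""
    p1, p2 = 0.7, 0.3                 # P(y=1 | class 1), P(y=1 | class 2)
    cls = rng.integers(0, 2, size=n)  # true class memberships, equal priors
    prob = np.where(cls[:, None] == 0, p1, p2)
    y = rng.random((n, n_items)) < prob          # simulated binary responses
    # posterior class probabilities via Bayes' rule (equal priors)
    like1 = np.where(y, p1, 1.0 - p1).prod(axis=1)
    like2 = np.where(y, p2, 1.0 - p2).prod(axis=1)
    post = np.column_stack([like1, like2])
    post /= post.sum(axis=1, keepdims=True)
    post = np.clip(post, 1e-12, 1.0)
    # relative entropy: 1 - total posterior uncertainty / (n * ln 2)
    return 1.0 + (post * np.log(post)).sum() / (n * np.log(2))

print(round(overall_entropy(1), 2))   # ~0.12: one weak indicator
print(round(overall_entropy(20), 2))  # close to 0.9: many weak indicators
```

Each indicator alone is nearly useless for classification, but because they carry non-overlapping information, the posteriors sharpen as indicators accumulate, which is why low univariate entropies are compatible with a respectable overall entropy.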