

Prof. Muthen, I have a longitudinal dataset (3 time points) with four informants measuring children's social competence. I have run my analysis using Latent Transition Analysis over time and Latent Class Analysis at baseline. The BIC is best with 3 groups, but the division makes the most logical sense with 2 groups. Is there a threshold where, even if the BIC improves, I should go with the more parsimonious model if it makes more sense? Also, when using 2 groups in the LTA, the entropy is .93 and the predictability is high, but 99% of participants remain in their original group and do not transition. If I go with the less complicated LCA for parsimony, the LCA entropy is only .80. Is it better to proceed with the less complex model or the one with higher entropy? Thank you! Karen 


The standard change in BIC is a minimum of 10. I would tend to go with the number of classes that makes sense theoretically. I wonder whether the third class is really different from the other two or just a variation. 
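To illustrate the ΔBIC rule of thumb mentioned above, here is a minimal Python sketch; the log-likelihoods and parameter counts are hypothetical placeholders, not values from this analysis (only the sample size, N = 319, comes from the thread):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # BIC = -2*logL + p*ln(N); lower is better
    return -2 * log_likelihood + n_params * math.log(n_obs)

n = 319  # sample size reported later in the thread

# hypothetical fit results for 2- vs 3-class models
bic_2 = bic(log_likelihood=-2100.0, n_params=17, n_obs=n)
bic_3 = bic(log_likelihood=-2065.0, n_params=26, n_obs=n)

delta = bic_2 - bic_3  # positive means the 3-class model has the lower BIC
print(round(bic_2, 1), round(bic_3, 1), round(delta, 1))
```

With these made-up numbers ΔBIC exceeds 10, so by the rule of thumb alone the 3-class model would be preferred; the advice above is that theory can still override that when the extra class is only a minor variation.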

Karen Silcox posted on Wednesday, February 02, 2011  12:21 pm



Prof. Muthen, thank you for your answer to my first question! You affirmed my gut feeling. Is this true even if the change in the BIC is greater than 10? To make sure I understand: if the change is less than 10, then it probably does not indicate that more classes make a better model. When you have time, could I also get your input on my second question, namely whether it is better to use the 2-group LCA results or the 2-group LTA results explained above? One more question has come up: when I compare the cross-tabs of the group classifications using the LCA vs. the LTA, 67 cases out of 319 are categorized in different groups. In other words, 32 cases are classified in latent class 1 using LCA and latent status 2 using LTA, and 35 cases are in latent class 2 using LCA and latent status 1 using LTA across time. My groups seem to be distinct when comparing means on predictors and on the outcome variable of social competence. Should I be concerned about this apparent discrepancy when I run the model one way vs. the other? Thank you so much! 


These differences are likely due to measurement noninvariance of the thresholds. Look at the threshold profiles for the LCAs; they are likely different. In the LTA, they are held equal across time. 
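One way to see exactly where the two solutions disagree is to cross-tabulate the modal class assignments from the two models. A minimal Python sketch, using short made-up assignment vectors rather than the actual data from this thread:

```python
from collections import Counter

# hypothetical modal class assignments for the same children
lca_class = [1, 1, 2, 2, 1, 2, 1, 2]  # from the LCA
lta_class = [1, 2, 2, 2, 1, 1, 1, 2]  # from the LTA

# cross-tab: (LCA class, LTA class) -> count
crosstab = Counter(zip(lca_class, lta_class))
disagreements = sum(n for (a, b), n in crosstab.items() if a != b)

print(dict(crosstab))
print("disagreements:", disagreements)
```

The off-diagonal cells of the cross-tab are the cases classified differently by the two models (the 32 + 35 = 67 cases described in the question).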


Prof. Muthen, thank you for your help! I went through the Mplus user's guide for more information about thresholds, and example 7.4 seemed like the closest match: 2 latent classes with continuous latent class indicators. However, when I showed the error messages to a friend familiar with Mplus, she said I couldn't use standardized coefficients with my data. I searched the discussion board, and you have mentioned using raw data instead a few times. I have 4 datasets, one from each informant: child, mother, teacher, and observer. In order to compare the scores, I created a z-score for each report and used these scores as the latent class indicators. I can't use the raw data because the questions are on different metrics; even within the individual datasets, some of the questions are on different metrics. Therefore, can I not trust the results from Mplus using these scores? Using LCA and LTA in Mplus was the plan suggested by my dissertation committee, so if I cannot use it, can you guide me to any reference about not using Mplus with standardized coefficients? Thank you so much. 


Did each informant answer the same questions? 


No, I am using four different datasets (child self-report, mother report, teacher report, observer report) in which all informants answered questions about the same child, but not exactly the same questions. I selected questions from each dataset that best assessed children's social well-being. Different questions on different scales were used to create a "global" score for each child from each informant at each wave. Thank you. 


You should not standardize these variables. Latent class indicators do not need to be on the same metric. Use them as is. You will only compare means of the same variables across classes. 
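For reference on the entropy values discussed earlier (.93 vs. .80): the entropy Mplus reports is the standard relative-entropy summary of how cleanly people are classified, E = 1 − Σᵢ Σₖ (−pᵢₖ ln pᵢₖ) / (N ln K). A minimal Python sketch with toy posterior probabilities, not values from this analysis:

```python
import math

def relative_entropy(post_probs):
    """Relative entropy for a mixture model, as reported by Mplus.

    post_probs: one row per person, each row holding that person's
    posterior class probabilities (rows sum to 1).
    Returns a value in [0, 1]; higher means cleaner classification.
    """
    n = len(post_probs)
    k = len(post_probs[0])
    total = 0.0
    for row in post_probs:
        for p in row:
            if p > 0:  # 0 * ln(0) contributes nothing
                total += -p * math.log(p)
    return 1 - total / (n * math.log(k))

# toy posteriors: a clean classification vs. a fuzzy one
clean = [[0.99, 0.01], [0.02, 0.98], [0.97, 0.03]]
fuzzy = [[0.60, 0.40], [0.55, 0.45], [0.50, 0.50]]

print(relative_entropy(clean))  # near 1
print(relative_entropy(fuzzy))  # near 0
```

Note that entropy describes classification precision, not model fit, which is why a higher-entropy model is not automatically the better-fitting or more sensible one.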
