Karen Silcox posted on Wednesday, February 02, 2011 - 1:27 am
I have a longitudinal dataset (3 timepoints) with four informants measuring children's social competence. I have run my analysis using latent transition analysis (LTA) over time and latent class analysis (LCA) at baseline. The BIC is best with 3 classes, but the division makes the most logical sense with 2 classes. Is there a threshold where, even if the BIC improves, I should go with the more parsimonious model if it makes more sense?
Also, when using 2 classes in the LTA, the entropy is .93 and the predictability is high, but 99% of participants remain in their original class and do not transition. If I go with the less complicated LCA for parsimony, the LCA entropy is only .80. Is it better to proceed with the less complex model or the one with higher entropy?
The standard rule of thumb is that the BIC should change by a minimum of 10. I would tend to go with the number of classes that makes sense theoretically. I wonder if the third class is really different from the other two or just a variation of one of them.
Karen Silcox posted on Wednesday, February 02, 2011 - 6:21 pm
Thank you for your answer to my first question! You affirmed my gut feeling. Is this true even if the change in the BIC is greater than 10? To make sure I understand: if the change is less than 10, then it probably does not indicate that more classes is a better model.
When you have time, could I also get your input on my second question: whether it is better to use the 2-class LCA results or the 2-class LTA results described above?
I have one more question that has come up. When I compare the cross tabs of the class assignments from the LCA vs. the LTA, 67 cases out of 319 are categorized in different groups. In other words, 32 cases are classified in latent class 1 by the LCA but latent status 2 by the LTA, and 35 cases are in latent class 2 by the LCA but latent status 1 by the LTA across time. My groups seem to be distinct when comparing means on predictors and on the outcome variable of social competence. Should I be concerned about this apparent discrepancy when I run the model one way vs. the other?
Thank you for your help! I went through the Mplus User's Guide for more information about thresholds, and example 7.4 seemed like the closest match: 2 latent classes with continuous latent class indicators. However, I showed the error messages to a friend familiar with Mplus, and she said she couldn't use standardized scores with her data. I searched the discussion board, and you mentioned using raw data instead a few times.
I have 4 datasets, one from each informant: child, mother, teacher, and observer. In order to compare the scores, I created a z-score for each report and used these scores as the latent class indicators. I can't use the raw data because the questions are on different metrics; even within the individual datasets, some of the questions are on different metrics. Therefore, can I not trust the results from Mplus using these scores? Using LCA and LTA in Mplus was the plan suggested by my dissertation committee, so if I cannot use it, can you guide me to any reference about not using Mplus with standardized scores? Thank you so much.
No, I am using four different datasets (child self-report, mother report, teacher report, observer report) whose informants all answered questions about the same child but did not answer exactly the same questions. I selected the questions from each dataset that best assessed children's social well-being. Different questions on different scales were used to create a "global" score for each child from each informant at each wave.
Is there specific Mplus code to fix a particular latent transition when using LTA? In other words, if a particular transition is improbable (high resilience to high depression), how do you fix this transition probability to zero? The current model statement is D2 ON D1 R2 R1 (where R is resilience and D is depression).
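One common approach, sketched below with assumed class numbering and labels (check your own output to see which class is which), uses the fact that Mplus parameterizes transitions as multinomial logits of the later latent class variable, so a transition can be pushed to essentially zero by fixing the relevant logit at an extreme value (such as 15 or -15) in a class-specific MODEL statement. Assuming R1 class 1 is the high-resilience class and D2 class 2 (the last class, which serves as the reference) is high depression:

MODEL R1:
%R1#1%           ! assumed: high-resilience class at time 1
[D2#1@15];       ! logit fixed so P(D2 = class 1) is near 1 for these
                 ! cases, i.e., P(high depression at time 2) is near 0

Because D2 is also regressed on D1, R2, and R1, the logits add, so a fixed intercept alone may not pin the transition at zero; inspect the estimated transition probabilities in the output to verify. Recent Mplus versions also offer PARAMETERIZATION = PROBABILITY, which allows transition probabilities to be fixed directly on the probability scale.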