I was wondering how Mplus calculates the values that appear in the "Average Latent Class Probabilities for Most Likely Latent Class Membership by Latent Class" section of an LCA output. Also, is the math Mplus uses similar for an LPA?
1 and 2 are identical when the model has no restriction on the categorical latent variable distribution.
Number 3 is different because it is based on the single most likely class membership, not the set of estimated posterior probabilities for each class. They would only agree in the case of perfect classification.
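A minimal sketch of how that "Average Latent Class Probabilities for Most Likely Latent Class Membership" table can be computed from individual posterior probabilities. The posterior values here are made up purely for illustration; in a real analysis they come from the saved CPROBABILITIES output.

```python
import numpy as np

# Hypothetical posterior probabilities for 6 subjects and 2 classes
# (each row sums to 1; values are illustrative, not from any real run).
post = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.7, 0.3],
    [0.2, 0.8],
    [0.1, 0.9],
    [0.4, 0.6],
])

# Most likely (modal) class for each subject.
modal = post.argmax(axis=1)

# Entry (j, k) of the table: the average posterior probability of
# class k among subjects whose most likely class is j.
avg = np.vstack([post[modal == j].mean(axis=0)
                 for j in range(post.shape[1])])
print(avg.round(3))
```

With perfect classification each row of `avg` would have a 1 on the diagonal and 0 elsewhere; off-diagonal mass reflects classification uncertainty.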
Thanks for the explanation on the posterior probabilities.
For the purpose of reporting, which counts/proportions should I use? In my case, the model proportions are quite similar to the classification proportions, so I guess either one is OK. But in case they differ by sizable amounts, which one should we use?
Also I get a warning message
*** WARNING in MODEL command
All variables are uncorrelated with all other variables within class.
Check that this is what is intended.
Under conditional independence, isn't this what we expect? Please advise.
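The two kinds of proportions asked about above can be sketched from the same posterior matrix: model-estimated proportions average the posterior probabilities, while classification proportions count modal assignments. The numbers are illustrative only.

```python
import numpy as np

# Hypothetical posterior probabilities (rows sum to 1); the values
# are made up to show how the two proportion types can diverge.
post = np.array([
    [0.90, 0.10],
    [0.60, 0.40],
    [0.55, 0.45],
    [0.30, 0.70],
])

# Model-estimated proportions: the average posterior probability
# of each class over all subjects.
model_prop = post.mean(axis=0)

# Classification proportions: the share of subjects whose most
# likely class is each class.
counts = np.bincount(post.argmax(axis=1), minlength=post.shape[1])
class_prop = counts / len(post)

print(model_prop, class_prop)
```

When classification is uncertain (posteriors near 0.5), the modal counts can drift noticeably away from the model-estimated proportions, as they do here.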
When I run my LTA model with full measurement invariance across 3 time points, I get the following warning:
There are more equality labels given than there are parameters. Some equality labels will not be used. Equality: 1-6
Here is my code: %c1#1% [vic1$1 vic2$1 vic3$1 vic4$1 vic5$1 dep] (1-6);
I want these 6 parameters to be held equal across all 3 time points in my LTA model. What could be the reason for this warning?
My other question is: the class counts and proportions (for a 3-class model) I get from my LTA model are different from what I observed in the cross-sectional analysis, where I ran 3 LCAs for the 3 time points. What could be the reason for this?
I would need to see the full output to answer your first question. Please send the output and your license number to email@example.com.
The reason you don't get the same classes from the LTA versus the separate LCA's could be due to different observations being analyzed due to listwise deletion, equalities in the LTA which cause the model not to fit, or the need for not only first-order but perhaps second-order effects.
Thanks for your reply to my question about the warnings. I have another question.
Can we use continuous variables (items) in LCA to determine the groups? For example, I have a depression indicator which is continuous, and I used it in my cross-sectional LCA to determine the latent classes together with another 5 binary indicators. Mplus did not give any error, and I just want to find out whether we can use a continuous item in LCA. If possible, can you give me a reference to look into?
Yes. Latent class indicators can be continuous, censored, binary, ordered categorical (ordinal), unordered categorical (nominal), counts, or combinations of these variable types. I don't know of a reference that says this. But a model with all continuous latent class indicators is often referred to as Latent Profile Analysis.
I have a longitudinal data set with 3 time points, and at each time point I'm measuring a latent construct that has 3 classes using a 6-item measurement model, 5 of the items binary and one continuous.
My LTA model gives me 35 parameters, and I do not understand how to determine the number of parameters in my model.
I used full measurement invariance and a first-order model with non-homogeneous transition probabilities.
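One plausible accounting that reproduces 35 free parameters for a model like the one described, assuming Mplus-style defaults: thresholds and the continuous item's means are class-specific but held equal across time, the continuous item's variance is equal across classes but free across time, and the initial-class and transition logits are unconstrained. This is a sketch of the bookkeeping, not a guarantee of how any particular model was set up.

```python
K = 3           # latent classes at each time point
T = 3           # time points
n_binary = 5    # binary indicators (1 threshold each)
n_cont = 1      # continuous indicator

thresholds = n_binary * K             # 15: class-specific, time-invariant
cont_means = n_cont * K               # 3: class-specific, time-invariant
cont_vars = n_cont * T                # 3: one per time point (assumed free over time)
initial = K - 1                       # 2: multinomial logits for c1
transitions = (T - 1) * ((K - 1) + (K - 1) ** 2)  # 12: c2 on c1 and c3 on c2
                                      # (K-1 intercepts + (K-1)^2 slopes each)

total = thresholds + cont_means + cont_vars + initial + transitions
print(total)  # 35
```

If the continuous item's variance were instead held equal across time as well, the count would drop by 2, so the variance assumption is worth checking against the actual output.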
My colleagues and I were wondering whether the following classification matrix (Average Latent Class Probabilities for Most Likely Latent Class Membership) implies enough classification quality to publish our 4-group solution. We want to predict class membership.
If you look at the information you provide, you see that it is class 2 that is contributing most to the low entropy. It has a large number of observations in class 3. It is not always the case that classes are distinct. It depends on what the classes are. Do the classes make sense? Does the fact that they are not distinct make sense? If you decide on these four classes, you can add covariates to the model by regressing the latent class variable on a set of covariates.
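The entropy discussed here is the relative entropy Mplus reports, which scales average classification uncertainty to the 0-1 range (1 = perfect classification). A small sketch with made-up posteriors shows how a poorly classified group pulls it down:

```python
import numpy as np

# Hypothetical posterior probabilities for 4 subjects, 3 classes.
post = np.array([
    [0.95, 0.03, 0.02],
    [0.10, 0.80, 0.10],
    [0.30, 0.30, 0.40],   # an ambiguous subject lowers entropy
    [0.05, 0.05, 0.90],
])
n, K = post.shape

# Relative entropy: 1 - (sum of -p*ln(p) over all cells) / (n * ln K).
p = np.clip(post, 1e-12, 1.0)        # guard against log(0)
entropy = 1 - (-(p * np.log(p)).sum() / (n * np.log(K)))
print(round(entropy, 3))  # → 0.464
```

Dropping or resolving the ambiguous row would push the value toward 1, which mirrors the observation above that class 2's overlap with class 3 is what drags entropy down.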
Thank you! Further analysis revealed interesting outcomes. Covariates lead to higher entropy. But the BLRT points to a 3-group solution, while BIC points to the 4-group solution. Since I've heard one should trust the BLRT more, I decided on the 3-group solution. The problem is: depending on which of my two possible sets of covariates I use, the 3-group solution absorbs different ones of the smaller groups from the 4-group solution. The 3-group solution does not seem stable with respect to which of the smaller groups are isolated. Since, theoretically, all of the smaller groups from the 4-class solution make sense, I have a hard time deciding which 3-group solution I can trust more. Could that be a reason to switch back to the 4 groups?
It seems that I am having a similar issue to the one posted above by chinthaka kuruwita, September 26, 2007 - 10:59 am. Specifically, I am trying to test measurement invariance as I build my LTA model. However, I keep getting this warning for all of my classes when I run the constrained model:
*** WARNING in MODEL command
There are more equality labels given than there are parameters. Some equality labels will not be used.