Hello, I generated data for two groups, C1 ~ N(0,1) and C2 ~ N(0,1), under a 21-item condition. Of the 21 items, 6 are invariant. Each group has 1000 people. After running the LCA model, I have trouble telling whether C1 is assigned to class 1 or class 2 in the Mplus results. For example, if I set C1 to class 1 and C2 to class 2 in the generated data, how do I know whether Mplus will label C1 as 1 or 2 after running the LCA? Another question: how should I interpret the thresholds in the Mplus results? Are they similar to item difficulty in IRT? In addition, are there any defaults in the LCA model in Mplus?
I am sorry to confuse you. I generated data for two groups. One group's ability distribution is N(0,1), and the other group's ability distribution is N(0.5,1). In these data, I set the lower-ability group's membership to 1 and the higher-ability group's membership to 2, so I know the true membership in the generated data. I want to know what percentage of memberships Mplus will assign correctly, so I need to compare the Mplus output with my generated data. In the output, the membership value is either 1 or 2, but I have trouble knowing which value is assigned to the lower-ability group.
Are you saying that you generate data from an IRT model with different latent ability distribution means in two groups and then analyze them with LCA to try to recover the group membership? Or are you not using LCA but factor mixture modeling?
Yes, I generated data from an IRT model with different ability distribution means in two groups and used LCA to find the correct group membership. In the generated data, I have the true membership for each group. So now I am trying to find out how correctly LCA will classify the membership.
So in your second step - not knowing the group membership - it sounds like you are saying that you use LCA, not factor mixture modeling. Note that LCA with m classes usually recovers factor analysis (IRT) with m-1 factors. It seems you should instead use UG ex 7.17 to recover your unknown groups. You get the most likely class membership if you request cprobs in the SAVEDATA command (see UG).
Ali posted on Tuesday, February 04, 2014 - 9:41 pm
Thank you! Sorry, I still have a question. I simulated 10 datasets from an IRT model with different ability distributions, and I have the true membership in the generated data; then I ran LCA 10 times. Using the command "SAVEDATA: SAVE = CPROB;", the output shows the probability of a person belonging to class 1 or class 2. But how can I tell whether class 1 in the Mplus output corresponds to class 1 in the generated data? I mean, if I assign class 1 as 1 in my generated data, how can I know whether that class will be labeled 1 or 2 in the Mplus output? Are there any parameter estimates in Mplus that I could compare with the generating values?
You have to infer which class it is by comparing the estimated means/probabilities of the observed variables to those that generated the data. But, again, I am not sure that applying an LCA model to data generated by a multiple-group IRT model is a good idea - you need 2 classes to capture the IRT ability factor and then you need 2 more classes to capture the two groups; it might be hard to sort things out from those 4 classes.
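One way to automate that comparison, assuming you have tabulated the estimated class-specific item probabilities and the probabilities implied by the generating model, is to try every permutation of class labels and keep the one that minimizes the total absolute difference. A minimal sketch (the variable names and the example numbers are hypothetical, not from any actual run):

```python
from itertools import permutations

def match_classes(estimated, true):
    """Find the relabeling of estimated classes that best matches
    the true class-specific item probabilities.
    estimated, true: dicts mapping class label -> list of P(u = category 2)
    per item. Returns a dict {estimated label: true label}."""
    est_labels = list(estimated)
    true_labels = list(true)
    best_perm, best_dist = None, float("inf")
    for perm in permutations(true_labels):
        # Total absolute difference under this relabeling
        dist = sum(
            abs(e - t)
            for est_l, true_l in zip(est_labels, perm)
            for e, t in zip(estimated[est_l], true[true_l])
        )
        if dist < best_dist:
            best_dist, best_perm = dist, perm
    return dict(zip(est_labels, best_perm))

# Hypothetical example: estimated class 1 resembles true class 2
estimated = {1: [0.66, 0.33], 2: [0.90, 0.65]}
true      = {1: [0.88, 0.62], 2: [0.65, 0.31]}
print(match_classes(estimated, true))  # {1: 2, 2: 1}
```

Brute-force permutation matching is fine for the 2-class case here; with many classes a smarter assignment method would be needed.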
Ali posted on Thursday, February 06, 2014 - 2:01 am
Thank you for your suggestion. I tried to use the item probabilities, but I find it is not easy to match them to the item difficulties in the generated data. For example, the Mplus results show:

Latent Class 1
  U1  Category 1: 0.105  Category 2: 0.895
  U2  Category 1: 0.350  Category 2: 0.650
Latent Class 2
  U1  Category 1: 0.342  Category 2: 0.658
  U2  Category 1: 0.670  Category 2: 0.330

I set item 1 to the same difficulty, -1.5, in group 1 and group 2, while item 2 has difficulty -1 in group 1 and 1 in group 2. However, I cannot tell the true membership from these probabilities. Also, why does Mplus estimate thresholds in LCA? From the LCA formula, there seem to be no threshold parameters.
I think your difficulties are related to my earlier statement:
"But, again, I am not sure that applying an LCA model to data generated by a multiple-group IRT model is a good idea - you need 2 classes to capture the IRT ability factor and then you need 2 more classes to capture the two groups; it might be hard to sort things out from those 4 classes."
Instead of LCA, I think you should use the model of ex 7.17 that I mentioned.
All Mplus models with categorical outcomes use threshold parameters. See the handouts and videos for Topic 2 and Topic 5 on our website.
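For a binary item under the logit parameterization, the threshold within a class is the log-odds of the lower category, so you can convert between thresholds and the category probabilities printed in the output. A sketch of that conversion (assuming the default logit link; the numbers echo the probabilities quoted above):

```python
import math

def threshold_to_probs(tau):
    """Convert a logit threshold for a binary item into within-class
    category probabilities: P(category 1) = logistic(tau)."""
    p_cat1 = 1.0 / (1.0 + math.exp(-tau))
    return p_cat1, 1.0 - p_cat1

def prob_to_threshold(p_cat1):
    """Inverse: recover the threshold from P(category 1)."""
    return math.log(p_cat1 / (1.0 - p_cat1))

# P(category 1) = 0.105 for U1 in class 1 implies a threshold near -2.14
tau = prob_to_threshold(0.105)
print(round(tau, 2))            # -2.14
print(threshold_to_probs(tau))  # roughly (0.105, 0.895)
```

This is why a threshold plays a role analogous to an IRT difficulty: a lower threshold means a higher probability of endorsing the item within that class.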
The mean is the logit for the probability of being in class 1. For k classes, k-1 logits are estimated.
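To make this concrete: with the last class's logit fixed at 0 as the reference, the class proportions come from a softmax over the k-1 estimated means plus that reference zero. A sketch with a made-up logit value:

```python
import math

def class_probs(logits):
    """Class proportions from k-1 estimated class-mean logits;
    the last class is the reference with logit fixed at 0."""
    full = list(logits) + [0.0]          # append the reference class
    exps = [math.exp(x) for x in full]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 2-class case with an estimated mean [C#1] of 0.5
p1, p2 = class_probs([0.5])
print(round(p1, 3), round(p2, 3))  # 0.622 0.378
```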
[f*1] means that the mean of f is free with a starting value of 1.
yuki toyota posted on Tuesday, March 11, 2014 - 8:44 am
Hello, I am trying to analyse my data with the LPA method. My study outline is:
(1) to examine how many latent classes of emotional intelligence (EI) appear in the LPA model;
(2) to compare differences in mental health (such as depression and burnout) across the detected EI classes.
So far I have managed to do (1), and the results suggest that the 3-class solution is best. But I don't know how to conduct (2). Some sources told me it can be carried out with an ANOVA in SPSS, but I don't understand how that is possible.
I will attach my program. Could someone help me?
DATA: FILE IS "F:\Latent profile analysis\EIrawdata.txt";
VARIABLE: NAMES ARE v1 v2 v3 v4;
  USEVARIABLES ARE v1 v2 v3 v4;
  CLASSES = c(3);
ANALYSIS: TYPE IS MIXTURE;
MODEL: %OVERALL%
  %C#1% [v1-v4];
  %C#2% [v1-v4];
  %C#3% [v1-v4];
PLOT: TYPE = PLOT3;
  SERIES IS v1(1) v2(2) v3(3) v4(4);
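One simple way to do step (2) is to save each person's most likely class (via the SAVEDATA command with cprobs, as discussed earlier in this thread), then compare the mental-health scores across the three classes with a one-way ANOVA in any package. Be aware that this classify-then-analyze approach treats the assigned class as known and ignores classification error. As a sketch of the ANOVA computation itself, with entirely made-up depression scores:

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic for a list of groups (lists of scores).
    Classify-then-analyze sketch; ignores classification uncertainty."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical depression scores grouped by most-likely EI class
class1 = [12, 14, 11, 13]
class2 = [18, 17, 19, 20]
class3 = [15, 14, 16, 15]
f, df1, df2 = one_way_anova([class1, class2, class3])
print(f"F({df1},{df2}) = {f:.2f}")  # F(2,9) = 27.25
```

A significant F would indicate that mean depression differs across the detected classes, after which pairwise comparisons could follow.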