Anonymous posted on Tuesday, May 11, 2004 - 10:42 am
I have run my LCA analyses separately by gender and have found some differences in the conditional probabilities and class sizes. Is there a way I can test to see if the differences in the measurement models are statistically significant? They are all in the same ballpark in terms of probability size, so I wanted to see if there's a way to test this.
I am working on an LCA and need to determine whether I need to run the LCA separately by gender or can combine the genders. When I run the LCA for each gender, I find 3 classes for each model, and the classes look fairly comparable. To "officially" test that I can combine the genders and run one LCA, I ran the LCA with 3 classes and included the gender variable in the KNOWNCLASS statement (H0 loglikelihood = -13622.577). I then ran the LCA with 3 classes without the KNOWNCLASS statement (H0 loglikelihood = -11724.179). I want to be sure I am calculating the likelihood ratio test correctly. I took -2*(the difference between the H0 loglikelihoods of the two models) and got 3796.8 for my chi-square value. My confusion is with the degrees of freedom.
1. Do I subtract the DFs reported with the Likelihood ratio chi-square for each model? If so, I get DF=2010, and the p-value for the likelihood ratio test is 0.0.
2. If I've done this correctly, does the test indicate that I should run the LCA separately for each gender because the LCA structure is significantly different for each gender?
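For what it's worth, the arithmetic of the likelihood ratio statistic described above can be checked with a short script. This is a minimal sketch, not Mplus output; the loglikelihood values are the ones quoted in the post, and the degrees of freedom for the usual difference test come from the difference in the number of free parameters between the two nested models.

```python
# Likelihood ratio test arithmetic for two nested mixture models.
# The H0 loglikelihoods below are the values quoted in the post.
ll_knownclass = -13622.577  # 3-class LCA with gender in KNOWNCLASS
ll_pooled     = -11724.179  # 3-class LCA without KNOWNCLASS

# The LR statistic is -2 times the loglikelihood difference:
lr_chi_square = -2 * (ll_knownclass - ll_pooled)
print(round(lr_chi_square, 1))  # 3796.8

# Degrees of freedom for the test: the difference in the number of
# free parameters between the two models (check the "Number of free
# parameters" line in each Mplus output), not the chi-square DFs
# reported against the data, since adding gender changes the table.
```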
Linda, thank you for your reply. I'm very new to Mplus and am struggling with the code that would restrict the parameters across classes. Would it be possible for you to guide me through the MODEL statements that would do this? Thanks.
Thank you again for your quick response. I have read both chapters and consulted with a colleague who attended the Mplus training in Alexandria. She helped me write some code that she thinks is assigning the appropriate constraints. However, it does not seem to me to be doing what you described. I would greatly appreciate it if you could review this code and let me know if it is correct. I will be sending my program and data set to the support e-mail address. Thank you in advance for your help!
Hello. I am trying to specify a model with several classes using the KNOWNCLASS option. If there are only two classes (e.g., gender), I can estimate the probability of one class by taking the mean of this class in the whole group (%OVERALL%) and inserting it into 1/(1+exp(-mean)). If there are more than two classes, this method fails. Is there a way to obtain the probabilities of the (more than two) classes from their latent means?
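With more than two classes, the latent class means are multinomial logits relative to the last class, whose logit is fixed at 0 for identification, so the two-class formula generalizes to a softmax. A minimal sketch in plain Python (the function name is mine, not anything from Mplus):

```python
import math

def class_probabilities(logit_means):
    """Convert the K-1 latent class mean logits (last class fixed at 0)
    into K class membership probabilities via the softmax transform."""
    logits = list(logit_means) + [0.0]   # append the reference class
    exps = [math.exp(a) for a in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Two-class case: reduces to the 1/(1+exp(-mean)) formula in the post.
print(class_probabilities([0.5])[0])          # same as 1/(1+exp(-0.5))

# Three-class case: probabilities sum to 1 by construction.
print(class_probabilities([1.0, -0.3]))
```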
I am building an LTA model with three latent classes at the first two time points and four classes at the third time point. I would like to establish partial measurement invariance (i.e., full invariance between the first two time points as well as two of the classes from the third time point). The way that it appears from the baseline models is that the third class splits into two classes at the third time point.
Is it possible to test this simultaneously or does it require sequential testing of the full invariance across the two time points followed by partial invariance of only two of the classes across all three time points?
Would the KNOWNCLASS statement be used for both of these analyses?
Also, the code for such analyses is becoming increasingly complicated. For the three time points, is it necessary to specify separate models for each time point?
Thank you very much in advance for your help with these issues.
The KNOWNCLASS option is not used to compare across time. For the two classes that do not change, you can compare across time using equality constraints. Examples of these equalities can be seen in Examples 8.13 and 8.14. This requires comparing a model without the equalities to a model with them.
db40 posted on Thursday, August 06, 2015 - 5:56 am
I have run a multi-group analysis using the KNOWNCLASS command and have settled upon a 4-class solution, grouping the model by gender.
Upon inspection, the classes appear to be qualitatively and quantitatively different. Are there any examples or code in the manual that can help me learn how to test whether the models are statistically significantly different from each other?
I might add the gender split is M=43% /F=56% if that makes any difference.