Jen Rose posted on Wednesday, March 12, 2014 - 8:34 am
I have a 3 class latent class analysis model with categorical observed variables, several categorical covariates and a categorical distal outcome.
When I look at the latent class odds ratios results comparing Class 1 to Class 2, I see that for a couple of my categorical observed variables, the odds ratios are not significant based on the p value. I'm interpreting this as meaning that there are no significant differences in the logits between the two classes for these variables. However, when I look at the 95% confidence intervals for the latent class odds ratio results (using CINTERVAL), the confidence intervals do not cross 1.0, which suggests to me that the difference between the 2 classes is significant.
Can you tell me what might be causing the discrepancy between the p values for the latent class odds ratio results and the corresponding confidence intervals?
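One general way such a discrepancy can arise (an illustration with hypothetical numbers, not a statement about what Mplus does internally): a Wald p-value computed on one scale need not agree with a confidence interval constructed on another scale. The sketch below shows a z-test on the logit scale that is not significant while a symmetric interval built directly on the odds-ratio scale excludes 1.

```python
import math
from statistics import NormalDist

# hypothetical logit-scale estimate and standard error (illustrative only)
b, se_b = -2.0, 1.5

# Wald test on the logit scale: z = b / se -> not significant
z = b / se_b
p = 2 * (1 - NormalDist().cdf(abs(z)))   # roughly 0.18

# odds ratio and a delta-method SE on the OR scale
or_hat = math.exp(b)                     # roughly 0.135
se_or = or_hat * se_b

# a symmetric 95% interval built on the OR scale excludes 1
lo, hi = or_hat - 1.96 * se_or, or_hat + 1.96 * se_or
print(f"p = {p:.3f}, OR interval = ({lo:.3f}, {hi:.3f})")
```

By contrast, exponentiating the logit-scale limits, exp(-2 ± 1.96 × 1.5) ≈ (0.007, 2.56), gives an interval that does include 1 and agrees with the p-value. Which construction your software uses determines which pair of numbers you see, so checking the scale on which each quantity is computed is the first diagnostic.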
I'm running an LCA with 3 classes and I'm interested in obtaining 95% confidence intervals for the item-response probabilities and latent class probabilities. I have added the CINTERVAL option to the output.
Are these the equations used to estimate the latent class probabilities for a 3-class model?

class 1: EXP(C#1)/(1+EXP(C#1)+EXP(C#2))
class 2: EXP(C#2)/(1+EXP(C#1)+EXP(C#2))
class 3: 1/(1+EXP(C#1)+EXP(C#2))
If so, do I then just plug in the lower estimates for both C#1 and C#2 into each equation to obtain the lower confidence limit? And then plug in the upper estimates for the upper limit?
I have tried this, but I have found that these confidence limits are not symmetric and sometimes do not contain the actual estimate. Thanks in advance for your help!
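Those are the standard multinomial-logit (softmax) expressions with class 3 as the reference class. A minimal numeric check, using hypothetical logit values rather than anything from a real output:

```python
import math

def class_probabilities(c1, c2):
    """Latent class probabilities from the two logits C#1 and C#2
    (class 3 is the reference class)."""
    denom = 1 + math.exp(c1) + math.exp(c2)
    return math.exp(c1) / denom, math.exp(c2) / denom, 1 / denom

# hypothetical logit estimates (illustrative only)
p1, p2, p3 = class_probabilities(0.5, -0.2)
print(round(p1 + p2 + p3, 10))   # 1.0 -- the three probabilities always sum to 1
```

Note that each probability depends on both logits at once, which is exactly why plugging the two lower logit limits into a formula does not produce the lower limit of the probability.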
Q2. I don't think that approach works when there isn't a one-to-one relation between the logit and the probability. Try bootstrapping to capture any non-symmetry in the distribution of the probability estimate. Or use Bayes.
Even with these bootstrapped results, I still run into the problem of not being able to apply an equation to somehow exponentiate the upper and lower limits. Is there another equation I can use to exponentiate this output and obtain reportable confidence intervals?
Alternatively, is there another way to obtain confidence intervals for latent class probabilities for a 3-class LCA? Thanks again for your help!
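One way to get intervals directly on the probability scale, in the spirit of the bootstrap suggestion above, is to simulate draws of the logits from their normal approximation, push each draw through the class-probability formula, and read off percentiles. The sketch below uses hypothetical estimates and standard errors and, for simplicity, ignores the covariance between the two logits (a full version would use the parameter covariance matrix, e.g. from TECH3, or resample the data).

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical logit estimates and standard errors (illustrative only)
est = np.array([0.5, -0.2])        # [C#1, C#2]
se = np.array([0.15, 0.20])

# draw logits from their normal approximation, then map each draw
# through the class-probability (softmax) formula
draws = rng.normal(est, se, size=(10000, 2))
denom = 1 + np.exp(draws).sum(axis=1)
probs = np.column_stack([np.exp(draws[:, 0]) / denom,
                         np.exp(draws[:, 1]) / denom,
                         1 / denom])

# percentile 95% intervals on the probability scale (can be asymmetric,
# and always stay inside [0, 1])
lo, hi = np.percentile(probs, [2.5, 97.5], axis=0)
for k in range(3):
    print(f"class {k + 1}: [{lo[k]:.3f}, {hi[k]:.3f}]")
```

Because the percentiles are taken after the nonlinear transformation, the resulting intervals need not be symmetric around the point estimate, which matches the behavior described in the question.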
Thanks! In the output, I do get the confidence intervals in probability scale for the categorical variables. However, I don't get confidence intervals in the probability scale for the latent class probabilities because I have 3 latent classes - I only get confidence intervals of the model results for the latent class probabilities. How can I use these model results to obtain the confidence intervals in the probability scale for the latent class probabilities?
However, there are 3 errors and no indication of what is incorrect. The error markers immediately follow the second closing parenthesis of each term in the formula - after EXP(C#2) just before the division sign, after EXP(C#1) just before the addition sign, and after EXP(C#2) just before the final closing parenthesis.
How do I fix this error, and is the code correct? Additionally, the output for the PROB estimate and its upper and lower limits is the same single value. I assume this is because of the error in the formula? If this error is fixed, would I obtain BCBOOTSTRAP cintervals?
Finally, C#1 and C#2 are the latent classes, so I haven't defined them in the input - do I need to somehow define them in order to obtain the cintervals? If so, how do I code that? Thanks again!!
What is considered a large latent class odds ratio with ordinal variables? I'm running an LCA with four ordinal variables, each with 6 categories. I've heard that latent class ORs greater than 5 or less than 0.2 are large for binary variables. Is the same true for ordinal variables? It seems like the ORs should be expected to be smaller. Thanks!
I have a similar question to CB above (June 3 2015).
I would like confidence intervals around estimated class prevalences from a GMM.
When I apply the formulas given in CB's June 3rd post by hand, I am able to get from the logit estimates back to the prevalence estimates. But when I use the lower and upper limits of the logits (from the cinterval command) in those formulas, the limits do not always contain the prevalence estimate.
I cannot use cinterval(bootstrap) because my data are weighted so I am using MLR.
It sounds like you want a non-symmetric interval for the prevalence estimate (what you get is the symmetric one). When several logit estimates are used to create the prevalence, I don't think plugging in their limits works. I'm not sure how to go about this given your weights.
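For a symmetric interval that properly accounts for several correlated logits, the delta method is one standard option that remains available with MLR and weights; the covariance matrix of the logit estimates can be requested in Mplus via TECH3. The sketch below uses hypothetical estimates and a hypothetical covariance matrix, purely to show the mechanics.

```python
import numpy as np

def softmax_probs(logits):
    """Class probabilities from K-1 multinomial logits (last class is reference)."""
    z = np.concatenate([logits, [0.0]])
    e = np.exp(z)
    return e / e.sum()

def delta_method_se(logits, cov):
    """Delta-method standard errors for each class probability.
    cov is the (K-1)x(K-1) covariance matrix of the logit estimates."""
    p = softmax_probs(logits)
    K = p.size
    # Jacobian of the K probabilities w.r.t. the K-1 free logits:
    # dp_k/dz_j = p_k * (1[k == j] - p_j)
    J = np.empty((K, K - 1))
    for k in range(K):
        for j in range(K - 1):
            J[k, j] = p[k] * ((1.0 if k == j else 0.0) - p[j])
    # variance of each probability: diagonal of J @ cov @ J.T
    var = np.einsum('kj,jl,kl->k', J, cov, J)
    return p, np.sqrt(var)

# hypothetical logit estimates and covariance matrix (e.g., from TECH3)
est = np.array([0.5, -0.2])
cov = np.array([[0.0225, 0.005],
                [0.005,  0.04]])
p, se = delta_method_se(est, cov)
for k in range(3):
    print(f"class {k + 1}: {p[k]:.3f} +/- 1.96 * {se[k]:.3f}")
```

This reproduces a symmetric interval on the probability scale; for a non-symmetric one, simulating logit draws from the same covariance matrix and taking percentiles of the transformed draws is a common alternative.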
Errors:

*** ERROR
EXP(CLASS#1) /(1 + EXP(CLASS#1) + EXP(CLASS#2) )
(the ^ is placed just after the first closing paren)

*** ERROR
EXP(CLASS#1) /(1 + EXP(CLASS#1) + EXP(CLASS#2) )
(the ^ is placed just after the second closing paren)

*** ERROR
EXP(CLASS#1) /(1 + EXP(CLASS#1) + EXP(CLASS#2) )
(the ^ is placed just after the third closing paren)
In addition, the estimate I got did not match the estimated prevalence of class #1. Also, it listed that value for all of the cinterval results (i.e., the lower .5%, lower 2.5%, etc. results were all the same).
I imagine I must have written the statement incorrectly. Any advice would be greatly appreciated.
Please send your output to Support so they can help.
Nicole S posted on Wednesday, May 09, 2018 - 7:22 pm
Hi, I am working on an analysis in which we present predicted transition probabilities for a 3-level categorical variable measured at two time points. Our syntax is based on that covered in the Mplus web note on LTA. There are a couple of issues we would appreciate clarification on.
An example of our syntax for estimating probabilities (for low values of our focal covariate) is as follows:
1. A reviewer has requested confidence intervals for the transition probabilities, but we are unclear on the most appropriate way to obtain these. Is it appropriate to simply present the upper and lower limits obtained using CINTERVAL? In a response to an earlier post above, bootstrapping is suggested, but we are unable to use bootstrapping with our estimation method.
2. We also compared our probabilities generated with the above syntax to those obtained with the LTA calculator. Most probabilities were identical, but for some specified values of our focal covariate, one probability will differ by .001. This means the transition probabilities for C1#2 sum to .999 rather than 1. Is it safe to assume that this is just a result of rounding errors?
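On question 2, a sum of .999 is exactly the pattern rounding produces. A tiny illustration with hypothetical values:

```python
# Hypothetical exact transition probabilities for one row; they sum to 1
row = [0.3334, 0.3333, 0.3333]

# Rounded to 3 decimals, as printed in typical output, the row sums to 0.999
rounded = [round(p, 3) for p in row]
print(round(sum(row), 4))       # 1.0
print(round(sum(rounded), 3))   # 0.999
```

A discrepancy of .001 in one entry, and a row sum of .999, are both consistent with two tools rounding the same underlying estimates at slightly different points in the calculation.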