
Jen Rose posted on Wednesday, March 12, 2014  8:34 am



Hi, I have a 3-class latent class analysis model with categorical observed variables, several categorical covariates, and a categorical distal outcome. When I look at the latent class odds ratio results comparing Class 1 to Class 2, I see that for a couple of my categorical observed variables the odds ratios are not significant based on the p-value. I'm interpreting this as meaning that there are no significant differences in the logits between the two classes for these variables. However, when I look at the 95% confidence intervals for the latent class odds ratio results (using CINTERVAL), the confidence intervals do not cross 1.0, which suggests to me that the difference between the two classes is significant. Can you tell me what might be causing the discrepancy between the p-values for the latent class odds ratio results and the corresponding confidence intervals? Thanks, Jen 


The z-scores and confidence intervals will agree only for the symmetric confidence interval. If this is not the issue, please send your output and license number to support@statmodel.com. 

CB posted on Wednesday, June 03, 2015  12:39 pm



Hello Drs. Muthen, I'm running an LCA with 3 classes and I'm interested in obtaining 95% confidence intervals for the item-response probabilities and latent class probabilities. I have added the CINTERVAL option to the OUTPUT command. Are these the equations used to estimate the latent class probabilities for a 3-class model?

class 1: EXP(C#1)/(1+EXP(C#1)+EXP(C#2))
class 2: EXP(C#2)/(1+EXP(C#1)+EXP(C#2))
class 3: 1/(1+EXP(C#1)+EXP(C#2))

If so, do I then just plug the lower estimates for both C#1 and C#2 into each equation to obtain the lower confidence limit, and then plug in the upper estimates for the upper limit? I have tried this, but I have found that these confidence limits are not symmetric and sometimes do not contain the actual estimate. Thanks in advance for your help! 


Q1. Yes. Q2. I don't think that approach works when there isn't a 1-1 relation between the logit and the probability. Try bootstrapping to capture any non-symmetry in the distribution of the probability estimate. Or use Bayes. 

CB posted on Thursday, June 04, 2015  6:22 am



Thank you for your quick response! Based on your suggestion, I performed bootstrapping to obtain confidence intervals by adding CINTERVAL(BOOTSTRAP) and setting the number of bootstrap draws to perform. Here is some of my output:

        Estimate   Std. Error   Lower 2.5%   Upper 2.5%
C#1:    -0.587     0.449        -1.467        0.293
C#2:    -3.503     0.488        -4.459       -2.547

Even with these bootstrapped results, I still run into the problem of not being able to apply an equation to somehow exponentiate the upper and lower limits. Is there another equation I can use to exponentiate this output and obtain reportable confidence intervals? Alternatively, is there another way to obtain confidence intervals for latent class probabilities for a 3-class LCA? Thanks again for your help! 


You want to consider your probability expressions, like Prob = EXP(C#1)/(1+EXP(C#1)+EXP(C#2)). When you ask for CINTERVAL(BCBOOTSTRAP) you get a 95% CI for Prob. That's what you want to use. It is a non-symmetric CI. 
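As a concrete sketch, the bias-corrected bootstrap intervals mentioned here can be requested by combining a BOOTSTRAP setting in the ANALYSIS command with the CINTERVAL option in the OUTPUT command (the 500 draws below are an arbitrary number chosen for illustration; increase as needed):

```
ANALYSIS:
  TYPE = MIXTURE;
  BOOTSTRAP = 500;

OUTPUT:
  CINTERVAL(BCBOOTSTRAP);
```

Plain percentile bootstrap intervals would be requested with CINTERVAL(BOOTSTRAP) instead.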

CB posted on Friday, June 05, 2015  12:45 pm



Thanks! In the output, I do get the confidence intervals on the probability scale for the categorical variables. However, I don't get confidence intervals on the probability scale for the latent class probabilities because I have 3 latent classes; I only get confidence intervals of model results for the latent class probabilities. How can I use these model results to obtain confidence intervals on the probability scale for the latent class probabilities? 


But when you put Prob = EXP(C#1)/(1+EXP(C#1)+EXP(C#2)) in MODEL CONSTRAINT, you will get the BCBOOTSTRAP cintervals. 

CB posted on Monday, June 08, 2015  9:12 am



I apologize for all of the follow-up questions; I'm still relatively new to Mplus. This is the code I added to obtain the BCBOOTSTRAP cintervals:

MODEL CONSTRAINT:
NEW(PROB);
PROB = EXP(C#2) / (1 + EXP(C#1) + EXP(C#2));

However, there are 3 errors and no indication of what is incorrect. The indicated errors immediately follow the second closing parenthesis of each term in the formula: EXP(C#2) just before the division sign, EXP(C#1) just before the addition sign, and EXP(C#2) just before the final closing parenthesis. How do I fix this error, and is the code correct? Additionally, the output for the PROB estimate and its upper and lower limits is the same single estimate. I assume this is because of the error in the formula? If this error is fixed, would I obtain BCBOOTSTRAP cintervals? Finally, C#1 and C#2 are the latent classes, so I haven't defined them in the input. Do I need to somehow define them in order to obtain the cintervals? If so, how do I code that? Thanks again!! 


Please send the output and your license number to support@statmodel.com. 

CB posted on Monday, June 08, 2015  10:04 am



Unfortunately, my license was purchased more than a year ago. Do you have any thoughts as to why I'm getting this error and/or how I can fix it? Or any resources that could help with this matter? 


Read about MODEL CONSTRAINT in the user's guide to see how to label the parameters you want to refer to in MODEL CONSTRAINT. You can label the latent class parameters in this way. 
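A minimal sketch of what such labeling might look like for a 3-class model, assuming the categorical latent variable is named c (the labels a1 and a2 are arbitrary names introduced here for illustration; C#1 and C#2 cannot be referenced directly in MODEL CONSTRAINT, which is what produces the errors above):

```
MODEL:
  %OVERALL%
  [c#1] (a1);
  [c#2] (a2);

MODEL CONSTRAINT:
  NEW(prob1 prob2 prob3);
  prob1 = EXP(a1) / (1 + EXP(a1) + EXP(a2));
  prob2 = EXP(a2) / (1 + EXP(a1) + EXP(a2));
  prob3 = 1 / (1 + EXP(a1) + EXP(a2));
```

With CINTERVAL(BCBOOTSTRAP) in the OUTPUT command, the new parameters prob1-prob3 should then receive non-symmetric bootstrap intervals on the probability scale.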


Hi, What is considered a large latent class odds ratio with ordinal variables? I'm running an LCA with four ordinal variables, each with 6 categories. I've heard that latent class ORs greater than 5 or less than 0.2 for binary variables are considered large. Is the same true for ordinal variables? It seems like the ORs should be expected to be smaller. Thanks! Eric 


I'm afraid I am not aware of such standards. 

Diana P posted on Sunday, June 05, 2016  9:43 am



Hello, I have a question similar to CB's above (June 3, 2015). I would like confidence intervals around estimated class prevalences from a GMM. When I apply the formulas given in CB's June 3rd post by hand, I am able to get from the logit estimates back to the prevalence estimates. But when I use the lower and upper limits of the logits (from the CINTERVAL command) in those formulas, the limits do not always contain the prevalence estimate. I cannot use CINTERVAL(BOOTSTRAP) because my data are weighted, so I am using MLR. Do you know of a solution to this? Thank you! Diana 


It sounds like you want a non-symmetric interval for the prevalence estimate (you get the symmetric one). If several logit estimates are used to create the prevalence, I don't think using their limits works. I'm not sure how to go about this given your weights. 

Diana P posted on Sunday, June 05, 2016  1:05 pm



Dr. Muthen, Thank you for your response. Where is the symmetrical interval given? Thank you, Diana 


If you ask for CINTERVAL in the OUTPUT command, you will get the symmetric interval also for a quantity that you have defined in MODEL CONSTRAINT (where you get the prevalence). 
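The error messages in Diana's next post suggest Class#1 was referenced directly in MODEL CONSTRAINT; as in CB's case earlier in the thread, the class mean parameters need labels first. A minimal sketch, assuming the categorical latent variable is named class and using b1 and b2 as hypothetical label names:

```
MODEL:
  %OVERALL%
  [class#1] (b1);
  [class#2] (b2);

MODEL CONSTRAINT:
  NEW(prob1);
  prob1 = EXP(b1) / (1 + EXP(b1) + EXP(b2));

OUTPUT:
  CINTERVAL;
```

With MLR, this yields a symmetric (delta-method) interval around prob1, consistent with the reply above.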

Diana P posted on Sunday, June 05, 2016  3:19 pm



Dr. Muthen, I attempted this using this statement, but received errors that I had trouble understanding:

Model Constraint:
NEW(PROB1);
PROB1 = EXP(Class#1) / (1 + EXP(Class#1) + EXP(Class#2));

Errors (three in total, each repeating the expression with a ^ marker placed after the first, second, and third closing parenthesis, respectively):

*** ERROR
EXP(CLASS#1) /(1 + EXP(CLASS#1) + EXP(CLASS#2) )

In addition, the estimate I got did not match the estimated prevalence of class #1. Also, that same value was listed for all of the cinterval results (i.e., the lower .5%, lower 2.5%, etc. results were all identical). I imagine I must have written the statement incorrectly. Any advice would be greatly appreciated. Thank you, Diana 


Please send your output to Support so they can help. 
