

Hi, I have two questions about how Mplus handles LCA with ordinal indicators. I read the Mplus manual and technical appendix, but I could not find the answer. I hope you can help me. First, I'd like to know whether it is true that treating the LCA indicators as categorical instead of nominal in Mplus just orders the thresholds differently, without any change in the df or fit of the model. If not, what kind of constraint does this entail? Second, suppose my ordinal indicators are on an interval scale. My understanding is that I should constrain the distances between thresholds to be equal, for each class/item combination. If this is correct, what is the best way to do it in Mplus? Can I do it via the OVERALL statement, even if the distance between thresholds is the same within classes but not between classes? Best, Guillaume


The LCA model does not have a continuous latent variable as the predictor of the indicators, but essentially a categorical one. In such a case, there is no difference between nominal and categorical indicator modeling. You can apply such constraints by labeling the parameters in MODEL and constraining them in MODEL CONSTRAINT. See the UG examples.


Hello Professor Muthen, Thank you for your answers. I'm using the following constraints to specify that the S ordinal response categories are on an interval scale:

.....
%c#1%
[u1$1*-1] (t1);
[u1$2*0] (t2);
[u1$3*1] (t3);
%c#2%
[u1$1*-1] (t4);
[u1$2*0] (t5);
[u1$3*1] (t6);
MODEL CONSTRAINT:
t3 = 2*t2 - t1;
t6 = 2*t5 - t4;
t2 = t1 + t5 - t4;
.....

I wonder if instead, it would be possible to specify the S-1 thresholds so that they are equal to (alpha + beta * s), where s = 1, 2, 3, ... S-1. Best, Guillaume


Hello, I think I got it. I use

.................
MODEL:
%c#1%
[u1$1] (t1);
[u1$2] (t2);
[u1$3] (t3);
%c#2%
[u1$1] (t4);
[u1$2] (t5);
[u1$3] (t6);
MODEL CONSTRAINT:
NEW(a1, a2, b);
t1 = a1 - b;
t2 = a1;
t3 = a1 + b;
t4 = a2 - b;
t5 = a2;
t6 = a2 + b;
.................

I find this a lot more straightforward to use. Also, I'd like to know: am I justified in my belief that interval ordinal items correspond to equidistant thresholds (here represented by the common b parameter)? I'm afraid I might be mistaken in my interpretation of the Mplus logistic proportional-odds model for LCA. Best, Guillaume


Either way should work. 
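For readers who want to see what the equidistant-threshold parameterization in the post above implies, here is a small Python sketch (the a and b values are purely illustrative, not Mplus estimates) of how thresholds of the form a - b, a, a + b translate into category probabilities under a cumulative-logit (proportional-odds) model:

```python
import math

def logistic(x):
    """Standard logistic CDF."""
    return 1.0 / (1.0 + math.exp(-x))

def category_probs(thresholds, eta=0.0):
    """Cumulative-logit (proportional-odds) model:
    P(u <= s) = logistic(tau_s - eta); category probabilities are
    successive differences of the cumulative probabilities."""
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [hi - lo for lo, hi in zip([0.0] + cum, cum)]

# Equidistant thresholds as in the post: t = a - b, a, a + b.
# The values a = 0.2, b = 1.3 are illustrative only.
a, b = 0.2, 1.3
taus = [a - b, a, a + b]
probs = category_probs(taus)  # four category probabilities, summing to 1
```

Note that equal spacing holds on the logit (threshold) scale; the resulting category probabilities themselves are not equally spaced.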


When performing an LCA with 8 ordinal indicators, I have found that my model does not converge when I constrain the thresholds to be equal across classes. I would think it would be easier to estimate a model where the thresholds are constrained than to force Mplus to freely estimate all of the thresholds at once. Secondly, I was wondering whether the best fit index to examine for an LCA with ordinal indicators is the BIC, versus the Pearson chi-square and likelihood ratio test. The Pearson chi-square and the likelihood ratio test give me a p-value of 1 with over 10,000 degrees of freedom whenever I don't constrain my thresholds to be equal across classes. Although I'm not getting any warnings that my model is not identified, I was wondering if the Pearson and likelihood ratio chi-square tests are hinting that something is wrong, or if I should just ignore them.


With equal thresholds across classes there is no difference between the classes, and a solution cannot be found. Use the Mplus default instead. When the Pearson and LR chi-square tests disagree, neither should be trusted. That, and the fact that the p-value is 1, indicate that too many cells in your frequency table are empty or nearly empty, so these tests cannot be used. Look at TECH10 instead.


As you stated, "With equal thresholds across classes there is no difference between the classes and a solution cannot be found." Does this also hold true if I am performing a factor mixture model? In other words, do equal thresholds across classes (regardless of what code I use in the %OVERALL% statement) result in there being no difference between the classes, such that a solution cannot be found? Thank you. Rachel


The general rule is that some parameters have to be different across classes for classes to be found. The more parameters that are different, the easier it is to work with the mixture model (you have an easier time finding the loglikelihood optimum). With factor mixture modeling, the thresholds could be equal across classes while the factor means are different, and perhaps also the factor loadings. That type of FMM, however, is harder to work with than when the thresholds/intercepts of the indicators vary across classes. 

RDU posted on Monday, November 17, 2008 - 11:40 pm



Hello. I am a novice trying to substantively interpret a factor mixture model with two classes. There are two continuous latent factors for the CFA portion of the model, with eight ordinal indicators, such that the factor model has a congeneric structure. I've read several articles on the topic, but I have not seen one that really explains how to substantively interpret the class means given in the output for class one. So what does it mean, exactly, to have a value of 3.78 for the first class mean? Thank you.


The class mean for the first class is relative to the zero mean of the reference class. With ordinal outcomes, you should think of ordinal logistic regression (proportional-odds logistic regression) where the factor is the predictor variable. A change in the factor value implies a change in the probabilities of the observed categories in line with ordinal logistic regression (a lower factor value gives lower probabilities for the higher categories). This means that the ultimate interpretation of a certain factor mean being lower in one class than in another can be understood in terms of different probabilities for the ordinal outcomes across classes. Although they do not discuss latent predictors, the Agresti categorical data analysis books are helpful for understanding the details of logistic regression.
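To make the class-mean interpretation concrete, here is a hedged Python sketch. The thresholds and loading below are invented for illustration (none of these numbers except the 3.78 class mean come from the thread); the point is only the direction of the effect under a proportional-odds link:

```python
import math

def logistic(x):
    """Standard logistic CDF."""
    return 1.0 / (1.0 + math.exp(-x))

def item_probs(thresholds, loading, factor):
    """P(u = s | factor) under a proportional-odds link:
    P(u <= s | factor) = logistic(tau_s - loading * factor)."""
    cum = [logistic(t - loading * factor) for t in thresholds] + [1.0]
    return [hi - lo for lo, hi in zip([0.0] + cum, cum)]

# Illustrative thresholds and loading for one 4-category item.
taus, lam = [-1.0, 0.5, 2.0], 1.2
ref_class = item_probs(taus, lam, 0.0)    # reference class: factor mean 0
class_one = item_probs(taus, lam, 3.78)   # class 1: factor mean 3.78
# With a positive loading, the higher factor mean in class 1 shifts
# probability mass toward the higher categories of the item.
```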

anonymous posted on Monday, November 05, 2012 - 2:41 pm



Is there any way to depict in graphical form latent class profiles with nominal indicators, akin to latent class profiles with dichotomous indicators? The Mplus plot function provides plots of the probabilities for every level of every class separately by indicator, but is there any way to get the "bigger picture" of the profiles by plotting values for each class across the indicators?

anonymous posted on Tuesday, November 06, 2012 - 11:25 am



In the post above, the indicators are ordinal not nominal. Apologies for the error. 


We don't provide this type of plot. 


Dear Discussion Community, I am running a CFA with four nominal indicators for one continuous factor. I only have access to Mplus 4.12 at my university. Therefore I have manually transformed my indicators into a set of binary variables. I am unsure about the right settings to make Mplus run the correct analysis. I am freeing all loadings and fixing the factor variance to 1, with LINK = PROBIT and CHOLESKY = OFF. 1. Are these settings correct? 2. If I calculate probabilities from the parameter estimates, would that be equivalent to the multinomial output that I receive when specifying variables as NOMINAL in later Mplus versions? I am running the model for several groups separately. Some cases led to these messages:

ONE OR MORE PARAMETERS WERE FIXED TO AVOID SINGULARITY OF THE INFORMATION MATRIX. THE SINGULARITY IS MOST LIKELY BECAUSE THE MODEL IS NOT IDENTIFIED, OR BECAUSE OF EMPTY CELLS IN THE JOINT DISTRIBUTION OF THE CATEGORICAL VARIABLES IN THE MODEL. THE FOLLOWING PARAMETERS WERE FIXED: 3 12

and

** Of the 256 cells in the latent class indicator table, 1 were deleted in the calculation of chi-square due to extreme values.

Some items are skewed, so empty cells might exist in some groups. 3. I am unsure about how to deal with these parameters being fixed. 4. Can the model be used under these conditions, or are the estimates too untrustworthy? Thank you very much.


I should add that I am testing the individual group models with a view to then proceeding with MIXTURE KNOWNCLASS models to test for measurement invariance.


1. You should use the default settings unless you have reason to do otherwise. 2. CATEGORICAL and NOMINAL are the same for binary items. 3-4. We would need to see the Version 7.11 output and your license number at support@statmodel.com. We can troubleshoot only the most recent version of Mplus.


Hello, I have some questions regarding nominal variables. 1. Some of my logits are high, e.g. 15, with the SE and p-value printed as 999. I am thinking this isn't a good thing. Do you know what the problem could be? 2. I want to make sure that I am coding and interpreting correctly. If I have coded a nominal variable using 0, 1, 2 and then used var#1, var#2 in the input, will the levels correspond, i.e. is 0 the referent category? 3. Lastly, I have found the example for converting the logits on pages 495-496 of the User's Guide, and I understand that I should only use the intercepts with the covariates set to 0, but I am still having difficulty. Here are the logits of one of my nominal variables: V3_POSAF#1 1.928, V3_POSAF#2 0.967; Intercepts: C#1 -0.721, C#2 0.721. I understand that you exponentiate the intercepts, exp(-0.721) = 0.486 and exp(0.721) = 2.056, so that p = 0.486/2.542 = 0.19 and 2.056/2.542 = 0.81. What I don't follow is how this gives me the probabilities for each level of my nominal variables. Thank you in advance.


1. A high logit means a high probability for that category. 2. In multinomial logistic regression with a nominal DV, the last category is the referent. 3. I don't understand where these four values come from: V3_POSAF#1 1.928, V3_POSAF#2 0.967; Intercepts: C#1 -0.721, C#2 0.721. That looks like a mixture model. You probably have to send your output to support@statmodel.com along with your license number.
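A hedged illustration of the referent-category rule in point 2: the probabilities come from a softmax in which the referent's logit is fixed at 0, so exp(0) = 1 must be included in the denominator along with the exponentiated intercepts. The intercept values here are made up for illustration, not the poster's estimates:

```python
import math

def class_probs(intercepts):
    """Multinomial logit with the last class as referent: its logit is 0,
    so exp(0) = 1 enters the denominator along with the other classes."""
    exps = [math.exp(v) for v in intercepts] + [1.0]  # referent appended
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative intercepts for C#1 and C#2; the referent class C#3
# is appended automatically inside the function.
p = class_probs([-0.7, 0.7])  # three probabilities summing to 1
```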


I am running a latent class analysis with 3 classes, 4 nominal indicators (u1-u4), and no covariates. Indicators u1 and u4 have 3 levels, and indicators u2 and u3 have 4 levels. I need help, please, on interpreting the model results (means) to obtain the conditional probabilities for each indicator response given class membership. For example, for latent class #3, I have:

Estimate
U1#1 0.013
U1#2 0.302
U2#1 13.732
U2#2 12.920
U2#3 12.861
U3#1 2.328
U3#2 1.446
U3#3 0.772
U4#1 0.527
U4#2 0.134

I thought that the means would be the logits of the conditional probabilities, with the conditional probability for the last level of each variable calculated as 1 minus the sum of the other categories, e.g. U1#3 = 1 - (U1#1 + U1#2). But this must be incorrect, because then (for example) the conditional probabilities for U2#1, U2#2, and U2#3 would all be > 99% and could not sum to 1.0. I saw references above to pages 495-496 of the User's Guide, but I don't think that applies here, since there are no covariates. How can I obtain the conditional probabilities, please?


Those pages at the end of Chapter 14 of the UG are applicable. The first part of the example we give uses only the intercepts (slopes = 0), so that is like not having covariates.
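As a sketch of that calculation in Python: the printed means are multinomial logits relative to the last category of each indicator, whose logit is fixed at 0, and the conditional probabilities come from a softmax over all categories. Applying this to the U2 logits quoted in the post above shows why the naive reading gives probabilities that seem too large:

```python
import math

def conditional_probs(logits):
    """Conditional response probabilities for one nominal indicator within
    one class: the printed means are logits relative to the last category,
    whose logit is fixed at 0."""
    exps = [math.exp(v) for v in logits] + [1.0]  # referent category
    total = sum(exps)
    return [e / total for e in exps]

# U2 logits in latent class #3 from the post above.
u2 = conditional_probs([13.732, 12.920, 12.861])
# The three large logits share essentially all of the probability mass,
# while the referent category U2#4 gets nearly zero; the four
# normalized probabilities still sum to 1.
```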


Dr. Muthen, Thanks for your reply. It looks like the probabilities calculated at the end of Chapter 14 are the probability of class membership conditional on the response pattern. I am interested in the probability of each response for each indicator, conditional on class membership. Is there any output that corresponds directly to those probabilities, or do I have to calculate them by hand? 


Sorry, please disregard the above post. I've figured it out. Thank you 


Hello Drs. Muthén and Muthén, I am having trouble interpreting the results from an LCA using ordinal (5-point Likert) data. In the past I have dichotomized such data, but I was trying to maintain more variability. My code is as follows:

TITLE: MT emotions LCA 3 class
DATA: FILE IS "C:\Users\ejangresearch\Desktop\Jeanne\emotions6tp.csv";
VARIABLE: NAMES ARE Participant EJ2 HF2 PD2 FR2 AX2 AS2 BR2 CT2 CF2 CR2;
MISSING ARE ALL (9999);
IDVARIABLE = Participant;
CATEGORICAL = EJ2 HF2 PD2 FR2 AX2 AS2 BR2 CT2 CF2 CR2;
USEVAR = EJ2 HF2 PD2 FR2 AX2 AS2 BR2 CT2 CF2 CR2;
CLASSES = C(3);
ANALYSIS: TYPE = MIXTURE;
MODEL: %OVERALL%

Essentially what is confusing is that a variable with high endorsement of Likert points 1, 2, 3, and 4 (out of 5) shows high probabilities (in any class) only in categories 4 and 5 (in the probabilities section), which is hard to interpret. I suspect this has to do with the thresholds. Any advice you might be able to offer would be appreciated! Kind regards, Jeanne


Please send your output to Support along with your license number. 
