
Anonymous posted on Friday, April 29, 2005  12:09 pm



Can you use the model constraints option with mixture models? 


Yes, this is possible. 


Hello Dr. Muthen, I am trying to estimate a latent class SEM in which I need to place constraints on parameters in one of the classes. Can I do this by incorporating the MODEL CONSTRAINT command in the latent class part of the model for one of the classes? Regards 

bmuthen posted on Wednesday, June 08, 2005  6:57 am



No, you should put the model constraint section after your model section. But, you label the parameters that you want to constrain in the relevant class. 
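Bengt's answer can be sketched in Mplus syntax as follows; the variable names, class structure, and the particular constraint are illustrative only, not taken from the thread. The point is that the parameter is labeled inside the class-specific part of MODEL, while MODEL CONSTRAINT comes after the MODEL command:

```
MODEL:
  %OVERALL%
  f BY y1-y4;
  f ON x;
  %c#2%
  f ON x (b2);          ! label the class-2 slope
MODEL CONSTRAINT:
  b2 = 0.5;             ! illustrative constraint on the labeled parameter
```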


Hi Dr. Muthen, Thanks for the clarification. Regards 


I am conducting latent class analysis using eleven variables from a complex survey. In terms of my results, a four-class model appears to be the best solution. When I examined the latent class graph, all of the classes appear to have a similar structure but differ in terms of magnitude (i.e., four classes that seem to represent a normal group and a mild, moderate, and severe form of an illness). In order to test this idea, I would like to attempt to constrain or restrict the model somehow, but I am uncertain as to how to go about this. I hope I have made my query clear. Any thoughts/help would be very much appreciated. Thank you 


See factor mixture articles under Recent Papers on our web site. 

George Y posted on Tuesday, May 31, 2011  9:42 am



Dear Prof/s Muthen, I would be very grateful if you could provide me with guidance on some questions I have. I am trying to feel my way through Mplus and LCA for the first time, so I apologize if these questions seem very basic. I am using LCA with 15 indicators (mostly 2 or 3 level) and a sample of 167. My initial results suggest a 4-class solution and now I intend to include covariates in my model. My questions are in relation to specifying starting values for my model (e.g., Example 7.4 from the user's guide). Specifically, could you explain why you might want to set specific starting values? They seem arbitrary to me at the moment. For example, in Example 7.4, I do not understand why two of the indicator thresholds are set at one value and the other two at the opposite value for one class, with the pattern reversed for the next class. What is the significance of changing these thresholds? I have done a bit of trial and error, but it does not seem like it alters my results at all (probably because I am not understanding the significance of it!). Additionally, I have been considering someone's advice of reducing the complexity of my model with parameter constraints. Is this the same thing as setting starting values? Any help would be greatly appreciated. Regards 


Starting values are not necessary unless you have a reason to want classes to be in a particular order or if you want to speed up the analysis. In both cases the starting values would be taken from the parameter estimates of the model. Parameter constraints are not the same as starting values. Constraining parameters to be equal is done using equality constraints. See the user's guide for further information. 
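As a hedged illustration of the difference (item and class names are arbitrary, not from the thread): a starting value uses * and only initializes the estimation, while an equality constraint uses the same label in parentheses in two places and actually restricts the solution:

```
MODEL:
  %c#1%
  [u1$1*1];        ! starting value: estimation begins at 1, parameter stays free
  [u2$1] (t2);     ! equality constraint: same label ...
  %c#2%
  [u1$1*-1];
  [u2$1] (t2);     ! ... forces the u2 threshold to be equal across classes
```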

George Y posted on Tuesday, May 31, 2011  5:58 pm



Thank you so much for your reply. So just so I am clear, if I wanted to obtain the same results but with the classes in a different order, I would set the thresholds that were obtained in the initial output? Additionally, is it correct to assume that the threshold that is included is a representation of the probability of endorsing a particular level of an item? So in the case of Example 7.4:

%c#1%
[u1$1*1 u2$1*1 u3$1*-1 u4$1*-1];
%c#2%
[u1$1*-1 u2$1*-1 u3$1*1 u4$1*1];

this essentially could be interpreted as an a priori hypothesis that class 1 will have a "low" probability of endorsing items 1 & 2 and a "high" probability of endorsing items 3 & 4, and vice versa for class 2? And then I assume you would compare the basic model with the user-specified model (which is driven by research)? One final question (sorry!) is in regards to conditional independence. I have a few bivariate residuals which are significant in TECH10 and would like to allow for local dependence among those indicators. Is this as simple as including something like

f BY u1 u2;
g BY u1 u4;

in the overall model? Thank you so much for your time 

George Y posted on Tuesday, May 31, 2011  10:19 pm



Further to my previous post and in relation to Example 7.4, would the syntax for constraining the u1 threshold to be equal across classes take the form of:

%c#1%
[u1$1*1] (1);
[u2$1*1 u3$1*1 u4$1*1];
%c#2%
[u1$1*1] (1);
[u2$1*1 u3$1*1 u4$1*1];

So sorry for the beginner questions! 


Everything you say in the first window is true. With mixture modeling, I would not use equality constraints to test nested models. I would instead use MODEL TEST. 
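A minimal sketch of the MODEL TEST approach recommended here (indicator names are illustrative): leave the parameters free, label them, and ask for a Wald test of their equality rather than estimating a restricted model:

```
MODEL:
  %c#1%
  [u1$1] (t1);     ! free threshold, labeled, in class 1
  %c#2%
  [u1$1] (t2);     ! free threshold, labeled, in class 2
MODEL TEST:
  t1 = t2;         ! Wald test of H0: equal u1 thresholds across classes
```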

George Y posted on Wednesday, June 01, 2011  10:05 pm



Great! Thank you so much for your help. It has been very helpful indeed. Best regards, 

MT posted on Thursday, November 10, 2016  6:11 am



Hello, I have the following problem and am not sure which option for solving it is the most appropriate. Using factor mixture modeling I got two classes and three factors within each class. I would now like to know whether the correlation between F1 and F2 is significantly larger in class 2 than in class 1.

Option 1 I used was MODEL TEST:

%c#1%
F1 WITH F2 (a1);
%c#2%
F1 WITH F2 (a2);
MODEL TEST:
a1 = a2;

This test was not significant. Which one is the null hypothesis in this case, the model with the restriction or the one without?

Option 2 (about which I was not sure) was to compare latent correlations for independent samples, with either the unstandardized or standardized covariance/correlation coefficients, using the standard Fisher z-transformation.

Option 3: I could save class membership and compute manifest correlations, which could then be tested against each other.

Which one, in your opinion, works out best for this question? Thanks for your advice! 


Use option 1. The null hypothesis is the model with the restriction. 

Janna Kook posted on Thursday, November 17, 2016  4:22 pm



Hello, I'm trying to constrain my classes so that all members of each class either endorse or do NOT endorse one dichotomous item. Using Example 7.13, I see how to constrain a class so that all members endorse the item...

%OVERALL%
%c#1%
[u1$1@15];

But when I try to constrain it so that all members do not endorse the item...

%OVERALL%
%c#1%
[u1$0@15];

...this isn't recognized. How can I constrain a class so that all members have the value '0'? Thank you! 


You want to use [u1$1@15], not [u1$0@15]. If this doesn't help, send to Support along with your license number. 
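For context, a sketch assuming the usual Mplus parameterization for a binary item, where P(u = 1) = 1 / (1 + exp(threshold)): a large positive fixed threshold drives the endorsement probability to ~0, and a large negative one drives it to ~1. There is no $0 threshold for a binary item; both directions are controlled through the sign of the single $1 threshold:

```
MODEL:
  %c#1%
  [u1$1@-15];   ! logit -15: P(u1 = 1) ~ 1, all class-1 members endorse u1
  %c#2%
  [u1$1@15];    ! logit +15: P(u1 = 1) ~ 0, no class-2 member endorses u1
```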


Dear Drs. Muthen, I have enumerated a latent class model (N = ~1,000; 12 binary indicators used for LCA) and decided based on theory and fit statistics that a 5-class model fits the data best. I am interested in testing measurement invariance across three groups (states). After running separate state-specific LCA models, I can see that I will need my final LCA model on the full sample to include both overlapping and state-specific classes. So, in my final model I would like to allow class prevalences to vary across states, and I would like to constrain certain class prevalences to be zero in certain states (so that my final model best reflects what I observed in the state-specific models). My questions are:

1) My 'overall' statement allows class proportions to vary across states, but I'm wondering how do I specify that I'd like to constrain certain class prevalences to be zero for certain groups (states)?

%OVERALL%
C ON state; ! state is a categorical variable coded 1, 2, 3

2) Is it okay to include this "state" variable in the overall 'C ON state' command even though "state" is a categorical variable with three values? Thank you for your time. 


Let state be a KNOWNCLASS variable and then say in the Overall part

c#x ON state#y@-15;

where x and y are the appropriate categories of the two latent class variables and -15 gives a zero probability. 
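Putting the advice together as a sketch (the class counts, label names, and which cell is zeroed out are illustrative, not from the thread): state is declared as a KNOWNCLASS latent class variable alongside c, class prevalences are allowed to vary via c ON state, and one multinomial coefficient is fixed at a large negative logit to give that class ~zero prevalence in that state:

```
VARIABLE:
  CLASSES = state (3) c (5);
  KNOWNCLASS = state (st = 1 st = 2 st = 3);   ! st is the observed state variable
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  c ON state;              ! prevalences vary across states
  c#2 ON state#1@-15;      ! illustrative: class 2 gets ~zero probability in state 1
```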


Thank you for your very quick reply, Dr. Muthen. 
