
Anonymous posted on Wednesday, May 01, 2002  2:41 pm



Is there any way to constrain parameters to be equal across latent classes? 


Constraining parameters to be equal across latent classes is done in the same way as it is done in all other models in Mplus. A number in parentheses is used. For example, MODEL: y1 ON x1 (1); y2 ON x2 (1); would constrain the regression coefficients in the regression of y1 on x1 and y2 on x2 to be held equal. If you look under Examples, Mixture Modeling, you will find equality constraints of the type you want in Mix14. 

J.W. posted on Wednesday, May 01, 2002  3:26 pm



I think his/her question is about constraining parameters to be equal across latent classes. Mplus constrains parameters (e.g., time scores, variances, and covariances of growth factors) to be equal across latent classes by default unless you specify them to be different across classes. 


The thresholds of the latent class indicators are held equal across classes by default in a latent class analysis if they are mentioned only in the %OVERALL% part of the MODEL command. To remove the equality constraint, mention the thresholds in the class-specific MODEL commands. To impose other equality constraints, for example, to have some held equal and others not, use the normal convention of the same number in parentheses following the parameters that are to be constrained. 
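As a sketch of what this can look like in a two-class LCA (the indicator names u1-u4 and the class structure c(2) are hypothetical, not from the original question): mentioning a threshold in the class-specific parts frees it across classes, and the same number in parentheses in both classes holds it equal.

  MODEL:
    %OVERALL%
    %c#1%
    [u1$1];        ! mentioning u1's threshold here and below frees it
    [u2$1] (1);    ! same number in both classes: u2's threshold held equal
    %c#2%
    [u1$1];        ! freed in class 2 as well
    [u2$1] (1);    ! held equal to the class 1 value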

tony posted on Monday, January 20, 2003  12:57 pm



Hi. I have a quick question. Can you direct me to examples of code that compare heterogeneous T-class models to partially homogeneous latent class models for, say, two populations (i.e., men and women)? 

bmuthen posted on Tuesday, January 21, 2003  5:40 pm



You can study such questions by including the grouping variable (e.g., gender) as a covariate. See Example 25.10 on page 270 in the Mplus User's Guide. Direct effects capture group differences in measurement. This approach covers the models studied in the Clogg & Goodman chapter in Sociological Methodology, 1985. 
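A sketch of what that covariate approach can look like in Mplus syntax (the indicator names u1-u4 and the covariate name gender are assumptions for illustration, not taken from the example in the User's Guide):

  VARIABLE:
    CATEGORICAL = u1-u4;
    CLASSES = c(2);
  ANALYSIS:
    TYPE = MIXTURE;
  MODEL:
    %OVERALL%
    c ON gender;     ! group (gender) difference in class membership
    u1 ON gender;    ! direct effect: gender difference in the measurement
                     ! of u1, i.e., measurement non-invariance for that item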


Hello, my question relates to examining strict factorial invariance across four latent classes in a factor mixture model including 4 factors and 4 covariates. I have run the default model, where factor loadings, residual variances, and intercepts are held equal across classes, and I now want to free these parameters so as to compare the two models. However, I am a little unsure how the input instructions need to be set up. Do I free the parameters in the %OVERALL% MODEL command by assigning different start values, or do I free them merely by mentioning them in the class-specific MODEL commands for each class? Also, I understand that I need to fix the factor means to zero when doing this, but are there any other parameters I need to take into consideration in the input instructions? Many thanks 


Factor loadings and intercepts are constrained to be equal across groups in Mplus as the default. To relax the equality constraint, mention these parameters in the group-specific MODEL commands. It is not necessary to give starting values. Note that you do not want to mention the factor loading that sets the metric of the factor. For residual variances, leaving the equality constraint out of the overall MODEL command will relax the equality constraint. When intercepts are free across groups, factor means should be fixed to zero in all groups. Otherwise, factor means should be zero in one group and free in the others. A brief description of testing for measurement invariance is contained in Chapter 13 of the Version 4 Mplus User's Guide, which is available in PDF form on the website. 
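As a rough sketch of the class-specific (mixture) version of this for one factor with four indicators (the names y1-y4 and the two-class structure are hypothetical): loadings and intercepts are freed by mentioning them in the class-specific parts, the loading of y1 is left out so it continues to set the metric, and the factor mean is fixed at zero in both classes because the intercepts are free.

  MODEL:
    %OVERALL%
    f BY y1-y4;
    %c#1%
    f BY y2-y4;      ! loadings freed; y1 not mentioned (sets the metric)
    [y1-y4];         ! intercepts freed across classes
    [f@0];           ! factor mean fixed at zero when intercepts are free
    y1-y4;           ! residual variances freed across classes
    %c#2%
    f BY y2-y4;
    [y1-y4];
    [f@0];
    y1-y4;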

Sean Mullen posted on Saturday, April 25, 2009  10:05 am



Enders and Tofighi (2008) examined the impact of misspecifying class-specific residual variances. If the Mplus default in the general MODEL command is to free them across classes, which values should we use (or what steps might we follow) to improve the model fit if the tendency is for "level-1" (class 1) to be off the mark? Moreover, the authors note that these parameters are rarely reported, so can you recommend a format for doing so (or a paper that does report residual variances)? For example, should they be reported for each class solution compared, or just the final solution? 


Variances and residual variances are held equal across classes as the default. To see where these variances should be free, use the PLOT command to look at estimated means and observed individual values. 
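For a growth mixture model, a sketch of what freeing the residual variances in one class and requesting the plots can look like (the outcome names y1-y4, the time scores, and the two-class structure are assumptions for illustration):

  ANALYSIS:
    TYPE = MIXTURE;
  MODEL:
    %OVERALL%
    i s | y1@0 y2@1 y3@2 y4@3;
    %c#1%
    y1-y4;           ! mentioning the residual variances here frees them
                     ! from the default across-class equality
  PLOT:
    TYPE = PLOT3;
    SERIES = y1-y4 (s);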


I'm running an EFA with 43 dichotomous variables (Mplus 5.1). It is my understanding that the "modification indices" indicate the drop in chi-square if I allow a correlated error between two given indicators, and also that it would improve the other fit indices (CFI, TLI, RMSEA, and SRMR). Thus, I need to allow a correlation between two of my dichotomous indicators (x and y). I am using the following instruction to do so: x WITH y@; But it does not change anything; the chi-square, CFI, TLI, RMSEA, and SRMR did not change at all. Am I using the right instruction? I'd appreciate your help. 


The @ symbol fixes a parameter. Try x WITH y; 
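As a small sketch of the distinction (hypothetical variable names), the two alternative statements below show what the @ symbol does:

  x WITH y@0;    ! @ fixes the residual covariance, here at zero
  x WITH y;      ! no @: the covariance is freely estimated and reported
                 ! with a standard error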


Thank you very much for the feedback. It is highly appreciated. I followed your suggestion and added it under the MODEL section as shown below: MODEL: x WITH y; My previous EFA output showed, for each factor solution, a substantial chi-square change in the modification indices for adding a correlated error between the two given indicators. However, after implementing the WITH statement, the modification indices show the same substantial chi-square change that I previously observed. I was expecting 0, or at least a lower number, in the modification indices between these two indicators. Furthermore, I reviewed the output and could not find any information regarding the size of the correlation between these two indicators and its associated statistical significance. Is it possible to get this information in Mplus? Is there any place on the Mplus website that provides examples of using statements such as WITH with version 5.1? Sincerely, Esperanza 


Please send your full output and license number to support@statmodel.com. 

Alvin posted on Monday, May 05, 2014  12:31 am



Hi Dr. Muthen, I estimated a 2-class model with covariates. The outcome makes sense, with good class separation and homogeneity within each class. Item-response probabilities show, however, that there is some ambiguity in the response pattern for one of the items in class 1, with this item showing similar probabilities (0.493 and 0.507) of endorsing and not endorsing that item. Is this acceptable?

RESULTS IN PROBABILITY SCALE (Latent Class 1)
              Estimate    S.E.   Est./S.E.   Two-Tailed P-Value
 K10
  Category 1     0.624   0.079     7.920        0.000
  Category 2     0.376   0.079     4.779        0.000
 EPDS
  Category 1     0.000   0.000     0.000        1.000
  Category 2     1.000   0.000     0.000        1.000
 PTSD
  Category 1     0.493   0.088     5.612        0.000
  Category 2     0.507   0.088     5.775        0.000
 IED
  Category 1     0.246   0.077     3.182        0.001
  Category 2     0.754   0.077     9.730        0.000


PTSD doesn't work well for classification. This is acceptable as long as the classes are not substantively describing PTSD. 


Is it possible to do invariance testing across groups using the XWITH command? For my sample, I believe I have an interaction that is different between males and females, but it looks like I can't run the XWITH code in a separate group ("Random effect variables can only be declared in the OVERALL model"). Is there a way around this? 


This is possible. Send your output to Support along with your license number. 
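One possible workaround, sketched here with hypothetical variable names and a two-group structure and not taken from the reply above, is to use KNOWNCLASS instead of GROUPING, declare the interaction only in %OVERALL%, and mention its regression in the class-specific parts so the effect can differ between males and females:

  VARIABLE:
    CLASSES = cg(2);
    KNOWNCLASS = cg (gender = 0 gender = 1);
  ANALYSIS:
    TYPE = MIXTURE RANDOM;
    ALGORITHM = INTEGRATION;
  MODEL:
    %OVERALL%
    f1 BY y1-y3;
    f2 BY y4-y6;
    fint | f1 XWITH f2;       ! random effect declared only in %OVERALL%
    out ON f1 f2 fint;
    %cg#1%
    out ON f1 f2 fint;        ! mentioning the regression here and in cg#2
    %cg#2%                    ! frees the interaction effect across the
    out ON f1 f2 fint;        ! known (gender) classes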

G. H. posted on Tuesday, February 27, 2018  1:32 pm



Dear Dr. Muthen, I am running a two-level latent class model with a categorical dependent variable. I have time points at the within level and individuals at the between level. I would like to constrain the thresholds to be equal across classes, set the intercept to 0 in the first class, and estimate it freely in the other classes. However, since I cannot do this directly with a categorical variable, I tried to implement it with model constraints:

MODEL:
%WITHIN%
%OVERALL%
s | y ON time;
s2 | y ON time2;

%BETWEEN%
%OVERALL%
[s*]; s@0;
[s2*]; s2@0;
[y$1]; [y$2]; [y$3];
y@0;

%BETWEEN%
%cb#1%
[s*]; [s2*];
[y$1] (a); [y$2] (b); [y$3] (c);

%BETWEEN%
%cb#2%
[s*]; [s2*];
[y$1] (d); [y$2] (e); [y$3] (f);

MODEL CONSTRAINT:
NEW(int2);
int2 = a-d;
int2 = b-e;
int2 = c-f;

Does that make sense? Also, if I use this specification, can I compare the means of s and s2 across classes? Thank you. 


Something like that might work, but perhaps a more down-to-earth approach is given in the 2016 Psychometrika article by Wu and Estabrook, which includes an Mplus appendix script for it. 
