Constraining parameters to be equal across latent classes is done in the same way as it is done in all other models in Mplus. A number in parentheses is used. For example,
y1 ON x1 (1);
y2 ON x2 (1);
would constrain the regression coefficients in the regression of y1 on x1 and y2 on x2 to be held equal. If you look under Examples, Mixture Modeling, you will find equality constraints of the type you want in Mix14.
I think his/her question is about constraining parameters to be equal across latent classes. Mplus constrains parameters (e.g., time scores, variances, and covariances of growth factors) to be equal across latent classes by default unless you specify them as class-specific.
The thresholds of the latent class indicators are held equal across classes by default in a latent class analysis if they are mentioned only in the %OVERALL% model command. To remove the equality constraint, mention the thresholds in the class-specific MODEL commands. To impose other equality constraints, for example, to have some held equal and others not, use the normal convention of the same number in parentheses following the parameters that are to be constrained.
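For example, a sketch for a two-class LCA (the indicator names u1-u4 are illustrative): mentioning thresholds in the class-specific sections frees them, while repeating the same number in parentheses holds a particular threshold equal across classes.

```
MODEL:
  %OVERALL%
  %c#1%
  [u1$1] (1);      ! same label in both classes: u1's threshold held equal
  [u2$1-u4$1];     ! mentioned without labels: free in class 1
  %c#2%
  [u1$1] (1);
  [u2$1-u4$1];     ! free in class 2
```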
tony posted on Monday, January 20, 2003 - 12:57 pm
Hi. I have a quick question. Can you direct me to examples of code that compare heterogeneous t-class models to partial homogeneity latent class models for say two populations (i.e., men and women)?
bmuthen posted on Tuesday, January 21, 2003 - 5:40 pm
You can study such questions by including the grouping variable (e.g., gender) as a covariate. See Example 25.10 on page 270 of the Mplus User's Guide. Direct effects capture group differences in measurement. This approach covers the models studied in the Clogg & Goodman chapter of Sociological Methodology, 1985.
Hello. My question relates to examining strict factorial invariance across four latent classes in a factor mixture model with 4 factors and 4 covariates. I have run the default model, in which factor loadings, residual variances, and intercepts are held equal across classes, and I now want to free these parameters so as to compare the two models. However, I am a little unsure how the input instructions need to be set up. Do I free the parameters in the %OVERALL% MODEL command by assigning different starting values, or do I free them merely by mentioning them in the class-specific MODEL commands for each class? Also, I understand that I need to fix the factor means to zero when doing this, but are there any other parameters I need to take into consideration in the input instructions? Many thanks.
Factor loadings and intercepts are constrained to be equal across groups in Mplus as the default. To relax the equality constraint, mention these parameters in the group-specific MODEL commands. It is not necessary to give starting values. Note that you do not want to mention the factor loading that sets the metric of the factor. For residual variances, leaving the equality constraint out of the overall MODEL command will relax the equality constraint. When intercepts are free across groups, factor means should be fixed to zero in all groups. Otherwise, factor means should be zero in one group and free in the others. A brief description of testing for measurement invariance is contained in Chapter 13 of the Version 4 Mplus User's Guide, which is available in PDF form on the website.
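As an illustration of this (a sketch only; the variable names y1-y4 and group label g2 are hypothetical), the group-specific statements might look like:

```
MODEL:
  f BY y1-y4;      ! y1's loading sets the metric and stays constrained
  [f@0];           ! with free intercepts, fix the factor mean to zero in all groups
MODEL g2:
  f BY y2-y4;      ! mentioning the loadings here frees them in group g2
  [y1-y4];         ! mentioning the intercepts here frees them in group g2
```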
Sean Mullen posted on Saturday, April 25, 2009 - 10:05 am
Enders and Tofighi (2008) examined the impact of misspecifying class-specific residual variances. If the Mplus default in the general MODEL command is to free them across classes, which values should we use (or what steps might we follow) to improve model fit if the tendency is for "level-1" (class 1) to be off the mark? Moreover, the authors note that these parameters are rarely reported, so can you recommend a format for doing so (or a paper that does report residual variances)? For example, should they be reported for each class solution compared, or just the final solution?
Variances and residual variances are held equal across classes as the default. To see where these variances should be free, use the PLOT command to look at estimated means and observed individual values.
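A sketch of the PLOT specification that produces these plots in a mixture model (the outcome names y1-y4 are illustrative):

```
PLOT:
  TYPE = PLOT3;
  SERIES = y1-y4 (*);   ! gives estimated means and observed
                        ! individual values by class
```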
I’m running an EFA with 43 dichotomous variables (Mplus 5.1). It is my understanding that the “modification indices” indicate the drop in chi-square if I allow a correlated error between two given indicators, and that this would also improve the other fit indices (CFI, TLI, RMSEA, and SRMR). Thus, I need to allow a correlation between two of my dichotomous indicators (x and y).
I am using the following instruction to do so:
X with y@;
But it does not change anything; the chi-square, CFI, TLI, RMSEA, and SRMR did not change at all. Am I using the right instruction?
Thank you very much for the feedback. It is highly appreciated.
I followed your suggestion and added it under the model section as it is shown below:
Model: x with y;
My previous EFA output showed, for each factor solution, a substantial chi-square change in the modification indices for adding a correlated error between the two given indicators.
However, after implementing the “with” statement, the modification indices show the same substantial chi-square change that I previously observed. I was expecting 0, or at least a lower number, in the modification indices between these two indicators.
Furthermore, I reviewed the output and could not find any information regarding the size of the correlation between these two indicators or its statistical significance. Is it possible to get this information in Mplus? Is there anywhere on the Mplus website that provides examples of statements such as WITH for version 5.1?
Hi Dr. Muthen, I estimated a 2-class model with covariates. The outcome makes sense, with good class separation and homogeneity within each class. The item-response probabilities show, however, that there is some ambiguity in the response pattern of one of the items in class 1, with this item showing similar probabilities of endorsement and non-endorsement (0.493 and 0.507). Is this acceptable?
Is it possible to do invariance testing across groups using the XWITH command? For my sample, I believe I have an interaction that is different between males and females, but it looks like I can't run the XWITH code in a separate group ("Random effect variables can only be declared in the OVERALL model") Is there a way around this?
This is possible. Send your output to Support along with your license number.
G. H. posted on Tuesday, February 27, 2018 - 1:32 pm
Dear Dr. Muthen,
I am running a two-level latent class model with a categorical dependent variable. I have time points at the within level and individuals at the between level. I would like to constrain the thresholds to be equal across classes and set the intercept to 0 in the first class and estimate it freely in the other classes. However, since I cannot do this directly with a categorical variable, I tried to implement it with model constraints:
MODEL:
%WITHIN%
%OVERALL%
s | y ON time;
s2 | y ON time2;
Something like that might work, but a perhaps more down-to-earth approach is given in the 2016 Psychometrika article by Wu and Estabrook, which includes an Mplus appendix script for it.
Juan Caro posted on Monday, May 20, 2019 - 3:32 pm
Dear Dr. Muthen,
I want to understand how to implement a mixture factor model where the weighted mean of the factors across classes is fixed at zero (instead of the mean of one of the classes to be set as zero, the default).
In particular, I want to estimate a model similar to ex7.17:
You have to use Model Constraint. In the MODEL command, you give labels to the class logits and the factor means, and then you use those labels to impose the zero restriction on the weighted factor mean.
Juan Caro posted on Monday, May 20, 2019 - 6:38 pm
Thank you Dr. Muthen,
I made the appropriate changes and estimated the following model:
MODEL:
%OVERALL%
f BY y1-y5;
[c#1] (pi);
Model constraint:
0 = pi*mu1 + (1-pi)*mu2;
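Note that in this snippet mu1 and mu2 are not yet attached to any parameters; presumably they label the class-specific factor means, along the lines of the following sketch (note also that (pi) labels the class-1 logit rather than the class probability, so a further step converting the logit to a probability is needed):

```
MODEL:
  %OVERALL%
  f BY y1-y5;
  [c#1] (pi);
  %c#1%
  [f*] (mu1);      ! class-1 factor mean, labeled for use in Model Constraint
  %c#2%
  [f*] (mu2);      ! class-2 factor mean
```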
However, Mplus indicates that for this constraint only the ODLL algorithm is possible. Could you explain why EM is not a feasible algorithm? Thank you for your assistance
With the EM algorithm Mplus will maximize/estimate the pi parameter separately from the mu1 and mu2 parameters and so it is unable to tackle the joint constraint. You have three alternatives.
1. Use ALGORITHM = ODLL in the ANALYSIS command;
2. If the estimate of pi is stable and reliable, you can replace the model constraint with
Model constraint: 0 = 0.3*mu1 + 0.7*mu2;
where 0.3 and 0.7 are the class-probability estimates from the model with [f@0] in class 1. Estimates here will be approximate.
3. Use algebraic reparameterization. Estimate the model
%c#1%
[f@0];
%c#2%
[f*] (mu);
Model constraint:
NEW(mu1 mu2);
mu1 = -(1-pi)*mu;
mu2 = pi*mu;
This should give exactly the same result as 1.
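To see why, substitute the definitions from the Model Constraint into the weighted mean (writing pi for the class-1 probability):

pi*mu1 + (1-pi)*mu2 = pi*(-(1-pi)*mu) + (1-pi)*(pi*mu) = 0,

so the weighted factor mean is zero for any value of mu.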
Juan Caro posted on Thursday, May 23, 2019 - 5:34 am
Option 3 did the trick (the fact is that EM is much more efficient than ODLL). If you don't mind, I still have trouble understanding what ODLL does (I haven't found references about it).
By the way, if anyone is following a similar model, note that in the model I posted, an additional constraint needs to be placed:
where p1 is the label of [c#1]
Juan Caro posted on Thursday, May 23, 2019 - 7:20 am
Just a quick follow-up. The reparameterization in (3) is not equivalent to (1), in the sense that the factor mean in class 1 is fixed to zero, and even with the restriction the overall factor mean is different from zero. You can certainly obtain the same values for mu1 and mu2 with both methods, but mu1 will no longer be the mean of the factor in class 1.
ODLL stands for observed-data log likelihood; that optimization applies the Fletcher-Powell algorithm directly to the observed-data log likelihood. Indeed, EM is more efficient. The genius of EM is replacing one big computation with many small ones. You can see the difference between observed- and expected-log-likelihood methods here: http://www.statmodel.com/download/Muthen_Shedden_1999.pdf
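In symbols, ODLL works directly on the observed-data log likelihood

log L(theta) = sum_i log [ sum_k pi_k * f_k(y_i | theta_k) ],

while each EM iteration maximizes the expected complete-data log likelihood

E[ sum_i sum_k z_ik * (log pi_k + log f_k(y_i | theta_k)) ],

where z_ik indicates class membership. In the second expression the class probabilities pi_k and the class-specific parameters theta_k sit in separate terms, which is why EM estimates them in separate steps and cannot accommodate a constraint that ties them together.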
Regarding the difference between 1 and 3: I don't see the entire model, and the algebraic manipulation certainly depends on the rest of the model, but from what I can see both should obtain the same log-likelihood value, so they provide the same fit to the data.
Juan Caro posted on Thursday, May 23, 2019 - 9:35 am
Thank you very much for the input. The entire model is exactly as posted in my original comment. As you said, they are equivalent; however, the restrictions produce different output for the factor means.
Regardless, all this info is extremely helpful. Regards,