Equality Constraints Across Latent Classes
 Anonymous posted on Wednesday, May 01, 2002 - 2:41 pm
Is there any way to constrain parameters to be equal across latent classes?
 Linda K. Muthen posted on Wednesday, May 01, 2002 - 3:12 pm
Constraining parameters to be equal across latent classes is done in the same way as it is done in all other models in Mplus. A number in parentheses is used. For example,

MODEL:

y1 ON x1 (1);
y2 ON x2 (1);

would constrain the regression coefficients in the regressions of y1 on x1 and y2 on x2 to be equal. If you look under Examples, Mixture Modeling, you will find equality constraints of the type you want in Mix14.
 J.W. posted on Wednesday, May 01, 2002 - 3:26 pm
I think his/her question is about constraining parameters to be equal across latent classes.
Mplus constrains parameters (e.g., time scores, variances, and covariances of growth factors) to be equal across latent classes by default unless you specify them to differ across classes.
 Linda K. Muthen posted on Wednesday, May 01, 2002 - 4:03 pm
The thresholds of the latent class indicators are held equal across classes by default in a latent class analysis if they are mentioned only in the %OVERALL% model command. To remove the equality constraint, mention the thresholds in the class-specific MODEL commands. To impose other equality constraints, for example, to have some held equal and others not, use the normal convention of the same number in parentheses following the parameters that are to be constrained.
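For illustration, a minimal sketch of partially relaxing these defaults (assuming hypothetical binary indicators u1-u4 and two classes; only the relevant commands shown) might be:

VARIABLE: CATEGORICAL = u1-u4;
CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
MODEL:
%OVERALL%
%c#1%
[u1$1];        ! threshold of u1 freed in class 1
[u2$1] (1);    ! threshold of u2 kept equal across classes via the (1) label
%c#2%
[u1$1];        ! threshold of u1 freed in class 2
[u2$1] (1);    ! same (1) label, so equal to the class 1 threshold

Thresholds not mentioned in the class-specific parts (here u3 and u4) remain equal across classes by default.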
 tony posted on Monday, January 20, 2003 - 12:57 pm
Hi. I have a quick question. Can you direct me to examples of code that compare heterogeneous T-class models to partial-homogeneity latent class models for, say, two populations (i.e., men and women)?
 bmuthen posted on Tuesday, January 21, 2003 - 5:40 pm
You can study such questions by including the grouping variable (e.g., gender) as a covariate. See Example 25.10 on page 270 in the Mplus User's Guide. Direct effects capture group differences in measurement. This approach covers the models studied in the Clogg & Goodman chapter in Sociological Methodology, 1985.
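As a concrete illustration of this covariate approach, a minimal sketch (with hypothetical items u1-u4 and a binary covariate gender; only the relevant commands shown) might be:

VARIABLE: CATEGORICAL = u1-u4;
CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
MODEL:
%OVERALL%
c ON gender;    ! group differences in class membership
u1 ON gender;   ! direct effect: group difference in the measurement of u1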
 Katharina Schmid posted on Wednesday, March 22, 2006 - 4:41 am
Hello,
My question relates to examining strict factorial invariance across four latent classes in a factor mixture model with 4 factors and 4 covariates.
I have run the default model in which factor loadings, residual variances, and intercepts are held equal across classes, and I now want to free these parameters so that I can compare the two models.
However, I am a little unsure how the input instructions need to be set up.
Do I free the parameters in the %OVERALL% model command by assigning different start values, or do I free them merely by mentioning them in the class-specific model commands for each class?
Also, I understand that I need to fix the factor means to zero when doing this, but are there any other parameters I need to take into consideration in the input instructions?
Many thanks
 Bengt O. Muthen posted on Wednesday, March 22, 2006 - 6:11 am
Factor loadings and intercepts are constrained to be equal across groups in Mplus as the default. To relax the equality constraint, mention these parameters in the group-specific MODEL commands. It is not necessary to give starting values. Note that you do not want to mention the factor loading that sets the metric of the factor. For residual variances, leaving the equality constraint out of the overall MODEL command will relax the equality constraint. When intercepts are free across groups, factor means should be fixed to zero in all groups. Otherwise, factor means should be zero in one group and free in the others. A brief description of testing for measurement invariance is contained in Chapter 13 of the Version 4 Mplus User's Guide, which is available in PDF form on the website.
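A minimal sketch of the MODEL command for relaxing these equalities in the latent class case (hypothetical one-factor model with items y1-y4 and two classes) might be:

MODEL:
%OVERALL%
f BY y1-y4;
%c#1%
f BY y2-y4;    ! loadings freed; y1 not mentioned, so it keeps setting the metric
[y1-y4];       ! intercepts freed
y1-y4;         ! residual variances freed
[f@0];         ! factor mean fixed at zero because intercepts are free
%c#2%
f BY y2-y4;
[y1-y4];
y1-y4;
[f@0];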
 Sean Mullen posted on Saturday, April 25, 2009 - 10:05 am
Enders and Tofighi (2008) examined the impact of misspecifying class-specific residual variances. If the Mplus default in the general MODEL command is to free them across classes, which values should we use (or what steps might we follow) to improve model fit if the tendency is for "level-1" (class 1) to be off the mark? Moreover, the authors note that these parameters are rarely reported, so can you recommend a format for doing so (or a paper that does report residual variances)? For example, should they be reported for each class solution compared, or just the final solution?
 Linda K. Muthen posted on Monday, April 27, 2009 - 10:02 am
Variances and residual variances are held equal across classes as the default. To see where these variances should be free, use the PLOT command to look at estimated means and observed individual values.
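As an illustration of freeing them, a sketch along the lines of a growth mixture model with class-specific residual variances (hypothetical outcomes y1-y4, two classes) might be:

MODEL:
%OVERALL%
i s | y1@0 y2@1 y3@2 y4@3;
%c#1%
y1-y4;    ! residual variances estimated separately in class 1
%c#2%
y1-y4;    ! residual variances estimated separately in class 2

Mentioning the residual variances in the class-specific parts relaxes the default across-class equality.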
 Esperanza Camargo posted on Sunday, May 24, 2009 - 4:39 pm
I'm running an EFA with 43 dichotomous variables (Mplus 5.1). It is my understanding that the modification indices indicate the drop in chi-square if I allow a correlated error between two given indicators, and also that this would improve the other fit indices (CFI, TLI, RMSEA, and SRMR). Thus, I need to allow a correlation between two of my dichotomous indicators (x and y).

I am using the following instruction to do so:

X with y@;

But it does not change anything; the chi-square, CFI, TLI, RMSEA, and SRMR did not change at all. Am I using the right instruction?

I'd appreciate your help.
 Linda K. Muthen posted on Monday, May 25, 2009 - 6:09 am
The @ symbol fixes a parameter. Try

x WITH y;
 Esperanza Camargo posted on Monday, May 25, 2009 - 2:05 pm
Thank you very much for the feedback. It is highly appreciated.

I followed your suggestion and added it under the model section as it is shown below:

Model:
x with y;

My previous EFA output showed, for each factor solution, a substantial chi-square change in the modification indices for adding a correlated error between the two given indicators.

However, after adding the WITH statement, the modification indices show the same substantially high chi-square change that I previously observed. I was expecting 0, or at least a lower number, in the modification indices between these two indicators.

Furthermore, I reviewed the output and could not find any information regarding the size of the correlation between these two indicators and its associated statistical significance. Is it possible to get this information in Mplus? Is there any place on the Mplus website that provides examples of using statements such as WITH in version 5.1?


Sincerely,

Esperanza
 Linda K. Muthen posted on Monday, May 25, 2009 - 2:13 pm
Please send your full output and license number to support@statmodel.com.
 Alvin  posted on Monday, May 05, 2014 - 12:31 am
Hi Dr Muthen, I estimated a 2-class model with covariates. The outcome makes sense, with good class separation and homogeneity within each class. The item-response probabilities show, however, some ambiguity in the response pattern of one of the items in class 1, with this item showing similar probabilities of endorsing and not endorsing it (0.493 and 0.507). Is this acceptable?

RESULTS IN PROBABILITY SCALE
(columns: Estimate, S.E., Est./S.E., Two-Tailed P-Value)

Latent Class 1

K10
Category 1    0.624    0.079    7.920    0.000
Category 2    0.376    0.079    4.779    0.000
EPDS
Category 1    0.000    0.000    0.000    1.000
Category 2    1.000    0.000    0.000    1.000
PTSD
Category 1    0.493    0.088    5.612    0.000
Category 2    0.507    0.088    5.775    0.000
IED
Category 1    0.246    0.077    3.182    0.001
Category 2    0.754    0.077    9.730    0.000
 Linda K. Muthen posted on Monday, May 05, 2014 - 9:33 am
PTSD doesn't work well for classification. This is acceptable as long as the classes are not substantively describing PTSD.
 Carillon J Skrzynski posted on Friday, December 22, 2017 - 7:55 am
Is it possible to do invariance testing across groups using the XWITH command? For my sample, I believe I have an interaction that differs between males and females, but it looks like I can't run the XWITH code in a separate group ("Random effect variables can only be declared in the OVERALL model"). Is there a way around this?
 Bengt O. Muthen posted on Friday, December 22, 2017 - 1:53 pm
This is possible. Send your output to Support along with your license number.
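One workaround that is sometimes used (not spelled out in the reply above, so treat the details as an assumption) is to bring the grouping variable in as a known class, declare the XWITH interaction once in %OVERALL%, and let its regression coefficient differ across the known classes. A rough sketch with hypothetical variable names:

VARIABLE: CLASSES = cg(2);
KNOWNCLASS = cg (gender = 0 gender = 1);
ANALYSIS: TYPE = MIXTURE RANDOM;
ALGORITHM = INTEGRATION;
MODEL:
%OVERALL%
f1 BY x1-x3;
f2 BY x4-x6;
fint | f1 XWITH f2;    ! interaction declared once, in %OVERALL%
y ON f1 f2 fint;
%cg#1%
y ON fint (b1);        ! interaction effect in the first group
%cg#2%
y ON fint (b2);        ! interaction effect in the second group
MODEL TEST: 0 = b1 - b2;    ! Wald test of whether the interaction differs by group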
 G. H.  posted on Tuesday, February 27, 2018 - 1:32 pm
Dear Dr. Muthen,

I am running a two-level latent class model with a categorical dependent variable. I have time points at the within level and individuals at the between level. I would like to constrain the thresholds to be equal across classes and set the intercept to 0 in the first class and estimate it freely in the other classes. However, since I cannot do this directly with a categorical variable, I tried to implement it with model constraints:

MODEL:
%WITHIN%
%OVERALL%
s | y on time;
s2 | y on time2;

%BETWEEN%
%OVERALL%
[s*];
s@0;
[s2*];
s2@0;
[y$1] ;
[y$2] ;
[y$3] ;
y@0;

%BETWEEN%
%cb#1%
[s*];
[s2*];
[y$1] (a);
[y$2] (b);
[y$3] (c);

%BETWEEN%
%cb#2%
[s*];
[s2*];
[y$1] (d);
[y$2] (e);
[y$3] (f);

MODEL CONSTRAINT:
new(int2);
int2 = a-d;
int2 = b-e;
int2 = c-f;

Does that make sense? Also, if I use this specification, can I compare means of s and s2 across classes?

Thank you.
 Bengt O. Muthen posted on Tuesday, February 27, 2018 - 2:11 pm
Something like that might work, but perhaps a more down-to-earth approach is given in the 2016 Psychometrika article by Wu and Estabrook, which includes an Mplus appendix script for it.
 Juan Caro posted on Monday, May 20, 2019 - 3:32 pm
Dear Dr. Muthen,

I want to understand how to implement a mixture factor model in which the weighted mean of the factor across classes is fixed at zero (instead of the mean in one of the classes being set to zero, which is the default).

In particular, I want to estimate a model similar to ex7.17:

MODEL:
%OVERALL%
f BY y1-y5;
[f@0];

%c#1%
f BY y1-y5;
[f*];

%c#2%
f BY y1-y5;
[f*];

When I get the output, the weighted combination of the factor means across classes does not equal zero. Could you please advise on how to specify the model? Thank you.
 Bengt O. Muthen posted on Monday, May 20, 2019 - 5:35 pm
You have to use Model Constraint. In the MODEL command, you give labels to the class logits and the factor means, and then you use those labels to impose the zero restriction on the weighted factor mean.
 Juan Caro posted on Monday, May 20, 2019 - 6:38 pm
Thank you Dr. Muthen,

I made the appropriate changes and estimated the following model:

MODEL:
%OVERALL%
f BY y1-y5;
[c#1](pi);

%c#1%
[f*](mu1);

%c#2%
[f*](mu2);

Model constraint:
0=pi*mu1+(1-pi)*mu2;

However, Mplus indicates that for this constraint only the ODLL algorithm is possible. Could you explain why EM is not a feasible algorithm? Thank you for your assistance
 Tihomir Asparouhov posted on Wednesday, May 22, 2019 - 11:36 am
With the EM algorithm, Mplus maximizes/estimates the pi parameter separately from the mu1 and mu2 parameters, so it is unable to handle the joint constraint. You have three alternatives.

1. Use algo=odll;

2. If the estimate of pi is stable and reliable you can replace the model constraint with
Model constraint:
0=0.3*mu1+0.7*mu2;
where 0.3 and 0.7 are the class probability estimates from the model with [f@0] in class 1. Estimates here will be approximate.

3. Use algebraic reparameterization. Estimate the model
%c#1%
[f@0];
%c#2%
[f*](mu);

Model constraint: New(mu1 mu2);
mu1=-(1-pi)*mu;
mu2=pi*mu;

This should give exactly the same result as 1.
 Juan Caro posted on Thursday, May 23, 2019 - 5:34 am
Hello Tihomir,

Option 3 did the trick (the fact is that EM is much more efficient than ODLL). If you don't mind, I still have trouble understanding what ODLL does (I haven't found references on it).

By the way, if anyone is fitting a similar model, note that in the model I posted an additional constraint needs to be added:

pi=exp(p1)/(exp(p1)+1);

where p1 is the label of [c#1]

thanks again,
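Putting this correction together with the model posted earlier, the full weighted-zero-mean specification (a sketch; labels follow the posts above, and ALGORITHM = ODLL is presumably still needed because the constraint ties the class logit to the factor means) might read:

MODEL:
%OVERALL%
f BY y1-y5;
[c#1] (p1);
%c#1%
[f*] (mu1);
%c#2%
[f*] (mu2);

MODEL CONSTRAINT:
NEW(pi);
pi = exp(p1)/(exp(p1)+1);    ! convert the class 1 logit to a probability
0 = pi*mu1 + (1-pi)*mu2;     ! weighted factor mean fixed at zero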
 Juan Caro posted on Thursday, May 23, 2019 - 7:20 am
Just a quick follow-up. The reparameterization in (3) is not equivalent to (1), in the sense that the factor mean in class 1 is fixed to zero, and even with the restriction, the overall mean of the factor is different from zero. You can certainly obtain the same values for mu1 and mu2 with both methods, but mu1 will no longer be the mean of the factor in class 1.
 Tihomir Asparouhov posted on Thursday, May 23, 2019 - 9:22 am
ODLL stands for observed-data log-likelihood; that optimization applies the Fletcher-Powell algorithm directly to the observed-data log-likelihood. Indeed, EM is more efficient. The genius of EM is replacing one big computation with many small ones. You can see the difference between observed and expected log-likelihood methods here:
http://www.statmodel.com/download/Muthen_Shedden_1999.pdf

Regarding the difference between 1 and 3: I don't see the entire model, and the algebraic manipulation certainly depends on the rest of the model, but from what I can see both should obtain the same log-likelihood value, so they provide the same model fit to the data.
 Juan Caro posted on Thursday, May 23, 2019 - 9:35 am
Thank you very much for the input. The entire model is exactly as posted in my original comment. As you said, they are equivalent; however, the restrictions produce different output in terms of the factors.

Regardless, all this info is extremely helpful. Regards,