Anonymous posted on Friday, December 07, 2001 - 5:24 pm
It's my understanding that the mixture framework may be viewed as a generalization of the multiple groups framework except that group membership is unobserved or missing, at least for the case where there are only y variables in the mixture. This seems to be implied in Appendix 8 of the Mplus manual where the mixture M step for y|c is described as a multiple groups analysis where the within-group likelihoods are weighted by the posterior probabilities of group membership.
To test my understanding of this, I ran a balanced samples multiple group model and then a mixture model with the same parameterization and full training data. In the mixture model I also fixed the intercept of the latent class regression model to 0 so that the proportions would be set to 50/50. Running the two models yielded equivalent parameter estimates and SEs, and the number of estimated parameters was the same for both models. But I'm puzzling over the fact that the log-likelihood values (and hence the AIC, BIC etc) from the two models were different, and substantially so. The LL from the MG model was -1995.8 and the LL from the Mixture model was -2273.1.
Is there some scalar that goes into the LL for the mixture model that does not go into the LL for multiple groups (e.g., some penalty for estimating more than one class)? I can't think of any other reason why, with the same parameter estimates within groups (and hence same within-group model implied means and covariances), the LL would differ. Or do I not understand the relation between the multiple groups model and mixture model correctly?
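For intuition on the 50/50 setup described above: fixing the latent class regression intercept at 0 yields equal class proportions because class probabilities follow a multinomial logit in which the last class serves as the reference with its logit fixed at 0. A minimal sketch (plain Python, purely illustrative; `class_probs` is not an Mplus function):

```python
import math

def class_probs(logits):
    """Multinomial-logit class probabilities (softmax), with the last
    class as the reference category whose logit is fixed at 0."""
    e = [math.exp(l) for l in logits] + [1.0]  # reference class
    s = sum(e)
    return [x / s for x in e]

# [c#1] fixed at 0 gives exp(0)/(exp(0)+exp(0)) = 0.5 for each class
print(class_probs([0.0]))  # -> [0.5, 0.5]
```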
There is a direct correspondence between a mixture model with training data and a multiple group model, just as there is between a one-class mixture model and a regular (single-group) model. There must be something in your setup that is causing the difference. I would be happy to look at it and find the difference if you send me both outputs and the data. Please send them to firstname.lastname@example.org.
Thank you for sending the two outputs. You set the models up exactly correctly. As an alternative, you could have let the thresholds be free. They would have been estimated at the same values at which you fixed them.
Parameter estimates and standard errors are exactly the same between the multiple group model and the mixture model with training data. The loglikelihoods differ because the training data essentially turns the latent class variable into an observed variable, so that class membership becomes part of the likelihood. The loglikelihood for the mixture model differs by the value 400*log(0.5), where 400 is the sample size and 0.5 is the probability of being in each class.
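The arithmetic can be verified against the loglikelihood values reported above (illustrative Python, values taken directly from the posts):

```python
import math

# Reported loglikelihoods from the two equivalent runs
ll_multigroup = -1995.8   # multiple-group model
ll_mixture = -2273.1      # mixture model with training data
n = 400                   # total sample size
p = 0.5                   # probability of each class (fixed 50/50)

# With training data, class membership enters the likelihood,
# contributing log(p) for each of the n observations.
offset = n * math.log(p)  # 400 * log(0.5), about -277.26

print(ll_multigroup + offset)  # about -2273.06, matching -2273.1
```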
In the manual (p. 371) you state that "More general analyses are also possible using training data in this way, such as path analysis with an unordered, polytomous observed mediating variable, extending the model to include the u and y parts." Do you have an example of an analysis using both training data and a latent class indicator? I'd be interested in seeing how this works.
I haven't thought of a specific model, but am just exploring the range of models Mplus offers. Could you give an example of the latent class as mediator? That sounds like a compelling model type that may come in handy.
Under Examples Using Mplus/Mixture Applications, the paper by Muthen and Muthen (2002) shows such a situation.
Examples 2 and 3 in that paper show growth mixture modeling of NLSY cohort 64 with covariates centered at age 25: a four-class model of heavy drinking, with the classes predicting alcohol dependence.
Anonymous posted on Friday, April 04, 2003 - 12:40 pm
A quick question about training data. I have a four-class model. I would like to specify the training data so that one subset of cases is constrained to go into either class 1 or class 2 but NOT class 3 or class 4. Conversely, the other subset of cases would be constrained to go into either class 3 or class 4 but NOT class 1 or class 2. For each case, I want the posterior probability of class membership to be free for the two classes it is allowed to belong to. Would using the following training data for the two subsets of cases accomplish this?

.5 .5 0 0 (subset 1)
0 0 .5 .5 (subset 2)

If not, any suggestions would be appreciated.
bmuthen posted on Friday, April 04, 2003 - 1:01 pm
Use 0/1 to denote not allowed/allowed to be classified into the class in question. So, in your example you should have:
1 1 0 0 (subset 1)
0 0 1 1 (subset 2)
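To illustrate how such 0/1 flags operate, here is a hedged sketch of restricting a case's posterior class probabilities to its allowed classes. The function and weighting scheme are illustrative assumptions about the mechanism, not Mplus internals:

```python
import math

def posteriors(log_lik, priors, allowed):
    """Posterior class probabilities for one case, restricted to the
    allowed classes by 0/1 training flags (illustrative sketch)."""
    w = [a * p * math.exp(ll) for ll, p, a in zip(log_lik, priors, allowed)]
    total = sum(w)
    return [x / total for x in w]

# A case from subset 1: may fall in class 1 or 2 only, even though
# class 3 has the highest within-class likelihood here.
print(posteriors([-1.0, -2.0, -0.5, -3.0], [0.25] * 4, [1, 1, 0, 0]))
```

Note that disallowed classes get posterior probability exactly 0, while the probability is divided freely between the two allowed classes.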
Anonymous posted on Thursday, July 22, 2004 - 4:24 am
In regard to mixture models with training data, is there any work indicating how many cases (or what proportion of the sample) need to be classified in order to have a substantial influence on the estimates? Does this differ between categorical and continuous manifest indicators?
You may want to take a look at Hosmer (1973) in the mixture reference list on the Mplus web site. Apparently a rather small proportion helps. More know-how is most likely needed here so perhaps you want to study it using Mplus Monte Carlo simulations.
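One way to explore this by simulation, as suggested, is sketched below: a plain-Python EM algorithm (rather than an Mplus Monte Carlo run) for a 1-D two-class normal mixture in which a fraction of cases carry known class labels. Varying that fraction shows how much the classified cases stabilize the estimates. All names and settings here are illustrative assumptions:

```python
import math
import random

def em_partial_labels(y, labels, iters=200):
    """EM for a 1-D two-component normal mixture where some cases have
    known class labels (None = unlabeled). Illustrative sketch only."""
    mu = [min(y), max(y)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E step: posterior class probabilities; labeled cases are
        # fixed at their known class (probability 1).
        r = []
        for yi, lab in zip(y, labels):
            if lab is not None:
                r.append([1.0 if k == lab else 0.0 for k in range(2)])
            else:
                w = [pi[k] * math.exp(-(yi - mu[k]) ** 2 / (2 * var[k]))
                     / math.sqrt(var[k]) for k in range(2)]
                s = sum(w)
                r.append([wk / s for wk in w])
        # M step: weighted updates of proportions, means, variances
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / len(y)
            mu[k] = sum(ri[k] * yi for ri, yi in zip(r, y)) / nk
            var[k] = sum(ri[k] * (yi - mu[k]) ** 2 for ri, yi in zip(r, y)) / nk
    return mu, var, pi

random.seed(1)
y = [random.gauss(0, 1) for _ in range(150)] + \
    [random.gauss(3, 1) for _ in range(150)]
true = [0] * 150 + [1] * 150
# Label 10% of cases as training data
labels = [t if i % 10 == 0 else None for i, t in enumerate(true)]
mu, var, pi = em_partial_labels(y, labels)
print(mu, pi)  # means close to the true values 0 and 3
```

Rerunning with different labeled fractions (and weaker class separation) would give a rough sense of when the training cases start to matter.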
Emil Coman posted on Wednesday, October 19, 2011 - 1:40 pm
I am trying to run a simple mixture analysis; the ‘model’ is just two correlated variables. But I seek to unmix a latent class of a specific size, say N = 27 out of all 290 cases in my data. Is that possible? Something like:

CLASSES = who (2);
ANALYSIS: TYPE = MIXTURE;
MODEL:
%OVERALL%
varX WITH varY;

but can something be added to fix the size of the groups extracted, like NOBSERVATIONS = 263 27;, which I know is not the right code… Thanks, Emil
You cannot specify the size of the latent classes. This is determined by the model and the data.
Yi-fu Chen posted on Tuesday, June 26, 2012 - 6:42 pm
Hi, Dr. Muthen,
I have four probability variables derived from previous mixture analysis. My next step is to use the four variables as training variables and estimate more complicated mixture models. In these models, group 1 and group 3 should have similar proportions in the sample.
When using training = t1 t2 t3 t4 (Probabilities) or (Membership), I am able to put the following constraint in the %overall% section:

[c#1] (a);
[c#3] (a);

But when I used the (Priors) option (which is preferred), Mplus gave me an error message saying that I can't estimate [c#1] and [c#3].
My question is: is there a way that I can constrain c#1 and c#3 equal when training (priors) was used?