Sara posted on Saturday, January 17, 2009 - 7:35 am
I am conducting mixture modeling where I have a continuous outcome predicted by the categorical latent class variable.
What I am after is the predicted outcome variable score based on the mixture model. That is, does Mplus compute the predicted/model-implied outcome values for each individual? I would like to plot these predicted outcome values from the mixture model.
To do this, you must put a factor behind each observed variable such that the factor is identical to the observed variable. Then ask to save the factor scores using the SAVEDATA command and the SAVE = FSCORES option. The following input fragment is based on Example 7.9:

[y1-y4@0];
%c#2%
[f1-f4];

You can also plot these using the PLOT command.
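For concreteness, here is a minimal sketch of how those fragments fit into a full input. The variable names y1-y4, factor names f1-f4, the two-class structure, and the save-file name are assumptions taken from Example 7.9 and the fragments above; adapt them to your own model:

```
MODEL:
  %OVERALL%
  ! put a factor behind each observed variable, identical to it:
  ! loading fixed at 1, residual variance and intercept fixed at 0
  f1 BY y1@1;
  f2 BY y2@1;
  f3 BY y3@1;
  f4 BY y4@1;
  y1-y4@0;
  [y1-y4@0];
  %c#2%
  ! the class-specific factor means carry the model-implied
  ! outcome values for that class
  [f1-f4];

SAVEDATA:
  FILE = fscores.dat;   ! hypothetical file name
  SAVE = FSCORES;
```

With this setup the saved factor scores are the individual model-implied outcome values, mixed over the classes.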
Sara posted on Saturday, January 17, 2009 - 9:42 am
One more quick, slightly unrelated question. We are doing mixture modeling with 2 samples: calibration and validation. We estimated the model using the calibration sample. We now want to use this model's parameter estimates to compute the posterior probabilities for individuals in the validation sample. To do this, we were going to fix all the parameter estimates to the estimates from the calibration sample, use the validation sample data as input, and save their posterior probabilities. Does that sound correct? If so, we understand how to fix the means, variances, and covariances, but how do we fix the proportion associated with each class? We have 5 classes.
You will have four means because you have five classes. The class proportions are parameterized as logit means of the latent class variable, [c#1] through [c#4], with the last class serving as the reference.
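A sketch of fixing those four logit means in the validation run follows. The numeric values are hypothetical; substitute the estimates from your calibration output, where each logit [c#k] equals the log of that class's proportion over the reference class's proportion:

```
MODEL:
  %OVERALL%
  ! fix the four class logit means to the calibration estimates;
  ! class 5 is the reference class with logit 0
  [c#1@0.405];
  [c#2@-0.223];
  [c#3@1.099];
  [c#4@0.511];
```

With all parameters fixed this way, running the model on the validation data and saving CPROBABILITIES via SAVEDATA gives the posterior probabilities you want.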
Sara posted on Saturday, January 17, 2009 - 12:40 pm
Thanks Linda. This worked well for our validation sample.
With respect to the factor score command to get predicted individual outcome scores, is there a way to get these scores on the original metric of the observed variable. That is, what we are looking for is the individual predicted score for our continuous outcome variable based on the mixture model, but on the original metric of the observed variable. Currently I am getting factor score values that are positive and negative and that have an overall mean of -7.26 for our outcome variable. Also, I looked at the Technical manual, Appendix 11 because I was getting 2 sets of factor scores. I see that one is weighted across classes (which is what we want) and one is for the class with the highest posterior probability. In our output these are the same. This doesn't seem correct.
The way the model is set up, the factor scores should be in the original metric. The two types of factor scores will be nearly identical if entropy is high. If you want me to look at this further, please send your Version 5.2 output, saved data, and license number to firstname.lastname@example.org.
Regan posted on Tuesday, August 24, 2010 - 11:51 am
In estimating a PATH model, we want to use a calibration/validation approach. After getting estimates from the calibration group, is it correct that we fix parameter estimates for the validation sample to be those of the calibration group (as opposed to simply running the same code on different data)? If so, how is this done, and which estimates do we fix? Thank you!
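To illustrate the approach described in the question, here is a sketch of a validation run with every parameter fixed, via @, at its calibration estimate. The model, variable names, and values are hypothetical, assuming a simple path model with outcome y and predictors x1 and x2; in a path model the estimates to fix are the regression slopes, intercepts, and residual variances:

```
MODEL:
  ! slopes fixed at the calibration estimates
  y ON x1@0.35 x2@-0.12;
  ! intercept and residual variance fixed as well
  [y@1.20];
  y@0.88;
```

Running this on the validation data yields fit information for the validation sample under the calibration solution.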