Are there any Mplus examples that show how to test which fits best: a latent class model (LCM) or a latent trait model (LTM)? In other words, could one take a set of four indicators and test whether a two-class LCM provides better fit than a simple CFA in which the indicators load onto a continuous latent variable (the LTM)? My understanding thus far is that a fair test would pit an LTM with two graded levels against an LCM with two classes.
Muthén, B. & Asparouhov, T. (2006). Item response mixture modeling: Application to tobacco dependence criteria. Addictive Behaviors, 31, 1050-1066.
Paul Widdop posted on Tuesday, March 04, 2008 - 7:21 am
I am looking at participation in different sporting activities. I have participation habits (binary yes no) for 15 activities, but believe there to be a latent element that explains the types of individuals that partake.
I am using the following script......
Title:    Stata2Mplus conversion for p:\methodology\sport.dta
Data:     File is p:\methodology\sport.dat;
Variable: Names are swimming snooker darts football fishing outdoor
          wintersp water tennis badmin squash cycling fitness cricket
          golf horserid yoga tenpin jog;
          Missing are all (-9999);
          Usevariables are swimming snooker darts football outdoor
          wintersp water tennis badmin cycling fitness cricket golf
          tenpin jog ethnic;
          Categorical are swimming snooker darts football newoutdo
          wintersp water tennis badmin cycling fitness cricket golf
          tenpin jog ethnic;
          Classes = C (4);
Analysis: Type = mixture;
Model:    %Overall%
Output:   Tech11;
However, in previous work of this nature scholars have included in the model a local dependency between the swimming and fitness activities. I am unsure how to do this, and I was wondering if anybody here could point me in the right direction.
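One way to specify such a local dependency, in the spirit of User's Guide example 7.16 (a sketch with assumed variable names, not the verbatim example), is a residual factor that loads only on the two dependent items; its free variance captures the extra association, and it requires numerical integration:

```
Analysis:
  Type = mixture;
  Algorithm = integration;    ! the residual factor requires integration
Model:
  %Overall%
  ! Residual factor for the locally dependent pair; both loadings fixed
  ! at 1, so the free factor variance is the dependency parameter.
  f BY swimming@1 fitness@1;
  f;
```

The factor variance plays the role of a residual covariance between the two items; see example 7.16 in the User's Guide for the exact specification.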
Paul Widdop posted on Tuesday, March 04, 2008 - 8:53 am
Paul Widdop posted on Friday, March 07, 2008 - 10:12 am
Good Evening Linda,
Sorry to bother you again. I have run my model using example 7.16, and it does improve my model fit.
Unfortunately I am a novice with Mplus. When I run the model with a local dependency, the output identifies thresholds for my latent classes but does not give me results in the probability scale. I guess my question is: can I get these results in the probability scale from the model in 7.16?
You will not get the results in the probability scale when numerical integration is used. You will need to compute the probabilities yourself for the items not included in the local dependency. For the items included in the local dependency, computing the probabilities would require numerical integration.
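For the conditionally independent items, the thresholds Mplus reports are on the logit scale, so for a binary item the class-specific endorsement probability is 1/(1 + exp(threshold)). A minimal sketch (the 1.386 threshold is a made-up value for illustration):

```python
import math

def item_probability(threshold):
    """Class-specific P(u = 1) for a binary item, computed from its
    Mplus logit threshold: P = 1 / (1 + exp(threshold))."""
    return 1.0 / (1.0 + math.exp(threshold))

print(item_probability(0.0))    # a threshold of 0 -> probability 0.5
print(item_probability(1.386))  # a hypothetical threshold -> about 0.2
```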
Paul Widdop posted on Wednesday, March 19, 2008 - 9:43 am
I have computed the probabilities for the items not included in the local dependency, but I am unsure of the method for calculating the probabilities of the items that are included in it, especially using numerical integration. Can I do this in Mplus?
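For the items in the local dependency, the class-specific joint probability is an integral over the residual factor, which can be approximated outside Mplus on a fine grid. A sketch assuming a logit link and a normal residual factor f with mean 0; all parameter values are hypothetical:

```python
import math

def logit_prob(threshold, loading, f):
    # P(u = 1 | f) under a logit link: 1 / (1 + exp(threshold - loading * f))
    return 1.0 / (1.0 + math.exp(threshold - loading * f))

def joint_prob(tau1, tau2, lam1, lam2, var_f, n=2001, width=8.0):
    """P(u1 = 1, u2 = 1 | class) for two locally dependent binary items,
    integrating over the residual factor f ~ N(0, var_f) on a fine grid."""
    sd = math.sqrt(var_f)
    step = 2.0 * width * sd / (n - 1)
    total = 0.0
    for i in range(n):
        f = -width * sd + i * step
        density = math.exp(-f * f / (2.0 * var_f)) / (sd * math.sqrt(2.0 * math.pi))
        total += logit_prob(tau1, lam1, f) * logit_prob(tau2, lam2, f) * density * step
    return total

# With zero loadings the items are independent: 0.5 * 0.5 = 0.25.
print(joint_prob(0.0, 0.0, 0.0, 0.0, 1.0))
# Hypothetical loadings of 1 induce a positive association (> 0.25).
print(joint_prob(0.0, 0.0, 1.0, 1.0, 1.0))
```

Mplus itself reports the thresholds and loadings; the integration step here simply reproduces, by brute force, what the program does internally with numerical integration.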
First of all, I would use a stepwise approach, namely testing your single-class model against two classes, then three classes against two, and so on.
You can use BIC or aBIC to compare non-nested class models regardless of the stepwise approach I suggested. However, the Vuong test (as far as I know) requires a stepwise approach as described above, because it tests a k-class model against a (k-1)-class model. You can request the Vuong test via TECH11.
Is it possible to test differences between, say, a 3-class model and a 1-class model? I understand that the stepwise approach has been recommended, but I have a marginally significant 3-class model (using the Vuong test) and a non-significant 2-class model. I would like to show that the 3-class model fits better than the 1-class model. Any help or references regarding this would be appreciated.
Regarding the Vuong test: its p-value can bounce around across class models. However, once the Vuong test has rejected, say, two classes, you can ignore significant p-values for 3 or 4 classes. That is why a stepwise approach is recommended when using TECH11. In your case the Vuong test clearly points to a single-class model. If you don't want to rely on TECH11 alone, you can use BIC to compare the single-class model against three classes. You should also consider using the bootstrapped likelihood ratio test, BLRT (TECH14), but here too I would test in a stepwise fashion.
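Both criteria can also be recomputed directly from the log-likelihood Mplus reports. A sketch with made-up log-likelihoods, parameter counts, and sample size; the adjusted BIC replaces n with (n + 2)/24 (Sclove, 1987):

```python
import math

def bic(loglik, n_params, n):
    # Bayesian Information Criterion; lower values indicate better fit
    return -2.0 * loglik + n_params * math.log(n)

def abic(loglik, n_params, n):
    # Sample-size-adjusted BIC: same formula with n replaced by (n + 2) / 24
    return -2.0 * loglik + n_params * math.log((n + 2.0) / 24.0)

# Hypothetical 1-class vs. 3-class comparison (n = 800 observations)
print(bic(-5200.0, 15, 800))   # 1-class model
print(bic(-5100.0, 23, 800))   # 3-class model; the lower BIC wins
```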
Hi, I am using Mplus to run some Monte Carlo simulations with misspecified mixtures and I have a quick question related to the Vuong-Lo-Mendell-Rubin test.
The test requires evaluating a weighted sum of chi-square(1) variables, whose distribution is given by Lo, Mendell, and Rubin (2001).
This same test distribution can ostensibly be used for other hypotheses about strictly nested and possibly misspecified models.
I am wondering if there is a way to ask Mplus to evaluate the weighted chi-square sum for an arbitrary pair of models. Alternatively, given either the partial derivatives of the log-likelihoods or the weights themselves, can I use Mplus to compute the weighted chi-square probabilities? If possible I would like to do this within Mplus, since I am running a large number of simulations.
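For context, given a set of weights, the tail probability of the weighted chi-square(1) sum can be simulated directly (a sketch; the weights and observed statistics below are hypothetical):

```python
import random

def weighted_chisq_pvalue(weights, observed, n_sims=200_000, seed=1):
    """Monte Carlo tail probability P(sum_i w_i * X_i > observed), where
    each X_i is an independent chi-square(1) variable, i.e. a squared
    standard-normal draw -- the LMR-type reference distribution."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sims):
        s = sum(w * rng.gauss(0.0, 1.0) ** 2 for w in weights)
        if s > observed:
            exceed += 1
    return exceed / n_sims

# With a single unit weight this reduces to an ordinary chi-square(1)
# tail probability, so an observed value of 3.84 gives p close to 0.05.
print(weighted_chisq_pvalue([1.0], 3.84))
# Hypothetical weights from a model comparison:
print(weighted_chisq_pvalue([0.6, 0.3, 0.1], 2.5))
```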
On a related note, does Mplus output the first-order derivatives of the loglikelihood evaluated at the ML estimates for each observation? And the Hessian of the loglikelihood at the ML estimates? I am having a hard time finding these options if they exist.