Carolyn Hou posted on Monday, November 22, 2010 - 7:21 am
I have some questions about FMMs (factor mixture models) with categorical outcomes. I would greatly appreciate your answers.
1. What are the model assumptions for FMMs with categorical outcomes? Also, are there any additional assumptions associated with the estimation methods, for both ML (e.g., MVN?) and Bayes?
2. Is there a connection between non-identification and non-convergence (or local maxima) due to instability?
3. Is there a way to judge whether an FMM is identified or not if there is no formula for it? Is it possible that an FMM is identified and/or converges for some replications but not others in a simulation study?
1. You have the usual latent class model using either logit (most common) or probit regression of the items on the classes, and you relax the LCA conditional independence assumption by specifying a normally distributed within-class factor. See Clark et al for more specifics. Bayes currently does this in probit form only. MVN (multivariate normality) is relevant only for the factors - certainly not for the categorical items.
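To make that setup concrete, here is a minimal data-generating sketch of such an FMM in Python (not Mplus code; the class thresholds, loadings, and sample sizes are invented for illustration): each binary item is a logit function of a class-specific threshold plus a normally distributed within-class factor, which is what relaxes conditional independence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-class, 5-item FMM with illustrative parameter values.
n_per_class = 500
nu = np.array([[ 1.0,  1.0,  1.0,  1.0,  1.0],   # class 1: high endorsement thresholds
               [-1.0, -1.0, -1.0, -1.0, -1.0]])  # class 2: low endorsement thresholds
lam = np.array([0.8, 0.8, 0.8, 0.8, 0.8])        # within-class factor loadings

rows = []
for c in range(2):
    eta = rng.normal(size=n_per_class)            # within-class factor, eta ~ N(0, 1)
    logits = nu[c] + np.outer(eta, lam)           # item logits given class and eta
    p = 1.0 / (1.0 + np.exp(-logits))             # logistic link (probit under Bayes)
    rows.append(rng.random((n_per_class, 5)) < p) # draw binary item responses
y = np.vstack(rows).astype(int)
print(y.shape)  # (1000, 5)
```

With the loadings set to zero this collapses to an ordinary LCA, which is one way to see exactly what the within-class factor adds.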
2. Non-identified models often have no problem converging. What we call unstable models are those which have a bumpy likelihood with many local maxima, usually because of too many parameters or a very small sample.
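A toy illustration of that bumpiness (a hypothetical sketch, not output from any Mplus run): run EM for a simple two-component normal mixture from several starting values on a small sample and collect the converged log-likelihoods. More than one distinct value is the signature of local maxima, which is why many random starts are recommended for mixtures.

```python
import numpy as np

rng = np.random.default_rng(1)
# Small sample from three clusters, deliberately misfit with a 2-component mixture
x = np.concatenate([rng.normal(-4, 1, 10), rng.normal(0, 1, 10), rng.normal(5, 1, 8)])

def em_2gauss(x, mu_init, n_iter=300):
    """EM for a 2-component normal mixture (unit variances; free means and weight)."""
    mu = np.array(mu_init, dtype=float)
    w = 0.5
    for _ in range(n_iter):
        d1 = w * np.exp(-0.5 * (x - mu[0]) ** 2)
        d2 = (1 - w) * np.exp(-0.5 * (x - mu[1]) ** 2)
        r = d1 / (d1 + d2)                       # E-step: responsibilities
        w = r.mean()                             # M-step updates
        mu[0] = (r * x).sum() / r.sum()
        mu[1] = ((1 - r) * x).sum() / (1 - r).sum()
    return np.log(d1 + d2).sum()                 # log-likelihood (up to constants) near convergence

# Different starting values can converge to different stationary points
lls = sorted({round(em_2gauss(x, start), 2) for start in [(-4, 2), (-1, 5), (0.0, 0.0)]})
print(lls)
```

Note the (0.0, 0.0) start: with equal means the responsibilities never move, so EM sits at a degenerate one-class stationary point, a mild form of the instability described above.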
3. No formula. Mixtures in general are difficult to prove identified. Some theoretical results for continuous outcomes are in Titterington et al's book, and some for categorical outcomes are in Goodman's (1974) article. The easiest way to tell is (a) by rules of thumb, and (b) whether you get no error message from Mplus and you get SEs. There is also empirical non-identification, where in some samples there are, for example, not enough subjects in a certain class, so that the parameters specific to that class are not empirically identified (e.g., having 2 people in a class cannot support estimation of 3 parameters). If you have a large sample, empirical non-identification is less likely.
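For rule-of-thumb (a), one common check is the counting condition: the free parameters must not exceed the independent cells of the item response table. A small sketch of that check for an unrestricted LCA with binary items (the function name is mine; note the condition is necessary but famously not sufficient - Goodman's 1974 article shows a 3-class, 4-item model that passes the count yet is not identified):

```python
def lca_param_check(n_items: int, n_classes: int) -> bool:
    """Necessary (not sufficient) counting condition for identification of an
    unrestricted LCA with binary items: free parameters <= independent cells."""
    n_params = (n_classes - 1) + n_classes * n_items  # class weights + item probabilities
    n_cells = 2 ** n_items - 1                        # independent response patterns
    return n_params <= n_cells

print(lca_param_check(4, 2))  # 2 classes, 4 items: 9 params vs 15 cells -> True
print(lca_param_check(2, 3))  # 3 classes, 2 items: 8 params vs 3 cells -> False
print(lca_param_check(4, 3))  # 3 classes, 4 items: 14 vs 15 -> True, yet not identified (Goodman 1974)
```

A model failing this count is certainly not identified; a model passing it still needs the checks in (b), plus replication of the best log-likelihood from multiple random starts.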