Philip Jones posted on Wednesday, December 13, 2006 - 9:27 am
Hi. I have a few questions about the tests for comparing k- and (k-1)-class models:
(1) Should TECH11 *not* be used with the OPTSEED option? I notice that I get (sometimes dramatically) different results when using OPTSEED--usually a much lower p-value. Also I notice an extra set of iterations when using TECH11 without OPTSEED, so my hunch is that the H0 model is not getting properly optimized when OPTSEED is used. Is this correct?
(2) How much importance do you give the LMR test given the published criticism and the fact that the BLRT test is now available?
(3) For the BLRT test, why are zero or few random starts recommended for the H0 model? I'm sort of naively thinking that, e.g., if I needed 100/10 random starts for a 3-class model (i.e., "STARTS 100 10;"), then I would also need that many in the LRTSTARTS option when comparing it to a 4-class model (i.e., "LRTSTARTS 100 10 ...;").
1. Yes, Tech11 currently does not have the Tech14 facility of "LRTSTARTS" and should therefore not rely on OPTSEED but on a regular STARTS run, so that the k-1-class H0 model is properly optimized.
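As a sketch of this advice (the variable names and start values are placeholders, not from the thread), a TECH11 request would use random starts rather than a fixed OPTSEED:

```
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 100 10;      ! regular random starts; do not use OPTSEED with TECH11
OUTPUT:
  TECH11;               ! LMR test comparing k and k-1 classes
```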
2. LMR doesn't work too poorly, judging from the Nylund et al. paper on our web site; see Papers, Latent Class Analysis: Deciding on the Number of Classes.
3. Because the artificial data are generated according to the H0 (k-1-class) model, the best solution for the k-1-class model is expected to be easy to find, while the k-class model is harder. STARTS = 100 10 refers to the real-data analysis, where a latent class model for any k is not exactly the data-generating model.
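Putting this together for TECH14 (a hedged sketch; the specific start counts are illustrative, and LRTSTARTS takes four values, the first pair for the k-1-class model and the second pair for the k-class model in the bootstrap draws):

```
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 100 10;           ! real-data analysis starts
  LRTSTARTS = 0 0 100 10;    ! bootstrap draws: few/no starts for k-1, more for k
OUTPUT:
  TECH14;                    ! bootstrapped LRT (BLRT)
```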
What about differences between the BLRT and LRT log-likelihood values (up to 20-30)? Is that an indication of too few starting values in the BLRT, i.e., that the BLRT result is not trustworthy? I received no warning message about this, however.