Bart Simms posted on Friday, March 23, 2012 - 11:21 am
I've been reading and applying the Clark, S.L., Muthén, B., Kaprio, J., D'Onofrio, B.M., Viken, R., Rose, R.J., & Smalley, S.L. (2009) how-to article on mixture modeling.
I'm wondering to what extent selecting from a large number of models according to the BIC corresponds to the capitalization on chance that occurs when selecting from a large number of regression models.
It doesn't seem like that big a deal when comparing a priori combinations of classes and model types (e.g., LCAs with different numbers of classes, CFAs with different numbers of factors). But when one moves to factor mixture models (FMMs) and starts making relatively fine adjustments, it seems like it could be more serious.
Really my question is: does it exactly parallel the problem that arises when building a regression model and refining it according to R-squared? Or is it somehow seen as less serious because information criteria are descriptive rather than inferential?
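To make the parallel concrete, here is a small simulation sketch (ordinary regression in Python, not Mplus or mixture modeling; all variable names are hypothetical). It screens every predictor subset on pure-noise data, once by R-squared and once by BIC. R-squared always favors the largest model, so selection from many candidates capitalizes on chance freely; BIC's log(n) penalty per parameter pushes back, though screening many models by BIC is not immune to the same concern.

```python
# Capitalization-on-chance sketch: best-subset selection on null data,
# where y is generated independently of every predictor.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = rng.normal(size=n)  # y is unrelated to all predictors

def rss_ols(Xs, y):
    """Residual sum of squares for OLS of y on Xs (with intercept)."""
    Z = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return float(resid @ resid)

tss = float(((y - y.mean()) ** 2).sum())
results = []
for k in range(1, p + 1):
    for cols in itertools.combinations(range(p), k):
        rss = rss_ols(X[:, cols], y)
        r2 = 1 - rss / tss
        # Gaussian-likelihood BIC: k slopes + intercept + error variance
        bic = n * np.log(rss / n) + (k + 2) * np.log(n)
        results.append((bic, r2, cols))

best_bic = min(results)                      # model the BIC would pick
best_r2 = max(results, key=lambda t: t[1])   # model R-squared would pick
print(f"models screened: {len(results)}")
print(f"R^2-chosen model:  {len(best_r2[2])} predictors, R^2 = {best_r2[1]:.3f}")
print(f"BIC-chosen model:  {len(best_bic[2])} predictors, R^2 = {best_bic[1]:.3f}")
```

On null data the R-squared criterion selects all ten predictors and reports a nonzero R-squared purely by chance, while the BIC typically settles on a much smaller model. The analogous worry for mixture modeling is that fine-grained FMM comparisons multiply the number of candidates being screened in just this way.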