I conducted mixture modeling on five subscale scores in two ways:
1. Using raw subscale scores.
2. Re-scaling the scores by dividing each one by the number of items on its subscale, because the subscales have different numbers of items. This re-scaling made the within-class means comparable across subscales.
The fit indices (e.g., log-likelihood, BIC) weren't the same across the two approaches. Why? When will fit indices be the same, and when will they differ?
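For concreteness, here is a minimal sketch of the two analyses, using scikit-learn's GaussianMixture as a stand-in for whatever mixture-modeling software was actually used; the item counts, sample size, and simulated scores are made-up placeholders, not the real data. Fitting the same 3-class model to the raw and re-scaled scores and printing the log-likelihood and BIC reproduces the kind of discrepancy described above.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical item counts for the five subscales (placeholders, not the real values).
    items_per_subscale = np.array([8, 6, 10, 5, 7])

    # Simulated raw subscale totals standing in for the real data.
    rng = np.random.default_rng(0)
    scores_raw = rng.normal(loc=items_per_subscale * 2.5,
                            scale=items_per_subscale * 0.6,
                            size=(500, 5))

    # Re-scaled version: divide each subscale by its number of items.
    scores_rescaled = scores_raw / items_per_subscale

    for label, X in [("raw", scores_raw), ("re-scaled", scores_rescaled)]:
        gm = GaussianMixture(n_components=3, random_state=0).fit(X)
        total_loglik = gm.score(X) * X.shape[0]   # score() is the mean per-sample log-likelihood
        print(f"{label:10s} logL = {total_loglik:12.1f}   BIC = {gm.bic(X):12.1f}")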
For many of the mixture models we've created, the BIC, SSA-BIC, CAIC, and BLRT never settle on a class solution. They continue getting smaller (or, in the case of the BLRT, remain significant) as more classes are attempted, even when the smallest class contains as few as two subjects. The LMR-LRT is the only index that picks a solution.
This is a common occurrence in my own use of the BIC. It probably implies that the model type is wrong (perhaps a factor model would be better), that some model details are wrong (e.g., unmodeled within-class correlations between some items), or that there just isn't a simple model to be found for the data at hand.
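As an illustration of what "never settles" looks like in practice, the sketch below (again using scikit-learn's GaussianMixture on placeholder data rather than the original analysis) fits 1 through 8 classes and tracks the BIC and the size of the smallest class. With data behaving as described in the question, the BIC column keeps shrinking up to the largest number of classes tried instead of reaching a minimum at some interior solution.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Placeholder data; substitute the actual subscale scores here.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 5))

    bics = []
    for k in range(1, 9):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        bics.append(gm.bic(X))
        smallest_class = np.bincount(gm.predict(X)).min()
        print(f"{k} classes: BIC = {bics[-1]:10.1f}, smallest class n = {smallest_class}")

    # A well-behaved BIC curve has its minimum at some interior k; the pattern
    # described above is a BIC that is still falling at the largest k tried.
    best_k = int(np.argmin(bics)) + 1
    print(f"BIC minimum at {best_k} classes")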