

Hi, I am using both Mplus and the widely used mclust package in R (https://cran.r-project.org/web/packages/mclust/index.html) to estimate normal mixture models. While the results are identical for simple datasets (e.g., using the iris dataset), the results are very different for larger datasets with variables that are not very normally distributed. Looking as deeply as I can, both seem to use the EM algorithm for estimation, and I am curious as to why the results may be so divergent. Can you help me understand why these results may be so different? Thank you.


First, make sure you have the same number of parameters in both runs, and the same maximum log-likelihood value.
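To make that check concrete, here is a minimal Python/numpy sketch (not mclust's or Mplus's actual code) of the two quantities worth comparing across programs: the log-likelihood under a diagonal-covariance normal mixture, and the free-parameter count, which depends on whether variances are held equal across classes:

```python
import numpy as np

def mixture_loglik(X, weights, means, variances):
    """Log-likelihood of data X (n x d) under a diagonal-covariance
    Gaussian mixture: sum_i log sum_k w_k N(x_i | mu_k, diag(var_k))."""
    n, d = X.shape
    comp_logdens = []
    for w, mu, var in zip(weights, means, variances):
        # log N(x | mu, diag(var)) evaluated at every row of X
        logpdf = -0.5 * (d * np.log(2 * np.pi)
                         + np.sum(np.log(var))
                         + np.sum((X - mu) ** 2 / var, axis=1))
        comp_logdens.append(np.log(w) + logpdf)
    L = np.vstack(comp_logdens)           # shape (K, n)
    m = L.max(axis=0)                     # log-sum-exp, stabilized
    return float(np.sum(m + np.log(np.exp(L - m).sum(axis=0))))

def n_free_params(K, d, equal_variances=True):
    """Free parameters: K-1 mixing weights, K*d means, and either
    d shared variances (equal across classes, as in the Mplus LPA
    default) or K*d class-specific variances."""
    return (K - 1) + K * d + (d if equal_variances else K * d)
```

If two programs report the same maximized log-likelihood and the same parameter count, they have converged to the same solution of the same model; large discrepancies usually mean different model constraints or different local maxima of the EM algorithm.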


As a follow-up to this fairly dated thread: would the Mplus defaults for LPA models, as described in the UG (pp. 182-83), correspond to the EEI model noted in Table 3 of the freely available Scrucca et al. (2016) paper here: https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf


Yes, it looks like it is EEI; that is, the default within-class covariance matrix is diagonal, with unequal variances across variables but equality across classes. Many alternative forms can be specified, and the within-class distribution can even be skew-t.


Thanks, Bengt. It is interesting to me that the mclust package fits several models simultaneously (i.e., different within-class covariance structures) and then, by default, chooses the one with the best BIC as the target model, though the user can compare several different models using BIC plots and the model$BIC summary. I am wondering if there is a Type I error sort of issue that could be at play. In Mplus each model would need to be specified separately, though I guess you could run a batch using MplusAutomation.
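For what it's worth, the "fit several structures, keep the best BIC" step can be sketched as below (the log-likelihoods and parameter counts are made up for illustration). One wrinkle when comparing programs: mclust reports BIC as 2*logL - df*log(n) and picks the largest value, while Mplus reports -2*logL + df*log(n), where the smallest is best; the rankings agree.

```python
import math

def bic(loglik, n_params, n_obs):
    """BIC in mclust's sign convention, 2*logL - df*log(n);
    the LARGEST value indicates the preferred model."""
    return 2.0 * loglik - n_params * math.log(n_obs)

# Hypothetical results from fitting several covariance structures
# (model names follow the mclust scheme; numbers are invented):
candidates = {
    "EII": (-1520.4, 8),    # (maximized log-likelihood, free parameters)
    "EEI": (-1498.7, 11),
    "VVI": (-1495.2, 17),
}
n = 300  # hypothetical sample size
scores = {name: bic(ll, p, n) for name, (ll, p) in candidates.items()}
best = max(scores, key=scores.get)  # model with the best (largest) BIC
```

Here the extra parameters of VVI do not buy enough log-likelihood to beat EEI, which is exactly the trade-off the BIC penalty formalizes.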


Just bumping this up in case you have a chance to respond, Bengt.


It seems that the multiple-model approach is simply done for exploration, without a hypothesis being tested, so there is no Type I error situation.


Thanks, Bengt. I am wondering, then, what the ramifications are of allowing within-class covariances among indicators and violating the local independence (LI) assumption. I guess this to some degree depends on the inferences being made with regard to the mixture components/classes?


I was just reviewing Table 3 in more detail, Bengt, and was concerned that it might actually be the VEI model, not EEI, that is the equivalent of the default Mplus LPA setup. I was hoping you might double-check to see if you think differently (Table 3 in the Scrucca .pdf linked above).


You can allow free covariances within class, as we show in one UG example. That is, LPA and variations of it are not the only possible models. Ultimately, it is what fits best (or has the best BIC) that matters - and then hopefully the covariances have substantive meaning. VEI allows a scalar difference across classes (lambda_k), so I think that's different from what we do as the default (although we can do that too).
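To make the EEI/VEI distinction numerically concrete, here is a small numpy sketch of the Scrucca et al. (2016, Table 3) decomposition Sigma_k = lambda_k * D * A * D', where for the diagonal models D is the identity; all numeric values below are illustrative only:

```python
import numpy as np

# Shape matrix A: diagonal, normalized so det(A) = 1, shared by all classes.
A = np.diag([2.0, 1.0, 0.5])

# EEI: one common volume lambda, so every class has the SAME covariance
# (diagonal, variances unequal across variables but equal across classes --
# the structure matching the default described above).
lam = 1.5
Sigma_EEI = [lam * A for _ in range(3)]

# VEI: same shape A, but each class gets its own volume lambda_k,
# so the class covariances differ by a scalar multiple.
lam_k = [1.0, 1.5, 2.5]
Sigma_VEI = [l * A for l in lam_k]
```

So under VEI the classes share the pattern of relative variances but not their overall magnitude, whereas EEI fixes both across classes.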


Thanks, Bengt. I was getting tripped up a bit (it's a bit tricky to distinguish) and was discussing the issue with a colleague, as we want to be sure which model from the Scrucca et al. paper maps onto the Mplus default that invokes LI. After some further thought we came back around to EEI, so this is reassuring to hear.
