Message/Author 

Anonymous posted on Tuesday, March 27, 2001  12:51 pm



It looks like Mplus 2.01 only provides log-likelihood and information criteria statistics for mixture models. Usually, how do we assess goodness-of-fit for mixture models? Thank you very much for your help. 


In single-class analysis, the chi-square compares the target model to the unrestricted model of mu and sigma, the first- and second-order moments. In mixture models, there is no unrestricted model; all higher-order moments are used. So there is no model to test against, hence no chi-square. What is suggested is to first use BIC to determine the number of classes; the model with the lowest BIC is chosen. Then do a series of model difference tests using the log-likelihoods. Two times the log-likelihood difference is distributed as chi-square. 

Peter Tice posted on Wednesday, June 20, 2001  6:49 am



Could you explain for me the relationship between the log-likelihood H0 value and the BIC value? How does the log-likelihood value contribute to BIC and, ultimately, to selecting the proper mixture model? 


BIC is equal to -2*log-likelihood + r*log n, where r is the number of parameters and n is the sample size. 
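The arithmetic is simple enough to check by hand. Here is a minimal sketch in Python; the log-likelihoods, parameter counts, and sample size below are made-up illustration values, not from any run posted in this thread:

```python
import math

def bic(loglik: float, n_params: int, n: int) -> float:
    """Mplus-style BIC: -2*logL + r*ln(n). Lower values are preferred."""
    return -2.0 * loglik + n_params * math.log(n)

# Hypothetical log-likelihoods for 2- and 3-class models fit to n = 500 cases
bic_2class = bic(-3250.0, 10, 500)
bic_3class = bic(-3230.0, 16, 500)
print(bic_2class, bic_3class)  # here the 3-class model wins despite its extra parameters
```

Note how the r*ln(n) term penalizes the extra parameters of the larger model, so a higher log-likelihood alone does not guarantee a lower BIC.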

Anonymous posted on Saturday, June 23, 2001  4:03 am



Could you please explain, for novices like myself, what you mean by 'a series of model difference tests using the log-likelihoods'? 

bmuthen posted on Saturday, June 23, 2001  10:41 am



When you have a model that is a special case of another, more general model (e.g., having some parameters fixed at zero), you can test whether the restrictions make the model fit significantly worse than the more general model. This is accomplished via a chi-square test computed as 2 times the difference in the log-likelihood values for the two models (the general model's log-likelihood minus the restricted model's), with degrees of freedom equal to the difference in the number of parameters for the two models. 
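As a sketch of that bookkeeping (the log-likelihoods and parameter counts here are hypothetical, not from any posted model):

```python
def lr_chi_square(loglik_general: float, loglik_restricted: float,
                  params_general: int, params_restricted: int):
    """Chi-square = 2 * (logL_general - logL_restricted);
    df = difference in the number of free parameters."""
    stat = 2.0 * (loglik_general - loglik_restricted)
    df = params_general - params_restricted
    return stat, df

# Hypothetical: fixing 2 parameters drops the log-likelihood from -5401 to -5410
stat, df = lr_chi_square(-5401.0, -5410.0, 12, 10)
print(stat, df)  # 18.0 on 2 df; well above the 5.99 critical value at alpha = .05
```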


Hi. I have a question about the fit statistics reported for a mixture analysis. I have two ordinal indicators (1-4) of a dichotomous latent class, where the thresholds are restricted to be equal across the two latent classes and the means of the underlying latent variables (i.e., u* in Eq. 149) are zero in one class and free in the other class. It appears one cannot free the mean of u* directly (because there is no intercept term in Eq. 150), so I did this by including factors with loadings of 1, as shown below. So the model has alpha_u = 0 in one class and alpha_u free in the other class, where alpha_u is given in Eq. 151. Here's the syntax:

VARIABLE: NAMES ARE y1 y2;
CLASSES = class(2);
CATEGORICAL = y1-y2;
ANALYSIS: TYPE = MIXTURE;
ESTIMATOR = ML;
MODEL: %OVERALL%
f1 by y1@1; f2 by y2@1;
[y1$1*-1] (1); [y1$2*0] (2); [y1$3*1] (3);
[y2$1*-1] (4); [y2$2*0] (5); [y2$3*1] (6);
%class#1% [f1@0 f2@0];
%class#2% [f1*1 f2*1];

(Sorry, for some reason a \mail keeps getting inserted at the at sign!)

I've compared the output to that obtained for a fit of the same model with LEM, and everything agrees. The log-likelihood (and so AIC and BIC) is identical to that given by LEM to 3 decimal places. The estimated latent class sizes are identical, and the thresholds, alphas, and standard errors are very close. The chi-square and LR statistics differ considerably, however. For example, in LEM the chi-square is 9.2 on 6 df, whereas the "chi-square test of model fit for the latent class indicator model" reported by Mplus is 9071.86 on 6 df. The latter doesn't seem to be based on the observed and expected frequencies for the bivariate frequency table, although it's described that way on p. 372. For example, one can compute, using Mplus, the chi-square statistic (and LR) using the estimates of the latent class size and the "latent class indicator model part in probability scale" output given in the Mplus output, in which case the value is close to that given by LEM. So how is the reported value being computed? I guess it's because the model is specified somewhat differently than the usual case, but I'd like to understand exactly how the fit statistics are being computed. 


I would like to take a look at this. Can you send the data to support@statmodel.com? 


Thanks for sending the data. The Mplus chi-square is incorrect in the case where there is a factor mean in the model (u*). This will be corrected in the next update. Parameter estimates and standard errors are correct, and the chi-square is correct in models without factor means. Thank you for reporting this. 

David Rein posted on Thursday, April 10, 2003  12:23 pm



Positive BIC statistics: When estimating mixture models with different numbers of groups, I commonly get a positive BIC statistic for the model with only 1 group. I've generally just followed the rule that a smaller BIC is better and have focused on the model I am interested in (with multiple classes). Now I have to present this stuff. I am afraid I'll get a question such as "What's the deal with the positive BIC? I thought BICs were always negative." Does a BIC have to be negative? Does a positive BIC indicate anything in particular? 

bmuthen posted on Thursday, April 10, 2003  5:34 pm



Some research groups, such as Nagin's, define BIC with the opposite sign of Mplus (and scaled by a factor of 2) and therefore typically get negative values. In Mplus, BIC = -2logL + r ln n. Here, logL is typically negative (so the first term is typically positive), r is the number of parameters, and ln n is the natural log of the sample size (so the second term is positive), so BIC is typically positive. 

David Rein posted on Thursday, April 10, 2003  5:59 pm



So although uncommon given the way you have scaled the statistic, a negative BIC in Mplus is OK, and actually a good thing, since it's very negative? Or does it indicate something strange about the log-likelihood function? 

bmuthen posted on Thursday, April 10, 2003  6:12 pm



The BIC in Mplus is not scaled in an uncommon way; it is the way the original Schwarz article did it. A negative BIC in Mplus is rather uncommon but does happen. I don't think it indicates anything strange about the likelihood. 


I am working on a latent class analysis of relationship violence in a sample of 273 couples and have several questions about the fit statistics I've gotten. First, I have 32 variables and have run the LCA in two separate ways: first with all of the variables as either/or prevalence variables, and second with most of the variables as prevalence but 4 variables as mean frequency of aggression in the past year. Can I compare the fit statistics across these two models to determine whether the model with frequency variables is better than the model with only prevalence variables? The fit statistics I'm getting are as follows: for a 3-class solution with prevalence-only variables, the AIC is 6382.19, the BIC is 6735.92, and entropy is .92. For a 3-class solution with both prevalence and frequency variables, the AIC is 15580.9, the BIC is 15974.35, and entropy is .94. Can I draw meaningful conclusions about which model is a better fit? Secondly, as I understand it, there aren't any absolute values for fit; rather, the fit statistics are simply interpreted comparatively as a better or worse fit, not as a good or bad fit. Is this correct? And how would you suggest I formally test whether the fit of one model is better than another? Finally, what is the meaning of the entropy statistic? 


You cannot compare AIC and BIC between the model where all latent class indicators are treated as binary and the model where some latent class indicators are treated as continuous frequencies, because these values will not be on the same scale. Because variances and covariances are not sufficient statistics for mixture models, no absolute fit statistics are available. We recommend using 2 times the difference in log-likelihoods as a way of testing nested models. The two models you describe would not be nested. 

Anonymous posted on Wednesday, February 09, 2005  4:57 am



BIC is equal to -2*log-likelihood + r*log n, where r is the number of parameters and n is the sample size. Are there prior distribution assumptions made for the parameters to be estimated, with the estimation based on a Bayesian approach (some n = 1000 simulations, for example)? Is this BIC equivalent to the deviance one would get if a Bayesian specification of a similar model was used? Or why is it called Bayesian? Can you advise, please? 


bmuthen posted on Wednesday, February 09, 2005  10:51 am



No, the parameter estimation is via maximum likelihood, not Bayes. The "Bayes" in BIC simply refers to the Bayesian theory behind choosing this fit index. 

Anonymous posted on Monday, April 04, 2005  6:51 am



Hi, I'm running a multilevel model which contains 20 dependent variables, 12 independent variables, and 3 continuous latent variables. I would like to know how Mplus computes the degrees of freedom for both the chi-square test of model fit and the chi-square test of model fit for the baseline model. 

BMuthen posted on Wednesday, April 06, 2005  3:13 am



The degrees of freedom are the number of parameters in the H1 model minus the number of parameters in the H0 model. The chi-square test of model fit for ML uses as H1 a model with free means and free variances and covariances for both within and between. The baseline model is a model of free means and variances for between and within. 

Anonymous posted on Tuesday, July 12, 2005  2:48 pm



Hi, I have a question; maybe it's obvious, but... I am trying to estimate a latent class model with 4 binary outcomes. My objective is to estimate a mediator effect of my covariate (x1) in the final model. I analyzed 3 different models. Model 1: a 2-class model with no covariates. Model 2: a 2-class model with covariates but only direct effects on the classes (like a multinomial logistic regression), for example: C#1 ON x1 x2 x3 x4; Model 3: adding indirect effects to Model 2, for example: x1 ON x2 x3 x4; C#1 ON x1 x2 x3 x4; I would like to use AIC and BIC to compare the different models so that I can choose the best one. But I noticed that for Model 3, the value of the log-likelihood is much smaller than for the 2 other models. I mean: for the first model I got log-likelihood = -5674, AIC = 11365, BIC = 11420; for the 2nd model I got log-likelihood = -5401, AIC = 10834, BIC = 10931; for the 3rd model I got log-likelihood = -13540, AIC = 27135, BIC = 27306. According to the AIC and BIC, Model 3 is not good... but that is because of the value of the log-likelihood... So: do I have a problem? Thanks for your advice. 

bmuthen posted on Tuesday, July 12, 2005  5:52 pm



The likelihood value of -5674 is much higher/better than -13540. Note that a small negative value represents a higher likelihood than a large negative value. The 3rd model seems to be just-identified with respect to the c/x1-x4 relationships since you have both direct and indirect effects included, so it seems it shouldn't fit worse; perhaps you have gotten a local maximum instead of a global one. Try a higher STARTS value. 

Anonymous posted on Wednesday, July 13, 2005  11:45 pm



Thanks a lot! OK, I'll try a higher STARTS value and see if I get a better likelihood! 

Pat posted on Thursday, August 18, 2005  12:35 pm



Hi, my question concerns the log-likelihood H0 value in the Mplus output file. I used latent mixture modeling and ran it for several data samples. Shouldn't the log-likelihood H0 value always increase (smaller negative value) with an increasing number of classes? I am asking because in some instances the log-likelihood value is smaller (larger negative value) for a 4-class solution compared to the 3-class solution in my outputs. Do you know of any such cases, or does this indicate that something is wrong? How would I typically interpret this? Thanks for your input! 


I would try increasing the number of random starts, for example, STARTS = 50 5; and see if things don't look better. 


I have used the Lo-Mendell-Rubin (LMR) test in LCAs in which 2 item sets were analyzed separately (each set consists of 12 binary items). The results for 5 vs. 6 classes were: adj. LMR = 61.86, p = .0026 for one item set, and adj. LMR = 94.69, p = .0818 for the other (difference in the number of parameters = 13 in each case). Now, how can I explain that a larger LMR test value may be associated with a larger p-value (for the same difference in the number of parameters)? Or could this be an error? Thanks again for all your help! 

bmuthen posted on Saturday, August 27, 2005  10:08 am



When you say LMR = ..., I think you refer to the likelihood ratio (LR) value given in the TECH11 output. The LMR approach computes the p-value for the LR essentially by determining the LR distribution (which is not chi-square here), giving a mean and a variance for this distribution. These depend on the data and the model estimates and are therefore specific to each of your two runs. So when you see a higher p-value with a higher LR value, that might just mean that the variance of the LR distribution is also higher in this case, so that a high LR value is more probable. The variance is printed in the TECH11 output, so you can check whether my reasoning applies. 


Thank you very much for your helpful explanation. However, I have another problem with the LMR LR test. I test a model in which there are 3 latent classes and 3 latent factors (each factor measured by 2 continuous indicators). The model assumes measurement invariance across classes (only the factor means are allowed to vary across classes). Now, when I fix the scales of the factors directly by setting their variances to 1 (and setting all loadings free), I get a different LMR p-value than for the same model in which one loading is fixed to 1 per measurement model and all factor variances are freely estimated. This seems strange to me (the log-likelihood values are exactly the same for both models, as one would expect). 

bmuthen posted on Monday, August 29, 2005  7:42 am



Are the log-likelihood values the same also for the 2-class alternative? If not, perhaps you need more random starts. If they are, then please send your data, input, and output to support@statmodel.com. 

anon posted on Wednesday, January 11, 2006  1:22 pm



Could you explain how the entropy measure ought to be interpreted? Do you suggest any good references? Thanks! 


See formula 171 in Technical Appendix 8 which is on the website. There is a reference given. 

Marc Brodsky posted on Thursday, January 12, 2006  10:00 am



Throughout the messages I see numerous references to the Technical Appendices. What is/are the URL for the Technical Appendices? 


See the homepage under Documentation. 

smeadows posted on Thursday, January 19, 2006  1:58 pm



Hello, I am running a two-part (or semicontinuous) growth model for continuous outcomes (i.e., Example 6.16). Brown et al. (2005, Journal of Consulting and Clinical Psychology) use a similar model and report a CFI, TLI, and RMSEA for the semicontinuous part of the model. I'm not getting these values in my results. Is the model not estimating properly, or are these fit statistics available only via hand calculations? Thanks for your help! 


In Example 6.16, we use maximum likelihood with numerical integration to estimate the model. In this situation, means, variances, and covariances are not sufficient statistics for model estimation. Therefore, chi-square and related fit measures are not available. Perhaps Brown et al. used a different estimator. I think you can also use weighted least squares in Mplus; then you would obtain chi-square, etc. 

bmuthen posted on Thursday, January 19, 2006  5:56 pm



The Brown et al. article presented fit statistics such as CFI only for "Part 2," that is, the continuous part of the model. This is not the fit for the whole model, including both the continuous and binary parts. It is unknown how well such fit indices would work to reflect the fit of the whole model, but perhaps they are a useful descriptive. For the whole model, although the conventional fit statistics are not available, you can always use log-likelihood differences between nested models to get a chi-square test of restrictions, for instance checking whether a quadratic growth factor is really needed. WLSMV should not be used because MCAR missingness is assumed for the outcomes, and this is not fulfilled in two-part modeling. Methods studies of two-part modeling will be made easy in Mplus Version 4 through new features that make Monte Carlo simulation possible. 


I'm working on pattern recognition and I'm new to this field. Can you please suggest some ways of choosing the initial parameters for a model consisting of a mixture of three normals (8 parameters to estimate)? Should I consider the peaks of my histogram as the starting points for the means, and a single value for the variance (the overall variance of the observed data)? Moreover, I would be much obliged if you could outline the steps, in detail, for trying out a GOF test for the mixture model. 


I think you are asking about choosing starting values. This is not necessary; just use the default starting values. With mixture models, you can compare nested models using the log-likelihood. How to do this is described in Chapter 13 of the Mplus User's Guide under Testing for Measurement Invariance Using Multiple Group Analysis; see Model Difference Testing. 
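For readers working outside Mplus (as in the pattern recognition question above), the role of starting values and random restarts can be seen in a bare-bones EM fit of a univariate three-component normal mixture. This is a minimal sketch using only the Python standard library, not Mplus's estimation routine; the simulated data and all names are illustrative:

```python
import math
import random
import statistics

def em_gmm(data, k=3, iters=100, seed=0):
    """Plain EM for a univariate k-component normal mixture."""
    rng = random.Random(seed)
    mu = rng.sample(data, k)                 # random starting means drawn from the data
    var = [statistics.pvariance(data)] * k   # overall variance as the starting value
    w = [1.0 / k] * k                        # equal starting class weights
    loglik = 0.0
    for _ in range(iters):
        # E-step: posterior probability of each component for each observation
        loglik = 0.0
        resp = []
        for x in data:
            dens = [w[j] / math.sqrt(2 * math.pi * var[j]) *
                    math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            total = sum(dens)
            loglik += math.log(total)
            resp.append([d / total for d in dens])
        # M-step: update weights, means, and variances from the posteriors
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return loglik, sorted(mu)

# Simulated data: three well-separated normal components
rng = random.Random(42)
data = ([rng.gauss(-5, 1) for _ in range(200)] +
        [rng.gauss(0, 1) for _ in range(200)] +
        [rng.gauss(5, 1) for _ in range(200)])

# Several random starts; keep the solution with the highest log-likelihood
# (the same idea as the STARTS option in Mplus)
best_loglik, best_means = max(em_gmm(data, seed=s) for s in range(5))
print(best_means)  # close to the true means -5, 0, 5
```

The multiple-restart loop at the end is the point: any single random start can land on a local maximum, so the best log-likelihood over several starts is kept.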

Anky Chan posted on Sunday, December 03, 2006  4:57 am



I have 2 questions: 1) The BIC of my GGMM is 15.870. Is this a good fit? 2) What indices should be included when reporting the model fit of a GGMM? Thank you very much. 


The value of BIC has meaning only in comparison to another BIC value. See the following paper for a description of how to determine the number of classes in a GMM: Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications. It can be downloaded from the website. 

Scott posted on Monday, June 25, 2007  8:49 pm



I am examining GMMs of delinquency trajectories. Below are the fit indices for different numbers of trajectory classes, including covariates (I had previously conducted analyses without covariates in the models). Based on the fit indices, I am unsure which is the best-fitting model (it seems that the 3-class solution is best). Should I just look at the BIC, or should I follow up with the LMR LRT to see if those results are consistent with the BIC? Also, is there still no way to get the SK index for data with missing values? 2 classes: BIC = 20764, AIC = 20344, entropy = .608, LL = -10095. 3 classes: BIC = 20743, AIC = 20230, entropy = .591, LL = -10021. 4 classes: BIC = 20746, AIC = 20141, entropy = .710, LL = -9959. 5 classes: BIC = 20753, AIC = 20055, entropy = .640, LL = -9899. Thanks. 


Following are two papers that give strategies for deciding on the number of classes. Both can be downloaded from the website. We have found that it is preferable to determine the number of classes without covariates as a first step. Nylund, K.L., Asparouhov, T., & Muthén, B. (2006). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Accepted for publication in Structural Equation Modeling. Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications. 

YJ Sohn posted on Sunday, June 08, 2008  6:44 pm



I ran an LCA with 21 observed variables and a sample size of 381. No matter how many classes the latent variable has, I can't get the chi-square test results. The message is as follows: "THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE TEST CANNOT BE COMPUTED BECAUSE THE FREQUENCY TABLE FOR THE LATENT CLASS INDICATOR MODEL PART IS TOO LARGE." How can I get the chi-square test result? Or are there any other options that can replace the chi-square test in addition to AIC and BIC? 


When you have more than eight variables, the chi-square test is not reliable because of the size of the multiway frequency table. In your case, you would have more cells than you have observations. You need to use the other information available to decide on the number of classes. 

YJ Sohn posted on Monday, June 09, 2008  3:15 pm



I appreciate your response. But without the chi-square values, how can I be sure whether the model fit is acceptable or not? AIC and BIC give only relative information (i.e., relative to other alternative models), don't they? Is there any other way to make sure the model fit is good? The textbook that I'm reading also uses two chi-square test results along with AIC and BIC information. 


You may use the bivariate and response pattern statistics produced by TECH10. For an application, see the crime curve example in the Muthén-Asparouhov GMM chapter on our website: Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. Forthcoming in Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis. Chapman & Hall/CRC Press. 


Dear Dr. Muthén, I have performed a known-class analysis and compared an unconstrained model with a model in which I constrained the intercepts and slopes of males and females to equality. The models are not the same; my p-value is smaller than .01. However, I am slightly unsure which model to choose as my best model. I assume I should look at the log-likelihood values, but they are smaller than zero, and I am not sure if I should take the value closest to zero. My first model has a log-likelihood of -21810, whereas my second model has a log-likelihood of -21855. Can you tell me which is the better log-likelihood value? Many thanks. 


The best log-likelihood is the highest, which is -21810. In your case, I think a more meaningful test would be the log-likelihood difference test between the nested models, which would tell you whether imposing the equality constraints worsens the fit of the model or not. 


Hello, I am running a GMM on percent days of substance use per month across 12 months. I ran the unconditional model and came out with a 4-class solution (based on LL, BIC, and the LMR test). In this model, only the intercept was allowed to vary within class, and the variance was held constant across classes. Now I'd like to try allowing the variance to vary across classes. Could I compare the fit of this model to the original model using the relative fit indices (i.e., if this model has a smaller BIC, is it a better fit than the original model)? Also, I would like to add covariates into the model. Can I compare the fit of the model with covariates to the unconditional model the same way? Or is there some other way to do this? Thanks, Sarah Dauber 


Testing the equality of variances across classes can be done by regular likelihood-ratio chi-square difference testing. BIC can be used too. Adding covariates, the log-likelihood and BIC are in a different metric than without covariates and so are not comparable. 


Hello, I have a question regarding the log-likelihood H0 value in the Mplus output. I'm running a latent profile analysis with 4 continuous indicators of memory performance in a sample of 84 subjects. For different solutions I always get a positive log-likelihood value. What could be the reason for this? The small sample size? Thank you very much for your help. 


Log-likelihood values can be positive or negative. With continuous outcomes the likelihood is a product of densities, which can exceed 1, so a positive log-likelihood is not a problem. 


Thank you for your quick response. In my case a 2-class solution seems to be the best and is theoretically meaningful. For this solution I get the following fit: LL = 47.23, AIC = -56.46, BIC = -10.28. So, according to what you said, it's possible to get results like this, and I have no reason to be concerned about my model? I also have a sample of only 84 subjects; do you think that the sample size might be too small for a latent profile analysis? Could you point me to a reference discussing this issue? Thanks. 


The results look fine. The only way to know about the sample size needed is to do a Monte Carlo study using parameter values from your particular data set as population values in the study. Search our website for papers by Gitta Lubke. 


Dear Linda, following your advice (see previous post) I did a Monte Carlo study with the number of observations equal to my sample size (n = 84) and the estimated means and variances from my data set as population values in the study. The parameter and standard error biases for all parameters are minimal, and the % Sig Coeff exceeds 0.8 for all means and variances. However, when looking at the log-likelihood, AIC, and BIC means and standard deviations over the replications, they are very different from the values that I get in the actual latent profile analysis. What does this mean? Moreover, in the Monte Carlo data analysis, the number of individuals in each class is not the same as the number of individuals within classes in the actual latent profile analysis. Could this mean that either the latent profile analysis or the Monte Carlo study converged to a local maximum? Thanks. 


It sounds like you are not giving values for the intercepts of the categorical latent variable. These are given as logit values corresponding to the probabilities of being in each latent class, for example, [c#1*0]; 
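To illustrate the conversion from desired class proportions to those logit intercepts (a sketch; in Mplus the last class is the reference, so its intercept is fixed at zero):

```python
import math

def class_logits(probs):
    """Logit intercepts for a categorical latent variable, last class as reference."""
    ref = probs[-1]
    return [math.log(p / ref) for p in probs]

# Desired class proportions of 30% / 70% for a 2-class Monte Carlo study
logits = class_logits([0.3, 0.7])
print(logits)  # roughly [-0.847, 0.0], i.e., [c#1*-0.847]; with c#2 as reference
```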


Thank you very much for your prompt response. Indeed, I didn't give values for the intercepts of the categorical latent variable. After I did this, the number of individuals in each class is the same as in the actual analysis, and the entropy is nearly the same. However, the means and standard deviations of the fit indices (log-likelihood, AIC, BIC) are still not the same as in the actual analysis (although now the difference between the Monte Carlo study and the analysis is not as big as before). Given that there are no cutoff values for AIC and BIC, and the fact that I am using my data set as population values, how should I interpret the results for the fit indices from the Monte Carlo study? 


The generated data follow the model exactly. The real data don't. You can expect some discrepancy in the log-likelihood, AIC, and BIC. Note that these are not absolute fit indices. I would be more concerned about the other aspects of the results, which tell you whether your sample size is large enough. 

mari posted on Monday, May 09, 2011  12:34 pm



Hello, I am running a GMM with ordinal variables for 6 time points. The unconditional 2-class and 3-class models produced fit indices as follows: 2-class: LL (-14721.983), BIC (29549.459), adjusted BIC (29508.152), LMR p-value (0.08), BLRT p-value (<0.001). 3-class: LL (-14709.088), BIC (29548.015), adjusted BIC (29497.176), LMR p-value (0.2219), BLRT p-value (<0.001). I am thinking of selecting the 3-class model because of the BLRT result and theory, but the BIC values keep bothering me; they are too close to each other. Can I still argue for the 3-class model? Otherwise, should I choose the 2-class model? Thank you for your advice! 


That's a tough call; both BIC and LMR point to 2 classes. But if the 3-class model shows a new class in the sense of a different trajectory type, rather than just a variation on the two earlier themes, I would discuss this solution as well. 

Yan Li posted on Friday, August 26, 2011  12:58 pm



I have a count and an ordinal categorical mediating variable in a multiple group path analysis. Mplus told me to use mixture analysis with known class. Three questions: 1. To compare the nested models, I learned from the previous discussion that I should use the log-likelihood change (2 times which equals chi-square). But how do I use the H0 scaling correction factor for MLR reported under the log-likelihood? 2. How do I judge the BIC difference for the two nested models? What difference is considered "different"? 3. Is reporting model fit statistics needed for this type of analysis? I know I don't get them in the Mplus output. How do I get them if I want to report them? Thanks! 


1. See our web description of this (check the left column of our home page). 2. See the FAQ on BIC (you find FAQs in the left column of the home page too). 3. No, there are no fit statistics used in the literature when count variables are included in a multivariate model like this. Instead, you have to compare this model with other competing models. 


Dear Dr. Muthén, I'm using a GMM to estimate the pdf of 8-dimensional data, but I have a problem determining the best number of Gaussian components. Could you tell me how to do that? One more question: I want to test the goodness of fit of the GMM to the data, and I read about the chi-square test, the Kolmogorov–Smirnov test, and other tests. I also read about BIC, AIC, and the LRT as other methods for assessing the fit of the model to the data, but I don't know what to use (I'm new to the field). Please can you explain the difference between these methods and advise me what to use? Thanks, Ghada 


Please see the overview and Monte Carlo study of Nylund, K.L., Asparouhov, T., & Muthén, B. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling, 14, 535-569. Although we provide TECH13 (see the UG), there is not really a good test of fit against the data unless you have categorical outcomes. See Muthén, B. & Asparouhov, T. (2009). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press. Both papers are on our website. 


Hi, I'd like to know the difference between a goodness-of-fit test and the criterion used for determining the appropriate number of components in a GMM. Is there any difference, or is it the case that once we determine the appropriate number of GMM components, we have already chosen the model with the best fit to the data? Thank you for your response in advance. Ghada 


The issue here is relative versus absolute fit. When you look at different numbers of classes and compare fit statistics such as BIC, you are looking at relative fit. Once you determine the number of classes in this way, you want to look at absolute fit. With mixture modeling, there are no absolute fit statistics. With categorical items you can look at TECH10, which gives univariate and bivariate fit information. 


Thank you, Linda, for your response. Could you please clarify the meaning of "categorical items"? My problem is that I'm trying to estimate the pdf of 8-dimensional data using a GMM. Does this mean that there is no way to assess absolute fit? Another question, please: is it possible to use a GMM to estimate the pdf of complex data, or only real data? Ghada 


Categorical items are items or variables that are binary (dichotomous) or ordinal. I don't know what you mean by pdf. I don't know what you mean by 8D data. I don't know what you mean by complex versus real data. 


I'm trying to estimate the probability density function (pdf) of eight-dimensional data using a Gaussian mixture model (GMM). The data are complex numbers (e.g., 1+j2), but I decompose them into their real and imaginary parts to estimate a real pdf. However, I want to estimate the pdf of complex random variables; is this possible using a GMM? The data are the scattered electric field from objects in a radar system, which contains amplitude and phase (complex numbers). 


I think what you need to do is estimate a bivariate model for the real and imaginary parts. You can estimate any kind of model for the two variables, including a mixture model for bivariate data where you estimate a bivariate Gaussian component in each class (and the real and imaginary parts are correlated within each class). 

J.D. Smith posted on Thursday, August 16, 2012  11:37 am



I am reporting the results of a multiple group analysis conducted using the KNOWNCLASS option in a MIXTURE model. The typical fit indices of a multiclass model don't seem relevant to a model using KNOWNCLASS. What fit statistics should be reported? 


There are no absolute fit statistics to report. If you are comparing models you can use BIC or a loglikelihood difference test. 

sam posted on Sunday, May 05, 2013  1:41 am



Hi. I'm trying to examine the fit indices of a mixture random model. What I did was compare the loglikelihood of the model without the interaction term (this model has acceptable absolute fit) with that of the model with the interaction term. However, I have no idea what I should do next. Could you please help me with this? What does it mean if the difference test is significant? Does it mean the model with the interaction term has poor fit? Thanks in advance. 


The difference test would test for the significance of the interaction. You can find that by looking at the z-test for the interaction. You don't need to do the difference test. 

xiangrong posted on Monday, August 18, 2014  8:17 pm



Dear Dr. Muthen, I'm using LPA to estimate the profiles. I want to get ICL-BIC. Could you tell me how to do that? Thanks, xiangrong 


Mplus does not provide that. You would have to compute it outside Mplus using the posterior probabilities that go into the entropy formula. 

xiangrong posted on Tuesday, August 19, 2014  9:39 pm



Dear Dr. Muthen, Thank you for your response. Could you please tell me how to compute ICL-BIC using the data provided by Mplus? Thanks, xiangrong 


You have to google the formula and see how the posterior probabilities should be used (posterior probabilities are obtained by SAVE=CPROB). I don't share the philosophy of ICL-BIC because it involves classification quality. The SEM counterpart could be seen as model fit combined with R-square for the various DVs. 
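For reference, a sketch of the computation outside Mplus, assuming the usual ICL-BIC formula ICL-BIC = BIC + 2*EN, where EN is the entropy of the posterior class probabilities (the same quantity that enters Mplus's entropy statistic); the posterior probabilities would come from SAVE=CPROB, and the array `tau` below is a toy stand-in:

```python
# Sketch: ICL-BIC = BIC + 2 * EN, with EN the posterior-probability entropy
# EN = -sum_i sum_k tau_ik * log(tau_ik). The data here are illustrative.
import numpy as np

def icl_bic(bic, tau, eps=1e-12):
    """tau: n-by-K array of posterior class probabilities (e.g. from
    SAVE=CPROB); eps guards against log(0) for certain classifications."""
    en = -np.sum(tau * np.log(tau + eps))
    return bic + 2.0 * en

# Toy example: 3 observations, 2 classes, near-perfect classification
tau = np.array([[0.99, 0.01], [0.98, 0.02], [0.01, 0.99]])
print(icl_bic(1000.0, tau))  # only slightly above BIC when entropy is low
```

The sharper the classification (entropies near zero), the closer ICL-BIC stays to BIC; fuzzy classification inflates it, which is exactly the classification-quality penalty questioned above.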


So above there was a discussion about negative BIC values. In a recent analysis I got negative BIC values. I understand that negative BIC values are rare, but I don't understand how this occurs or what it means. I understand that negative values don't necessarily indicate a problem, but I am still wondering how I should interpret or explain this. 


You don't need to explain this as it is legitimate. Interpret it in the same way as a positive BIC: look for the lowest value. The largest negative value is the lowest. 
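As a numerical illustration of how this happens (hypothetical numbers): with continuous outcomes the likelihood is a product of densities, which can exceed 1, so the loglikelihood can be positive and BIC = -2*loglikelihood + r*log(n) goes negative:

```python
# Hypothetical illustration of a negative BIC with continuous outcomes.
import math

loglik = 250.0   # a positive loglikelihood (densities can exceed 1)
r, n = 10, 500   # number of free parameters and sample size
bic = -2 * loglik + r * math.log(n)
print(bic)  # negative, but compared across models just like a positive BIC
```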

CB posted on Tuesday, January 27, 2015  7:15 am



I'm running a 2-class LCA with 4 categorical indicators and I've been trying to interpret model fit. From the model output, I have obtained values for the Pearson chi-square and likelihood ratio chi-square, but it says that degrees of freedom cannot be computed. However, could the degrees of freedom still be calculated from the number of cells in the contingency table (based on the number of levels of the observed variables) minus the number of estimated parameters in the latent class model minus 1? Also, I'm running another 2-class LCA with 4 categorical indicators, but I have added an exogenous variable with a direct effect on only one indicator. Is there a way I can still obtain absolute model fit statistics (as I can still obtain AIC and BIC)? Thanks!! 


The df cannot be computed when there are other model parts beyond the LCA indicators. An example is a covariate. The issue with a covariate is that you don't have one frequency table but rather one for each covariate value. 
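For the covariate-free case, the df calculation the poster describes can be sketched as follows (a standard unconditional LCA with m indicators, each with c levels, and K classes; the parameter count assumes free thresholds and class probabilities only):

```python
# Sketch: df for an unconditional LCA frequency-table chi-square test.
def lca_df(m, c, K):
    """m indicators with c levels each, K latent classes."""
    cells = c ** m                       # cells in the frequency table
    params = (K - 1) + K * m * (c - 1)   # class probs + conditional probs
    return cells - params - 1

# Example: 4 binary indicators, 2 classes -> 16 - 9 - 1 = 6 df
print(lca_df(m=4, c=2, K=2))  # 6
```

This matches the 6 df reported for the two-ordinal-indicator model discussed earlier in the thread only by coincidence of the counts; the point is just cells minus parameters minus 1.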

db40 posted on Tuesday, February 24, 2015  12:53 pm



Hi Dr. Muthen, I have an issue that I have not come across before. I have run LCA models with 2 to 6 classes and I am comparing the information criteria. I see that from the 2-class model to the 6-class model the BIC continuously rises, leading me to think the 2-class model is the optimal-fitting model since it has the lowest BIC. However, the SABIC indicates the 3-class model is also optimal. All LRT tests are significant. Do I pick the model with the lowest BIC, or should I consider the other fit statistics? 


I would go by BIC. SABIC is used by some and may be reasonable in practice because it doesn't penalize the number of parameters as much as BIC, but it doesn't really have backing except for the bivariate normal case the authors looked at. 

db40 posted on Saturday, February 28, 2015  12:40 pm



Dear Bengt, I have read over the Nylund paper and it says, "Based on the results of this study, when comparing across all modeling settings, we conclude that the BIC is superior to all other ICs. For categorical LCA models, the adjusted BIC correctly identifies the number of classes more consistently across all models and all sample sizes." My data are categorical as well. So, is it possible, in this particular case, that the adjusted BIC is correct over and above the BIC? 


You can certainly use that reference to support your choice. 

db40 posted on Wednesday, April 22, 2015  1:44 pm



Hi, I have encountered an odd situation running an LCA with categorical indicators. Reading the Nylund paper, I understand that the BIC, SSABIC, and BLRT are the fit indices by which we should select classes (along with interpretation). I see in the models I have run that the BIC/SSABIC keep decreasing and there appears to be no optimal class (currently 7). I can't use the BLRT because I'm using weights, so in this instance is there anything else in Mplus I can use to help me decipher the optimal number of classes besides rethinking the model's indicators? 


Changing the model can help. For instance, add one factor to the LCA and see which pairs of items have large loadings. Then remove the factor and add WITH statements for those item pairs. BIC might then find a minimum. 

Seth Frndak posted on Monday, October 26, 2015  10:24 am



I am running a latent class analysis, examining model fit with BIC, the Lo-Mendell-Rubin test (TECH11), and the bootstrapped likelihood ratio test (TECH14). When testing a 5-class versus a 4-class solution, I'm finding improved BIC for the 5-class solution and a significant bootstrapped likelihood ratio test for the 5- vs. 4-class comparison. HOWEVER, the Lo-Mendell-Rubin test is nonsignificant. What is the source of disagreement here? Which test do I trust? 


Q1. This is unknown. Q2. Because 2 out of 3 tests agree, I would go with 5 classes. I tend to use only BIC these days. 

Seth Frndak posted on Monday, October 26, 2015  8:35 pm



Thank you for your quick response Dr. Muthen. 

Seth Frndak posted on Monday, October 26, 2015  8:37 pm



This was very helpful! Your thoughts are what I expected. I have seen the BIC results published most often. 


I'm running some exploratory latent profile analyses to see if there are particular configurations of social support, with the ultimate goal of seeing if the winning configuration predicts wellbeing. I started running Mplus with two classes, then three classes, and so on, with the intention of testing up to about eight classes, since past research has generally found 4-5 latent profiles. (I suspect there will be a "winning" number of classes somewhere between 3 and 6.) My problem is that my outputs do not seem to include the MODEL FIT INFORMATION (with the Pearson chi-square or likelihood ratio chi-square). It's giving me entropy and AIC/BIC/aBIC, and I requested TECH11, so I am getting some model fit info, just not the expected main one. Any thoughts? I am also receiving a warning that "All variables are uncorrelated with all other variables within class." I don't know if that is a separate problem or part of the same problem. 


Use BIC to decide on the number of classes. LPA has continuous outcomes, and there is no general overall model fit statistic in that case. You can look at the RESIDUAL results for each class and, for large residuals, add a WITH statement for the pair of variables in the model and see if it is significant and changes conclusions. That warning is OK with LPA. 


Hello, I am running a latent class growth analysis and I am trying to determine how many classes exist. I reviewed the BIC, adjusted LMR-LRT, BF, and cmPk. However, the BF and cmPk come to different conclusions than the adjusted LMR-LRT. Which statistic would you rely on, or do you have any additional recommendations to resolve this discrepancy? Please let me know if you have any questions or concerns. Thank you. Daniel Kern 


It is seldom that all statistics agree. One important consideration is the substantive interpretation of the classes and their validity, for example, how they relate to a distal outcome. 
