

I am conducting an LCA on 6 categorical variables, and an analysis of their covariates. The distal outcomes are TSDO and TRWA. I am using Mplus 7, and have written my syntax as:

DATA: FILE IS DATA.dat;
  FORMAT = 1F8.0 8F8.2;
  NOBSERVATIONS ARE 4442;
VARIABLE: NAMES ARE IDT2 TET TAT TCT TNrT TOT THT TSDO TRWA;
  IDVARIABLE IS IDT2;
  MISSING ARE ALL Blank;
  USEVARIABLE ARE TET TAT TCT TNrT TOT THT;
  CLASSES ARE C(5);
ANALYSIS: COVERAGE = 0.15;
  PROCESSORS = 4;
  TYPE = Mixture;
  STARTS = 500;
  LRTSTARTS = 20 10 160 80;
MODEL: %OVERALL%
OUTPUT: Tech11 Tech14;
PLOT: TYPE = Plot3;
  SERIES = TET (1) TAT (2) TCT (3) TNrT (4) TOT (5) THT (6);

1) The output shows that each variable has the same variance across classes. Is this due to a syntax error? Also, when I remove the 6th variable (which may not need to be included), the classes change more dramatically than expected and produce unlikely outcomes. Is this due to a syntax error?

2) With Mplus 7, do I need to use manual code to analyse the covariates, or do I use the same analysis as above, but exclude Tech11 and Tech14 and include VARIABLE: AUXILIARY = (Du3Step) TSDO TRWA;?

Thank you, Laina


First, you don't want to include Tech11 and Tech14 before you have found the best solution; see Asparouhov, T. and Muthén, B. (2012). Using Mplus TECH11 and TECH14 to test the number of latent classes. Mplus Web Notes: No. 14, May 22, 2012. Note that you don't want to say STARTS = 500, but instead, for instance, STARTS = 500 100.

1) This is the default; it gives a stable and easily estimated baseline model. No, deleting the 6th variable may change the results for substantive reasons.

2) You can use the same analysis as above, but exclude Tech11 and Tech14 and include VARIABLE: AUXILIARY = (Du3Step) TSDO TRWA;. Note also that Mplus version 7.1 has a new stepwise technique for distal outcomes, referred to as DCON/DCAT; see http://www.statmodel.com/verhistory.shtml
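Putting those suggestions together, the affected commands might look like the sketch below. This is only an illustration built from the poster's own syntax (the variable names and the Du3Step option come from the original post); all other commands stay as they were, and Tech11/Tech14 are left out until the number of classes is settled:

```
VARIABLE: NAMES ARE IDT2 TET TAT TCT TNrT TOT THT TSDO TRWA;
  USEVARIABLE ARE TET TAT TCT TNrT TOT THT;
  CLASSES ARE C(5);
  ! 3-step treatment of the distal outcomes:
  AUXILIARY = (Du3Step) TSDO TRWA;
ANALYSIS: TYPE = Mixture;
  ! 500 initial-stage starts, 100 final-stage optimizations:
  STARTS = 500 100;
```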


Dear Dr. Muthen, Thanks for Mplus V7.1, which has a lot of new and useful features. I have a question related to covariate testing in a mixture model (a factor mixture model). Originally I used DU3STEP/DE3STEP to analyze the covariates. However, the new V7.1 gave warning messages for a lot of my auxiliary variables, namely, that the classification error between Step 1 and Step 3 exceeded 20%. Apparently there were no results for those variables. I understand that this check is a new feature of V7.1. In this case, should I use the new DCON/DCAT for distal outcomes? This new approach did not give this warning message (except for listwise deletion). Thanks


Yes, this is what you should do. Do not include any c ON statements when you use these options.
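In Mplus 7.1 syntax this amounts to something like the following sketch. It assumes TSDO and TRWA are continuous distal outcomes, so DCON applies; DCAT would be used for categorical outcomes:

```
VARIABLE:
  ! DCON for continuous distal outcomes; use (DCAT) for categorical ones.
  ! Note: no c ON statements in the MODEL command with these options.
  AUXILIARY = TSDO (DCON) TRWA (DCON);
```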


Dear Linda, It is a relief that I can go with the DCON/DCAT auxiliary approach in analyzing the distal outcomes, because I did not know how to deal with the many error warnings associated with the DU3STEP/DE3STEP approach. Thanks a lot for your swift response.


Thanks for your help. I have a follow up question regarding the dcon/dcat. I am interested in using this analysis, but have Mplus7, not Mplus7.1. Is there a manual code that I can use to run the same analysis? Laina 


No, that's not possible. You need 7.1. 

Laina Isler posted on Wednesday, June 12, 2013  2:38 pm



Thanks, I will look into updating it. I am getting an entropy of 0.611, which I understand is not strong at all, but my average latent class probabilities for most likely class membership range from 0.74 to 0.815. And the classes obtain good chi-square values in the 3-step and 1-step equality tests. I am not sure whether I should interpret the data as demonstrating distinctive classes, or whether the entropy is too low. I have tried changing the number of classes, and this does not improve entropy. Is there another way to improve entropy? Laina

Laina Isler posted on Wednesday, June 12, 2013  3:12 pm



Also, I am dealing with data sets of 4,500 to 6,900 cases. I am not certain whether entropy may be negatively affected by sample size.


An entropy of 0.6 is ok. You should not decide on the number of classes based primarily on entropy; use BIC instead. Sample size has nothing to do with entropy.
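For reference (this formula is standard and not taken from the thread), the relative entropy Mplus reports is computed from the estimated posterior class probabilities $p_{ik}$ for individual $i$ in class $k$, with $n$ individuals and $K$ classes:

```latex
E_K = 1 - \frac{\sum_{i=1}^{n}\sum_{k=1}^{K}\left(-\,p_{ik}\ln p_{ik}\right)}{n\ln K}
```

Values near 1 indicate sharp classification. Because the numerator and denominator both scale with $n$, the statistic depends only on how concentrated the posterior probabilities are, which is why sample size itself does not drive it.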

Laina Isler posted on Wednesday, June 12, 2013  4:15 pm



Thanks for the quick reply. The BICs are very large, so I am not certain how much of a difference is necessary to support one number of classes over another. Furthermore, the BIC keeps decreasing when I add classes long after further tests (the Vuong-Lo-Mendell-Rubin likelihood ratio test) indicate that fewer classes would be sufficient. For instance, BICs of 322937.566, 322838.965, and 322769.701 were obtained for 4, 5, and 6 classes, respectively, while the Tech11 p-values (Lo-Mendell-Rubin adjusted LRT) are 0.008, 0.0102, and 0.0922, respectively. Does this indicate that a 5-class solution is supported, despite BIC values continuing to decrease? (BIC continues to decrease up to 9 classes.)


Yes, Tech11 gives support to 5 classes, but given that BIC continues to decline, the conclusion isn't clear. I would recommend what we teach in Topic 5, namely to look at how different the solutions are for 5 and higher numbers of classes. Look at the profile plots; perhaps 6 and more classes are merely uninteresting variations on 5-class themes.
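For context on the size of BIC differences (a guideline not from this thread; it follows Raftery's 1995 scale for BIC comparisons): with log-likelihood $\ln L$, $p$ free parameters, and sample size $n$,

```latex
\mathrm{BIC} = -2\ln L + p\ln n
```

and a difference of roughly 10 or more between two models is usually read as very strong evidence for the model with the smaller BIC. By that yardstick the drops reported above (about 99 going from 4 to 5 classes and about 69 going from 5 to 6) are large, which is why BIC alone keeps favoring more classes here and why inspecting the profile plots is a useful complement.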


Thanks for your help. The 5-class solution seems the most reasonable. However, I am getting altered models (class sizes, average latent class probabilities, and plots) depending on whether I run the data with syntax to test covariates (DU3STEP) or likelihood ratio tests (Tech11 and Tech14). I thought these tests were not supposed to affect the classes themselves, and was wondering which output would be more accurate, or whether there is a way to run the tests without impacting the class results? Thank you


Please send one output with neither DU3STEP nor TECH11 and TECH14, one output with only DU3STEP, one output with only TECH11 and TECH14, and your license number to support@statmodel.com.

Natalie posted on Thursday, April 17, 2014  1:43 pm



Are variances of the continuous distal outcome free to vary or constrained to be equal across classes when DCON is used?


They are unconstrained/unequal. 
