Message/Author 

Sue Lee posted on Thursday, March 08, 2012  2:37 pm



Dear Drs. Muthen, Could you point me to resources that explain how the auxiliary (e) command tests the difference in the means across latent classes by a covariate? The explanation in the Mplus User's Guide, stating that it uses posterior probability-based multiple imputations, was not sufficient for me. Thank you. 


There is a technical appendix on the website that covers this. It is called: Equality Test of Means Across Latent Classes Using Wald Chi-Square Based on Draws From Posterior Probabilities 

Sue Lee posted on Friday, March 09, 2012  8:06 am



Thank you very much, Dr. Muthen! 

Sue Lee posted on Monday, April 09, 2012  12:28 pm



Dear Drs. Muthen, If the predictor of interest is a categorical variable such as race, what is the interpretation of the probability of the covariate given class membership? I am trying to use this command to test if I need to stratify the LCA model by race. Would it be an appropriate use of an auxiliary(e) command? Thank you. 


It is the proportion in each class for which the race dummy variable equals one. The AUXILIARY command is used for screening purposes. 
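
For reference, a minimal sketch of how the auxiliary (e) option might be specified (variable names here are hypothetical, not from the original posts):

```
VARIABLE:
  NAMES = u1-u5 black;
  USEVARIABLES = u1-u5;
  CATEGORICAL = u1-u5;
  CLASSES = c(3);
  AUXILIARY = black(e);    ! equality test of the covariate mean across classes
ANALYSIS:
  TYPE = MIXTURE;
```

For a dummy-coded covariate such as race, the class-specific "means" reported by this test are the proportions coded one in each class.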

Sue Lee posted on Wednesday, April 11, 2012  8:39 am



Thank you, Dr. Muthen. Do you suggest a way to test if the LCA model needs to be stratified by race? I can use the theory and the fact that conditional probabilities of indicators seem to differ substantially in a certain class, but I would like to know if there are ways to back it up statistically. 


You can look at the direct effects of the latent class indicators regressed on race and allow them to vary across classes. You cannot look at all of them at the same time because this would not be identified. 
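
A hedged sketch of what such a class-varying direct effect might look like, testing one indicator at a time (names are illustrative):

```
MODEL:
  %OVERALL%
  u1 ON race;        ! direct effect of race on one indicator
  %c#1%
  u1 ON race;        ! repeating the statement in each class
  %c#2%
  u1 ON race;        ! lets the direct effect vary across classes
```

As noted above, including direct effects for all indicators simultaneously would not be identified, so they would be screened one (or a few) at a time.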


I am estimating a growth mixture model and using the AUXILIARY (e) option to test differences in predictor means across classes. A reviewer wants to know whether the procedure accounts for multiple testing. I could not find information about this in the technical appendices from 2010 and 2014. 


No, it does not. But if you focus on the overall test there is no such issue. 

Laura posted on Wednesday, April 01, 2015  2:01 am



Hi, When using auxiliary(e), means and standard errors are given in the output. Is it possible to calculate standard deviations for these means based on the standard error and the number of observations in the auxiliary variable in question? I have some missing values in these auxiliary variables, and I was wondering which would be the correct "n" in each case: the number of observations in the whole analysis or in each auxiliary variable? 


We no longer recommend aux(e), except for methods research. See the recommendations in Tables 6 and 7 of the paper on our website: Asparouhov, T. & Muthén, B. (2014). Auxiliary variables in mixture modeling: Using the BCH method in Mplus to estimate a distal outcome model and an arbitrary second model. Web note 21. The output for the BCH option gives the needed tests. They are based on the number of individuals with data on the distal outcome in question. 
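
A minimal sketch of the automatic BCH specification for a distal outcome (variable names are hypothetical):

```
VARIABLE:
  NAMES = u1-u5 y;
  USEVARIABLES = u1-u5;
  CATEGORICAL = u1-u5;
  CLASSES = c(3);
  AUXILIARY = y(BCH);    ! distal outcome; means tested across classes
ANALYSIS:
  TYPE = MIXTURE;
```

The BCH output then reports class-specific means for y along with the overall and pairwise equality tests.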

Laura posted on Thursday, April 02, 2015  6:41 am



Thank you for your reply. I'm interested in predictors of latent classes (or trajectories). I have understood that the R3STEP method would be suitable for this purpose (instead of aux(r)), when including covariates and doing multinomial logistic regression. If I also would like to look at and report the means of predictors in the latent classes independently, would the BCH method be suitable for that (the automatic 3step analysis using AUXILIARY = y(BCH))? 


Yes. 
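
Combining the two purposes described above, a hedged sketch (illustrative variable names) might list both auxiliary types in one run:

```
VARIABLE:
  AUXILIARY = x1 x2 (R3STEP)   ! covariates predicting class membership
              y1 (BCH);        ! means of y1 compared across classes
```

R3STEP gives the multinomial logistic regression of class membership on the covariates, while BCH gives the class-specific means and equality tests.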

Anna Hawrot posted on Thursday, April 23, 2015  6:09 am



Hi! I'm trying to compare means of two distal outcomes across 5 latent classes controlling for set of covariates. I used manual BCH method, however I'm not sure if I fully understand the output. As I centered my covariates, the means I'm interested in are simply intercepts in the "regression of distal outcomes on covariates" part of the output, am I right? If yes, how can I compare the intercepts across classes (in the output they all are tested against zero)? Is there any option in BCH method to do that? Or maybe I should use "model test:" command? 


Give the intercepts labels and then express any difference you want as a NEW parameter in Model Constraint. 
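
For the manual BCH setup described in the question, a hedged sketch of the labeling and constraint (two classes shown; names are illustrative):

```
MODEL:
  %c#1%
  [y1] (int1);          ! label the intercept of the distal outcome in class 1
  y1 ON x1 x2;
  %c#2%
  [y1] (int2);
  y1 ON x1 x2;
MODEL CONSTRAINT:
  NEW(diff12);
  diff12 = int1 - int2; ! difference tested against zero via its standard error
```

MODEL TEST with the same labels would give an equivalent Wald test of the difference.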
