Message/Author 

Anonymous posted on Friday, July 01, 2005  9:59 am



I am running a growth mixture model and wanted to assess some distal outcomes. I put them in the USEVAR list, as in an example in the manual, and got an estimate of the means in each class; however, I would like to know if they differ from each other. Is there a way to test this statistically? Thank you!! 

bmuthen posted on Saturday, July 02, 2005  6:00 pm



Run the model with and without holding the means equal across classes, and compute a chi-square test as 2 times the difference in log-likelihood between the two models. 
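As a concrete illustration of this two-run approach, here is a small Python sketch. The log-likelihood values are made up for the example; in practice you would read them from the two Mplus outputs (one run with the distal-outcome means held equal across classes, one with them free).

```python
import math

def lr_chi2_test(ll_constrained, ll_free, df):
    """Likelihood-ratio chi-square: 2 * (LL_free - LL_constrained).

    For df == 1 the p-value uses the exact chi-square(1) survival
    function, erfc(sqrt(x / 2)); for other df you would need
    something like scipy.stats.chi2.sf.
    """
    chi2 = 2.0 * (ll_free - ll_constrained)
    if df != 1:
        raise NotImplementedError("use scipy.stats.chi2.sf for df > 1")
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical log-likelihoods from two Mplus runs (made-up numbers):
chi2, p = lr_chi2_test(ll_constrained=-2350.2, ll_free=-2341.7, df=1)
```

With these made-up values the test yields chi-square = 17.0 on 1 df, which would reject equality of the means at any conventional level.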

Anonymous posted on Friday, July 08, 2005  3:01 pm



I am trying to predict from 3 latent classes to 4 continuous outcomes. I found the previous posting about how to determine if outcomes differ from one another to be useful. However, since I have 3 different classes, how can I determine which classes differ? I can imagine holding the means of the outcomes equal across two classes while allowing them to be free in the third and examining log-likelihood values. However, I am concerned that this will lead to multiple tests and will inflate my Type I error rate. Do you have any guidance on this? Thank you! 


I wouldn't worry too much about this. You can take a conservative approach to avoid criticism by using a smaller p-value threshold. 

Anonymous posted on Wednesday, July 20, 2005  2:35 pm



Following on the previous posting. I have 4 latent classes. The largest group is the best-performing group. If I hold the distal outcome to be equal across all 4 classes, it is not surprising that the null hypothesis will be rejected, since the other 3 classes differ from the "always good" class. Do you have any suggestion on pairwise testing of the differences in the distal outcome among the 3 smaller classes? By the way, for a binary distal outcome, where can I find the standard error for calculating a confidence interval for the odds ratio? Thank you so much!! 

bmuthen posted on Wednesday, July 20, 2005  2:49 pm



For the pairwise testing, you can compute the quantities from the estimated model and Tech3 information using the approach of correlated t-tests (Tech3 gives the covariance between estimates in different classes). I think the OR confidence interval is presented in the output (at least in v 3.12). 
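A sketch of that correlated-difference computation in Python, with hypothetical estimates and Tech3 (co)variances standing in for values you would read from your own output:

```python
import math

def class_mean_diff_test(m1, m2, var1, var2, cov12):
    """z test for the difference between two class-specific estimates,
    accounting for their sampling covariance (taken from Mplus Tech3).
    """
    se_diff = math.sqrt(var1 + var2 - 2.0 * cov12)
    z = (m1 - m2) / se_diff
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return z, p

# Hypothetical class-1 and class-2 means, their sampling variances,
# and their covariance from Tech3 (all made-up numbers):
z, p = class_mean_diff_test(m1=1.40, m2=0.90,
                            var1=0.010, var2=0.015, cov12=0.002)
```

Note that ignoring the covariance term would overstate (or understate) the standard error of the difference whenever the two estimates are correlated.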

lulu posted on Wednesday, November 02, 2005  1:22 pm



I have a question about the confidence intervals for the parameter estimates. Mplus gives symmetric confidence intervals by default. I am wondering, is that based on the normal approximation, i.e., the estimate +/- se*z? By the way, I was using growth mixture modeling. 


Yes. 
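For concreteness, the default symmetric interval can be computed as follows (a minimal sketch with made-up numbers):

```python
def symmetric_ci(estimate, se, z=1.959963985):
    """Normal-approximation confidence interval: estimate +/- z * se.

    z = 1.96 gives the 95% interval that Mplus prints by default
    with the CINTERVAL output option.
    """
    return estimate - z * se, estimate + z * se

# Hypothetical estimate and standard error:
lo, hi = symmetric_ci(estimate=0.50, se=0.10)
```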


When we want to compare means across classes, are we supposed to examine the log-likelihoods manually? Or is there an Mplus command that gives us a chi-square test? Thanks! 


You can use Model Test to get a Wald chi-square test in a single run, or you can do 2 runs and compute the chi-square as 2 times the log-likelihood difference with and without the means held equal across the classes. 

Jungeun Lee posted on Wednesday, July 18, 2007  4:52 pm



Thanks for your response! I have a three-class growth mixture model and wrote syntax to get a Wald chi-square test. Mplus gave me an error message. Could you let me know what I am missing here? %c#1% [tcdel16] (m1); %c#2% [tcdel16] (m2); %c#3% [tcdel16] (m3); MODEL TEST: 0 = m1 - m2; 0 = m1 - m3; 0 = m2 - m3; 


This should work. Please send your input, output, data, and license number to support@statmodel.com. 


Actually, you should try deleting the last Model Test line, 0 = m2 - m3;, since that constraint is redundant and causes a singularity problem. 


Thanks for the response! I deleted the line and got a Wald test. Below is the output that I got from it. Can I say that all those means are different from each other based on the output? Since the line 0 = m2 - m3 was deleted, I am not sure whether the output reflects this relationship or not... I think that I am not super clear about why 0 = m2 - m3 adds redundancy to the test. Wald Test of Parameter Constraints: Value 260.266, Degrees of Freedom 2, P-Value 0.0000 


Your statements 0 = m1 - m2; 0 = m1 - m3; imply that m2 = m1 and that m3 = m1. Therefore m3 = m2. So instead of estimating 3 parameters you estimate 1, leading to 2 df. The Wald test rejects the hypothesis that these parameters are equal. 


Dear Linda and Bengt, I have done a mixture analysis over 13 measurements of the development of delinquency. For each of these trajectories, I would like to examine whether they relate differentially to the development of parenting. That is, I have 10 measurements of parenting, and I would like to do a multivariate growth model including the intercept and slopes for parenting and the intercept and slopes for the same measure of delinquency. Is this possible? Using the delinquency measure in the mixture analysis, and then subsequently using the growth factors of this delinquency measure in a multigroup multivariate model, with groups formed by latent trajectories? Are there maybe others who have used this approach before? Thank you in advance for your help. Loes K 


Are the ten measures of parenting a repeated measure of the same variable over ten time points or ten different variables? 


Dear Linda, The ten measures of parenting are a repeated measure of the same variable. Best, Loes 


You can include a growth model for the parenting variable as part of the GMM. It is not a good idea to estimate a model in steps if it can be estimated simultaneously, so I would do a parallel process growth mixture model. 


Thank you for your help! 

Michelle posted on Wednesday, June 17, 2009  12:01 pm



Hi - I am working on an LCGA model of healthy aging measured at 4 timepoints (4 waves of data across 20 years), with three latent classes and three known classes (based on age). I would like to predict mortality as a categorical distal outcome; mortality is recorded at each wave, so essentially this is a time-varying distal outcome. Can I predict death at each wave, or do I need to collapse this variable into a "mortality-at-any-time" variable? If I can use the time-varying measure, how would I write this in the commands? All the examples I have seen have been for a single distal outcome. Finally, I am wondering how to instruct Mplus to handle the missing data for this model. Essentially, people become "missing" on the healthy aging measure once they have died - it seems to me this is not MAR, and I am wondering how to instruct Mplus to handle this. Thanks, Michelle 

Michelle posted on Wednesday, June 17, 2009  1:48 pm



Addendum to previous post - the question about the missing data is twofold. We do have some people who are missing (as might be dealt with using MAR techniques), but we also essentially have a dichotomous outcome (physically healthy or not) that is censored when someone dies. Is there a way to handle two types of missingness? Or should I just create a dataset where people are removed after they die, i.e., they would have different numbers of observations on the outcome measure? Thanks! Michelle 


You are right that it is important to think carefully about missing data due to death. This is a big topic and there are many approaches you can take using Mplus. I recently gave a talk on this but haven't written it up yet. MAR may hold if previously observed values on the outcome are what predict death. To be on the safe side, however, you want to explore "NMAR" techniques; this essentially brings dropout information into the model. For example, you can augment your growth mixture model with a dropout model by adding a survival model for the time of death. Survival can, for example, be a function of trajectory class. By adding this dropout information, you make the MAR assumption more plausible. 

Michelle posted on Thursday, June 18, 2009  6:24 am



Thanks  this is helpful. I look forward to reading more about this  will the talk be posted here on the website? 


Yes but we don't know when yet. 

Anne Chan posted on Wednesday, April 14, 2010  11:16 am



I am trying to find a way to conduct pairwise tests to check whether there are significant differences between each pair of classes generated from growth mixture modeling in terms of a continuous distal outcome. May I ask: (1) Do I have to do it by saving the class membership into the data file and then using the class membership as an observed variable for further tests? (2) If I test the differences by running the model with and without holding the means equal across classes and checking whether the chi-square indicates a significant difference between models, does that mean I have to run the test several times to check the difference for each pair? (3) I understand there is an alternative way to do it (by MODEL TEST). However, I checked the user's guide and could not find an example to follow. Can you point me to one? Also, is that (checking whether each pair of classes differs in terms of a continuous distal variable) covered in the online video? Thanks a lot! 


You should use Model Test. See Chapter 16, pages 558-559, in the latest UG version. With 2 classes and one distal outcome with means labelled m1 and m2 in the MODEL command, you say: Model Test: m1 = m2; 

John Woo posted on Monday, December 06, 2010  4:04 pm



Hello, My GMM model has a categorical (binary) distal outcome included in USEVARIABLES, i.e., I am not using auxiliary (e). My model also specifies the distal outcome as a function of sex, race, and SES. When I run this model, I get a threshold value for my distal outcome for each class. Question #1: Should I think of these values as the means of the distal outcome when sex=0, race=0, and SES=0? Question #2: If I wanted to get the means of the distal outcome for different covariate values, can I still use the threshold of the distal outcome from my original model and use the following equation? P = exp(threshold + b1*sex + b2*race + b3*ses) / (1 + exp(threshold + b1*sex + b2*race + b3*ses)) Note that I do not have a separate "b0" (i.e., intercept coefficient) in the above equation. I don't see it in the output. Question #3: If I use Model Test to test the difference in means of the distal outcome, can I use the original model to infer the difference in means for the second model (i.e., the model with different covariate values)? Thank you in advance. 


Q1: Yes, except on the logit scale. The mean for the distal u is P(u=1). Q2 and Q3: If you want to test on the logit scale, you simply work with different combinations of threshold, b1, b2, b3 (for different SES). Or you can test on the probability scale using your Q2 expression. 
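A small Python sketch of the probability-scale computation, using hypothetical parameter values. One caution: Mplus reports thresholds rather than intercepts, and a threshold enters the logit with a negative sign (logit = -threshold + b'x), so check the signs against your own output before using this.

```python
import math

def distal_probability(threshold, coefs, x):
    """P(u = 1 | x) for a binary distal outcome within a given class.

    Under the usual Mplus convention the threshold is a negative
    intercept, so the logit is -threshold + sum(b * x).
    """
    logit = -threshold + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical class-specific threshold and slopes for sex, race, SES
# (made-up numbers for illustration):
p = distal_probability(threshold=0.8, coefs=[0.5, -0.3, 0.2], x=[1, 0, 2])
```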

John Woo posted on Monday, December 06, 2010  7:58 pm



Dear Bengt, Thank you. One more question though. I am using TYPE=IMPUTATION (with five imputed datasets), and I do not see the Wald test results in my output. Is there a specific "TECH" I need to specify in OUTPUT? Thank you. 


You need to use MODEL TEST to obtain the Wald Test. See the user's guide for further information. 

John Woo posted on Tuesday, December 07, 2010  3:37 pm



Linda, my question was whether the results for MODEL TEST are available when using TYPE=IMPUTATION. I know there are some functions (such as cprob) that are not available in the output when running imputed datasets. When I ran my model yesterday using TYPE=IMPUTATION and MODEL TEST, I got clean results (i.e., no error messages), except that I did not get any Wald test results. 


MODEL TEST is available with TYPE=IMPUTATION. I ran it and got results. Are you using Version 6.1? It comes under the heading "Wald Test of Parameter Constraints". 

John Woo posted on Tuesday, December 07, 2010  6:28 pm



Linda, OK, I see the result now. I was expecting the Wald test result for the set of pairwise tests (i.e., H0: m1=m2, H0: m1=m3, etc.). But it seems that what I get is the test of H0: m1=m2=m3=... To get the pairwise results, I guess I will just run the model several times, each time using a different pair. P.S. Instead of using the Wald test, could I use a difference-in-proportions test based on the predicted probabilities and estimated class counts? That is, t = (p1 - p2) / ((p1*(1-p1)/n1) + (p2*(1-p2)/n2))^0.5, where p1 = predicted probability of the distal u for class 1, p2 = predicted probability for class 2, n1 = final class count for class 1, and n2 = final class count for class 2. 


Yes, you have to run Wald several times. You can express any test function you want in Model Constraint. 
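The difference-in-proportions computation from the post above can be sketched as follows (hypothetical numbers; note that this treats the estimated class counts as known sample sizes, which ignores classification uncertainty):

```python
import math

def prop_diff_test(p1, n1, p2, n2):
    """Two-sample z test for a difference in proportions,
    using unpooled standard errors as in the formula above.
    """
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return z, p

# Hypothetical predicted probabilities and estimated class counts:
z, p = prop_diff_test(p1=0.40, n1=150, p2=0.25, n2=120)
```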


Hello, I am using LCA with covariates, many of which are categorical. I have created dummy variables for these covariates (n-1 dummy variables for an n-level covariate). Using MODEL TEST, I can get a multiple-df Wald test for each covariate. Is there any way to run more than one Wald test at the same time (rather than having to change the MODEL TEST statement and rerun for each covariate)? 


No, this is not currently possible. 


Thanks for the quick reply. (1) I would love it if the ability to specify more than one MODEL TEST became a feature in a future update. (2) Also, the ability to specify categorical covariates with LCA would be nice (i.e., the program would automatically create the dummy variables and, if a covariate were used in MODEL TEST, would know to do a multiple-df test automatically). In order of preference for future consideration, I'd want (1). Thanks! 


Hi, I have a question regarding the addition of covariates. I have a GMM with 3 classes in which the variances of I, S, and Q are constrained at 0, but the variance of S is free in only two classes. Is this correct? : %OVERALL% i s q | y1@0 y2@1 y3@2 y4@3; i-q@0; c on x1 x2; %c#1% s; s on x1 x2; %c#2% s; s on x1 x2; 


That looks right. Check the results and Tech1 to see that you get what you want. 

K Frampton posted on Thursday, June 02, 2011  11:38 am



Hello, I am performing a Wald test of parameter constraints with MODEL TEST to identify differences in a distal variable across classes. 1. Can you please tell me what it means when the value = ********** (p = 0.00)? How can I find out what this value is? 2. Also, what does it mean when the value = infinity and the p-value = NaN? Thank you! 


The asterisks indicate that the value is too large to be printed in the space allocated for printing. Infinity means a very large number. NaN means not a number. Please send your output and license number to support@statmodel.com so I can see why you are getting these messages. 

K Frampton posted on Friday, June 03, 2011  11:19 am



Thanks, Linda - I figured it out. I was simultaneously asking for Tech11 (I had kept it in from when I was identifying classes). When I deleted this, it worked fine. 


Dear Bengt/Linda, I have 10 repeated measures on felt emotions to predict consumption, a continuous distal outcome, for two groups of people. Specifically, I would like to test the hypothesis that the linear and quadratic trends are different for the two groups, and that these differences in trajectories predict consumption. Following Bengt's recommendation to me in another thread, I used examples 8.6 and 8.8 to create the GMM below: a) is this the right approach, given my research hypotheses? b) how do I find out what the effect of the linear and quadratic factors on the outcome is? DATA: FILE IS als.dat; VARIABLE: NAMES ARE Subject Cond subno litedark leftone lefttwo ap1-ap10 ag1-ag10; USEVARIABLES ARE leftone ap1-ap10; CLASSES = cg (2) c(2); KNOWNCLASS = cg (litedark = 1, litedark = 2); ANALYSIS: TYPE = MIXTURE; MODEL: %OVERALL% i s q | ap1@0 ap2@1 ap3@2 ap4@3 ap5@4 ap6@5 ap7@6 ap8@7 ap9@8 ap10@9; c on cg; %cg#1.c#1% %cg#2.c#1% %cg#1.c#2% %cg#2.c#2% OUTPUT: TECH1 TECH8 CINTERVAL; 


Your i, s, q means will vary across all 2 x 2 classes in this setup, but that is probably what you want. I assume your distal is "leftone"; its mean will vary over the 2 x 2 classes, so that is the effect of growth on the distal here (also including an effect of membership in your cg classes). I don't think you can disentangle the influence of the linear and quadratic factors on the distal because they interact. That's why it is better to let the 2 mixture classes of c influence the distal, as is done here. So in sum, I think this probably gives what you might want. 

Anna Wolf posted on Monday, April 08, 2013  5:30 am



Dear Drs, I was hoping to get some assistance with comparing intercepts and slopes across classes. I've run a latent class growth analysis with a dummy covariate (e.g., treatment vs. control groups). There are three classes. My understanding from previous posts is that I should use Wald chi-square testing via Model Test. I'm just not exactly sure what the syntax would look like. I think I need to add the following to my syntax: MODEL TEST: 0 = p1 - p2; 0 = p1 - p3; I've also added the labels (p1), (p2), (p3) for the three 'slope on treatment' statements for each class. Does this test the mean slope differences between the classes? If so, how do I also go about comparing the intercepts across classes? Thanks in advance for your help! 


It looks like you have done this correctly. For the intercepts, you would label them and do the same test. 

Anna Wolf posted on Wednesday, April 10, 2013  5:41 pm



Thanks Linda for your speedy reply! So, just to clarify, the Wald Test of Parameter Constraints result for the above syntax would compare the slopes of both class 2 (p2) and class 3 (p3) with class 1 (p1). Is that correct? Thus, a significant Wald result would mean that there is a significant difference between the slopes of both classes 2 and 3 compared to class 1. Is that right? Ultimately, I want to compare the slopes of all three classes separately (e.g., class 1 (p1) with class 2 (p2), class 1 (p1) with class 3 (p3), and class 2 (p2) with class 3 (p3)). Is it still correct to just add the syntax MODEL TEST: 0 = p1 - p2; to compare class 1 with class 2, then repeat accordingly for the direct (pairwise) comparisons of the other two class pairs in separate analyses? e.g., MODEL TEST: 0 = p1 - p3; and MODEL TEST: 0 = p2 - p3; Cheers! 


The syntax above tests the equality of the coefficients jointly. A significant Wald test means there is a difference somewhere; it doesn't pinpoint where. The syntax below it tests each pair separately if there is only one MODEL TEST in the input. 
