
Judy Black posted on Wednesday, October 31, 2012  4:11 am



I am using the new 3-step approach to examine latent classes. After finishing the first two steps, we got 3 classes. We would like to examine how depression scores changed within each class. In SPSS, we could use repeated measures ANOVA to examine whether/how depression changed within every class. Is it possible to do an ANOVA/t-test in Mplus 7? Or is there any other analysis that can be used to examine the change scores of depression within every class? Thanks in advance. 


The third step is the t-tests. 

Judy Black posted on Thursday, November 01, 2012  1:46 am



Hi Linda, thanks for your reply. Here is the syntax of my third step. What commands shall I use within the MODEL part to have a t-test?

TITLE: latent class ana
DATA: FILE IS test7.dat;
VARIABLE: NAMES ARE T1CESD T1CONTR T2CESD T2CONTR T3CESD T3CONTR p1 p2 p3 n;
  MISSING IS ALL (99.000);
  USEVARIABLES ARE T1CESD T2CESD T3CESD n;
  CLASSES = C(3);
  NOMINAL = n;
ANALYSIS: TYPE = MIXTURE;
MODEL:
  %OVERALL%
  %C#1%
  [N#1@2.792]; [N#2@0.169];
  %C#2%
  [N#1@7.107]; [N#2@9.080];
  %C#3%
  [N#1@2.208]; [N#2@9.106];


If your T1CESD T2CESD T3CESD variables are covariates, just say

AUXILIARY = (R3STEP) T1CESD T2CESD T3CESD;

and

MODEL: %OVERALL%

That's all. This will give you the tests you need. If they are not covariates, use

AUXILIARY = (D3STEP) T1CESD T2CESD T3CESD;
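Assembled into a complete input, the automatic covariate check described in this reply might look like the sketch below. The file and variable names reuse Judy's earlier post; the choice of T1CONTR-T3CONTR as class indicators is purely illustrative, since the automatic approach requires the auxiliary variables to stay out of the class-defining model:

```mplus
TITLE: automatic 3-step covariate check (sketch)
DATA: FILE IS test7.dat;
VARIABLE: NAMES ARE T1CESD T1CONTR T2CESD T2CONTR T3CESD T3CONTR p1 p2 p3 n;
  MISSING IS ALL (99.000);
  USEVARIABLES ARE T1CONTR T2CONTR T3CONTR;   ! illustrative class indicators
  CLASSES = c(3);
  AUXILIARY = (R3STEP) T1CESD T2CESD T3CESD;  ! covariates tested outside the model
ANALYSIS: TYPE = MIXTURE;
MODEL: %OVERALL%
```

No [N#@] statements are needed here; the classification error is handled automatically.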

Lewina Lee posted on Friday, November 02, 2012  5:00 pm



Dear Drs. Muthen, Could you please explain when we need to use the AUXILIARY statement in Step 3 of the new 3-step approach to mixture modeling? In your reply to Judy Black's questions, you suggested using

AUXILIARY = (R3STEP) T1CESD T2CESD T3CESD;

In Web Notes No. 15 v. 5 (Appendix F), it does not specify an AUXILIARY statement. The code is:

VARIABLE: Names are u1-u10 y x p1-p3 n;
  usevar are y x n;
  classes = c(3);
  nominal = n;
DATA: File = man3step2.dat;
ANALYSIS: Type = Mixture;
  starts = 0;
MODEL: %overall%
  Y on X;

(and so on...) I would like to regress a distal outcome on the latent class variable (C) & several covariates in Step 3. Do I need to specify the outcome & covariates in an AUXILIARY statement? Thank you very much for your help on this new technique. Sincerely, Lewina 


There is a distinction between the automatic and the manual approach to the 3-step. The automatic approach uses AUXILIARY and therefore does not need the [n#1@] type of statements. The automatic approach is suitable for exploration of covariates and distal outcomes in an LCA. An input example showing the covariate case is given in the Version 7 UG ex 7.3. See also the V7 UG pp. 553-554. The manual approach does not use AUXILIARY and needs the [n#1@] type of statements. The manual approach is exemplified in Appendix F of Web Note 15 for the case where the variables not included in the latent class formation represent a regression model. If you want to regress a distal outcome on the latent class variable (C) & several covariates in Step 3, you need to take the manual approach. 
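The manual Step 3 that this reply points to can be sketched as follows. Everything here is a placeholder sketch, not output from a real analysis: y is the distal outcome, x a covariate, n the most likely class indicator, and the @ values stand in for the classification logits computed from the Step 1 output.

```mplus
VARIABLE:
  USEVAR = y x n;
  CLASSES = c(2);
  NOMINAL = n;
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 0;
MODEL:
  %OVERALL%
  y ON x;        ! distal regressed on the covariate
  c ON x;        ! covariate also predicts class membership
  %C#1%
  [n#1@1.5];     ! placeholder Step 1 classification logit
  [y] (m1);      ! class-specific intercept of the distal
  %C#2%
  [n#1@-2.0];    ! placeholder Step 1 classification logit
  [y] (m2);
MODEL TEST: 0 = m1 - m2;   ! Wald test of equal intercepts
```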

Lewina Lee posted on Sunday, November 04, 2012  1:35 pm



Great - clear to me now. Thank you very much for your explanation! 

Lewina Lee posted on Wednesday, November 07, 2012  6:34 am



Dr. Muthen, I tried the manual 3-step approach. In the third step, I received an error message when I tried to regress a binary distal outcome (CHD) on a binary latent class variable (c). Could you please help? These are the model statements and error message I received. I also tried adding the ALGORITHM=INTEGRATION statement to the ANALYSIS section (as suggested in the error message), but that did not help.

MODEL:
  %OVERALL%
  CHD on T1AGE;
  c on T1AGE;
  CHD on c;
  %C#1%
  [N#1@ 2.682732393];
  CHD on T1AGE;
  c on T1AGE;
  CHD on c;
  %C#2%
  [N#1@ 2.923583166];
  CHD on T1AGE;
  c on T1AGE;
  CHD on c;
OUTPUT: SAMPSTAT RESIDUAL STANDARDIZED TECH1 TECH4 TECH7 TECH8;

*** ERROR
The following MODEL statements are ignored:
* Statements in the OVERALL class: CHD ON C#1
* Statements in Class 1: C#1 ON T1AGE, CHD ON C#1
* Statements in Class 2: C#1 ON T1AGE, CHD ON C#1
*** ERROR
One or more MODEL statements were ignored. Note that ON statements must appear in the OVERALL class before they can be modified in class-specific models. Some statements are only supported by ALGORITHM=INTEGRATION.

Thank you, Lewina 

Judy Black posted on Wednesday, November 07, 2012  7:45 am



Dear Dr. Muthen, Thanks for your reply. I tried the commands you suggested, but got an error. You may see the output here:

Mplus VERSION 7
MUTHEN & MUTHEN
11/07/2012 4:27 PM

INPUT INSTRUCTIONS
TITLE: latent class analysis Nov 7th
DATA: FILE IS C:\dataset2.dat;
VARIABLE: NAMES ARE T1CESD T1CONTR T2CESD T2CONTR T3CESD T3CONTR T1SPAN T2SPAN T3SPAN Education;
  MISSING IS ALL (99.000);
  USEVARIABLES are T1CESD T1CONTR T2CESD T2CONTR T3CESD T3CONTR;
  CLASSES = C(3);
  AUXILIARY = (de3step) T1CESD T2CESD T3CESD;
ANALYSIS: TYPE = MIXTURE; PROCESSORS = 2;
OUTPUT: TECH11; TECH14;
SAVEDATA: FILE = C:\November\dataset1.dat; SAVE = CPROB;

*** WARNING
Variable name contains more than 8 characters. Only the first 8 characters will be printed in the output. Variable: EDUCATION
*** WARNING in MODEL command
All variables are uncorrelated with all other variables within class. Check that this is what is intended.
2 WARNING(S) FOUND IN THE INPUT INSTRUCTIONS

latent class analysis Nov 7th
*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. 

Lewina Lee posted on Wednesday, November 07, 2012  8:00 am



Hi, I think I resolved the issue I had earlier; I am retracting my question above (Wednesday, November 07, 2012, 6:34 am). Sorry for the inconvenience. Lewina 


Judy: Remove TECH11 and TECH14 from the OUTPUT command. These options cannot currently be used together with DE3STEP. 


Hello, I am using the 3-step manual approach in Mplus 7 to regress a distal outcome on the latent class variable and covariates. I may just be completely overlooking something, but is there any way to test the equality of means of the distal outcome across classes (like in DU3STEP) while also adjusting for covariates? My third-step output gives my outcome (y) regressed on covariates (x1-x8) and the intercept and residual variance for each class. I was hoping to get a comparison across classes; is there a way to request this, or a manual approach? Thanks for any suggestions. 


Use MODEL TEST in the final step.

%C#1%
...
[D] (m1);
%C#2%
...
[D] (m2);

MODEL TEST: m1 = m2;
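With more than two classes, the same labeling idea extends to a joint Wald test; a sketch assuming three classes and the same placeholder distal D:

```mplus
%C#1%
[D] (m1);
%C#2%
[D] (m2);
%C#3%
[D] (m3);
! joint 2-df Wald test of equal distal means across all three classes:
MODEL TEST:
  0 = m1 - m2;
  0 = m1 - m3;
```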

Leslie Roos posted on Tuesday, February 05, 2013  11:42 am



Hello, I am using the 3-step approach in Mplus 6 to regress a distal categorical outcome on a latent class variable (complex stratified dataset), currently in the 3rd step following procedures outlined in Feingold, Tiberio & Capaldi (2013) and Asparouhov & Muthen (2012). I am stuck at receiving the error:

*** ERROR
One or more MODEL statements were ignored. These statements may be incorrect.

Any advice would be highly appreciated! Leslie

ANALYSIS: TYPE = COMPLEX MIXTURE;
  ESTIMATOR = MLR;
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO;
MODEL:
  %OVERALL%
  aax1or2 ON cGROUP;
  jail ON cGROUP;
  %cGROUP#1%
  [GROUP#1@1.087]; [GROUP#2@2.792]; [GROUP#4@0.264]; [GROUP#5@0.033];
  jail ON aax1or2;
  %cGROUP#2%
  [GROUP#1@0.653]; [GROUP#2@2.008]; [GROUP#4@0.838]; [GROUP#5@0.087];
  jail ON aax1or2;
  ....
OUTPUT: CINTERVAL;


Please send the output and your license number to support@statmodel.com. 


Hi, I am performing the manual 3-step for an LCA model with covariates and three distal outcomes (two of which are binary and one continuous). I am unable to run the final step with all three distals and covariates. Two problems happen. First, when I ran the model with only the continuous distal, the covariate relationships were estimated, but the Wald test for the comparison of the distal outcome means (using MODEL TEST) produced the following error:

WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX.

Second, when I ran the model with a binary distal (which was treated as continuous), I received the following error message under the Sample Statistics section:

THE ESTIMATED COVARIANCE MATRIX FOR THE Y VARIABLES IN CLASS 1 COULD NOT BE INVERTED. PROBLEM INVOLVING VARIABLE (F1EVERDO). COMPUTATION COULD NOT BE COMPLETED IN ITERATION 4. CHANGE YOUR MODEL AND/OR STARTING VALUES.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.

Thanks in advance. 


First, note that Web Note 15, which describes the 3-step approach, has recently been corrected. The corrected version can be found on our home page. This means that the manual approach has changed and that R3STEP, DE3STEP, and DU3STEP have errors. Corrections are included in Mplus Version 7.1, which we plan to release at the end of March. Results are, however, quite similar. Second, these types of singularities may be due to an observed variable being constant within a class. Please check using the most likely class approach. Third, when using a binary distal variable, DU3STEP should be replaced by DE3STEP. 


Hi, I am using the calculations based on the revised Web Note. I also checked that there is variance in each of my classes for my three distals using the most likely class assignment. However, I am still not able to test the equality of means using MODEL TEST. Is there a different way of testing the equality of means in the context of the manual 3-step? Thanks! 


Is your distal categorical? If not, please send your input, output, and data to Support. 


Thank you for your response. I have three distal outcomes (two of which are binary and one continuous). I will send my input, output, and data to support. I appreciate your assistance! 


Hello, I conducted a mixture model analysis and then subsequently evaluated covariates and distal outcomes using the new approaches in Version 7. I was asked to justify why I would use the Wald test over ANOVA/MANOVA for my distal outcome means comparison. Is there any specific resource you can direct me to that would help validate the use of the Wald test for comparison? Thanks! 


ANOVA/MANOVA does not exist for mixture modeling, while the Wald test does; the Wald test is essentially the same as ANOVA would be if ANOVA were available for mixtures. 

Lewina Lee posted on Thursday, March 14, 2013  6:16 am



Drs. Muthen, I am running an LCA on data from a sample at Time 1, and using DU3STEP to compare classes on outcomes at Time 2. Only about 60-70% of my LCA sample has data on Time 2 variables. How can I obtain the class-specific Ns for each distal outcome I specify with DU3STEP? Thank you, Lewina 


Not sure what you mean by class-specific Ns: whether you are referring to the sample sizes or the means of the nominal N variable representing latent class. If you refer to sample size, you can get that by doing a TYPE=BASIC run. Note that the Version 7 DU3STEP has an error that is corrected in the upcoming Version 7.1. See also Web Note 15, which shows how to do this manually. 

Lewina Lee posted on Thursday, March 14, 2013  9:58 am



When I ran DU3STEP, the output reads:

LPES06           Mean    S.E.
Class 1        15.613   0.246
Class 2        14.222   0.394
Class 3        15.294   0.215

               Chi-Square  P-Value
Overall test        9.092    0.011
Class 1 vs. 2       8.836    0.003
Class 1 vs. 3       0.702    0.402
Class 2 vs. 3       5.488    0.019

Because my LCA was run on N=1076, and only a subset of the sample had data (N=649) on variable LPES06 (the auxiliary distal outcome), I want to know the split of the N=649 into Classes 1, 2 & 3. That is, in the above, what N is the Class 1 mean of 15.613 (SE=0.246) based on? 


You need to use SAVEDATA: SAVE = CPROB; and then, using e.g. Excel, you can count the number of cases that have a nonmissing value for the distal outcome and the relevant most likely class. 
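A sketch of the SAVEDATA addition (the file name is a placeholder); the saved file contains the posterior probabilities plus a final column with the most likely class, which can then be cross-tabulated against missingness on the distal:

```mplus
SAVEDATA:
  FILE = cprobs.dat;   ! placeholder file name
  SAVE = CPROB;        ! saves CPROB1-CPROBk and the most likely class
```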

Yaoyue Hu posted on Thursday, April 11, 2013  2:10 am



Hello, in step 3 of the manual 3-step estimation:

MODEL:
  %overall%
  c on x1-x13;
  %c#1%
  [N#1@2.706]; [N#2@4.043]; [N#3@0.172]; [N#4@0.354];
  %c#2%
  [N#1@2.303]; [N#2@9.176]; [N#3@5.635]; [N#4@3.912];
  %c#3%
  [N#1@5.670]; [N#2@6.898]; [N#3@9.019]; [N#4@6.131];
  %c#4%
  [N#1@7.073]; [N#2@0]; [N#3@6.016]; [N#4@9.036];
  %c#5%
  [N#1@1.673]; [N#2@9.038]; [N#3@9.038]; [N#4@9.038];

The following MODEL statements are ignored:
* Statements in Class 1: [ N#3 ] [ N#4 ]
. . .
* Statements in Class 5: [ N#3 ] [ N#4 ]

I don't know what was wrong, and is it possible to correct? Thank you for your help. 


Did you have N on the USEVARIABLES list, and did you declare it as NOMINAL? Also, be sure to calculate your [N#] values using the corrected Web Note 15, which is called Version 6. 


Hi there, I get the following problem in the 3rd step when using the LC as a moderator.

*** WARNING in MODEL command
At least one variable is uncorrelated with all other variables within class.
*** ERROR
The following MODEL statements are ignored:
* Statements in Class 1: [ N#1 ] [ N#2 ] [ N#3 ] [ N#4 ]...

My commands are as follows (I have double-checked my way of calculating the [N#] values by recalculating those in the example of Web Note 15, v6):

VARIABLE: names are Nu E O A CO EAU AU4 CPROB1-CPROB5 n;
  usevar are EAU AU4 n;
  classes = c(5)
  NOMINAL = n;
DATA: FILE = unweight2.dat;
ANALYSIS: TYPE = MIXTURE;
  starts = 0;
MODEL:
  %OVERALL%
  AU4 ON EAU;
  %C#1%
  [N#1@3.201]; [N#2@5.933]; [N#3@1.404124]; [N#4@0.514];
  AU4 ON EAU;
  ...
  %C#5%
  [N#1@1.756]; [N#2@3.564]; [N#3@6.349]; [N#4@9.026];
  AU4 ON EAU;

How can I fix it? Thanks. 


Try adding a semicolon at the end of the CLASSES option. 


Thank you. It works now. 


I have an LCA model in which the covariates influence the measurement model. It is similar to UG example 7.12, except the covariates do not have direct effects on the indicators. The covariates are binary (e.g. gender). How do you output the mean of the covariate for each latent class? For example, I would like to know the proportion of people within each class that are female. Thank you! 


Ask for RESIDUAL in the OUTPUT command. 


Thank you for your reply. Is there a way to test for significant differences for the covariates across classes? (those that are included in the LCA model). 


You can do that using the 3-step approach of AUXILIARY = (R3STEP) x; where you don't include x in the model. If you include the x in the model, it is harder because the x parameters are not part of the model, unless you bring them into the model, mentioning say their means, and then do MODEL TEST on them. But that is a heavier model, particularly if the x's have missing data. 


Thanks for your reply. I would like to keep the covariates in the LCA measurement model. What if I used the manual 3-step procedure (Web Note 15 v. 6) and ran the measurement model with covariates in step 1? Is step 3 flexible enough that I could leave the covariate out of the LCA model, but specify it on an auxiliary statement instead using the (E) or (R3STEP) option? 


No. And the (E) option is now more or less superseded by the DU/DE3STEP and DCAT/DCON options (see the Version 7.1 Mplus Language Addendum). But to me, if you have gender as a predictor of latent class in your model, it seems that this model already tells you what you need: the significance of the gender effect is reported (males have significantly different latent class logits than females). If you are instead interested in how significantly different the gender proportions are in the different classes, shouldn't the model be one where gender is a distal outcome instead of a predictor? 


You are correct that I already have what I need in terms of gender differences across classes (the mean is estimated in the residual output, and the significance test comes from the regression). Thanks for pointing that out. The remaining piece that I am unclear on is how to include gender in the measurement model but still use R3STEP to examine the association between covariates and latent classes, while controlling for gender in the multinomial logistic regression. Is that possible using the "automatic" or manual R3STEP procedure? 


So you have a model with say u1-u5 as latent class indicators and gender as an x variable with c ON x. And then you want to check other potential covariates by having them predict the classes determined by the u1-u5 plus x information. If I am understanding you correctly, you can do that by manual R3STEP. 

Wendy Liu posted on Thursday, June 13, 2013  5:12 pm



Dear Professor Muthen, I have used the manual 3-step approach (Web Notes: No. 15, V6) for an LCA model with one continuous covariate and one distal continuous outcome. The output file, under the section "Model Results", showed the following:

Categorical Latent Variables
Means   Estimate   S.E.   Est./S.E.   Two-tailed P-value
C#1        0.521  0.162       3.209                0.001
C#2        0.744  0.200       3.716                0.000

How should I interpret this part of the results? Thank you, Wendy 


These are the logits that define the model-estimated class probabilities, which are given earlier in the output. 
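In symbols, with the last class as the reference (its logit fixed at 0), the estimated class probabilities follow the multinomial logit form; plugging in the posted values 0.521 and 0.744 gives class probabilities of approximately .35, .44, and .21:

```latex
% a_k are the logit means of the categorical latent variable, a_K = 0:
P(C = k) = \frac{\exp(a_k)}{\sum_{j=1}^{K} \exp(a_j)}
```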


Is it possible to use categorical or nominal predictors with r3step? 


The covariates can be binary or continuous. A nominal variable needs to be changed to a set of binary variables. 
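One way to create such binary indicators inside Mplus is the DEFINE command; a sketch assuming a hypothetical three-category nominal variable region (coded 1-3, with category 3 as the reference):

```mplus
DEFINE:
  ! dummy-code the hypothetical nominal variable 'region':
  d1 = 0;
  d2 = 0;
  IF (region EQ 1) THEN d1 = 1;
  IF (region EQ 2) THEN d2 = 1;
```

The new variables d1 and d2 can then serve as the binary covariates.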


Thank you very much! 

Jamie Taxer posted on Thursday, August 29, 2013  3:13 am



Dear Dr. Muthen, I have conducted an LPA with a sample size of 260 and replicated the classes in a second sample with 193 participants. I would now like to see if the classes differ from one another on multiple distal outcomes. I was planning on using the 3-step approach, but I am not sure if my sample size is big enough for it. I noticed in a presentation online that you recommend using the 3-step method with a sample size of 500 or more. What would you suggest: should I use the 3-step approach or the pseudo-class draw approach? 


I think you can use the 3-step with your samples. 


Dear Dr. Muthen, I am running a 3-step LCA with a distal outcome (Web Note 15). I could run the program, but I am not sure how I can run the Wald test with the outcome of each class (my outcome is a continuous variable). If my understanding is correct, the intercept of each group is the mean of each group's outcome. I'd like to test whether one group's (e.g., g1) outcome is significantly higher than another group's (e.g., g4). Would you please tell me how to do it? I would very much appreciate it if you could indicate a reference or syntax for it. Sincerely, HJ 


Are you using the DCON option? If not, read about it in the web note. 


I am implementing the manual 3-step because I'd like to predict my outcomes at W2 while controlling for previous outcomes (W1), to examine how my latent subgroups are related to my outcomes. 1. Does this idea fit Mplus's manual 3-step LCA with distal outcomes? (In other words, can the manual 3-step test this model?) 2. I'd like to compare the "intercept" of each class while controlling for previous outcomes (W1), because this shows how my latent classes are related to my distal outcomes. 1) Is it OK to compare "intercepts" to show how each class has a different "mean" of the distal outcomes? If so, how can I compare the "intercept" of each class? 


1. Note that when you do the manual 3-step, the class percentages may change when you add the distal Y. 2. You can put a parameter label on the intercepts and then use MODEL TEST to test whether they are different. 


Drs. Muthen, I tried the manual 3-step approach. In the third step, I received an error message when I tried to regress 6 distal outcomes on "c." I'm modeling a 6-class model. Membership in the latent profiles is predicted by 4 variables. Finally, the profiles should predict 6 distal outcomes. Could you help please? Here are excerpts from the code and error message.

MODEL:
  %OVERALL%
  T4MSSE ON C;
  T4INT ON C;
  T4UTIL ON C;
  T4INTEN ON C;
  T4IDENT ON C;
  T4TEST ON C;
  C ON GLEVEL IND T2REL T5CLIM;
  %C#1%
  [N#1@10.480]; [N#2@6.777]; [N#3@.276]; [N#4@7.702]; [N#5@3.922];
  T4MSSE ON C;
  T4INT ON C;
  T4UTIL ON C;
  T4INTEN ON C;
  T4IDENT ON C;
  T4TEST ON C;
  C ON GLEVEL IND T2REL T5CLIM;
  ........ AND SO ON .........

*** ERROR
The following MODEL statements are ignored:
* Statements in the OVERALL class:
  T4MSSE ON C#1
  T4MSSE ON C#2
  T4MSSE ON C#3
  T4MSSE ON C#4
  T4MSSE ON C#5
  ........... AND SO ON ....... 


You cannot regress an observed variable on a categorical latent variable. The effect you are looking for is found in how the means of T4MSSE vary across the classes. 


Thanks for the tip! 


Just a follow-up. I've changed my input as follows:

MODEL:
  %OVERALL%
  C ON GLEVEL IND T2REL T5CLIM;
  %C#1%
  [N#1@10.480]; [N#2@6.777]; [N#3@.276]; [N#4@7.702]; [N#5@3.922];
  [T4MSSE] (msse1);
  [T4INT] (int1);
  [T4UTIL] (util1);
  [T4INTEN] (inten1);
  [T4IDENT] (ident1);
  [T4TEST] (test1);
  C ON GLEVEL IND T2REL T5CLIM;
  ...
  %C#6%
  [N#1@9.183]; [N#2@10.559]; [N#3@3.610]; [N#4@2.083]; [N#5@4.262];
  [T4MSSE] (msse6);
  [T4INT] (int6);
  [T4UTIL] (util6);
  [T4INTEN] (inten6);
  [T4IDENT] (ident6);
  [T4TEST] (test6);
  C ON GLEVEL IND T2REL T5CLIM;

MODEL TEST:
  msse1=msse2; msse1=msse3; msse1=msse4; msse1=msse5; msse1=msse6;
  msse2=msse3; msse2=msse4; msse2=msse5; msse2=msse6;
  msse3=msse4; msse3=msse5; msse3=msse6;
  msse4=msse5; msse4=msse6;
  msse5=msse6;

But I get an error message saying that my MODEL statements are ignored. Suggestions? 


Please send the output and your license number to support@statmodel.com. 

Jamie Taxer posted on Thursday, October 17, 2013  3:15 am



Dear Dr. Muthen, I am using the automatic 3-step approach to test for differences between my latent classes. When testing for distal outcomes using the DU3STEP option, I get the following message for several variables and classes:

PROBLEMS OCCURRED DURING THE ESTIMATION FOR THE DISTAL OUTCOME EMOEX. THE LATENT CLASS VARIABLE IN STEP 3 HAS MORE THAN 20% CLASSIFICATION ERROR RELATIVE TO STEP 1 IN CLASS 1.

For these variables I am also only getting values of 999.000 for the equality tests. What does this mean, and is there some way that I can fix it? 


This means that you should not use DU3STEP. Instead, try the new DCON/DCAT options. These issues are described in Web Note 15. 

Jamie Taxer posted on Thursday, October 17, 2013  7:06 am



Thank you! Does this mean that I should do two separate analyses? One with R3STEP to see which variables predict the latent classes, and one with DCON to analyze the distal outcomes, since in Lanza's method latent class predictors should not be included in the model? 


That's right: R3STEP and DCON done separately. 


Drs. Muthen, I could run the manual 3-step approach. I'd like to test whether the DV means of each class are significantly different or not. Per your recommendation (Web Note 15), I ran the Wald test. However, once I ran it, I could only get the same variance of the DV across groups. Please take a look at my syntax.

MODEL:
  %OVERALL%
  Y1 Y2 Y3 on ;
  %C#1%
  [N#1@2.212]; [N#2@6.556]; [N#3@0.051];
  [Y1] (a1) Y2 Y3;
  %C#2%
  [N#1@8.10]; [N#2@2.369]; [N#3@1.809];
  [Y1] (A2) Y2 Y3;
  %C#3%
  [N#1@0.6]; [N#2@0.921]; [N#3@2.154];
  Y1 Y2 Y3;
  %C#4%
  [N#1@3.837]; [N#2@3.172]; [N#3@3.457];
  Y1 Y2 Y3;
MODEL TEST: a1 = a2;

Once I ran this model, I saw the same variance of Y1 in C1 and C2, the same variance of Y2 in C1 and C2, and the same variance of Y3 in C1 and C2. I don't know how to make the mean difference test. Would you please point out what mistake I made, and how I can compare DV mean differences across the latent classes? Thank you in advance. 


In a statement like

[Y1] (a1) Y2 Y3;

nothing is parsed after the label. You would need to specify

[Y1] (a1);
Y2 Y3;

To free the Y variances in all classes, you must have

Y1 Y2 Y3;

in all classes. You do not. Use

MODEL TEST: 0 = a1 - a2;

If you want the other means, you need to label them like you did a1 and a2. 
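Putting those corrections together, the first two classes of the poster's model might look like this sketch (the logit values are the poster's own and serve only as placeholders):

```mplus
%C#1%
[N#1@2.212]; [N#2@6.556]; [N#3@0.051];
[Y1] (a1);    ! labeled mean of Y1 in class 1
Y1 Y2 Y3;     ! free the variances in this class
%C#2%
[N#1@8.10]; [N#2@2.369]; [N#3@1.809];
[Y1] (a2);
Y1 Y2 Y3;
MODEL TEST: 0 = a1 - a2;
```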


We ran a latent class analysis that settled on a 6-class solution, and used R3STEP to test predictors of class membership. Because of the high number of classes, and because some of them look very similar, we were wondering if it would be possible to combine some of them and re-evaluate the influence of predictors. Is this possible with R3STEP or the manual 3-step method? Thank you! 


No, you would have to rerun with a smaller number of latent classes. 


This seemed to work: I created a new most-likely latent class variable, with the original six levels recoded into three. Then, I recalculated the classification uncertainty rates using the average posterior probabilities table, the proportions based on the estimated model, the proportions based on most likely class membership, and the following formulas:

P(A or B) = P(A) + P(B) - P(A and B)
P(B|A) = P(A|B) * P(B) / P(A)

Finally, I calculated the logit table and used the values for step 3 of the 3-step method. 
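In symbols, the two identities used above are the addition rule and Bayes' rule; when merging mutually exclusive latent classes the intersection term is zero:

```latex
P(A \cup B) = P(A) + P(B) - P(A \cap B)   % = P(A) + P(B) for disjoint classes
\qquad
P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A)}   % Bayes' rule
```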


Hello, I am running an LPA with the automatic R3STEP and DCON procedures for covariates and distal outcomes. I have three separate data sets that I am combining and would like to account/control for this in the actual profile development. Do you have any suggestions on ways to do this (or can you point me in the right direction in the literature)? Currently, I have just created a set of dummy codes and regressed c onto them (syntax below). Might this be an appropriate way to control for the data sets in the class development? Thank you for your time.

MODEL:
  %OVERALL%
  u1;
  u2;
  c ON dummy1 dummy2; 

Jamie Taxer posted on Monday, November 25, 2013  9:20 am



Hi, I am using R3STEP to check for predictors of my latent profile groups and would like to know if it is possible to obtain the class-level means of each of the predictors. In addition to whether or not the groups significantly differ from one another, I would like to report each class's mean and confidence interval for the predictors. 


Answer to Daniels: Your input says that the class probabilities can change over the data sets. But the item probabilities for each class will be the same, so measurement invariance is specified. To avoid MI you can explore direct effects from the dummies to the class indicators; we talk about that in our teaching of Topic 5, also giving a reference to Clogg's work on multiple-group LCA in, I think, Sociological Methodology. A more general Mplus approach than using dummy covariates is to let data set be represented by a KNOWNCLASS variable, in which case you have great flexibility in checking data-set invariance for the model parameters. 
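A sketch of the KNOWNCLASS alternative; the variable dataset (coded 1-3 for the three data sets) and the class counts are hypothetical:

```mplus
VARIABLE:
  CLASSES = cg(3) c(3);    ! cg = known data-set group, c = latent class
  KNOWNCLASS = cg (dataset = 1 dataset = 2 dataset = 3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  c ON cg;   ! class probabilities may differ across data sets
```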


Answer to Taxer: That does not come out automatically with R3STEP, so instead I would recommend using most likely class membership as an observed variable in a secondary analysis where you get the means and their SEs computed. 


Thank you for your response. I understand I can explore MI across data sets with the KNOWNCLASS command. There do seem to be some minor differences in estimates when I go that route. To clarify your first point, would it be accurate to say that specifying direct effects from the dummy variables to the indicators controls for differences across the studies in the profile estimation?

MODEL:
  %OVERALL%
  y1;
  y2;
  y1-y2 ON dummy1 dummy2; 


When you have c measured by say y1-y5, and say c ON x1-x2; you can't also identify y1-y5 ON x1-x2;. You should instead do yj ON x1-x2; for one yj at a time and see where the important direct effects are. 
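A sketch of the one-at-a-time direct-effect check described above, using the reply's generic y/x names:

```mplus
MODEL:
  %OVERALL%
  c ON x1-x2;
  y1 ON x1-x2;   ! direct effects for y1 only;
                 ! rerun with y2 ON x1-x2; etc. in separate analyses
```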


Thanks for your help, Bengt. One other question if you don't mind: I am modeling distal outcomes of my profiles using the DCON option (given high classification error in some instances). However, due to listwise deletion of missing data, my sample size drops to 173 for some outcomes. Do you think this sample size is too low for reliable estimates? Also, is it possible to impute missing data when running this analysis? If so, would you be able to provide syntax in the ANALYSIS statement for that? I'm not sure how to include TYPE = MIXTURE and missing-data handling together. Thanks again for all your help. 


I don't know the percentage missing, but I would do simple descriptive stats on the non-missing variables in your model, comparing the sample with missing on the DCON variable to those not missing, so you see how selective the missingness is. Missing-data handling is not implemented for the 3-step. I am not sure if we do the 3-step for imputed data using TYPE = IMPUTATION data; try it. 

Karen Kochel posted on Thursday, January 16, 2014  11:15 am



I am conducting an LPA with the goal of regressing three distal outcomes on a latent class variable (with 2 continuous indicators) and 1 covariate, and then examining equality of means (via the Wald test) on the distal outcomes. I first estimated a joint model (i.e., I combined the latent class model and distal outcome model) and entropy was high (i.e., >.90), but the inclusion of the distal outcome resulted in a substantial change in latent class formation, so that the latent variable lost its intended meaning. Having reviewed Web Note 15 and the discussion posts, it seems a 3-step approach is warranted; am I right? If so, and given that I have a distal outcome and a covariate, will I need to use the manual 3-step approach? Or can DU3STEP and R3STEP be used in a single analysis to explore the distal outcome and covariate, respectively? Will each distal outcome need to be investigated in a separate analysis? 


DU3STEP and R3STEP cannot be used together, which means that if you have a covariate you would have to do the "manual approach" mentioned in the web note; but that would likely also change the class formation. If you don't include the covariate, the DCON/DCAT approach is best. 


Thank you for your help! I have two follow-up questions related to this four-class LPA. (1) To evaluate the effect of latent profile on a distal outcome while controlling for prior levels of the distal outcome (i.e., the covariate), I regressed the distal outcome on the covariate (per Linda's suggestion to another user). I would now like to evaluate the relation between profile membership and the distal outcome. Is it acceptable to use MODEL TEST to examine intercept differences in the distal outcome (because the mean is not provided in this case, correct)? For example:

%c#1%
...
[Distal] (m1);
%c#2%
...
[Distal] (m2);
MODEL TEST: m1=m2;

(2) I used the 3-step manual approach (with guidance from Web Note 15 and Vermunt, 2010) to evaluate the above-mentioned models. In a manuscript, should I be presenting statistics (e.g., a comparison of true values and estimates) as evidence that the 3-step procedure works well with my data? If so, what values do you recommend I present? Thank you! 


So in your script here you are not doing the 3-step, but instead a regular LPA with a distal regressed on a covariate? Note that this may change the class formation relative to not including the distal in the model. But apart from that, yes, you can test intercept differences this way. I am not sure what you mean in (2); there is not enough information about what you did. And what do you mean by "true values"? 


Thanks for your response. To clarify: I followed the 3-step approach outlined in Web Note 15. In Step 1, I estimated my LPA and included the AUXILIARY option but did not specify a type (so that my auxiliary variables would not be used in estimation but rather saved in the SAVEDATA file). I obtained support for a four-class solution. In Step 2, I "computed" the error for most likely class (in reality, I used the "logits for the classification probabilities for most likely class" obtained in Step 1). In Step 3, I used most likely class as the indicator variable and fixed the measurement errors at the values from the Step 1 output. Also in Step 3, I specified the auxiliary model: I regressed the distal outcome on the covariate (i.e., prior levels of the distal outcome). Am I correctly implementing the 3-step approach? And can I now test intercept differences via the Wald test? Re: question (2) in my prior post: in Web Note 15 you present an example in which you conduct the 3-step with an arbitrary 2nd model. You conclude that the procedure works well for your example because the estimates obtained in the final stage are close to the true parameter values. Elsewhere, I've seen you suggest that when class separation in the LCA is good, then N is a good indicator of C. Should I be evaluating whether the 3-step performs well with my data and, if so, how do I do this? In other words, do I need to justify why I am using the 3-step versus, say, the 1-step? Thank you again! 


Regarding your first paragraph: you want to check that your latent class formation (percentage in the classes, same people in the classes) in Step 3 is close to that of Step 1; often it isn't, in which case the stepwise method loses its advantage/meaningfulness. Regarding your second paragraph: I think the only justifications you need to make are (1) a substantive reason for wanting the classes to be defined only by the LPA indicators, not the distal, and (2) that the Step 1 and Step 3 classifications agree. 
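The Step 1 vs. Step 3 agreement check can also be done outside Mplus once the most-likely-class assignments from both runs have been saved (e.g., via the SAVEDATA command). Below is a minimal sketch of that comparison; the function name and the six-case data are hypothetical, and it assumes two equal-length vectors of class assignments:

```python
from collections import Counter

def class_agreement(step1, step3):
    """Compare most-likely-class assignments from Step 1 and Step 3:
    per-class percentages in each step, plus the share of cases
    assigned to the same class in both steps."""
    n = len(step1)
    pct1 = {c: cnt / n for c, cnt in Counter(step1).items()}
    pct3 = {c: cnt / n for c, cnt in Counter(step3).items()}
    same = sum(a == b for a, b in zip(step1, step3)) / n
    return pct1, pct3, same

# hypothetical assignments for six cases
p1, p3, agree = class_agreement([1, 1, 2, 2, 3, 3], [1, 1, 2, 3, 3, 3])
print(p1, p3, agree)
```

If the class percentages shift noticeably or the agreement share is low, the stepwise method has lost its meaningfulness, per the reply above.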

Kathleen posted on Wednesday, February 05, 2014  9:56 pm



I am trying to implement the 3-step approach for an LTA, although the entropy at each time point is greater than 0.87. I was looking at Web Note 15, and slides from a presentation at the Modern Modeling Methods Conference, UConn, May 21, 2013. From slides 7-9, it appears the values for the nominal variable are fixed at the log odds of the results from the "Average Latent Class Probabilities for Most Likely Class Membership..." However, in the Web Note, the results from the "Logits for the Classification Probabilities..." are used. Which method should be used, and how can I relate the Average Latent Class Probabilities to the Logits? I see the formula for the logits described on page 4, but I haven't been able to replicate it. Thanks very much for your time. 


You should use the "Logits for the Classification Probabilities," which are printed in Mplus Version 7.11. See the latest version of our 3-step paper and its Mplus scripts, which are shown at the top of http://www.statmodel.com/recentpapers.shtml 


I am using the new automatic auxiliary command, R3STEP, in an LPA. Is it reasonable to include some covariates in the MODEL command (e.g., "c ON age") and others in the auxiliary command ((R3STEP) pred1 pred2)? Thank you. 


You can do that if you think, e.g., age is critical to the class formation. But don't expect that the results with the auxiliary covariate (z, say) would be the same as doing a 1-step analysis with c ON age z; 

HanJung Ko posted on Wednesday, April 16, 2014  10:59 am



Hi, Dr. Muthen, I am doing a 3-step GMM, with a few variables predicting the class membership at Step 3 (not a distal outcome regressed on c as in your web note). Here is my Step 1 model syntax for 2 classes:

%OVERALL%
i s | pl_1@0 pl_2@1 pl_3@2 pl_4@3 pl_5@4;
i WITH s;

My question is: for computing the N parameters in Step 2 and entering them in Step 3, where should I compute them from? I read in one handout to use the "Classification Probabilities for the Most Likely Latent Class Membership" table, for example log(0.909/0.091) for n#1 and log(0.037/0.963) for n#2. Or has Mplus already calculated the numbers (i.e., "Logits for the Classification Probabilities for the Most Likely Latent Class Membership": 2.306 for n#1 and -3.270 for n#2)? Their numbers are not the same; that's why I am confused. Thank you, HanJung Ko 


You should use the Mplus-calculated logits. If you are uncertain, send the output and your license number to Support. 
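For reference, the logits Mplus prints are row-wise log odds of the classification probabilities against the last class (the formula described on page 4 of Web Note 15). A minimal sketch with hypothetical probabilities; small discrepancies from hand calculations, as in the post above, typically come from rounding in the printed probabilities or from reading the Average Latent Class Probabilities table instead of the Classification Probabilities table:

```python
import math

def classification_logits(row):
    """Convert one row of the classification probability table into
    the logits Mplus prints: log odds of each probability against
    the last class, whose own logit is therefore 0."""
    ref = row[-1]
    return [math.log(p / ref) for p in row]

# hypothetical row of classification probabilities for most likely class 1
print(classification_logits([0.909, 0.054, 0.037]))
```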


I am using the new 3-step approach with ECLS-K data and have to weight the analyses. I get this error message:

*** ERROR in VARIABLE command
Auxiliary variables with DCATEGORICAL or DCONTINUOUS are not available with TYPE=MIXTURE and weights.

Are weights not allowed in this model? Many thanks 


Weights are not allowed with these options. 


Can weights be applied in the manual approach? I tried to compare the outcome across classes using the MODEL TEST command and came across the following error: WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX. The classification probabilities for the most likely class membership are below:

        1      2      3      4
1   0.983  0.012  0.000  0.005
2   0.019  0.947  0.026  0.007
3   0.000  0.007  0.986  0.008
4   0.004  0.007  0.021  0.968

Thank you 


This should be easy to fix. Your MODEL TEST should look like this:

Model test:
0 = a1 - a2;
0 = a2 - a3;
0 = a3 - a4;

If you have more than these 3 equations in MODEL TEST, remove them. 


Thank you so much, I'll try this! 


Thank you for your comment, Tihomir; these changes worked. Is there an output I can request to see specific mean differences across classes, like a post-hoc test? Thank you again 

ellen liao posted on Friday, April 25, 2014  8:22 am



Dear all, I am running the manual 3-step approach. Having identified the 3-class solution for the independent variable (class_neg), I want to use class membership to explain the variation in the trajectory of the outcome (tzexe1 mzexe1 jzexe1), taking measurement error into account. The input is:

Usevar are tzexe1 mzexe1 jzexe1 ttime_10 mtime_10 jtime_10 tage_c55 class_neg;
TSCORE = ttime_10 mtime_10 jtime_10;
NOMINAL = class_neg;
CLASSES = c(3);
Analysis: Type = random mixture;
Model:
%OVERALL%
i s | tzexe1 mzexe1 jzexe1 AT ttime_10 mtime_10 jtime_10;
tzexe1 mzexe1 jzexe1 (1);
i s ON tage_c55 c;
%c#1%
[class_neg#1@1.912]; [class_neg#2@11.766];
%c#2%
[class_neg#1@2.429]; [class_neg#2@1.263];
%c#3%
[class_neg#1@2.034]; [class_neg#2@2.409];

The error message is:

*** ERROR
The following MODEL statements are ignored:
* Statements in the OVERALL class:
I ON C#1
I ON C#2
S ON C#1
S ON C#2
*** ERROR
One or more MODEL statements were ignored. These statements may be incorrect.

Any ideas what's going on? Thanks in advance. 


Laia: We don't provide a table of the means. You can take these from the results where you have labelled them. 


Ellen: Variables cannot be regressed on the categorical latent variable. This effect is captured by the means of i and s varying across classes. 

ellen liao posted on Wednesday, April 30, 2014  4:14 am



Thanks, Linda. That works. One more question regarding covariates in predicting class membership. I understand that Muthén (2004) argued covariates should be included, since otherwise the predicted membership is distorted. However, my research question concerns how those latent classes (the independent variable) are associated with a distal outcome, where those covariates (e.g., age, sex, etc.) will be controlled again. Somehow, I feel that if I follow Muthén's suggestion, I will double-control for those covariates. Can you talk me through the logic? Many thanks. 


Not sure what you mean. Perhaps you are considering 3-step. If you are considering 1-step, there is no double control. 

ellen liao posted on Wednesday, April 30, 2014  8:47 am



Hi, Muthen. Yes, I am using the 3-step, so the class membership is derived from the previous analysis and used as a categorical variable alongside the class membership error. So my question is: should I use the covariates twice, in the membership classification (Steps 1 & 2, GMM) and in the main analysis (Step 3, regression)? Thanks. 


Yes. 

ellen liao posted on Wednesday, April 30, 2014  11:23 pm



Sorry for bugging you, but I am still confused. If covariates are used twice in the 3-step, isn't there double control for the same covariates? If not, what role are they playing in the different steps? Many thanks. 


In the first step, the covariates help determine the classes. In the last step, including the covariates says that the class variable is not the only predictor of the distal. If you leave out the covariates in the first step, you don't get optimal classes. If you leave out the covariates in the last step, you misrepresent the effect of the class variable (just like omitted variables/predictors in regression). 


Hi all, I have undertaken an LCA (n = 1690) based on 9 dependent variables, concluded with a 3-class solution, and want to test differences in two distal outcomes. First I tried to specify the auxiliary outcomes as DU3STEP and DE3STEP, but got error messages. Then I tried the DCON option, which provides some estimates:

NUMBER OF DELETED OBSERVATIONS FOR THE AUXILIARY VARIABLE: 46
NUMBER OF OBSERVATIONS USED FOR THE AUXILIARY VARIABLE: 1644

          Approximate Mean   S.E.
Class 1   7.500              0.048
Class 2   6.099              0.134
Class 3   7.458              0.051

However, these are very unexpected findings given the nature of the variables. The classes do not seem to have been turned around either. Just to check, I also tried the option (e) and got these results:

          Mean    S.E.
Class 1   7.498   0.051
Class 2   7.439   0.053
Class 3   6.177   0.126

This is in line with our hypotheses. What is happening here, and which results can I trust? 


Please send the two outputs to support@statmodel.com and if you can also send the data. 

andy supple posted on Wednesday, July 30, 2014  5:58 am



Is it possible to use the DEFINE command with R3STEP? I get an error saying that the variable isn't found if the R3STEP comes before the DEFINE command. If I put the R3STEP after DEFINE, it says there is an assignment error. Is this possible to do with R3STEP?

usevariables are madefunv pick30v namesv teaseb harassb rumorb reduced;
CLASSES = c(4);
categorical are madefunv-rumorb;
define: blackbypar = black*parinvolve;
auxiliary (r3step) reduced parinvolve black blackbypar;
ANALYSIS: TYPE = MIXTURE;
model: %overall%
OUTPUT: modindices (all); 


Please send the 2 outputs to Support. 


I am using the DU3STEP option in an LCA to look at differences in a distal outcome across the 3 identified classes. I wanted to know if it is possible to include control variables for the distal outcome. 


Not when using this automatic approach. You can try using the "manual" approach described in our 3step paper, but class formation may change between step 1 and step 3. 

Lauren Cole posted on Wednesday, February 18, 2015  1:09 pm



Hi all, I am working on a moderation (interaction) analysis within the 3-step manual approach. The association I am examining is between self-efficacy and alcohol misuse. To start, I did the latent class analysis (first step) to determine classes of alcohol misuse, which resulted in 5 classes. I then used the manual 3-step method to regress the alcohol misuse classes on self-efficacy while taking the classification uncertainty into account. I would like to test for moderation of this relationship by depression. To do this, I would need to include a depression*self-efficacy term in the model; this step seems straightforward. However, if there is moderation present, I need to present the results broken down by depression status, and I am unclear how to do that while still accounting for the classification uncertainty properly. For example, I would want the OR for self-efficacy and alcohol misuse for (1) depressed and (2) not-depressed women, separately. Any help would be appreciated. 


So in the last step you are doing a multinomial logistic regression with an interaction. So you have n on self (b1) dep (b2) selfdep (b3); The slope for self-efficacy when depressed = 0 is b1, and the slope for self-efficacy when depressed = 1 is b1 + b3. So you get the ORs using MODEL CONSTRAINT with or0 = exp(b1) and or1 = exp(b1+b3). 
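The arithmetic behind or0 and or1 is just exponentiation of the logit slopes; here is the same computation outside Mplus, with hypothetical coefficient values standing in for the estimates the output would report:

```python
import math

# hypothetical step-3 multinomial logit estimates:
# n ON self (b1), dep (b2), and the self*dep interaction term (b3)
b1, b3 = -0.40, 0.25

or_notdep = math.exp(b1)        # OR for self-efficacy when dep = 0
or_dep = math.exp(b1 + b3)      # OR for self-efficacy when dep = 1
print(round(or_notdep, 3), round(or_dep, 3))
```

The advantage of doing it via MODEL CONSTRAINT instead is that Mplus also delivers standard errors for the new parameters.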

Lauren Cole posted on Thursday, February 19, 2015  2:21 pm



Thank you, Dr. Muthen. I was able to obtain the odds ratio stratified by depression with your help. One follow-up question: how do I calculate the 95% confidence interval for or1? From what I can find, I need to use the variance-covariance matrix to calculate the confidence intervals. If that is correct, how would I get that information out of Mplus using the manual 3-step method? 


What you should do is get the 95% confidence limits for the corresponding log OR (so before you take exp); you get that simply by yourself calculating the estimate plus/minus 1.96*SE, where the output gives you the estimate and SE. Then, again using your own calculator, you take exp of those 2 limits, and that gives you the OR limits. 
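Those calculator steps, as a small sketch; the log OR estimate and SE shown are hypothetical placeholders for values read from the output:

```python
import math

def or_ci(log_or, se, z=1.96):
    """95% CI for an odds ratio: take the limits of the log odds
    ratio (estimate +/- z*SE), then exponentiate them."""
    lo, hi = log_or - z * se, log_or + z * se
    return math.exp(log_or), math.exp(lo), math.exp(hi)

# hypothetical estimate and SE taken from the Mplus output
print(or_ci(-0.15, 0.20))
```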

Lauren Cole posted on Monday, February 23, 2015  9:00 am



Thank you, Dr. Muthen. I am familiar with calculating the 95% CI for an odds ratio as you explained above. My question is more about how to calculate the 95% CI for or1 = exp(b1+b3) in the example you gave above. How would I incorporate the standard errors for both of those estimates to estimate the 95% CI? Would I need to use the variance-covariance matrix? If so, how would I get that in Mplus using the manual 3-step method? 


You can use MODEL CONSTRAINT to express logor = b1 + b3; That gives the SE for this log odds ratio, and then you proceed with the usual steps. 


Hello, I am using the R3STEP command to estimate four latent classes and then use these classes in a multinomial logistic regression. I have estimated four models that progressively increase the number of covariates, i.e.,

Model 1: covariate 1
Model 2: covariates 1 & 2
Model 3: covariates 1, 2 & 3
Model 4: covariates 1, 2, 3 & 4

Partial code for Model 1 is presented below:

classes = c(4);
auxiliary = WhiteBritish(R3STEP) WhiteIrish(R3STEP) Indian(R3STEP) Bangladeshi(R3STEP) BlackCaribbean(R3STEP) BlackAfrican(R3STEP);
Analysis:
Type = complex mixture;
starts = 200 5;

I have found that the number of parameters, loglikelihood, AIC, BIC, and SSA-BIC are exactly the same for all four models (given under MODEL FIT INFORMATION). I think this is incorrect, and I am unsure how to obtain correct model fit statistics. Any help would be much appreciated. Many thanks, Dharmi 


The Model Fit Information refers to the Step 1 analysis, using only the latent class indicators. 


Thank you; that makes sense. My follow-up question is: is there a way to assess the fit of the full model? Thanks, Dharmi 


No, then you have to do a regular mixture analysis in a single step. 

db40 posted on Tuesday, May 12, 2015  3:45 am



Hi, can I have something cleared up please? The 3-step analysis requires that Mplus outputs a file in Step 1 which is read into Step 2, and this is repeated for Step 3? This is a model with covariates and distal outcomes. 


See my answer to your other post. 

db40 posted on Sunday, May 17, 2015  1:19 pm



Dear Bengt, I have tried to follow Appendices D and E for my model (3 classes with covariates and distal outcomes). I have a question regarding Step 1 (apologies, I am a psychologist, not a statistician). Page 8 details this, which I have amended to reflect the number of indicators. My question is: what does the syntax below tell me, and why are the indicators in %c#3% broken into two lines (I have followed p. 8)?

Model:
%OVERALL%
%c#1%
[u1$1-u8$1*1];
%c#2%
[u1$1-u8$1*1];
%c#3%
[u1$1-u5$1*1];
[u6$1-u8$1*1];  < why are these broken?

In Step 2, I have amended to reflect my data, but I get an error: "THE ESTIMATED COVARIANCE MATRIX FOR THE Y VARIABLES IN CLASS 1 COULD NOT BE INVERTED. PROBLEM INVOLVING VARIABLE Y1."

[...]
nominal = n;
Classes = C(3);
[...]
ANALYSIS:
[...]
Model:
%OVERALL%
y1 on b2;
%C#1%
[N#1@2.142];  < these have been taken from the previous step
[N#2@0.645];
Y1 on b2;
Y1;
%C#2%
[N#1@0.185];
[N#2@5.011];
Y1 on b2;
Y1;
%C#3%
[N#1@2.315];
[N#2@0.130];
Y1 on b2;
Y1; 


Hi, I used the automatic R3STEP approach and want to report the odds ratios for my covariates, but I don't see them in the output. Is there an option for getting this output? 


Hello, I am running an LPA on clustered data with distal outcomes using the automatic DE3STEP. I have 10 distal outcomes, and one outcome will not provide parameter estimates. I get the following error message: VARIABLE IN STEP 3 HAS MORE THAN 20% CLASSIFICATION ERROR RELATIVE TO STEP 1 IN CLASS 1. Is there a way that I can get means and unbiased standard errors to conduct mean difference tests? 


DE3STEP is not for clustered data analysis. 


Perhaps you are referring to a Type=Complex analysis. Maybe try BCH; take a look at this paper on our website: Asparouhov, T. & Muthén, B. (2014). Auxiliary variables in mixture modeling: Using the BCH method in Mplus to estimate a distal outcome model and an arbitrary second model. Mplus Web Notes: No. 21. 


Yes, I am using Type=complex. Thank you, I will look at that paper. 


Hi, I have the same question posted by Sasha Fleary on 6/16/15. I'm using the R3STEP command to estimate three latent classes and then use the classes as a dependent variable in a multinomial logistic regression. I'm also adding covariates to see if the log odds on my main independent variable of interest change. In the output, under 'TESTS OF CATEGORICAL LATENT VARIABLE MULTINOMIAL LOGISTIC REGRESSIONS USING THE 3-STEP PROCEDURE', log odds are reported. Where can I find information on how to get odds ratios and their standard errors? My understanding is that I can't label the parameters in my auxiliary statement to then use in the MODEL CONSTRAINT option? My analyses are also weighted and use imputed data. Thanks for any help! 


No, you can't use MODEL CONSTRAINT here, so if they are not printed there is no way to get the SEs for the ORs. 


But perhaps you can instead do R3STEP manually, as shown in our papers. Then you have access to MODEL CONSTRAINT. 


Hi Bengt, thanks for the response. I had also thought of doing the manual steps; however, 'save=cprob' isn't an option when you have imputed data. 


Why do you need cprobs? 


Maybe I've misunderstood? In Step 1 of the manual process, don't you save the class probabilities and create a new file? This is from Appendix D of the appendices of Web Note 15. If there's a way I can do the manual steps using Type = imputation, that would be great. 

anonymous Z posted on Thursday, August 06, 2015  11:29 am



Hi Drs. Muthen, I have consulted with you through email about the manual 3-step in regard to adding a distal outcome. Below are your answer and my syntax. You said I can "hold the slopes of the covariates equal across classes so that you can interpret class differences in terms of the class-specific intercepts." My question is: what is the syntax for holding the slopes of the covariates equal across classes? Thank you very much!

Dr. Linda Muthen's response: Your model estimates the class-specific intercept parameters for your EXT_4 distal outcome, not their means. I recommend doing what you do in ANCOVA: hold the slopes of the covariates equal across classes so that you can interpret class differences in terms of the class-specific intercepts, controlling for the covariates. When you have covariates, you should not be interested in the EXT_4 mean because that is a function of the covariate means.

MODEL:
%OVERALL%
EXT_4 ON tx_2 age_dic eth sex_abu phy_abu H_und18 H_abo18 TS_D TS_A;
%c#1%
[n#1@4.289]; [n#2@0.306];
[EXT_4]; EXT_4;
%c#2%
[n#1@0.325]; [n#2@2.654];
[EXT_4]; EXT_4;
%c#3%
[n#1@3.912]; [n#2@2.880];
[EXT_4]; EXT_4; 

anonymous Z posted on Friday, August 07, 2015  11:24 am



Hi Dr. Muthen, following my post on Thursday, I did MODEL TEST. The p-value of the Wald test was significant (see below); I assume this suggests that the mean value of EXT_4 in class 1 was significantly different from that of class 3. However, the output only provided the intercepts of EXT_4, which was a negative value. What does a negative value mean? Is there a way I can request the means of EXT_4? Thanks so much!

%c#1%
[EXT_4] (m1); EXT_4;
%c#2%
[EXT_4] (m2); EXT_4;
%c#3%
[EXT_4] (m3); EXT_4;
MODEL TEST: m1 = m3;

Wald Test of Parameter Constraints
Value: 8.967
Degrees of Freedom: 1
P-Value: 0.0027 


To get the means, you have to label the model parameters and express the means in terms of these parameters in MODEL CONSTRAINT. 

anonymous Z posted on Saturday, August 08, 2015  12:34 pm



Dr. Muthen, thanks for your response. I have two follow-up questions: 1. I thought the syntax I used (see below) is the way to express the means. I did MODEL TEST with m1 = m3 and then checked the Wald test results. Is this the wrong way? "[EXT_4] (m1);" "[EXT_4] (m3);" 2. What do you mean by "express the means in terms of these parameters in Model Constraint"? What is the syntax? Thanks so much! 


You said that [ext_4] is the intercept, which means that the ext_4 variable is regressed on some variable. For instance, with ext_4 on x; you would have ext_4 mean = intercept + b*x-mean. 

anonymous Z posted on Monday, August 10, 2015  7:43 am



Hi Dr. Muthen, based on your advice, I wrote the syntax below to compare the mean of ext_4 for class 1 and class 3 (I have two covariates). Is this what you mean? Thank you very much!

MODEL:
%OVERALL%
Ext_4 ON x1 x2;
%c#1%
[n#1@4.289]; [n#2@0.306];
[EXT_4] (m1); EXT_4;
[X1] (m11); [X2] (m12);
Ext_4 on x1 (b11);
Ext_4 on x2 (b12);
%c#2%
[n#1@0.325]; [n#2@2.654];
[EXT_4] (m2); EXT_4;
[X1] (m21); [X2] (m22);
Ext_4 on x1 (b21);
Ext_4 on x2 (b22);
%c#3%
[n#1@3.912]; [n#2@2.880];
[EXT_4] (m3); EXT_4;
[X1] (m31); [X2] (m32);
Ext_4 on x1 (b31);
Ext_4 on x2 (b32);
Model constraint:
New(ext_4mean_1 ext_4mean_2 ext_4mean_3 diffm13);
ext_4mean_1 = m1 + b11*m11 + b12*m12;
ext_4mean_2 = m2 + b21*m21 + b22*m22;
ext_4mean_3 = m3 + b31*m31 + b32*m32;
diffm13 = ext_4mean_1 - ext_4mean_3; 


Yes, that's what I meant. 
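For anyone checking those MODEL CONSTRAINT expressions by hand: each class-specific model-implied mean is just intercept + slope*covariate-mean, summed over the covariates. A sketch of the same arithmetic outside Mplus, with all estimates hypothetical:

```python
# class-specific model-implied mean of the distal when it is regressed
# on covariates: mean = intercept + b1*mean(x1) + b2*mean(x2)
def implied_mean(intercept, slopes, cov_means):
    return intercept + sum(b * m for b, m in zip(slopes, cov_means))

# hypothetical class-1 and class-3 intercepts, slopes, and covariate means
m1 = implied_mean(1.20, [0.50, -0.30], [2.0, 1.0])   # 1.20 + 1.000 - 0.300 = 1.90
m3 = implied_mean(0.40, [0.45, -0.25], [2.1, 0.9])   # 0.40 + 0.945 - 0.225 = 1.12
print(m1, m3, m1 - m3)
```

Doing it in MODEL CONSTRAINT rather than by hand has the advantage that Mplus also provides a standard error and test for the new difference parameter.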


Hello, if I want to define an interaction when using the R3STEP command, how can I do this? I want to multiply two observed variables together using the DEFINE command. I am having problems because I cannot put DEFINE before the VARIABLE command. AUXILIARY comes under VARIABLE; hence, when I put my interaction terms in AUXILIARY, I get an error saying they have not been defined. Any help appreciated. Thanks, Dharmi 


If the auxiliary is a newly created variable, it should be mentioned on both USEVARIABLES and AUXILIARY options. 


Thanks. One further question: I am trying to save the class probabilities using CPROB. The maximum record length is 10,000, but I have almost 17,000 cases. Is there any way around this? Dharmi 

Jon Heron posted on Thursday, August 13, 2015  7:41 am



I've saved cprobs for 500,001 cases without any problem 
