Feihong Wang posted on Thursday, February 19, 2015 - 11:37 am
Dear Dr. Muthen, I have completed the 3-step LTA with one set of auxiliary variables. If I want to use a subset of those variables in step 3, do I need to redo steps 1 and 2 with that subset? My understanding is that the auxiliary variables are included in steps 1 and 2 so that they are kept in the new dataset for step 3. Therefore, I don't think I have to redo those steps. Correct? Thanks, Feihong
Hi, Dr. Muthen, we are working on the 3-step LTA procedure. We have used both parameterization 1 and parameterization 2. Is there a way, with either or both parameterizations, to fix a latent transition probability in the estimated model to zero? We have 4 latent groups at two time points, and would like to fix the latent transition probability from group 1 at time 1 to group 2 at time 2 to zero. Thanks, Feihong
In Step 3 you regress c2 on c1, and that is where you can fix a transition probability to zero - either by fixing a logit slope to -15 or by switching to PARAMETERIZATION = PROBABILITY; see the UG and the V7Part2 handout from the August 2012 Utrecht short course.
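To make the logit approach concrete, here is a rough sketch in Mplus input syntax (the class variable names c1/c2 and the class numbering are assumptions; note that with the default logit parameterization the fixed slope only drives the probability near zero if the corresponding intercept is not itself large):

```
MODEL:
  %OVERALL%
  ! transition model: time-2 classes regressed on time-1 classes
  c2 ON c1;
  ! push the logit for the 1 -> 2 transition to a large negative
  ! value so that P(c2 = 2 | c1 = 1) is essentially zero
  c2#2 ON c1#1@-15;
```

With PARAMETERIZATION = PROBABILITY the transition parameters are probabilities and the offending one can be fixed at 0 directly; see the UG for the exact syntax.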
Dear Dr. Muthen, I have completed the 3-step procedure. The sample size at step 1 is 1200. The sample size at step 2.1 (N=1156) is smaller than at step 1, and the sample size at step 2.2 (N=972) is smaller still. Is the 3-step procedure recommended when there are cases with missing data, or would it be preferable to use the sample with N=972 in all three steps? Or do you have a better alternative recommendation? Thanks! Feihong
It sounds like the latent class measurement model for some observations is available only at one of the two time points and is missing at the other. I would insert missing values for the nominal class indicator when that happens (when the measurement model is missing entirely at certain time points) - that way you will be able to keep the entire sample size. Make sure you monitor the data carefully so you can see which observations are dropped in Mplus during the different stages.
Alternatively, you can keep all observations by adding to each step a dependent but uncorrelated variable that has no missing values (that way the data files stay the same size). Make sure the new variable does not affect the latent class measurement model.
The first method is probably better.
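A minimal sketch of the first method, assuming the most likely class variables are named n1 and n2 and that the code 999 has been inserted in the data file wherever the measurement model is entirely absent at that time point (this is only the VARIABLE fragment; the usual step-3 MODEL statements with fixed measurement logits still apply):

```
VARIABLE:
  NAMES = id n1 n2;
  USEVARIABLES = n1 n2;
  NOMINAL = n1 n2;
  CLASSES = c1 (4) c2 (4);
  ! observations with no measurement model at a time point keep
  ! their row; the nominal indicator is simply flagged as missing
  MISSING = ALL (999);
```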
Feihong Wang posted on Thursday, September 22, 2016 - 7:47 pm
Dear Dr. Muthen, I have conducted a latent transition analysis using variables with 7 categories. I used the results in the probability scale to calculate model-implied means of the observed variables. That is, for each observed variable, I summed the products of the response scores (1 to 7) and the probabilities of those scores. I would like to know whether standard errors of these means can be correctly calculated by summing the products of the scores and the sampling variances of the probabilities, or whether sampling dependence among the probabilities makes that calculation incorrect. I would also like to know whether model-implied means for the observed variables are independent within each latent class and across latent classes. If you have a better way of comparing model-implied means, I would like to hear it. Thank you for your help. Feihong
It sounds like you declared your variables as categorical, which means they are treated as ordinal. But when you create means you have to assign scores to the categories, which forces the categories to be equidistant, something the model does not assume. So you lose the advantage of the ordinal modeling.
With 7 categories and without strong floor or ceiling effects I would simply treat them as continuous and avoid the complications.
Treating them as ordinal, I would focus on the probabilities of each category and of collections of categories (such as the top 2, top 3, etc.).
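For reference, the quantity being computed and its standard error can be written out explicitly; the second formula (exact here, since the mean is linear in the probabilities) shows why the covariances among the estimated probabilities cannot be dropped - within a class the probabilities sum to one, so they are necessarily correlated:

```latex
\hat{\mu}_c = \sum_{k=1}^{7} k \,\hat{p}_{kc},
\qquad
\widehat{\mathrm{Var}}(\hat{\mu}_c)
  = \sum_{k=1}^{7}\sum_{l=1}^{7} k\, l \,\widehat{\mathrm{Cov}}(\hat{p}_{kc}, \hat{p}_{lc}),
```

where \(\hat{p}_{kc}\) is the estimated probability of category \(k\) in latent class \(c\). Keeping only the diagonal terms \(k^2 \widehat{\mathrm{Var}}(\hat{p}_{kc})\) omits the covariance terms and will generally misstate the standard error.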
I am conducting an LTA (3 continuous indicators, 2 time points). When I model the LPAs together (C2 ON C1), the class proportions change, and for C2 even the profiles change drastically, even when I use SVALUES.
I had read that this can happen and that the solution would be to use the 3-step approach. I ran the LTA with N1 and N2 (most likely class memberships) as indicators of C1 and C2 without changes to the classes, and added covariates without a problem.
I modeled my distal outcomes (continuous) one at a time first (without covariates) to see if I ran into issues, but every model worked well (very slight changes in class probabilities for C2). However, when I tried to model my 4 distal outcomes together, the class probabilities for C2 changed drastically.
1) Given that I expect non-invariance in the LPAs across time, should I be concerned that modelling C1 and C2 together using their original indicators produced very different class probabilities (and, for C2, profiles)?
2) I think I should try the manual BCH method next, but I am not sure how to do this for an LTA - should I get the BCH weights by modelling the LPAs separately (using the original three indicators), just as I did for the 3-step method, and then model the two LPAs together with the Ws as TRAINING variables?
See Nylund-Gibson, K., Grimm, R., Quirk, M., & Furlong, M. (2014). A latent transition mixture model using the three-step specification. Structural Equation Modeling: A Multidisciplinary Journal, 21, 439-454. Contact the first author.
Thank you for your quick reply. The Nylund-Gibson et al. (2014) paper, as well as Nylund-Gibson's dissertation, have indeed been very informative. I used the 3-step method explained in the article to create the N indicators for the LTA. The article, however, does not deal with distal outcomes beyond stating that they should be included as auxiliary information when saving the cprobs to create the data used in the third step.
I will contact Dr. Nylund-Gibson.
I wonder if you could comment on the use of the manual BCH method in LTA (the second question in my previous post).
We have not implemented training data for multiple C variables yet, so BCH is not easily available for LTA. However, I don't think you need BCH for the LTA to resolve your distal outcome problem. You don't need to include C1 in the BCH analysis - just use the LCA at the second time point and specify the distal outcome with AUXILIARY = distal (BCH);
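A sketch of that suggestion, assuming the time-2 indicators are named y1-y3, the distal outcome is named distal, and a 4-class model (all names here are placeholders; see the Mplus User's Guide entry for the AUXILIARY option):

```
VARIABLE:
  NAMES = y1 y2 y3 distal;
  USEVARIABLES = y1 y2 y3;
  CLASSES = c2 (4);
  ! automatic BCH for the distal outcome, time-2 LCA only
  AUXILIARY = distal (BCH);
ANALYSIS:
  TYPE = MIXTURE;
```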