Hi, I have been estimating LCA models based on a large national survey. To get an accurate estimate of class sizes, should I (a) use the sampling weights to adjust the class-size estimates after classifying cases, or (b) include the sampling weight in the actual LCA analysis?
I have noticed that including a sampling weight in the analysis tends to result in solutions with fewer classes, when I had expected only the test statistics to be adjusted.
Hi Linda, I have run into some problems when using the sampling weight in an LCA analysis. The LRT statistic seems to be behaving oddly. The mean of the Vuong-Lo-Mendell-Rubin likelihood ratio test without the sampling weight is 35, and with the sampling weight it is 426,234. This difference seems large. With the weight variable, the mean for the model with one fewer class is 35 and 37,000 for the model with one more class. Do these estimates seem reasonable?
I have tried the parametric likelihood ratio bootstrap test but cannot seem to get this test when a weight variable is included in the model. Can the bootstrap test be conducted when a weight variable is included?
In the first response in this thread, Dr. Muthen endorses the use of sampling weights when using LCA. Well, is it ever valid to not use sample weights? For example, via LCA, I am identifying different classes of substance use trajectories. The sample I am using oversamples heavy substance users, so the weight variable, in order to render the results representative of the U.S. population, weights the heavy users lower than the non-heavy users.
Not surprisingly, the optimal number of latent classes (as well as the growth characteristics of the classes) varies depending on whether the weight variable is included in the analyses. In short, when the weight variable is used, which weights low substance users more heavily, fewer latent classes are identified among the heavy substance users, while the opposite is true when weights are not used.
It seems to me that an argument can be made for not using the sample weights in this case. That is, the additional classes identified among the heavy users when not using the sample weight are "real" classes - they exist in the sample, and they exist in the population. If one is trying to identify latent classes within a small sub-group of the population, it makes perfect sense to oversample that sub-group in order to do so. To make the estimates representative, the posterior probabilities for group membership could then be used to turn the latent classes into known classes, and the sample weight could be applied in these later analyses.
Do you see a fundamental flaw in the argument above for not using the sample weight initially, but bringing it back in later for subsequent analyses? While the logic of my argument seems pretty straightforward, I am not familiar with the nuts-and-bolts of how sample weights actually impact class identification in LCA -- so there could be something I am failing to realize.
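For concreteness, the two-step idea above (classify without weights, then apply the sampling weights to the class-size estimates) can be sketched numerically. All numbers below are made up for illustration: class 2 plays the role of the oversampled heavy users, who carry small weights.

```python
# Hypothetical most-likely class assignments (e.g., exported from the LCA step)
modal_class = [1, 1, 2, 2, 2, 2]
# Hypothetical sampling weights: heavy users (class 2) were oversampled,
# so they are weighted down relative to the rest of the sample
weights = [3.0, 3.0, 0.5, 0.5, 0.5, 0.5]

def class_share(k, w=None):
    """Share of class k; unweighted if w is None, else weighted by w."""
    if w is None:
        w = [1.0] * len(modal_class)
    return sum(wi for ci, wi in zip(modal_class, w) if ci == k) / sum(w)

print([round(class_share(k), 3) for k in (1, 2)])           # sample shares
print([round(class_share(k, weights), 3) for k in (1, 2)])  # population shares
```

With the oversampled group dominating the raw sample, the unweighted shares overstate the heavy-use class, while the weighted shares recover the population proportions - which is the "bring the weight back in later" step of the argument above.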
I understand that to generalize LCGA results to the population, one should conduct analysis with the appropriate sampling weight applied. However, I am going to do additional analysis (e.g., ANOVAs) with individuals assigned to their most likely latent class. In general, I believe it to be appropriate to weight such analysis as the ANOVA. However, in this particular case it seems there may be double weighting occurring since weights would be applied for the LCGA and then again for post-trajectory analysis (e.g., ANOVA), which doesn't seem right to me.
Should I apply the sampling weight at the LCGA stage only? Apply the weight at the ANOVA stage only? Apply the weight during both the LCGA and ANOVA stages? Any guidance is most helpful. Thanks.
I think you should use weights in both stages. Using them in the first stage ensures correct parameter estimates, which form the basis for the posterior probabilities that give most likely class. But then in the ANOVA you need to account for the fact that every person, with his/her most likely class, should not count equally - so weight again.
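A minimal sketch of the second-stage weighting described above, with hypothetical data: each person carries a most-likely class and a sampling weight, and the follow-up statistic (here a group mean, the building block of an ANOVA) is computed with the weights rather than counting everyone equally.

```python
# Hypothetical outcome, most-likely class, and sampling weight per person
outcome     = [2.0, 4.0, 1.0, 3.0, 5.0]
modal_class = [1,   1,   2,   2,   2  ]
weights     = [2.0, 1.0, 1.0, 1.0, 3.0]

def weighted_mean(k):
    """Sampling-weighted mean of the outcome within most-likely class k."""
    pairs = [(y, w) for y, c, w in zip(outcome, modal_class, weights) if c == k]
    return sum(y * w for y, w in pairs) / sum(w for _, w in pairs)

for k in (1, 2):
    ys = [y for y, c in zip(outcome, modal_class) if c == k]
    print(k, sum(ys) / len(ys), weighted_mean(k))  # unweighted vs. weighted mean
```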
I am running an LCA with an epidemiological sample, using weights. I saved the most likely class and exported it to SPSS. When I run frequencies in SPSS, I get the same weighted percentages for each class as in Mplus, but the actual frequencies (class counts) are markedly different. Would you be able to explain why this is the case?
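One common source of such a discrepancy - an assumption worth verifying against your output - is weight rescaling: if one program normalizes the weights to sum to the sample size while the other applies the raw (population-scale) weights, the weighted percentages agree exactly but the weighted counts differ. A sketch with made-up numbers:

```python
# Hypothetical most-likely classes and raw sampling weights (sum to 100)
modal_class = [1, 1, 1, 2, 2]
raw_w = [10.0, 20.0, 10.0, 40.0, 20.0]

# Weights rescaled to sum to the sample size n = 5
scale = len(raw_w) / sum(raw_w)
norm_w = [w * scale for w in raw_w]

def class_counts(w):
    """Weighted count of each class under weight vector w."""
    return [sum(wi for ci, wi in zip(modal_class, w) if ci == k) for k in (1, 2)]

for label, w in (("raw", raw_w), ("normalized", norm_w)):
    counts = class_counts(w)
    pct = [c / sum(counts) for c in counts]
    print(label, counts, pct)  # counts differ, percentages are identical
```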
I am running a latent profile analysis with a nationally representative sample of 1200 individuals clustered in 100 programs. The latent class indicators are 6 count variables and 10 Rasch scales.
When using the sampling weights, the BLRT is not allowed for deciding on the number of latent classes. I've come across simulation studies in which the LMR test has fairly high Type I error rates. In my analysis, the p-value for the LMR test comparing 2 vs. 3 classes was 0.3. The next best fit index, the BIC, continues to decrease substantially through 8-9 class models. How would one interpret what to do here? I understand that substantive reasoning is the next best step, but the initial tests didn't help much to point in a direction.
BIC is the only index that is useful here because it is the only one that takes complex sampling features into account.
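For reference, BIC is computed from the log-likelihood as BIC = -2 log L + p log n, where p is the number of free parameters and n the sample size; smaller is better. A sketch of the class-enumeration comparison with hypothetical log-likelihoods and parameter counts:

```python
import math

def bic(loglik, n_params, n):
    """Bayesian information criterion: -2*logL + p*ln(n); smaller is better."""
    return -2.0 * loglik + n_params * math.log(n)

n = 1200  # sample size from the post above
# Hypothetical values for 2- and 3-class models
bic2 = bic(-15400.0, 49, n)
bic3 = bic(-15300.0, 66, n)
print(bic2, bic3, "prefer 3 classes" if bic3 < bic2 else "prefer 2 classes")
```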
If BIC doesn't show a minimum, you may want to add some residual correlations among the outcomes. Which ones to add can be gauged by adding a single factor to the model and seeing for which items its loadings are significant.
OK. I believe a minimum is found, but with a very high number of classes. I've found in the literature that when using Rasch scores as indicators in an LCA/LPA, the BIC can recommend a spurious number of classes. If the BIC doesn't perform well either in this case, I guess I am feeling a bit in the dark about the number of classes to select. What would you recommend?
There are a couple of the Rasch scores that are quite skewed and a couple that are bimodal. What would be the best approach here, or would you recommend against using the sampling weights so I could utilize BLRT?
I'm working with data from a stratified random sample with 4 strata and their corresponding sampling weights. I believe my weights are the raw weights (e.g., 23.03, 5.50). In order to generalize the results of my LPA to the population, is the following syntax appropriate?
weight is weight;
strat is stratum;
...
type is mixture complex;
...
When I run the LPA, I get sample frequencies, however. What is wrong with my syntax?
I am trying to run a LCA using complex data. My data file has the weights and also the replicate weights. However, when I try to run the analysis with
I get an error message saying that "Mixture" cannot be used with replicate weights. But when I take out the "Mixture" from "analysis=", I get a new error message stating that "Classes option is only available with Type=Mixture".
Can replicate weights (REPSE=BRR) not be used in LCA? If not, should I drop the replicate weights and just use the weight? If they can be, how should I alter my syntax?
I also had the following error message: "Analysis with replicate weights is not allowed with algorithm=integration"
I have attempted to enter the following syntax for an LCA:
Data: file is 'C:/Users/LCAnowght.csv';
Variable: names are DEP BIP PSY PTSD ANX PER SUICIDE ALC DRUG W;
  usevariables = DEP BIP PSY PTSD ANX PER SUICIDE ALC DRUG;
  weight = w;
  missing = all(9);
  classes = MHDIFF (2);
  categorical = DEP BIP PSY PTSD ANX PER SUICIDE ALC DRUG;
Analysis: type = mixture;
  estimator = MLR;
  starts = 1000 100;
  stiterations = 20;
Output: tech11 tech14;
I am receiving the following error messages:

*** ERROR
The number of observations is 0. Check your data and format statement.
Data file: \client\c$\users\rache\desktop\/LCAnowght.csv

*** ERROR
Non-missing blank found in data file at record #1, field #: 10
When I enter the syntax without the weight line and variable, Mplus runs without issue. What am I doing wrong? Thank you in advance for your assistance.