LCA and sampling weights
Mplus Discussion > Latent Variable Mixture Modeling >
 Mark Shevlin posted on Monday, April 03, 2006 - 9:11 am
Hi, I have been estimating LCA models based on a large national survey. In order to get an accurate estimate of class sizes, should I (a) use the sampling weights to adjust estimates of class size after classifying cases, or (b) include the sampling weight in the actual LCA analysis?

I have noticed that including a sampling weight in the analysis tends to result in solutions with fewer classes, when I had expected only the test statistics to be adjusted.

Many thanks in advance
 Linda K. Muthen posted on Monday, April 03, 2006 - 2:59 pm
Sampling weights should be included in the analysis because they affect parameter estimates, standard errors, and tests of model fit.
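In Mplus, the weight enters the estimation itself through the WEIGHT option of the VARIABLE command. A minimal sketch (the data file, indicator names u1-u5, weight variable w, and the 3-class choice are placeholders, not details from this thread):

```
DATA:     FILE IS survey.dat;          ! placeholder data file
VARIABLE: NAMES ARE u1-u5 w;
          USEVARIABLES ARE u1-u5;
          CATEGORICAL ARE u1-u5;       ! binary LCA indicators
          CLASSES = c(3);              ! placeholder number of classes
          WEIGHT = w;                  ! sampling weight used in estimation
ANALYSIS: TYPE = MIXTURE;
```

With the weight specified this way, the parameter estimates, standard errors, and class proportions all reflect the weighting, rather than only the test statistics.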
 Mark Shevlin posted on Wednesday, April 05, 2006 - 4:21 am
Many thanks
 Mark Shevlin posted on Thursday, April 06, 2006 - 2:14 am
Hi Linda,
I have run into some problems when using the sampling weight in an LCA. The LRT statistic seems to be behaving oddly. The mean for the VUONG-LO-MENDELL-RUBIN LIKELIHOOD RATIO TEST without the sampling weight is 35, and with the sampling weight it is 426234. This difference seems very large. With the weighting variable, the mean for the model with one class fewer is 35 and 37000 for the model with one class more. Do these estimates seem reasonable?

I have tried the parametric likelihood ratio bootstrap test but cannot seem to get this test when a weight variable is included in the model. Can the bootstrap test be conducted when a weight variable is included?

Many thanks in advance
 Linda K. Muthen posted on Thursday, April 06, 2006 - 7:14 am
I would have to see your analysis to answer this. Please send your input, data, output, and license number to support@statmodel.com. Bootstrap and weights cannot be used together.
 Justin Jager posted on Monday, December 18, 2006 - 6:26 am
In the first response in this thread, Dr. Muthen endorses the use of sampling weights when using LCA. Is it ever valid not to use sampling weights? For example, via LCA I am identifying different classes of substance use trajectories. The sample I am using oversamples heavy substance users, so the weight variable, in order to render the results representative of the U.S. population, weights the heavy users lower than the non-heavy users.

Not surprisingly, the optimal number of latent classes (as well as the growth characteristics of the classes) varies depending on whether the weight variable is included in the analyses. In short, when the weight variable is used, which weights low-use respondents more, fewer latent classes are identified among the heavy substance users; the opposite is true when weights are not used.

(continued in post below...)
 Justin Jager posted on Monday, December 18, 2006 - 6:27 am
(continuation of post above...)

It seems to me that an argument can be made for not using the sample weights in this case. That is, the additional classes identified among the heavy users when not using the sample weight are "real" classes - they exist in the sample, and they exist in the population. If one is trying to identify latent classes within a small sub-sample of the population, it makes perfect sense to oversample that sub-sample in order to do so. To make the estimates representative, the posterior probabilities for group membership could then be used to turn the latent classes into known classes, and the sample weight could be applied in these later analyses.

Do you see a fundamental flaw in the argument above for not using the sample weight initially, but bringing it back in later for subsequent analyses? While the logic of my argument seems pretty straightforward, I am not familiar with the nuts-and-bolts of how sample weights actually impact class identification in LCA -- so there could be something I am failing to realize.

Thanks,

Justin
 Linda K. Muthen posted on Monday, December 18, 2006 - 8:56 am
If you don't use sampling weights, your generalizations are to the sample. If you use sampling weights, you can generalize to the population. You could also consider looking only at the heavy users.
 jtw posted on Monday, October 04, 2010 - 9:41 am
Hello,

I understand that to generalize LCGA results to the population, one should conduct the analysis with the appropriate sampling weight applied. However, I am going to do additional analyses (e.g., ANOVAs) with individuals assigned to their most likely latent class. In general, I believe it is appropriate to weight analyses such as the ANOVA. However, in this particular case it seems there may be double weighting, since weights would be applied for the LCGA and then again for the post-trajectory analysis (e.g., ANOVA), which doesn't seem right to me.

Should I apply the sampling weight at the LCGA stage only? Apply the weight at the ANOVA stage only? Apply the weight during both the LCGA and ANOVA stages? Any guidance is most helpful. Thanks.
 Bengt O. Muthen posted on Tuesday, October 05, 2010 - 9:58 am
I think you should use weights in both stages. Using them in the first stage ensures correct parameter estimates, which form the basis for the posterior probabilities that give the most likely class. But then, in the ANOVA, you need to account for the fact that each person with his/her most likely class should not count equally - so weight again.
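One way to implement the two stages is to run the weighted LCGA, save the posterior probabilities and most likely class with SAVEDATA, and then apply the same weight again in the follow-up analysis. A hedged sketch (file, variable, and class-number choices are placeholders):

```
VARIABLE: NAMES ARE id y1-y4 w;
          USEVARIABLES ARE y1-y4;
          IDVARIABLE = id;             ! keep an ID for merging later
          CLASSES = c(3);              ! placeholder number of classes
          WEIGHT = w;                  ! stage 1: weighted LCGA
ANALYSIS: TYPE = MIXTURE;
SAVEDATA: FILE IS stage1.dat;          ! placeholder output file
          SAVE = CPROBABILITIES;       ! posterior probs + most likely class
```

The saved file can then be merged back with the weight variable, and the second-stage ANOVA run with the same weight applied (e.g., via a WEIGHT statement in the second-stage software).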
 Carol Rhonda Burns posted on Tuesday, September 29, 2015 - 8:34 am
Dear Dr. Muthen,

I am running an LCA with an epidemiological sample, using weights. I saved the most likely class and exported it to SPSS. When I run frequencies in SPSS, I get the same weighted percentages for each class as in Mplus, but the actual frequencies (class counts) are markedly different. Would you be able to explain why this is the case?
 Linda K. Muthen posted on Tuesday, September 29, 2015 - 10:57 am
You must apply the weights to the posterior probabilities that you exported.
 Carol Rhonda Burns posted on Wednesday, September 30, 2015 - 7:30 am
Thank you!
 Corey Savage posted on Thursday, January 28, 2016 - 1:05 am
I am running a latent profile analysis with a nationally representative sample of 1,200 individuals clustered in 100 programs. The latent class indicators are 6 counts and 10 Rasch scales.

When using the sampling weights, the BLRT is not allowed for deciding the number of latent classes. I've come across simulation studies in which the LMR test has fairly high Type I error rates. In my analysis, the p-value for the LMR test of 2 vs. 3 classes was 0.3. The next best fit index, the BIC, continues to decrease substantially through the 8- and 9-class models. How should one proceed here? I understand that substantive reasoning is the next step, but the initial tests didn't help much to point in a direction.

Any help or references would be much appreciated!
 Bengt O. Muthen posted on Friday, January 29, 2016 - 11:05 am
BIC is the only index that is useful here because it is the only one that takes complex sampling features into account.

If BIC doesn't show a minimum, you may want to add some residual correlations among outcomes. Which ones to add can be gauged by adding a single factor to the model and seeing for which items its loadings are significant.
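The single-factor check described above could be sketched as follows (the factor name f and indicator list y1-y16 are placeholders); items with significant loadings are candidates for residual correlations among the outcomes:

```
MODEL:    %OVERALL%
          f BY y1-y16*;                ! free all loadings
          f@1;                         ! fix factor variance at 1 for identification
```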
 Corey Savage posted on Friday, January 29, 2016 - 11:58 am
OK. I believe a minimum is found, but with a very high number of classes. I've found in the literature that when using Rasch scores as indicators in an LCA/LPA, the BIC can recommend a spurious number of classes. If the BIC doesn't perform well either in this case, I guess I am feeling a bit in the dark about the number of classes to select. What would you recommend?
 Corey Savage posted on Friday, January 29, 2016 - 12:17 pm
Also, do you by chance have a reference for only the BIC being relevant to use with complex sampling?
 Bengt O. Muthen posted on Friday, January 29, 2016 - 6:11 pm
I don't see why Rasch scores would hurt BIC unless their distributions are very skewed.

No special reference, but the TECH11 and TECH14 theory references do not consider complex survey data.
 Corey Savage posted on Friday, January 29, 2016 - 7:39 pm
A couple of the Rasch scores are quite skewed and a couple are bimodal. What would be the best approach here, or would you recommend against using the sampling weights so I could utilize the BLRT?
 Bengt O. Muthen posted on Monday, February 01, 2016 - 10:05 am
I would stay with BIC. See also my answer to your other question on this.
 Ann-Renee Blais posted on Thursday, September 08, 2016 - 8:55 am
Good morning,

I'm working with data from a stratified random sample with 4 strata and their corresponding sampling weights. I believe my weights are the raw weights (e.g., 23.03, 5.50). In order to generalize the results of my LPA to the population, is the following syntax appropriate?

weight is weight;
strat is stratum;
...
type is mixture complex
...

However, when I run the LPA, the output shows sample frequencies. What is wrong with my syntax?

Thank you for your help!

Ann-Renee
 Linda K. Muthen posted on Thursday, September 08, 2016 - 11:51 am
Please send the output and your license number to support at statmodel.com and explain exactly where in the output you are looking and don't understand.
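For reference, a hedged sketch of a weighted, stratified mixture input (variable names are placeholders; note that the full option name for strata is STRATIFICATION):

```
VARIABLE: NAMES ARE y1-y6 weight stratum;
          USEVARIABLES ARE y1-y6;
          CLASSES = c(3);              ! placeholder number of classes
          WEIGHT = weight;             ! raw sampling weights are accepted
          STRATIFICATION = stratum;    ! full option name
ANALYSIS: TYPE = COMPLEX MIXTURE;      ! complex survey design + mixture
```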