Pseudo-class draws
Mplus Discussion > Latent Variable Mixture Modeling
 Jon Heron posted on Friday, March 04, 2011 - 12:29 am
Hi Bengt/Linda,

Very simple question (I hope!)
Please could you tell me how many draws are made with the pseudo-class approach?

many thanks, Jon
 Bengt O. Muthen posted on Friday, March 04, 2011 - 7:58 am
20. See the technical appendix:

http://www.statmodel.com/download/meantest2.pdf
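For readers unfamiliar with the mechanics: in each pseudo-class draw, every subject is assigned to a class sampled at random from that subject's estimated posterior class probabilities (rather than the modal class), and the whole assignment is repeated, by default 20 times. A minimal sketch in Python, with made-up posterior probabilities standing in for the model's estimated class probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior class probabilities for 5 subjects and 3 classes
# (in practice these come from the estimated mixture model's output)
post = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.05, 0.25, 0.70],
    [0.60, 0.30, 0.10],
    [0.33, 0.33, 0.34],
])

N_DRAWS = 20  # default number of pseudo-class draws

def pseudo_class_draws(post, n_draws, rng):
    """For each draw, assign each subject to a class sampled from its
    posterior distribution (not the modal/most-likely class)."""
    n, k = post.shape
    draws = np.empty((n_draws, n), dtype=int)
    for d in range(n_draws):
        for i in range(n):
            draws[d, i] = rng.choice(k, p=post[i])
    return draws

draws = pseudo_class_draws(post, N_DRAWS, rng)
print(draws.shape)  # (20, 5): one class assignment per subject per draw
```

Downstream analyses are then run once per draw and the results pooled across the 20 draws.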
 Jon Heron posted on Friday, March 04, 2011 - 8:26 am
Brill, thanks Bengt
 Jon Heron posted on Wednesday, April 27, 2011 - 7:17 am
Hi Bengt/Linda,


I have a question related to the above which may be simple and/or stupid. I'm hoping for the former.

Why don't the probability-weighting and pseudo-class-draw approaches give the same answer, given that they use the same information?

I have recently found, as reported in the manuscript "relatinglca.pdf", that estimates based on P-C draws tend to have larger standard errors than those based on probability weighting. Furthermore, I have established through bootstrapping that the prob-weighting SEs appear more or less correct, whilst the P-C draw SEs are a little high.
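The pattern described here (P-C draw SEs running higher than probability-weighted ones) can be illustrated with a small simulation. A sketch under simplifying assumptions, with simulated posterior probabilities standing in for model output, a simple linearization SE for the weighted mean, and Rubin-style pooling across draws for the pseudo-class estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for model output: n subjects, each with a posterior
# probability p of belonging to the class of interest, and an outcome y
# shifted upward for class members (all values hypothetical)
n = 500
p = rng.beta(2, 2, size=n)
cls = rng.random(n) < p
y = cls + rng.normal(0.0, 1.0, size=n)

# 1) Probability weighting: weighted mean of y with posterior probs as weights
w_mean = np.sum(p * y) / np.sum(p)
# simple linearization SE for a weighted mean (a simplification)
w_se = np.sqrt(np.sum((p * (y - w_mean)) ** 2)) / np.sum(p)

# 2) Pseudo-class draws: repeatedly draw hard class assignments from the
# posteriors, estimate within each draw, then pool with Rubin's rules
D = 20
ests, wvars = [], []
for _ in range(D):
    drawn = rng.random(n) < p                # one pseudo-class draw
    yd = y[drawn]
    ests.append(yd.mean())
    wvars.append(yd.var(ddof=1) / len(yd))   # within-draw sampling variance
qbar = np.mean(ests)                         # pooled point estimate
ubar = np.mean(wvars)                        # average within-draw variance
bvar = np.var(ests, ddof=1)                  # between-draw variance
pc_se = np.sqrt(ubar + (1 + 1 / D) * bvar)   # Rubin's total variance -> SE

print(f"weighted: {w_mean:.3f} (SE {w_se:.3f})")
print(f"pc draws: {qbar:.3f} (SE {pc_se:.3f})")
```

The pseudo-class SE carries an extra between-draw variance component on top of the within-draw variance, which is one mechanical reason the two estimators need not agree even though both start from the same posterior probabilities.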

I have used Stata's iweight option for the prob-weighting. I have also successfully replicated Mplus's P-C draw output in Stata and confirmed that the concordance between weighting and P-C draws does not improve even if the number of draws is increased considerably.

I should add that here the latent class variable is a multinomial outcome. If I turn things on their head and have the latent variable as a predictor, the agreement is considerably worse.

Please can you help?


many thanks, Jon
 Tihomir Asparouhov posted on Wednesday, April 27, 2011 - 11:19 am
Jon

The best information on this topic is
http://statmodel.com/download/relatinglca.pdf

I will just add two things.

1. You should not use either the P-C draw or the probability-weighting analysis as your final analysis. The Auxiliary command is an exploratory tool. Once you have identified the "best" predictors with that command, you should estimate a model where those best covariates are included in the model and use only those results as your final results. Thus, use the Auxiliary command to select the most important covariates, but do not use the reported SEs beyond what they are intended for.

2. As the paper above indicates, a two-step analysis, where the class variable is formed in one analysis using only the indicator variables and no covariates, followed by a second step that correlates or regresses (one way or another) the estimated class variable with new variables, is never going to be perfect. The two-step analysis has the drawback that the class variable is formed without all the information available, namely, without the information from the covariates. Thus this boils down to the same conclusion - use a one-step analysis, where both indicators and covariates are included, as your final analysis.

Tihomir
 Jon Heron posted on Thursday, April 28, 2011 - 1:15 am
Thanks Tihomir


Jon
 Jon Heron posted on Thursday, April 28, 2011 - 1:28 am
Just to add an additional thought: in the case of a distal outcome, rather than covariates, I understood that it was now established that a two-stage model was the only way to maintain some essence of causality, since with a one-stage model the outcome becomes an additional indicator of the latent variable.

With a two-stage model with a distal outcome, prob-weighting and pseudo-class results can differ markedly, particularly the SEs. I just don't get it :-S
 Bengt O. Muthen posted on Thursday, April 28, 2011 - 9:49 am
One may disagree, but let me put in my two cents regarding your first paragraph. I am not sure I agree with a two-stage approach being the established way to go.

Perhaps it is useful to think of the example of non-compliance analysis via a mixture of compliers and never-takers, estimating CACE. Here, the post-treatment outcome Y is a primary source in determining the latent class membership, in conjunction with the latent class covariates. This model is a prime example of causal inference (using principal stratification).

I think it is good to include the distal when doing the mixture analysis - it focuses the class formation on the predictive validity that you are interested in. When it comes to predicting, you wouldn't use the distal information, but you use the estimates from the model with the distal. It's a good topic for discussion, however.
 Tihomir Asparouhov posted on Thursday, April 28, 2011 - 10:26 am
Let me point out again that all the necessary information to make an informed decision is available in Tables 6 and 7 in

http://statmodel.com/download/relatinglca.pdf

You can choose between prob weighting and pseudo-class draws depending on what is most important to you: coverage (pseudo-class) or a true reflection of the variability of the estimates (prob weighting). Both methods are worse than the one-stage analysis.

In addition - take a look at section 4.1 in
http://statmodel.com/download/Plausible.pdf

In principle, using two-stage model estimation is fine to maintain some essence of causality, but you have to realize that the one-stage estimation will yield the most accurate parameter estimates and standard error estimates. Thus again I would advocate using both methods but taking the one-stage as the ultimate model.
 Jon Heron posted on Tuesday, May 03, 2011 - 7:11 am
Thanks Bengt / Tihomir,

I was referring to this paper

http://www.statmodel.com/download/bookchapterggmm.pdf

in my paragraph on distals and causality, as it seems to me that in most cases we DO want to think of distal outcomes as effects or consequences of the trajectory classes.

Are you both suggesting the use of a one-stage "distals-as-class-indicators" analysis, followed by the use of plausible values, to give us a properly defined mixture but still allow us to treat our outcome as a consequence in a second stage?

I hadn't thought of this option, and I like it.

cheers
 Madison Aitken posted on Tuesday, January 10, 2017 - 9:35 am
Hello,

I've run a latent profile analysis with pseudo-class draws to compare the latent profiles on three continuous variables of interest.

Is there a way to request odds ratios/risk ratios for the multinomial logistic regression and to request confidence intervals for these ratios?

Many thanks.

Madison
 Bengt O. Muthen posted on Tuesday, January 10, 2017 - 3:43 pm
If they are not printed, you need to use Model Constraint to compute exp(beta) and then compute CIs using our FAQ:

Odds ratio confidence interval from logOR estimate and SE
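The computation referred to here is a standard one: exponentiate the log-odds-ratio estimate for the point estimate, and exponentiate the endpoints of the Wald interval on the log scale for the CI. A quick sketch with hypothetical values (replace the estimate and SE with your own output):

```python
import math

# Hypothetical values: a log-odds-ratio estimate and its standard error
# taken from the multinomial logistic regression output
log_or = 0.47
se = 0.12

z = 1.96  # normal critical value for a 95% interval
or_est = math.exp(log_or)            # point estimate on the OR scale
lo = math.exp(log_or - z * se)       # lower CI bound, exponentiated
hi = math.exp(log_or + z * se)       # upper CI bound, exponentiated
print(f"OR = {or_est:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note the interval is built on the log scale first and exponentiated afterwards, so it is asymmetric around the OR point estimate, as it should be.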
 Madison Aitken posted on Thursday, January 12, 2017 - 9:05 am
Thank you Dr. Muthen!