Message/Author
|
|
Hi, I created 5 multiply imputed datasets and conducted an LCA with 3 ordered categorical indicators.
Q1. What kind of pooling does Mplus use with respect to BIC, LL, class probabilities, and thresholds? Simple averaging of the values across the 5 datasets?
Q2. I can obtain TECH14 (BLRT) for each of the 5 datasets separately. Is there a way to pool the p-values, or should I just report the average/range?
Q3. With the final class solution I also want to use DCAT. Is there an easy option to get pooled p-values? Or would it be okay to simply report the average/range of p-values across the imputed datasets?
|
|
Q1. Yes.
Q2. We don't have a pooled p-value for TECH14.
Q3. The p-value is already pooled if you are using
VARIABLE: AUXILIARY = (DCAT);
DATA: TYPE = IMPUTATION;
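To make "simple averaging" concrete, here is a minimal sketch of how the reported quantities are combined across imputations. The numbers are purely illustrative (not from any real Mplus run):

```python
# Hypothetical log-likelihoods and BICs from 5 imputed datasets
# (illustrative values only).
ll  = [-2301.4, -2298.7, -2305.1, -2299.9, -2302.6]
bic = [4688.2, 4682.8, 4695.6, 4685.2, 4690.6]

m = len(ll)
pooled_ll  = sum(ll) / m    # average LL across the 5 imputations
pooled_bic = sum(bic) / m   # average BIC across the 5 imputations

print(round(pooled_ll, 2), round(pooled_bic, 2))
```

Point estimates (thresholds, class probabilities) are averaged the same way; standard errors, however, are pooled with Rubin's rules rather than averaged.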
|
|
Thanks for your fast reply. Regarding Q3, I meant the p-values of the class comparisons, e.g., Class 1 vs. 2, Class 1 vs. 3, and so on. These comparisons are not given when using DCAT with TYPE = IMPUTATION; I only get them when I analyze the 5 datasets separately. Is there a way to pool the individual p-values of the class comparisons?
|
|
Yes. You can use Section 2 of http://www.statmodel.com/download/MI7.pdf, but first you will need to get the SE for the class comparison from the reported quantities:
x = point estimate (difference of estimates)
z = chi-square value
SE(x) = x / sqrt(z)
Using the manual 3-step or BCH methods would let you obtain this directly with MODEL TEST.
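The steps above can be sketched as follows: back out the per-imputation SE from the chi-square, then pool with Rubin's rules as in Section 2 of MI7.pdf. The per-imputation values below are hypothetical, and the final p-value uses a normal approximation rather than the t reference distribution, for brevity:

```python
import math

def pool_mi(estimates, ses):
    """Rubin's rules: pool point estimates and SEs from m imputed datasets."""
    m = len(estimates)
    q_bar = sum(estimates) / m                                # pooled point estimate
    within = sum(se ** 2 for se in ses) / m                   # within-imputation variance
    between = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    total_var = within + (1 + 1 / m) * between                # total variance
    return q_bar, math.sqrt(total_var)

# Hypothetical DCAT output for one comparison (Class 1 vs. 2) across 5 imputations:
# x = difference of estimates, chi2 = the reported chi-square (1 df).
x    = [0.42, 0.38, 0.45, 0.40, 0.44]
chi2 = [6.1, 4.9, 7.2, 5.5, 6.8]
ses  = [xi / math.sqrt(c) for xi, c in zip(x, chi2)]          # SE(x) = x / sqrt(chi-square)

est, se = pool_mi(x, ses)
z = est / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))     # two-sided normal p-value
```

The pooled estimate is the simple average; the pooled SE combines within- and between-imputation variability, which is why it is larger than the average of the individual SEs.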
|
|
Thanks, I have some follow-up questions.
Q2. With TYPE = IMPUTATION, TECH14 and TECH10 are not available. When I analyze the imputed datasets separately, am I correct that there is no way to pool the TECH14 results by hand? Further, what about the TECH10 bivariate residuals: is there any way to pool them (e.g., averaging)?
Q3. I tried the manual BCH method as described in Web Note 21, as you recommended. Step 1 works perfectly. For step 2, I do not know what code I need to get MODEL TEST to work. I know I need to use MODEL TEST: 0 = a - b; but how can I define a and b to test whether phq_0 differs across classes?
Data: file = manBCH2.dat;
Variable: Names are audit_1 audit_2 audit_3_gen phq_0 BCHW1-BCHW3 MLC;
  USEVAR = phq_0 BCHW1-BCHW3;
  CLASSES = c(3);
  Training = BCHW1-BCHW3 (bch);
Analysis: TYPE = MIXTURE;
  STARTS = 0;
Model: %overall%
  C on phq_0;
Thanks so much!
|
|
Q2. You can average all of these statistics; however, proper statistical inference (i.e., correct SEs) is difficult, and in almost all instances not just because of software limitations but also theoretically. Likelihood-based inference is even more difficult theoretically. MODEL TEST and the actual SEs are the most reliable tools.
Q3. You don't need MODEL TEST for this model. Just use the SE for the parameter C on phq_0.
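Reading significance straight off the estimate and its SE amounts to a Wald z-test. A minimal sketch, using made-up values for one C on phq_0 logit (the normal approximation is used for the p-value):

```python
import math

# Hypothetical estimate and pooled SE for one C on phq_0 logit
# (illustrative values, not from a real output).
est, se = 0.31, 0.12

z = est / se                                               # Wald z statistic
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
```

This is exactly the ratio Mplus reports in the Est./S.E. column, so no extra MODEL TEST statement is needed for a single-parameter test.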
|
|
I first wanted to find the best class solution using TYPE = IMPUTATION and ended up with a 6-class solution. As a second step, I wanted to run the R3STEP command for several individual covariate analyses. Can I use STARTS = 0 and the OPTSEED from the 6-class solution to run the R3STEP analyses with TYPE = IMPUTATION? I tried, but the averaged parameters for the 6-class solution were not exactly identical when using OPTSEED (small numerical differences emerged, e.g., for LL, BIC, and class sizes). Thanks for your help.
|
|
It should have worked. You can try OUTPUT: SVALUES; to get an even better starting-value model, and if you are still seeing differences, send all inputs, outputs, and data to support@statmodel.com.
|
|
When using random start values, e.g., STARTS = 10000 1000; with TYPE = IMPUTATION, the results only display the seed for the best loglikelihood from the first imputed dataset. If I use that OPTSEED when running R3STEP, the class results differ from the initial run. I thought this might be because the optseed from the first dataset is not necessarily the best one for the other imputed datasets?
|
|
You were right, it worked out fine. Thanks!