Hi! I'm trying to conduct multiple imputation in a multilevel dataset and I have some questions: 1) Is it OK to use latent aggregation for a variable that is going to be imputed? I'm worried that the small cluster sizes will affect the results. We have 2 to 4 subjects per cluster, but sampling is close to 100% of the possible subjects. 2) Do you have data comparing the three kinds of H1 imputation?
1) I am guessing, but it sounds like you are asking about H0-based imputation using a twolevel model, and that once the data have been imputed you want to do twolevel modeling using the latent covariate approach. And then you worry that your cluster sizes are too small. I couldn't say what the effects of that would be for different degrees of missingness without doing a simulation study (if that is indeed what you were asking).
2) We haven't done organized simulation studies on that, but I assume the results are quite similar. A simulation study might be of interest.
I see, so you are estimating both a within and a between component of those variables. Although you have small cluster sizes, you may get OK results if you have many clusters. But how all this affects the quality of the imputation is a research topic.
I am assuming that H1 models for multilevel data include only fixed effects, is that correct? Would it be possible to add an H1 model with random effects? I am getting results that differ a bit too much from those based on data imputed in REALCOM-Impute when analyzing random effects in MLwiN.
So I found that Mplus gives me out-of-range values whenever I use twolevel or complex, even after specifying that the values should be in a certain range. I'm thinking that it might be an issue with the latent aggregation, because our clusters are small (2-4 subjects) and there's little to no sampling error, but I can't say for sure. Would it be possible to add a command to do manifest aggregation?
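For reference, a manifest cluster mean can already be formed with the CLUSTER_MEAN function of the DEFINE command and declared as a between-level variable. The following is only a sketch; the variable names (clus, x, y, xm) are placeholders:

```
VARIABLE:
  NAMES = clus x y;
  CLUSTER = clus;
  USEVARIABLES = y x xm;
  WITHIN = x;
  BETWEEN = xm;
DEFINE:
  ! xm is the observed (manifest) cluster mean of x
  xm = CLUSTER_MEAN(x);
ANALYSIS:
  TYPE = TWOLEVEL;
MODEL:
  %WITHIN%
  y ON x;
  %BETWEEN%
  y ON xm;
```

With x on the WITHIN list and xm on the BETWEEN list, no latent aggregation of x takes place; the between-level regression uses the manifest mean only.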
Hi, I'm doing twolevel analysis with random slopes and would like to use multiple imputation.
1) Do I assume correctly that, at least for continuous data, variance covariance imputation in Mplus is similar to Schafer and Yucel's (2002) approach with all variables treated as dependent?
2) Is H1 imputation equal to H0 imputation with saturated models at both levels?
3) If yes, is it correct that H1 imputation is not adequate for models with random slopes, since all relationships between variables are taken into account but not the possible variation in these relationships across groups?
My analysis model is of this type:
%within%
s1 | y1 on x1;
s2 | y2 on x1;
y1 y2 on x2;
y1 with y2;
%between%
y1 y2 s1 s2 on x2 z;
y1 y2 s1 s2 with y1 y2 s1 s2;
There is missing data on y1, y2, x2, and z (not on x1). I came up with this model to impute the missing data on these variables:
%within%
s1 | y1 on x1;
s2 | y2 on x1;
x2 on x1;
y1 y2 x2 with y1 y2 x2;
%between%
y1 y2 s1 s2 x2 z with y1 y2 s1 s2 x2 z;
4) Most important to me are correct estimates of random slope variances and cross-level interactions. Is this a suitable imputation model then?
5) Plausible value estimation is currently not possible with logit link, right? (This would be great to have!)
1) Schafer and Yucel (2002) use random slopes in the imputation model. This can be done in Mplus with H0 imputation.
5) You can use the probit link; for imputation purposes it should be sufficient.
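A minimal H0 imputation setup with a random slope might look like the sketch below. The variable names are hypothetical, and x is assumed to have no missing values (it is placed on the WITHIN list and not imputed):

```
VARIABLE:
  NAMES = clus x y;
  CLUSTER = clus;
  WITHIN = x;
  MISSING = .;
ANALYSIS:
  TYPE = TWOLEVEL RANDOM;
  ESTIMATOR = BAYES;
MODEL:
  %WITHIN%
  s | y ON x;
  %BETWEEN%
  y WITH s;
DATA IMPUTATION:
  IMPUTE = y;
  NDATASETS = 20;
  SAVE = h0imp*.dat;
```

Because the imputation is based on the estimated MODEL rather than an unrestricted H1 model, the random slope is preserved in the imputed data sets.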
Emily Kim posted on Tuesday, April 03, 2012 - 1:05 pm
Hi! I'm trying to use Mplus for analyzing two-level data with missing data imputation. I need to impute the missing data for predictors at level 1 as well as level 2. I used the syntax below to impute the data:
DATA IMPUTATION:
impute = math_ss math08ss urban CEO;
ndatasets = 20;
save = CEO_probsolveimp*.dat;
thin = 1000;
Although CEO is a level-2 predictor, it varied at level 1, so I couldn't put it at level 2 for the analysis. Am I doing something wrong? Can I handle missing data for a predictor at level 2? Thanks in advance!
A level-2 predictor by definition does not vary within level-1 clusters. These variables should be put on the BETWEEN list.
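For example, using the variable names from the post above (the ID and cluster variable names are placeholders), the level-2 variable is declared like this:

```
VARIABLE:
  NAMES = id clus math_ss math08ss urban CEO;
  CLUSTER = clus;
  ! CEO is a cluster-level variable: one value per cluster
  BETWEEN = CEO;
```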
Emily Kim posted on Wednesday, April 04, 2012 - 8:11 am
Thanks, Linda. Yes, they shouldn't vary within level-1 clusters, but they did. Is there a way to specify whether a variable is at level 1 or level 2 for the imputation process? In the syntax above, only CEO is a level-2 variable while all the others (e.g., math_ss) are level-1 variables, but they are all in the same statement regardless of level. Please let me know if I can clarify further.
See the WITHIN and BETWEEN options in the user's guide. A variable cannot be put on the BETWEEN list if it varies for individuals in the same cluster. It sounds like you have a problem with your data that needs to be addressed.
Emily Kim posted on Wednesday, April 04, 2012 - 11:44 am
Thanks again, Linda. I greatly appreciate your feedback.
Please let me clarify my question. I am conducting a multilevel analysis with two-level data. My model is below:
I have missing data on Pretest and SchoolSize, so I'm trying to impute the missing data for those two predictors. I used Mplus for missing data imputation with two-level data. The syntax for the imputation procedure is below:
DATA IMPUTATION:
impute = pretest schoolsize;
ndatasets = 20;
save = probsolveimp*.dat;
thin = 1000;
In the 20 sets of imputed data from Mplus, I got level-1 variation for SchoolSize, which should not happen. I assume this happened because I did not specify the level difference in the imputation statement. My question is this: can I specify that SchoolSize is a level-2 predictor so that I do not get level-1 variance for that predictor? I'm sorry for any confusion in my previous questions. Thank you!
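A sketch of what that specification might look like, assuming SchoolSize truly does not vary within schools in the observed data (the cluster variable name is a placeholder):

```
VARIABLE:
  CLUSTER = school;
  ! declare SchoolSize as a cluster-level variable so it is
  ! imputed at level 2 only
  BETWEEN = schoolsize;
ANALYSIS:
  TYPE = TWOLEVEL BASIC;
  ESTIMATOR = BAYES;
DATA IMPUTATION:
  IMPUTE = pretest schoolsize;
  NDATASETS = 20;
  SAVE = probsolveimp*.dat;
  THIN = 1000;
```

With SchoolSize on the BETWEEN list, each cluster receives a single imputed value for it, so the spurious level-1 variation should disappear.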
Dear Mplus team, I'm using PISA data and want to estimate a three-level (student, school, country) model including cross-level interactions. The main focus is to explain the variation of a level-1 relationship through country-level variables, but I would also control for level-2 variables.
I want to do MI for each country separately to preserve slope variation between countries. Further, I think it is necessary to preserve between-school slope variation, so I would use H0 imputation for each country.
Is this a reasonable strategy?
Further, the dependent variables are plausible values!
How should I treat PVs in the imputation model? Is it reasonable to run 5 imputation models? For imputation model 1 I would use PV1, for model 2 PV2, and so on. From imputation model 1 I would use data set 1 in the final analysis, from imputation model 2 data set 2, and so on.
Some further questions: a) For multilevel models with random slopes, is it necessary to use Monte Carlo integration? b) But MC integration is not allowed with TYPE = THREELEVEL? Is there an alternative? c) For three-level models using Bayes estimation, sampling weights are not allowed? Thus Bayes estimation isn't possible.
d) Would it be "correct" if I 1) estimate a twolevel model (student, country) using FIML with MC integration, and 2) back up the results with a three-level model (using listwise deletion)?
Q1-Q2: I don't think there is theory for doing this.
1)-2): That should be fine. The number of integration points needs to be increased with increasing dimensions of integration; see TECH8 (also watch out for negative ABS changes, which suggest low precision).
a) No, you can use regular ML integration.
b) See a).
c) Right - Bayes with weights has not been invented yet.
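As noted under 1)-2), the number of integration points can be raised in the ANALYSIS command, for example (a sketch):

```
ANALYSIS:
  TYPE = TWOLEVEL RANDOM;
  ALGORITHM = INTEGRATION;
  ! raise the number of points per dimension of integration
  INTEGRATION = 30;
```

Whether this is enough can be checked in the TECH8 output as suggested above.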
One last question: I tried to use H0 imputation with a random slope (s | pv on escs) in the MODEL command. But it seems that the ESCS values are not imputed, so I specified the covariances of all level-1 variables. Now I get the following error:
MODELS WITH RANDOM SLOPES FOR VARIABLES WITH MISSING VALUES CAN NOT BE ESTIMATED WITH THE BAYES ESTIMATOR.
Does this mean that it is not possible to impute values of independent variables that have random slopes?
A simple further question: if I use a fixed seed and run the same imputation model twice, will the imputed data sets be identical? (That is: data set 1 from the first run = data set 1 from the second run, data set 2 from the first run = data set 2 from the second run, and so on.)
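For reference, the random seed for a Bayes/imputation run is set with the BSEED option of the ANALYSIS command (a sketch; 12345 is an arbitrary value):

```
ANALYSIS:
  TYPE = TWOLEVEL BASIC;
  ESTIMATOR = BAYES;
  ! fix the seed so the MCMC chains, and hence the
  ! imputed data sets, can be reproduced
  BSEED = 12345;
```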
I am running a multilevel model and I need to impute the data. I've used:
TYPE IS TWOLEVEL BASIC;
ESTIMATOR IS BAYES;
...
DATA IMPUTATION:
IMPUTE = [VAR LIST];
NDATASETS = 10;
SAVE = twolevel*.dat;
and then analysing the imputed data with a script that uses:
DATA:
FILE IS twolevellist.dat;
TYPE = IMPUTATION;
However, I have also seen that it might be possible to do the imputation and the analysis all in one run, putting the imputation command up front (without saving the data sets) together with the commands for the analysis. Is one of these better than the other, or should the results be the same?
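The all-in-one variant puts DATA IMPUTATION in the same input as the analysis model, roughly like this sketch (variable names and the MODEL are placeholders):

```
VARIABLE:
  NAMES = clus x y;
  CLUSTER = clus;
  MISSING = .;
ANALYSIS:
  TYPE = TWOLEVEL;
DATA IMPUTATION:
  IMPUTE = y x;
  NDATASETS = 10;
MODEL:
  %WITHIN%
  y ON x;
  %BETWEEN%
  y ON x;
```

Mplus then imputes the data and estimates the MODEL over the imputed data sets in one step, pooling the results in the usual way.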