Multilevel simulation - measurement error
 Josh Kazman posted on Thursday, November 23, 2017 - 2:20 pm
Hi - I'm running a few basic Monte Carlo / Multilevel models for the first time, and am trying to investigate the effect of measurement error (in the IV) at different levels on parameter estimates.

In the code below, I tried to look at measurement error at the "within cluster" level. In the population, I created a factor from the predictor with a less-than-perfect loading; then I regressed the DV onto the factor and set that coefficient at 0.6. In the output, I was expecting to see an attenuation in that 0.6 coefficient; instead, the coefficient ("y on truex") is consistently much greater than 0.6. (The same thing happened when I tried a similar model with measurement error at the between-cluster level.)

I'm sure I'm making some basic mistake in my reasoning, but cannot for the life of me figure out where.

Thanks for any help that anyone may be able to offer!

Josh

_______________
MODEL POPULATION:

%within%

truex by x1@0.8;
x1@0.36;
truex@1;
y on truex@0.6;
y@1;

%between%

y@0.467;

MODEL:

%within%

truex by x1;
y on truex;

%between%

y;
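[Editor's note: the within-level part of the population model above can be checked outside Mplus with a quick single-level Monte Carlo sketch in Python. This is an illustration, not Mplus output; it drops the between level and simulates only truex ~ N(0,1), x1 = 0.8*truex + e with Var(e) = 0.36, and y = 0.6*truex + u with Var(u) = 1, then regresses y on the fallible indicator x1.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

truex = rng.normal(0.0, 1.0, n)                        # factor variance fixed at 1
x1 = 0.8 * truex + rng.normal(0.0, np.sqrt(0.36), n)   # loading 0.8, residual variance 0.36
y = 0.6 * truex + rng.normal(0.0, 1.0, n)              # structural slope 0.6, residual variance 1

# OLS slope of y on the error-contaminated indicator x1
b_obs = np.cov(y, x1)[0, 1] / np.var(x1, ddof=1)
print(round(b_obs, 2))
```

With these population values, Var(x1) = 0.8^2 + 0.36 = 1 and Cov(y, x1) = 0.6 * 0.8 = 0.48, so the observed slope lands near 0.48 rather than 0.6 - the attenuation the thread is about.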
 Bengt O. Muthen posted on Friday, November 24, 2017 - 11:44 am
I think you want to regress y on x1 in your Model command to show the attenuation.

Note that the Model specified in the Model command is not identified because it tries to estimate both the truex variance and the x1 residual variance.
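[Editor's note: the size of the attenuation implied by the population values can be worked out by hand with standard errors-in-variables algebra. Writing psi for the factor variance, lambda for the loading, and theta for the indicator's residual variance:]

```latex
\beta_{\text{obs}}
= \frac{\operatorname{Cov}(y, x_1)}{\operatorname{Var}(x_1)}
= \frac{\lambda \beta \psi}{\lambda^{2}\psi + \theta}
= \frac{0.8 \times 0.6 \times 1}{0.8^{2} \times 1 + 0.36}
= 0.48
```

So regressing y on x1, as suggested, should show the slope shrinking from 0.6 toward 0.48.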
 Josh Kazman posted on Friday, November 24, 2017 - 2:00 pm

Regressing y on x1 in the model command was my initial inclination, but it produces this error:

_______________

*** ERROR in MODEL POPULATION command
Variable has mismatched roles in MODEL and MODEL POPULATION.
Variable: X1

______________

Adding a latent factor into the model command seems to be a workaround.

Based on your suggestion, I tried setting the variance of the factor to 1 (in the Model), and then I started to see the attenuation that I would expect. However, under some circumstances, the average estimate for x1's residual variance becomes negative (even though none of the replications produce any errors).

When instead I set the residual variance of x1 to 0 in the Model, I still see some attenuation, but it's minuscule.

Eventually, I'd like to be able to tweak reliability and ICC in the Population, and adding a separate latent factor to the Model seems to complicate that. Is there an easier workaround than creating a latent factor in the Model? If not, then should I set the Model factor variance to 1 and not worry about x1's negative residual variance estimates?
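[Editor's note: for tweaking reliability and ICC in the Population, the values implied by a given set of parameters can be computed directly from the variance components. A hedged sketch of that arithmetic in Python (a hypothetical helper, not an Mplus feature; defaults taken from the population model earlier in the thread):]

```python
def implied_values(lam=0.8, psi=1.0, theta=0.36, beta=0.6,
                   y_resid_within=1.0, y_var_between=0.467):
    """Return (reliability of x1, ICC of y) implied by the population values."""
    # Reliability: share of indicator variance due to the factor
    reliability = (lam**2 * psi) / (lam**2 * psi + theta)
    # Within-level variance of y: explained part plus residual
    y_var_within = beta**2 * psi + y_resid_within
    # ICC: between-level share of total y variance
    icc_y = y_var_between / (y_var_between + y_var_within)
    return reliability, icc_y

rel, icc = implied_values()
print(round(rel, 3), round(icc, 3))   # prints: 0.64 0.256
```

Inverting these formulas (e.g., choosing theta to hit a target reliability, or the between variance to hit a target ICC) lets the Population values be set without a separate factor in the Model.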