Random effects IRT
 Yaacov Petscher posted on Thursday, May 30, 2013 - 3:53 am
Is it currently possible to specify a random effects IRT such that thresholds, loadings, and theta are all random? For scale identification I have been able to fix one of the three and estimate the other two, but I was curious whether a loading could instead be fixed to 1 and the variances estimated. Because the data structure is univariate for the random IRT model, I was less certain whether this is possible. Thanks for any input.
 Bengt O. Muthen posted on Thursday, May 30, 2013 - 6:07 am
Yes, see UG ex 9.26.
 Yaacov Petscher posted on Thursday, May 30, 2013 - 7:47 am
Thanks, Bengt. I've been using that as a guide, but I'm actually interested in running something in the vein of:

s | f BY u;
f;
u@0;
%BETWEEN item%
u; [u$1];
s; [s];

so that f is estimated instead of fixed. Is there a way to identify the model so that the f, u, and s variances can all be estimated? Perhaps by fixing the means of the slopes to 1?
 Bengt O. Muthen posted on Thursday, May 30, 2013 - 9:53 am
You should think of the random loadings as analogous to group-varying loadings, in which case you cannot also identify the factor variance. So you don't want to free the factor variance; it wouldn't be identified and it would not carry any extra information. The unit factor variance is standard in IRT.

If you are willing to say that one loading is fixed instead of random, you can free the factor variance, but that would seem unrealistic. Fixing the means of the slopes doesn't help, I think - it is the fact that they have variances that is the key.
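
One way to see the identification point, sketched in standard two-parameter probit notation (these symbols are illustrative, not Mplus output): the item response model is

P(u_{ij} = 1 \mid \theta_i) = \Phi(\lambda_j \theta_i - \tau_j), \qquad \theta_i \sim N(0, \psi).

Substituting \tilde{\theta}_i = \theta_i / \sqrt{\psi} and \tilde{\lambda}_j = \lambda_j \sqrt{\psi} leaves every product \lambda_j \theta_i, and hence the likelihood, unchanged. So when all loadings are free (or random), the factor variance \psi only rescales the loadings and carries no information of its own; fixing \psi = 1, or fixing one loading, removes the indeterminacy.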
 Yaacov Petscher posted on Thursday, May 30, 2013 - 11:51 am
Do you think it is mechanically possible to actually fix a loading? With the code "f by u;" designating that the data structure is univariate rather than multivariate (which would, by contrast, carry an "f by u1-u10" as in traditional IRT), is it possible to fix u1@1 in the "f by u" case in the random IRT?
 Bengt O. Muthen posted on Thursday, May 30, 2013 - 1:14 pm
An answer to your question will come shortly, but I don't see why you want the factor variance free.
 Tihomir Asparouhov posted on Thursday, May 30, 2013 - 1:22 pm
You can try this trick. Add a new between-item variable: the observed loading L. In the data file, add 999 (a missing value) for the loadings that are not observed.
S on L@1;
L*1;
[S@0];
S@0; or S@0.0001;
where S is the random loading.
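
As a rough sketch of how this could be grafted onto the earlier model fragment (the variable name L, the VARIABLE-command lines, and the comments are illustrative assumptions, not a verified input): one item carries its fixed loading value, e.g. 1, in the L column of the data, and every other item carries 999.

VARIABLE:
  BETWEEN = (item) L;     ! L varies over items only
  MISSING = L (999);      ! 999 flags items whose loading is not observed
MODEL:
  s | f BY u;
  f;                      ! factor variance can now be freed
  u@0;
  %BETWEEN item%
  u; [u$1];
  s ON L@1;               ! pin the random loading s to the partly observed L
  L*1;                    ! loading mean and variance are estimated through L
  [s@0];                  ! no intercept for s beyond L
  s@0.0001;               ! or s@0: no residual loading variance beyond L

The s; [s]; lines from the earlier fragment are replaced by the last four statements, since the mean and variance of the loadings are now carried by L.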
 Yaacov Petscher posted on Thursday, May 30, 2013 - 5:07 pm
Thank you both! Part of the reason we want to explore the student variance is that our previous published work has decomposed variance in item responses due to individuals and item thresholds via glimmix models. Often the largest source of variance is due to individual rather than threshold differences. I replicated that model in Mplus (one I was doing at M3 when we were chatting!) where loading variances were fixed and individual/threshold variances were estimated as in my paper, and found similar estimates and intraclass correlations. It may be theoretically interesting to extend that work to see how the variance components change when moving from a Rasch-based model to the 2PL context, and to evaluate, when the loading variance is estimated, how not only the threshold variance but also the individual variance changes. In the glimmix application via SAS, a substantial portion of the student variance was captured by individual and individual-by-item interactions. Seeing how the same item covariates from our old paper differentially explain threshold and loading variance could shed light in a way that has instructional implications for certain reading assessments.
 Yaacov Petscher posted on Friday, September 27, 2013 - 7:17 am
As a follow-up to this line of inquiry, a colleague of mine and I are running the random items IRT with random thresholds and loadings via:
model: %within%
%between id%
fload | f by ln;
f@1;
ln@0;
%between letter%
ln*; [ln$1];
fload*; [fload*1];
savedata: file is yay.dat;
save=fscores(50);
FACTORS=ln fload;

In the output file we see the inclusion of FLOAD with the 50 values associated with the imputations, and the mean of the random loading effects across items (1.73) corresponds well to the mean in the Model Results section (1.77). As it pertains to the intercepts, we're unclear about what the B2a_LN and B2b_LN values represent. The average of B2b_LN was .025, which was close to neither the mean threshold (-2.60) nor the variances. Do the intercepts vary automatically when the loadings vary, does this need to be specified in the model, or is it not possible to obtain the specific item intercepts? Thank you!
 Bengt O. Muthen posted on Friday, September 27, 2013 - 10:11 am
It depends on the order in your cluster command:

cluster = level2b level2a;

The B2b value refers to level2b scores and the B2a value to level2a scores.
 Yaacov Petscher posted on Friday, September 27, 2013 - 10:39 am
In this model we have cluster = item id, so it makes sense why b2a has 0 and b2b has information. Is it the case that the b2b_ln average intercept effects should be added to the mean of -2.60 in order to calculate the actual thresholds for each item? Otherwise, we're having difficulty reconciling the -2.60 with the b2b_ln average of .025.
 Tihomir Asparouhov posted on Friday, September 27, 2013 - 3:00 pm
ln_b2b is just the random part and does not include the threshold value [ln$1]. To get the threshold for each item, you have to subtract the factor score value for ln_b2b from the threshold value [ln$1]. By default ln_b2b is an item-specific random effect with mean zero across items, but for specific items it will not be zero.
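
To make that concrete with the numbers quoted above: the item-specific threshold is [ln$1] minus the saved LN_B2B factor score for that item. With [ln$1] = -2.60, an item whose LN_B2B score came out at, say, 0.40 (a made-up value for illustration) would have a threshold of -2.60 - 0.40 = -3.00. The .025 average of B2b_LN just reflects that the random part is centered near zero across items; it is not itself a threshold.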