Message/Author 


Is it currently possible to specify a random-effects IRT model such that thresholds, loadings, and theta are all random? For scale identification I have been able to fix one of the three and estimate the other two, but I was curious whether a loading could be fixed to 1 and the variances estimated. Because the data structure is univariate for the random IRT model, I was less certain whether this was possible. Thanks for any input.


Yes, see UG ex 9.26. 


Thanks, Bengt. I've been using that as a guide, but I'm actually interested in running something along the lines of:

  s | f BY u;
  f;
  u@0;
  %BETWEEN item%
  u; [u$1];
  s; [s];

so that the variance of f is estimated instead of fixed. Is there a way to identify the model so that the f, u, and s variances can all be estimated? Perhaps by fixing the means of the slopes to 1?


You should think of the random loadings in line with group-varying loadings, in which case you cannot also identify the factor variance. So you don't want to free the factor variance: it wouldn't be identified, and it would not carry any extra information. The unit factor variance is standard in IRT. If you are willing to say that one loading is fixed instead of random, you can free the factor variance, but that would seem unrealistic. Fixing the means of the slopes doesn't help, I think; it is the fact that they have variances that's the key.


Do you think it is mechanically possible to actually fix a loading? With the code "f BY u;" indicating that the data structure is univariate rather than multivariate (which would, by contrast, carry "f BY u1-u10" as in traditional IRT), is it possible to fix u1@1 in the "f BY u" case in the random IRT?


An answer to your question will come shortly, but I don't see why you want the factor variance free. 


You can try this trick. Add a new between-item variable: the observed loading L. In the data file, enter 999 (the missing-value code) for loadings that are not observed. Then:

  S ON L@1;
  L*1;
  [S@0];
  S@0; ! or S@0.0001;

where S is the random loading.
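Putting the pieces of this trick together with the earlier model, a minimal sketch of a full input might look like the following. This is an illustrative assumption, not an input file from the thread: the variable names u and L, the cluster names item and id, and the 999 missing-value code are all made up, and the setup is stated as cross-classified (TYPE = CROSSCLASSIFIED with Bayes estimation) since the item and person classifications cross.

  VARIABLE:
    NAMES = u L item id;
    CATEGORICAL = u;
    CLUSTER = item id;
    BETWEEN = (item) L;      ! observed loading, supplied as data
    MISSING = L (999);       ! 999 where the loading is not observed
  ANALYSIS:
    TYPE = CROSSCLASSIFIED;
    ESTIMATOR = BAYES;
  MODEL:
    %WITHIN%
    s | f BY u;              ! random loading s
    f;                       ! factor variance now free
    u@0;
    %BETWEEN item%
    u; [u$1];
    s ON L@1;                ! anchor the random loading to observed L
    L*1;
    [s@0]; s@0;              ! or s@0.0001;

Treat this as a starting point to adapt, not a verified, runnable input.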


Thank you both! Part of the reason we want to explore the student variance is that our previous published work has decomposed variance in item responses due to individuals and item thresholds via GLIMMIX models. Often the largest source of variance is due to individual rather than threshold differences. I replicated that model in Mplus (one I was doing at M3 when we were chatting!) where loading variances were fixed and individual/threshold variances were estimated as in my paper, and found similar estimates and intraclass correlations. It may be theoretically interesting to extend that work to see how the variance components change when moving from a Rasch-based model to the 2PL context, and to evaluate not only how the threshold variance changes when the loading variance is estimated, but how the individual variance changes as well. In the GLIMMIX application via SAS, a substantial portion of the student variance was captured by individual and individual-by-item interactions. Seeing how the same item covariates from our old paper differentially explain threshold and loading variance could shed light in a way that has instructional implications for certain reading assessments.


As a follow-up to this line of inquiry, a colleague and I are running the random-items IRT with random thresholds and loadings via:

  MODEL:
  %WITHIN%
  %BETWEEN id%
  fload | f BY ln;
  f@1;
  ln@0;
  %BETWEEN letter%
  ln*; [ln$1];
  fload*; [fload*1];
  SAVEDATA:
  FILE IS yay.dat;
  SAVE = FSCORES(50);
  FACTORS = ln fload;

In the output file we see FLOAD with the 50 values associated with the imputations, and the mean of the random loading effects across items (1.73) corresponds well to the mean in the Model Results section (1.77). As for the intercepts, we're unclear about what the B2a_LN and B2b_LN values represent. The average of B2b_LN was .025, which was not close to the mean threshold (2.60), nor to the variances. Do the intercepts vary automatically when the loadings vary, does this need to be specified in the model, or is it not possible to obtain the specific item intercepts? Thank you!


It depends on the order in your cluster command: clusters = level2b level2a; The B2b value refers to level2b scores and the B2a value to level2a scores. 


In this model we have clusters = item id, so it makes sense why B2a has 0 and B2b has information. Is it the case that the B2b_LN average intercept effects should be added to the mean of 2.60 in order to calculate the actual thresholds for each item? Otherwise, we're having difficulty reconciling the 2.60 with the B2b_LN average of .025.


ln_b2b is just the random part; it does not include the threshold value [ln$1]. To get the threshold for each item, you subtract the factor-score value for ln_b2b from the threshold value [ln$1]. By default ln_b2b is an item-specific random effect with mean zero across items, but for specific items it will not be zero.
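Concretely, the per-item threshold works out as below. The 2.60 and .025 figures are from the thread; the 0.40 item score is a made-up illustrative value:

  threshold(item i) = [ln$1] - fscore(ln_b2b, item i)
                    = 2.60 - 0.40 = 2.20     ! hypothetical item with ln_b2b score 0.40

Since the ln_b2b scores average near zero across items (here .025), the thresholds average to roughly 2.60 - .025 = 2.575, which is why the .025 should not itself resemble the mean threshold.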
