

Differences in avg estimates between ... 



Hi, all. We have a measurement instrument composed of sets of items derived from 3 dichotomous factors. Factor A is uppercase vs. lowercase letters; Factor B is a knowledge factor, letter names vs. letter sounds; and Factor C is a task factor, multiple choice vs. free response. So we have a pool of items that have A=uppercase, B=names, and C=multiple choice, and so on for the other seven combinations. All items are dichotomous. 

Similar to an ANOVA framework, we would like to conceptualize the thresholds/difficulties and loadings/discriminations as dependent variables so we can ask questions about the item parameters such as, "Are uppercase letters more discriminating or difficult than lowercase letters?" One of the team suggested that we could output the estimates and literally do an ANOVA on those estimates, but others are concerned that we would lose too much power with that approach. 

We're also thinking we might be able to do this with MODEL TEST by manually constructing the contrasts among sets of parameters. For example, all parameters would be given a parameter label, say, D1-D40 for the discriminations, and to test the main effect of factor A, the model test would then be written (in pseudo code) as: Average(D1:D20) - Average(D21:D40) = 0; And then we'd like to extend this to two-way and three-way interactions. What do you think? Is this approach valid, and if not, what would you recommend? 
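In Mplus syntax, the MODEL TEST idea above might be sketched as below. The item names (up1, up2, low1, low2), the labels d1-d4, and the use of only two items per level of factor A are illustrative stand-ins for the full 40-item instrument; the CATEGORICAL declaration and estimator settings are omitted. One practical caveat: Mplus tests all statements in a single MODEL TEST jointly with one Wald chi-square, so each main effect or interaction would need its own run.

```
MODEL:
  ! 2PL-style measurement model; labels in parentheses tag
  ! the discriminations (loadings). Two items per level of
  ! factor A here for brevity only.
  f BY up1* (d1)
       up2  (d2)
       low1 (d3)
       low2 (d4);
  f@1;    ! fix the factor variance for identification

MODEL TEST:
  ! Main effect of A: average uppercase discrimination minus
  ! average lowercase discrimination equals zero. With equal
  ! pool sizes this is equivalent to a difference of sums.
  0 = (d1 + d2)/2 - (d3 + d4)/2;
```

For a two-way interaction the contrast would be the difference of differences of the cell averages, built from the same labels in the same way.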


One approach that could be tried is to use MODEL CONSTRAINT and express the measurement parameters in terms of contributions from the different levels of the 3 factors A, B, and C. So, e.g., with thresholds labeled t1, t2, ... in the MODEL command: 

MODEL CONSTRAINT: 
NEW(a1 a2 b1 b2 c1 c2); 
t1 = a1 + b1 + c1; ! level 1 of all 3 factors 
t2 = a2 + ... 

This means that the thresholds are modeled in terms of the new parameters, in line with an ANOVA. You may also want to ask on SEMNET. 
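Written out more fully, the suggestion might look like the sketch below. The item names u1-u8 (one item per factor combination, for brevity) and the effect-parameter names are illustrative; also note that with all eight cells present, not all six new parameters are separately identified, so additional constraints such as effect coding (e.g., a2 = -a1) would be needed in practice.

```
MODEL:
  f BY u1-u8;
  ! Label the thresholds; comments show each item's cell
  [u1$1] (t1);  ! A=1 B=1 C=1
  [u2$1] (t2);  ! A=1 B=1 C=2
  [u3$1] (t3);  ! A=1 B=2 C=1
  [u4$1] (t4);  ! A=1 B=2 C=2
  [u5$1] (t5);  ! A=2 B=1 C=1
  [u6$1] (t6);  ! A=2 B=1 C=2
  [u7$1] (t7);  ! A=2 B=2 C=1
  [u8$1] (t8);  ! A=2 B=2 C=2

MODEL CONSTRAINT:
  NEW(a1 a2 b1 b2 c1 c2);
  ! Each threshold is the sum of its factor-level
  ! contributions, as in an additive ANOVA decomposition
  t1 = a1 + b1 + c1;
  t2 = a1 + b1 + c2;
  t3 = a1 + b2 + c1;
  t4 = a1 + b2 + c2;
  t5 = a2 + b1 + c1;
  t6 = a2 + b1 + c2;
  t7 = a2 + b2 + c1;
  t8 = a2 + b2 + c2;
```

Note that this imposes the additive structure on the thresholds rather than testing it, which is the point raised in the reply that follows.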


Thanks, Bengt. If I understand correctly, this creates six new parameters but leaves them undefined? When I've used NEW before, I usually define what the new parameter is (e.g., a difference between two regression coefficients). More importantly, if I have two items with the same values on a1-c2, they would be constrained to have the same threshold, right? If so, that's not exactly what we're looking for, because we want to ask whether the average thresholds/loadings of one *pool* of items are different from those of another pool (each combination of A/B/C has many items). Am I also understanding you correctly that you don't think MODEL TEST would work for some reason? Best, Jeff 


Q1: The six new parameters are implicitly defined, so they act as new parameters replacing the threshold parameters. Q2: Right. Given your description, you should go ahead and use MODEL TEST. 


Thanks. Appreciate the help. 


