Differences in avg estimates between ...
Mplus Discussion > Confirmatory Factor Analysis >
 Jeff Williams posted on Wednesday, March 13, 2019 - 1:50 pm
Hi, all. We have a measurement instrument that is composed of sets of items derived from 3 dichotomous factors. Factor A is uppercase vs. lowercase letters, Factor B is a knowledge factor, letter names vs. letter sounds, and Factor C is a task factor, multiple choice vs. free response. So we have a pool of items that have A=uppercase, B=names, and C=multiple-choice and so on for the other seven combinations. All items are dichotomous.

Similar to an ANOVA framework, we would like to treat the thresholds/difficulties and loadings/discriminations as dependent variables and ask questions about the item parameters such as, "Are uppercase letters more discriminating or more difficult than lowercase letters?"

One member of the team suggested that we could output the estimates and literally run an ANOVA on those estimates, but others are concerned that we would lose too much power with that approach. We are also thinking we might be able to do this with MODEL TEST by manually constructing the contrasts among sets of parameters. For example, every parameter would be given a label, say, D1-D40 for the discriminations, and to test the main effect of Factor A, the model test would be written (in pseudocode) as:

Average(D1:D20) - Average(D21:D40) = 0;

And then we'd like to extend this to two-way and three-way interactions. What do you think? Is this approach valid, and if not, what would you recommend?
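
[Editor's note: a hedged sketch of how such a contrast might look in actual MODEL TEST syntax, which uses the same expression language as MODEL CONSTRAINT. Mplus has no Average() function or D1:D20 range shorthand in these statements, so each label must be written out; the labels d1-d8 here are hypothetical, with four items per condition shown for brevity:]

Model Test:
! Main effect of Factor A: mean uppercase discrimination
! equals mean lowercase discrimination.
0 = (d1 + d2 + d3 + d4)/4 - (d5 + d6 + d7 + d8)/4;

[Note that Mplus reports a single joint Wald test of all statements in MODEL TEST, so testing several contrasts one at a time requires separate runs or NEW difference parameters in MODEL CONSTRAINT.]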
 Bengt O. Muthen posted on Wednesday, March 13, 2019 - 4:21 pm
One approach that could be tried is to use Model Constraint and express the measurement parameters in terms of contributions from the different levels of the 3 factors A, B, and C. So, e.g. with thresholds labeled t1, t2, ... in the Model command:

Model Constraint:
New(a1 a2 b1 b2 c1 c2);
t1 = a1+b1+c1; ! level 1 of all 3 factors
t2 = a2+ ...

That means these thresholds are modeled in terms of the new parameters, in line with an ANOVA.
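
[Editor's note: spelled out for all eight cells, a sketch of this parameterization, assuming thresholds t1-t8 have been labeled in the MODEL command and ordered so that t1 is cell (A1,B1,C1), t2 is (A1,B1,C2), and so on:]

Model Constraint:
New(a1 a2 b1 b2 c1 c2);
t1 = a1 + b1 + c1;
t2 = a1 + b1 + c2;
t3 = a1 + b2 + c1;
t4 = a1 + b2 + c2;
t5 = a2 + b1 + c1;
t6 = a2 + b1 + c2;
t7 = a2 + b2 + c1;
t8 = a2 + b2 + c2;
! For the effects themselves to be identified, a reference
! constraint such as a2 = -a1 (effect coding) may be needed,
! since only the sums are determined by the thresholds.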

You may also want to ask on SEMNET.
 Jeff Williams posted on Thursday, March 14, 2019 - 6:53 am
Thanks, Bengt. If I understand correctly, this creates six new variables but leaves them undefined? When I've used NEW before, I usually define what the new variable is (e.g., a difference in two regression coefficients).

More importantly, if I have two items with the same values of a1-c2, they would be constrained to having the same threshold, right? If so, that's not exactly what we're looking for because we want to ask if the average thresholds/loadings of the *pool* of items is different from those from another pool (each combination of A/B/C has many items).

Am I also understanding you that you don't think that MODEL TEST would work for some reason?

 Bengt O. Muthen posted on Friday, March 15, 2019 - 11:26 am
Q1: The six new parameters are implicitly defined - they act as new parameters replacing the threshold parameters.

Q2: Right.

Given your description, you should go ahead and use Model Test.
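
[Editor's note: for the two-way interactions raised in the question, one hedged sketch using MODEL TEST together with hypothetical NEW parameters m11-m22, defined in MODEL CONSTRAINT as cell-average discriminations averaged over Factor C:]

Model Constraint:
New(m11 m12 m21 m22);
! Each mjk is the average of the labeled discriminations in cell
! (Aj, Bk), averaged over Factor C, e.g. with four such items:
! m11 = (d1 + d2 + d3 + d4)/4; and similarly for m12, m21, m22.
Model Test:
! A x B interaction: the A difference is the same at both levels of B.
0 = (m11 - m12) - (m21 - m22);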
 Jeff Williams posted on Saturday, March 16, 2019 - 6:07 am
Thanks. Appreciate the help.