Peter Ji posted on Thursday, February 03, 2011 - 3:57 am
I am working with a group to analyze rating scale data. The group wants to conduct EFA and CFA on the data. The rating scale has 35 questions, and 1600+ students were rated by 100 teachers. However, there was only one teacher rating for every six students. We cannot compute an inter-rater reliability statistic, and the observations are essentially non-independent, nested within teacher.
My question is, am I correct that we cannot conduct an EFA/CFA and ignore the non-independence of the data? We cannot determine whether the items load on a factor because the items actually covary, or because a teacher was using the 35-item scale in a similar fashion when rating the six students. In addition, because only one teacher rated each group of six students, we cannot compare one teacher's ratings with another's, since their ratings do not overlap. I would like to try a multi-source approach for a CFA, but I don't think the way the data were collected will allow us to try that approach.
Is there any way to conduct the EFA/CFA while accounting for the fact that there is only one rating for every six students? Are there alternate approaches?
I am using Mplus for this. Thanks in advance; I hope my explanation of this problem made sense. Petji@Uic.edu
It sounds like you have students nested in teacher. You could use a multilevel EFA or CFA model to take this non-independence of observations into account.
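For illustration, a two-level EFA for this design might look something like the sketch below. The file name, variable names, and the 1–4 factor ranges are assumptions for the example; the key pieces are the CLUSTER option identifying teacher as the grouping variable and TYPE = TWOLEVEL EFA, which extracts within-teacher and between-teacher factor structures separately.

```
TITLE:    Two-level EFA of teacher ratings (illustrative sketch;
          file and variable names are assumed);
DATA:     FILE = ratings.dat;
VARIABLE: NAMES = teacher item1-item35;
          USEVARIABLES = item1-item35;
          CLUSTER = teacher;            ! students nested in teacher
ANALYSIS: TYPE = TWOLEVEL EFA 1 4 UW 1 4 UB;
          ! 1-4 factors at the within (student) level,
          ! 1-4 factors at the between (teacher) level
```

Examining the intraclass correlations Mplus reports for the 35 items would also indicate how much of each item's variance lies between teachers, i.e., how severe the non-independence is.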
Peter Ji posted on Thursday, February 03, 2011 - 5:22 pm
Thanks for your reply. Is there any concern with the fact that there is no crossed design for teacher? Can any meaningful interpretation take place when there is only one source (teacher) rating each target (student)?