tdietzvt posted on Wednesday, November 20, 2002 - 1:02 pm
We are working on a data set in which we have, say, m objects, each of which is evaluated on k attributes by n subjects. Much of the literature in this area (risk perception) averages across the n individuals to produce m means (one for each object) on each of the k attributes, then does exploratory factor analysis to see how the attributes factor. This of course ignores variation across individuals, which is certainly inappropriate. Is there a good way to approach this problem that does not assume away inter-individual variation? Some literature to which I might look? Thanks! Tom Dietz, George Mason University
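The averaging approach described above can be sketched as follows. This is an illustrative mock-up with simulated ratings; the sizes n, m, and k are placeholder values, not figures from the post:

```python
import random
from statistics import fmean

# Placeholder sizes: n subjects rate m objects on k attributes.
n, m, k = 200, 30, 6
random.seed(0)

# Simulated 7-point ratings, indexed ratings[subject][object][attribute].
ratings = [[[random.randint(1, 7) for _ in range(k)]
            for _ in range(m)]
           for _ in range(n)]

# The approach criticized here: average across the n subjects,
# leaving an m x k matrix of object-level means that is then
# treated as m "cases" on k variables in an exploratory factor analysis.
object_means = [[fmean(ratings[s][i][j] for s in range(n))
                 for j in range(k)]
                for i in range(m)]

print(len(object_means), len(object_means[0]))  # m rows, k columns
```

Any factor analysis run on `object_means` sees only m rows, which is why all inter-individual variation has been assumed away at that point.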
bmuthen posted on Wednesday, November 20, 2002 - 4:58 pm
What are typical magnitudes of m, k, and n? I assume all n subjects rate all m objects on all k attributes?
tdietz posted on Thursday, November 21, 2002 - 5:41 am
Sorry, I should have been precise. Yes, all subjects use the same rating scale for all objects on all attributes, typically in this literature a 5- or 7-point "Likert"-like scale. The norm in the literature is to ignore the fact that this is ordinal data and treat it as interval. That is of course an issue, but the far larger issue, I think, is how to deal with the real data structure rather than assuming it away by taking means of ratings over individuals and then using the objects as "cases" and the rating-scale means as the variables.
tdietz posted on Thursday, November 21, 2002 - 7:57 am
Typical magnitudes are 5-10 for the number of attributes, 20-50 or even more for the number of objects, and 100-500 for the number of subjects.
bmuthen posted on Thursday, November 21, 2002 - 8:05 am
So, in principle, it sounds like one could think of this as n independent observations on m*k variables. Do you think that the multi-trait, multi-method literature gives some insights?
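The restructuring suggested above, n independent observations on m*k variables, can be sketched like this. Again a Python mock-up with simulated data and placeholder sizes:

```python
import random

# Placeholder sizes: n subjects rate m objects on k attributes.
n, m, k = 200, 30, 6
random.seed(1)

# Simulated 7-point ratings, indexed ratings[subject][object][attribute].
ratings = [[[random.randint(1, 7) for _ in range(k)]
            for _ in range(m)]
           for _ in range(n)]

# Treat each (object, attribute) pair as its own variable:
# n rows (one per subject), m*k columns, so inter-individual
# variation stays in the data instead of being averaged away.
wide = [[ratings[s][i][j] for i in range(m) for j in range(k)]
        for s in range(n)]

print(len(wide), len(wide[0]))  # n rows, m*k columns
```

Note that with the magnitudes given in the thread (up to 50 objects and 10 attributes) this layout can produce several hundred variables against only 100-500 subjects, which is part of why a structured model such as a multitrait-multimethod setup is appealing over unrestricted factor analysis.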
tdietz posted on Thursday, November 21, 2002 - 8:43 am
Interesting point--I should have thought of that and will take a look--haven't looked at that lit since grad school. Is there an up to date guide you could suggest?
bmuthen posted on Thursday, November 21, 2002 - 10:27 am
This is just being discussed on SEMNET. Are you part of that?
tdietz posted on Friday, November 22, 2002 - 6:06 am
No, but I probably should be. Can you give me the access information? Thanks for your usual great insight on this.
tnguyen posted on Friday, November 22, 2002 - 7:07 am
To join SEMNET, send an email to LISTSERV@BAMA.UA.EDU with the following text in the body of the message:
Please guide me regarding this recurring error in a CFA with categorical outcomes. There are 4 latent variables and 20 dependent variables (indicators), all categorical and mainly binary. Sample size = 4691, missing data = 20-30%, estimator = MLR, Monte Carlo integration (1000 points). The error message, repeatedly, is as follows:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD.
THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR THE FOLLOWING PARAMETER IS 0.49785457D+00: Parameter 18, P4 WITH BOP