Message/Author 

Tim Seifert posted on Thursday, March 22, 2007  11:39 pm



Is it possible to conduct a multigroup factor analysis with summary data? If so, how do I specify the groups? I have set up an example with the correlation matrix for each group in a separate file (as described on pp. 334-335 of the user's guide). Thanks in advance. 


Summary data must be in one data set. This is described in Chapter 13 under the heading Summary Data, One Data Set. The special options needed for this type of analysis are described there, as are the labels used in this case. 


My apologies for not reading the chapter thoroughly. 

Boliang Guo posted on Tuesday, September 23, 2008  11:11 am



Are (correlation, means, sd) and (covariance, means) summary data different? Using the EX5.18 data and code, the results from (covariance, means) are exactly the same as from the individual data, but (correlation, means, sd) gives slightly different results, i.e., a different chi-square, AIC, and coefficients. Is this true?

Covariance, means:
Group 1 means:       .0127592 .0353808
Group 1 covariances: .992987
                     .759357 1.05699
Group 2 means:       .0171771 .0064591
Group 2 covariances: .985796
                     .429187 1.04333

Correlation, means, sd:
Group 1 means:        .127592 .0353808
Group 1 sds:          .9964873 1.028101
Group 1 correlations: 1
                      .7412 1
Group 2 means:        .0171771 .0064591
Group 2 sds:          .9928725 1.021436
Group 2 correlations: 1
                      0.4232 1


If they are slightly different, it may be because the correlations and standard deviations are used to create covariances. There may be some rounding error that makes the covariances slightly different. 
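The rounding effect described above can be sketched numerically: rebuilding a covariance matrix from correlations and standard deviations that were themselves rounded (as they would be when written to a summary-data file) yields covariances that differ slightly from the originals. This is a hypothetical illustration with simulated data, not the EX5.18 data set.

```python
import numpy as np

# Hypothetical sketch: covariances rebuilt from rounded correlations
# and standard deviations differ slightly from the originals, which
# can shift chi-square, AIC, and estimates a little.

rng = np.random.default_rng(0)
# two correlated variables (illustrative data, not EX5.18)
x = rng.standard_normal((1000, 2)) @ np.array([[1.0, 0.7], [0.0, 0.7]])

cov = np.cov(x, rowvar=False)        # "true" covariance matrix
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)        # exact correlations

# Round the summary statistics as they might appear in a data file
corr_r = np.round(corr, 4)
sd_r = np.round(sd, 4)

# cov_ij = corr_ij * sd_i * sd_j, rebuilt from the rounded values
cov_rebuilt = corr_r * np.outer(sd_r, sd_r)

diff = np.max(np.abs(cov - cov_rebuilt))
print(diff)  # small but nonzero
```

The discrepancy is tiny (on the order of the rounding precision), which matches the "slightly different" chi-square and coefficients reported above.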


Dear Mplus discussion, if anyone has advice I'd be most grateful. I have 3 raters on a scale measuring children's problem behaviour, made by summing 5 items (Nobs is very large). For every item within the scale, around half the children score 0 (no problems), so the distribution of scores is highly positively skewed. I want to know whether the scale scores of the 3 raters are measuring the same thing. Is multigroup CFA a good way to find out? Any direction will be appreciated. Thank you. 


A simple way to look at interrater reliability is to look at the correlations among the raters to see how high they are. You could also create a factor for each rater using the five items and look at the factor correlations. 
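The first suggestion above can be sketched as follows. This is a minimal hypothetical example, assuming each rater's total score reflects a shared child trait plus rater-specific noise; the rater names and noise levels are made up for illustration.

```python
import numpy as np

# Hypothetical sketch of inter-rater reliability as correlations
# among three raters' total scale scores (assumed data, not real).

rng = np.random.default_rng(1)
n = 500
trait = rng.standard_normal(n)            # shared child trait

# each rater = shared trait + rater-specific noise (assumed model)
parent = trait + rng.standard_normal(n)
teacher = trait + 1.5 * rng.standard_normal(n)
child = trait + rng.standard_normal(n)

scores = np.column_stack([parent, teacher, child])
r = np.corrcoef(scores, rowvar=False)     # 3 x 3 correlation matrix
print(np.round(r, 2))
```

High off-diagonal correlations would suggest the raters are measuring the same thing; low ones, as in the thread above, suggest they are not.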


Thank you Linda. I have looked at both the intercorrelations of scale scores among raters and the correlations among the factors extracted for each scale across the 3 raters. Correlations between parent and child raters are around .3-.4. Correlations between teacher and parent/child raters are low, ~.2-.3. The factors extracted from the 3 raters correlate better, .4-.5. The large mean differences in item and scale scores between raters, together with the factor structure, suggest that the 3 raters are measuring something different. I'm investigating sex differences in the genetic association between ability and these behavioural problems; I'd hoped to be able to use a latent factor extracted from these 3 raters, but since the correlations between raters are low, perhaps I should do the analysis using all three raters? Sorry to be long-winded, and thank you for your help, Rosalind 


Seems like the raters consider different aspects, so doing separate analyses for different raters seems called for. 
