Multigroup factor analysis with summary data
 Tim Seifert posted on Thursday, March 22, 2007 - 6:39 pm
Is it possible to conduct a multigroup factor analysis with summary data? If so, how do I specify the groups? I have set up an example with the correlation matrix for each group in a separate file (as described in p334-335 of the user's guide). Thanks in advance.
 Linda K. Muthen posted on Thursday, March 22, 2007 - 6:55 pm
Summary data must be in one data set. This is described in Chapter 13 under the heading Summary Data, One Data Set. Special options that are needed for this type of analysis are described as are the labels that are used in this case.
 Tim Seifert posted on Friday, March 23, 2007 - 8:20 am
My apologies for not reading the chapter thoroughly.
 Boliang Guo posted on Tuesday, September 23, 2008 - 5:11 am
Should results from summary data entered as (correlation, means, sd) differ from results entered as (covariance, means)?
Using the EX5.18 data and code, the results from the (covariance, means) input are exactly the same as those from the individual data, but the (correlation, means, sd) results are slightly different from the individual data, i.e. different chi-square, AIC, and coefficients. Is that expected?

covariance mean
.0127592 .0353808
.992987
.759357 1.05699
-.0171771 .0064591
.985796
.429187 1.04333

correlation means sd
.127592 .0353808
.9964873 1.028101
1
.7412 1
-.0171771 .0064591
.9928725 1.021436
1
0.4232 1
 Linda K. Muthen posted on Tuesday, September 23, 2008 - 6:41 am
If they are slightly different, it may be because the correlations and standard deviations are used to create covariances. There may be some rounding error that makes the resulting covariances slightly different.
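The effect Linda describes can be seen directly using the numbers posted above: deriving the standard deviations and the correlation from the first group's covariance block, rounding them to the precision shown in the (correlation, means, sd) block, and then rebuilding the covariance does not exactly recover the original value. A minimal Python sketch (the variable labels y1, y2 are assumptions):

```python
import math

# Covariance entries from the first group's "covariance mean" block above.
var_y1, var_y2 = 0.992987, 1.05699
cov_y1y2 = 0.759357

# Summaries at the precision shown in the "correlation means sd" block:
# standard deviations to 7 decimals, the correlation to 4.
sd_y1 = round(math.sqrt(var_y1), 7)          # -> 0.9964873
sd_y2 = round(math.sqrt(var_y2), 7)
corr = round(cov_y1y2 / (sd_y1 * sd_y2), 4)  # -> 0.7412

# Rebuilding the covariance from the rounded summaries does not exactly
# recover the original value, so the two input types can give slightly
# different chi-square, AIC, and coefficient values.
rebuilt = corr * sd_y1 * sd_y2
print(cov_y1y2, rebuilt)  # the two values differ around the 6th decimal
```

The discrepancy is tiny here, but it is enough to shift fit statistics in the later decimal places.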
 Rosalind Arden posted on Wednesday, September 24, 2008 - 7:02 pm
Dear MPlus discussion

If anyone has advice I'd be most grateful.

I have ratings from 3 raters on a scale measuring children's problem behaviour, formed by summing 5 items. (N-obs is very large.)

For every item within the scale, around half the children score 0 (no problems). The distribution of scores is highly positively skewed.

I want to know whether the scale scores of the 3 raters are measuring the same thing.

Is Multigroup CFA a good way to find out?

Any direction will be appreciated.

thank you,
 Linda K. Muthen posted on Thursday, September 25, 2008 - 10:20 am
A simple way to look at inter-rater reliability is to look at the correlations among the raters to see how high they are. You could also create a factor for each rater using the five items and look at the factor correlations.
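The first step Linda suggests, inspecting the correlations among the raters' scale scores, can be sketched in Python on hypothetical data (the three rater labels and the simulated scores are illustrative assumptions, not the poster's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scale scores for the same children from three raters;
# each rater sees a shared construct plus rater-specific noise.
n = 1000
trait = rng.normal(size=n)
parent = trait + rng.normal(scale=1.0, size=n)
teacher = trait + rng.normal(scale=2.0, size=n)
child = trait + rng.normal(scale=1.0, size=n)

# Pairwise correlation matrix among raters;
# row/column order: parent, teacher, child.
r = np.corrcoef(np.vstack([parent, teacher, child]))
print(np.round(r, 2))
```

High off-diagonal correlations would suggest the raters are measuring the same thing; the low values the poster reports below point the other way. The factor-based version would extract one factor per rater from the five items and correlate the factors instead.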
 Rosalind Arden posted on Thursday, September 25, 2008 - 3:23 pm
Thank you Linda,

I have looked at both the inter-correlations of scale scores among raters and the correlations among the extracted factors for each scale across 3 raters.

Correlations among parent and child raters are around .3-.4
Correlations between teacher and parents/child are low ~.2-.3

The factors extracted from the 3 raters correlate better - .4-.5

The large mean differences of item and scale scores between raters, together with the factor structure, suggest that the 3 raters are measuring something different.

I'm investigating sex differences in the genetic association between ability and these behavioural problems; I'd hoped to be able to use a latent factor extracted from these 3 raters, but since the correlation between raters is low, perhaps I should do the analysis using all three raters?

sorry to be long-winded.
and thank you for your help,
Rosalind
 Bengt O. Muthen posted on Thursday, September 25, 2008 - 9:59 pm
Seems like the raters consider different aspects, so doing separate analyses for different raters seems called for.