Cronbach's Alpha vs Composite Reliability
Mplus Discussion > Confirmatory Factor Analysis >
 Robin Ghertner posted on Thursday, August 01, 2013 - 10:51 pm
Hello -

I'm calculating scale reliability for a single factor with 22 indicator variables. The model fit indicates a unidimensional construct (RMSEA under 0.05, CFI and TLI both above 0.95). The loadings indicate a congeneric model, not a tau-equivalent one.

My Cronbach's alpha estimate is 0.78.

I calculate composite reliability (Fornell & Larcker, 1981), which is:
(sum of factor loadings)^2 / [(sum of factor loadings)^2 + sum of error variances]
The estimate comes out to 0.96.
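The composite-reliability computation above can be sketched in Python. This is a minimal illustration with made-up loadings (not the poster's actual 22 estimates); a standardized solution with factor variance 1 is assumed, so each error variance is 1 - lambda^2:

```python
def composite_reliability(loadings, error_variances):
    """Composite reliability / omega (Fornell & Larcker, 1981):
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_variances))

# Hypothetical standardized loadings, for illustration only.
loadings = [0.7, 0.6, 0.8, 0.5]
errors = [1 - l * l for l in loadings]  # error variance = 1 - lambda^2

print(round(composite_reliability(loadings, errors), 3))  # 0.749
```

Note that the squaring applies to the *sum* of the loadings, not to each loading individually; misreading the formula that way is a common source of discrepant hand calculations.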

I anticipated that alpha would be a lower bound, given that the model is congeneric; however, I did not expect composite reliability to be so much higher than alpha. Since 0.95 is often used as a high-stakes cutoff, by one reliability metric I have a decidedly low-stakes assessment, while by the other it passes muster. I'm a little confused here.

Can anyone provide an explanation for why composite reliability would be so much higher? Am I missing something?

Thanks
 Bengt O. Muthen posted on Friday, August 02, 2013 - 7:46 am
I asked Tenko Raykov to comment on this and here is his answer:

"For the setting described (congeneric model, no error covariances), coefficient alpha can be a serious under-estimate of scale reliability even at the population level (if the entire population was studied), and obviously in a given sample the same may happen. The underlying reason is discussed in detail and 'qualitative terms' in Novick & Lewis, Psychometrika, 1967, and in 'quantitative terms' in Raykov, 1997, MBR.
Simply put, it boils down to the extent to which the construct loadings (factor loadings) are dissimilar - the more dissimilar they are, the more pronounced the underestimation 'bias' of alpha is. (Examples of this kind are also given in Raykov, 1997, and Raykov, 2001, BJMSP). The alternative formula used by the colleague asking this question is identical to that of the 'omega' coefficient, which is reliability itself in this setting. Thus, unless the conditions indicated in Table 1 on p. 344 in Raykov, MBR, 1997, hold (when alpha is close to reliability in the population, even with somewhat dissimilar loadings), the preferred measure of reliability is the reliability coefficient itself - which would be only natural, logically - i.e., the omega coefficient mentioned above. For the particular question asked, it may also help to work out a confidence interval for reliability.
For this, the R-function 'ci.rel' in ch. 7 of Raykov & Marcoulides, 2011, "Introduction to Psychometric Theory", could be used (see also the more general discussion there on point and interval estimation of reliability, in particular with Mplus at the software level)."
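Raykov's point about dissimilar loadings can be illustrated numerically. The Python sketch below (made-up standardized loadings, factor variance 1, error variances 1 - lambda^2; purely illustrative) computes both alpha and omega from the model-implied covariance structure of a congeneric model: with equal loadings the two coincide, and the more the loadings differ, the further alpha falls below omega:

```python
def omega(loadings, errors):
    # (sum lambda)^2 / ((sum lambda)^2 + sum theta): reliability of the sum score.
    s = sum(loadings)
    return s * s / (s * s + sum(errors))

def alpha(loadings, errors):
    # Cronbach's alpha from the model-implied covariance matrix of a
    # one-factor model with factor variance 1:
    #   alpha = k/(k-1) * (1 - sum(item variances) / total variance)
    k = len(loadings)
    item_vars = sum(l * l + e for l, e in zip(loadings, errors))
    total_var = sum(loadings) ** 2 + sum(errors)
    return k / (k - 1) * (1 - item_vars / total_var)

for lam in ([0.7, 0.7, 0.7, 0.7],    # equal loadings (tau-equivalent)
            [0.9, 0.8, 0.3, 0.2]):   # dissimilar loadings (congeneric)
    theta = [1 - l * l for l in lam]
    print(round(alpha(lam, theta), 3), round(omega(lam, theta), 3))
# -> 0.794 0.794  (equal loadings: alpha equals omega)
# -> 0.599 0.667  (dissimilar loadings: alpha underestimates omega)
```

This mirrors the population-level argument: the gap here comes entirely from loading dissimilarity, with no sampling error involved.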
 RuoShui posted on Thursday, March 23, 2017 - 11:37 pm
Dear Drs. Muthen,

I am wondering if the formula mentioned above for the omega reliability coefficient, sum(factor loadings)^2/(sum(factor loadings)^2+sum(error variance)), also works for factors with ordinal items estimated with WLSMV. I don't seem to be able to find the error variances in the output; could you please let me know where I can find this information?

I am also curious if the Omega generated by the ci.reliability function in R mentioned by Raykov above is equivalent to that from the formula?

Thanks.
 Bengt O. Muthen posted on Thursday, March 30, 2017 - 12:42 pm
Please see our FAQs named Reliability.