How to compute composite reliability
Mplus Discussion > Confirmatory Factor Analysis >
 Daniel posted on Thursday, February 20, 2003 - 6:47 am
After standardizing, there are two kinds of parameters: Std and StdYX based on different standardizing methods. Should I use StdYX to calculate composite reliability? What is your suggestion?

Definition of composite reliability:

CR = (sum of Li)**2 / ((sum of Li)**2 + sum of Var(Ei))

where Li = the standardized factor loadings for the individual indicator variables and Var(Ei) = the error variance associated with the individual indicator variables.
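As a hedged illustration (not part of the original thread), the composite reliability ratio (sum of loadings squared over that quantity plus the summed error variances) can be set up directly in Mplus via MODEL CONSTRAINT. All names here (factor f, items y1-y5, labels l1-l5 and e1-e5) are hypothetical:

```
! Sketch only: composite reliability for one factor with five
! continuous items; label and variable names are illustrative.
MODEL:
  f BY y1* (l1)
       y2-y5 (l2-l5);     ! free all loadings and label them
  y1-y5 (e1-e5);          ! label the residual variances
  f@1;                    ! fix the factor variance at 1
MODEL CONSTRAINT:
  NEW(cr);
  cr = (l1+l2+l3+l4+l5)**2 /
       ((l1+l2+l3+l4+l5)**2 + (e1+e2+e3+e4+e5));
```

With the factor variance fixed at 1, cr reproduces the ratio above; requesting CINTERVAL in the OUTPUT command would add a delta-method confidence interval for the new parameter.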
 bmuthen posted on Thursday, February 20, 2003 - 7:42 am
I am not familiar with the formula for composite reliability, but it would seem that the summation of item loadings without weighting them would require that they are standardized, and probably using StdYX.
 Anonymous posted on Tuesday, May 11, 2004 - 7:41 am
Prof. Muthen, I want to compute the reliability of 4 factors following your method (Multilevel CSA, 1994). How can I get each level-2 and level-1 factor variance and their related error variances from the Mplus 3.0 output? For the level-2 factor reliability, is the formula above the right one?
 Linda K. Muthen posted on Tuesday, May 11, 2004 - 1:54 pm
If you are asking how to obtain model estimated variances for your factors, ask for TECH4 in the OUTPUT command.
 Anonymous posted on Tuesday, May 11, 2004 - 11:41 pm
Can TECH4 also give the error variance related to a specific factor, or do I have to compute the total/error variance for each factor myself?
Thanks for your guidance!
By the way, how can I register for this discussion list?
 Linda K. Muthen posted on Wednesday, May 12, 2004 - 9:49 am
TECH4 gives the model estimated variances for the latent variables in the model. Go to Getting Started under Documentation in the left margin to get an account.
 Kayhan posted on Wednesday, December 28, 2005 - 8:27 am
How should we compute composite reliability in SPSS, and what is the rule for choosing an item for weighting?
 Linda K. Muthen posted on Wednesday, December 28, 2005 - 9:16 am
You should ask SPSS support this question. I don't use SPSS.
 Ramin Azad posted on Monday, January 23, 2006 - 3:27 am
Dear Sir/Madam

I am wondering how I can compute composite reliability and Average Variance Extracted by SPSS or AMOS?

Kind regards
 Linda K. Muthen posted on Monday, January 23, 2006 - 8:49 am
I am not sure how these are defined in SPSS and AMOS. You should check their user's guides.
 Tor Neilands posted on Tuesday, May 02, 2006 - 6:19 pm
Hi, Bengt.

Good to see you at AERA a few weeks ago.

I am interested in computing the composite reliability of a number of survey items that I have analyzed in a CFA model fitted via Mplus.

In a series of SEM journal articles, Raykov demonstrated that the composite reliability for a set of continuous items is expressible as:

var(true) / [var(true) + var(residual)]

within a CFA model, where var(true) is the sum across items of the variance explained for each item and var(residual) is the sum of the residual error variances.

This is essentially an intraclass correlation coefficient. Of note, if the CFA model postulates a single factor with equal loadings and equal residual values, the value of this expression is identical to Cronbach's coefficient alpha. When loading values are unequal, this measure will yield more accurate reliability estimates than alpha (alpha will be underestimated).

In my present situation, however, I am analyzing ordered categorical outcomes. It is possible using MODEL CONSTRAINT in Mplus to compute var(true) and var(residual) for ordered categorical outcomes, just as one can do for continuous outcomes.

I asked a colleague at AERA for his opinion of the usefulness of this practice, and he expressed doubt that the resulting statistic would be interpretable as a measure of composite reliability of the observed items. He thought it would instead be an index of reliability of the underlying latent y* values rather than the y values themselves. I am curious about this conclusion, though, given that direct modeling of the y's is mathematically identical to modeling of the y* values, per the discussion in the Mplus user's guide technical appendix.

I'm just getting started researching this issue in the literature, but did come across the following article:

Roberts, Chris & McNamee, Roseanne. Assessing the reliability of ordered categorical scales using kappa-type statistics. Statistical Methods in Medical Research, 2005, 14: 493-514.

On page 498, they present what they refer to as an intraclass kappa coefficient for ordered categorical data and express it as the ratio of between-subject to total variance.

Do you have any thoughts/opinions you can share about the usefulness of the approach I proposed above (computing the explained variance / [explained variance + residual variance] in Mplus) and the appropriateness of interpreting the resulting coefficient as a reliability estimate of the composite of the y values?

Thanks so much,

 Bengt O. Muthen posted on Thursday, May 04, 2006 - 11:11 am
Big topic. In factor analysis with categorical outcomes, the logit or probit regression of an item on the factor(s) is a regular logistic or probit regression and can therefore be expressed in terms of a y* dependent variable that has a linear regression on the factor(s) and that is crudely observed by a categorical y. While those two formulations are identical for regression relations, reliability is another matter.

The y* variables have residual variances, although these are not identifiable parameters but rather remainders adding up to a y* variance of one. So one can formulate reliability in terms of y*, and this was done by Linda Muthen in her 1983 dissertation. Others have picked up that idea too; I think Marcoulides wrote something on it later on. This type of reliability is indirectly relevant for the observed items.

But you have to ask what kind of aggregation of the items you are interested in getting the reliability for. Is it for the sum? Or shouldn't you simply ask: what is the reliability of the factor scores that you get? The answer to that question is in the IRT literature and points to information curves, graphics for which will be included in the soon forthcoming Mplus version 4.1.
 Tor Neilands posted on Saturday, May 06, 2006 - 11:23 am
Thank you, Bengt. This is very helpful. Do you have any favorite references that discuss the use/interpretation of information curves as they will be implemented in Mplus 4.1? It would be helpful to read up on these in advance of the release of v4.1 of Mplus so that I can hit the ground running when 4.1 becomes available.

Thanks again,

 Bengt O. Muthen posted on Saturday, May 06, 2006 - 11:33 am
Two good sources are the books:

Hambleton & Swaminathan (1985). Item Response Theory.

du Toit (2003). IRT from SSI.
 Fernando Terrés de Ercilla posted on Thursday, May 25, 2006 - 3:55 am
I'm trying to replicate with Mplus Raykov's procedure for evaluating reliability. The (more or less) direct translation from LISREL to Mplus of his model for a noncongeneric scale is:

r1 BY y1@1;
r2 BY y2@1;
r3 BY y3@1;
r4 BY y4@1;
r5 BY y5@1;
r6 BY y6@1;
f1 BY r1* (l1)
r2-r4 (l2-l4);
f2 BY r3* (m1)
r4-r6 (m2-m4);
f3 BY;
f3 ON r1-r6@1;
f4 BY;
f4 ON f1 (o1)
f2 (o2);
f3 WITH f4@0;
Model constraint:

Where the first factor loads on items y1-y4, and the second factor loads on items y3-y6. The reliability of the sum score of the observed variables is estimated by the quotient between the estimate of the true composite variance (F4) and the variance of the composite (F3), both reported in TECH4.

Now I want to use this model with ordinal data (and the WLSMV estimator) and a very similar noncongeneric scale with 6 items and two factors. My questions are: what adjustments do I need to make? Is it possible to run this model under the Delta parameterization? And if it is necessary to use the Theta parameterization, what other adjustments do I need to make?

Thanks in advance,
 Bengt O. Muthen posted on Thursday, May 25, 2006 - 5:33 pm
Without me getting into the tech doc you refer to, here are some reactions, first regarding the setup you show for continuous outcomes and then for categorical outcomes.

Do you really need to complicate the input by defining the factors r1-r6 behind each observed outcome? Why not have f1 and f2 measured by y1-y6 directly?

I don't think statements of the kind

F3 BY;

work. You can define the factor by any observed variable and not adversely impact the model by fixing the loading @0.

When switching to categorical outcomes, the question is whether you want to work with ML or WLSMV estimation - which is also a choice between a logit and a probit model. I guess in this situation it makes a difference whether you put a factor behind each of the outcomes, because the regression of f3 on r1-r6 is a regression on continuous latent response variables, whereas the regression of f3 on y1-y6 is a regression on categorical outcomes with ML (continuous latent response variables with WLSMV). Which to choose is a research question that I don't get into here. With WLSMV, I don't see that the Delta/Theta choice makes a difference.
 Fernando Terrés de Ercilla posted on Saturday, May 27, 2006 - 4:23 am
Thank you very much for your quick and enriching answer and ... wonderful! I did a direct translation from LISREL to Mplus (and it worked), but I forgot that Mplus is more flexible, so in Mplus the r1-r6 aren't needed.
The code reduces to:

f1 BY y1* (l1)
y2-y4 (l2-l4);
f2 BY y3* (m1)
y4-y6 (m2-m4);
f3 BY;
f3 ON y1-y6@1;
f4 BY;
f4 ON f1 (o1)
f2 (o2);
f3 WITH f4@0;
Model constraint:

And it gives the same reliability results as Raykov’s.

The statements f3 BY; and f4 BY; do their work perfectly (and they give the same results as, e.g., f3 BY y1@0).

When trying this model with ordinal data (y1-y6), and the WLSMV estimator, I receive the message: “The model is not supported by DELTA parameterization. Use THETA parameterization.”. I have done it, and then the model works.
Nevertheless, when using the ML estimator you get an error, “Internal Error Code: PR1004 - Parameter restriction split problem”.

Many thanks, Fernando.
 Bengt O. Muthen posted on Saturday, May 27, 2006 - 3:49 pm
Thanks for exploring - I learned something new. I see now that Theta parameterization won't work, because the model falls into the category described in the User's Guide as "categorical dependent variable is both influenced by and influences either another observed dependent variable or a latent variable." The ML issue is one of Model constraint implementation where currently parameters of certain different types cannot be part of the same constraint.
 Adam Benson posted on Friday, June 13, 2008 - 3:22 pm
When using Mplus, AVE is calculated from the standardized loading estimates for each item within its respective construct. Each estimate is squared, and the squares are summed to create the numerator of the AVE statistic. The same estimates are used in the denominator, only this time both the squared estimate and 1 minus the squared estimate appear. You can refer to Gefen, Straub and Boudreau (2000), who use lowercase lambda, l, to represent the loading estimate. I'll use "E" to represent sigma (summation). The formula for the AVE statistic is then
AVE = (E li^2) / ((E li^2) + (E (1 - li^2)))
You then take the square root of the resulting statistic and place it within the correlation table of the latent constructs, which is generated using the TECH4 option in the Mplus OUTPUT command. That value is compared to the correlations of that construct with the other constructs in the model. If the AVE for a construct is above .50 and its square root is larger than the construct's correlations with the other constructs, convergent and discriminant validity are said to be shown (Gefen et al. 2000).
Hope this helps!
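A minimal Mplus sketch of that AVE computation, offered as an illustration rather than thread-endorsed syntax: the items y1-y3 and all labels are hypothetical, and the items are assumed to have been standardized beforehand so that, with the factor variance fixed at 1, each ei plays the role of 1 - li^2:

```
! Sketch only: AVE via MODEL CONSTRAINT, assuming one factor,
! three hypothetical standardized items, and factor variance @1.
MODEL:
  f BY y1* (l1)
       y2-y3 (l2-l3);
  y1-y3 (e1-e3);
  f@1;
MODEL CONSTRAINT:
  NEW(ave);
  ave = (l1**2 + l2**2 + l3**2) /
        (l1**2 + l2**2 + l3**2 + e1 + e2 + e3);
```

Under the standardized-items assumption, li**2 + ei = 1 for each item, so this matches the E li^2 / (E li^2 + E (1 - li^2)) formula above.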
 tina freyburg posted on Friday, March 20, 2009 - 9:23 am
Dear all,
I) With the help of your posts, I translated Raykov's scale reliability into this command:

MODEL:
F1 BY item1* (l1)
item2-item5 (l2-l5);
item1-item5 (ve1-ve5);
MODEL CONSTRAINT:
NEW(RELIABF1);
RELIABF1 = (l1 + l2 + l3 + l4 + l5)**2 /
((l1 + l2 + l3 + l4 + l5)**2 + (ve1 + ve2 + ve3 + ve4 + ve5));
OUTPUT: Standardized tech4;

However, I get an error message:

Parameter 25 refers to
ITEM1 25

Still, values for the estimates, residual variances, and RELIABF1 are given. If I calculate the reliability by hand (squared sum of unstandardized estimates / (squared sum of unstandardized estimates + sum of residual variances)), the result is .80 instead of the given .040 (?!)

II) I wasn't sure whether the first line in the MODEL command needs to be
F1 BY item1* (l1) or simply F1 BY item1.
If I use the latter, I get the same error message but a lower hand-calculated reliability (.76).

Thanks&warm wishes
 Bengt O. Muthen posted on Friday, March 20, 2009 - 10:58 am
It sounds like you have categorical outcomes in which case free residual variance parameters cannot be identified, but are fixed at 1.
 tina freyburg posted on Friday, March 20, 2009 - 12:16 pm
Thank you very much for your answer, Bengt.

It is true that I have categorical items measured on 5-point Likert scales. If I understood correctly, the resulting factors are then continuous latent variables, right?

I am very sorry, but I didn't understand how I can use your answer to address the error message.

I would appreciate it very much if you explained it to me.

Thank you very much!
 tina freyburg posted on Friday, March 20, 2009 - 12:48 pm
I forgot to say that parameter 25 is the theta of the first item. It doesn't matter whether I conduct the analysis separately for my three subscales or all together: the same error message always occurs, and the corresponding parameter is always the first item.
 Bengt O. Muthen posted on Friday, March 20, 2009 - 1:08 pm
You say:

item1-item5 (ve1-ve5);

But I assume you have stated:

categorical = item1-item5;

in which case those residual variances are not identified. You should remove the line:

item1-item5 (ve1-ve5);

I also assume that you are using the WLSMV estimator in which case these residual variances are deduced (not estimated as free parameters) from the model and are printed when requesting the Standardized solution in the Output command. So your Model Constraint section will have to use those printed values instead of ve1-ve5.
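One way to avoid typing the printed values by hand, sketched here as an assumption-laden illustration (hypothetical items item1-item5; the result is a y*-based reliability, per the caveats discussed earlier in this thread): under the Delta parameterization with the factor variance fixed at 1, each y* residual variance equals 1 minus the squared loading, so MODEL CONSTRAINT can rebuild it from the loading labels:

```
VARIABLE:  CATEGORICAL = item1-item5;
ANALYSIS:  ESTIMATOR = WLSMV;   ! Delta parameterization is the default
MODEL:
  f BY item1* (l1)
       item2-item5 (l2-l5);
  f@1;                          ! so Var(y*) = loading**2 + theta = 1
MODEL CONSTRAINT:
  NEW(rel);
  ! each theta is deduced as 1 - loading**2, hence "5 - sum of squares"
  rel = (l1+l2+l3+l4+l5)**2 /
        ((l1+l2+l3+l4+l5)**2
         + (5 - l1**2 - l2**2 - l3**2 - l4**2 - l5**2));
```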
 tina freyburg posted on Saturday, March 21, 2009 - 8:12 am
Bengt! Great, thanks! This clarifies a lot. I indeed defined the items as categorical, and I also used the WLSMV estimator.

Unfortunately, I do not know how to use the printed values for the residual variances in the MODEL CONSTRAINT command.

I tried removing the line that provides labels for the error variances of each item and replacing (ve1 + ve2 + ve3 + ve4 + ve5) with (y1 + y2 + y3 + y4 + y5) in the MODEL CONSTRAINT command.

However, I receive the message that the parameter label y1 is unknown. How do I refer to the residual variances? It would be great if you helped me once more.
 Bengt O. Muthen posted on Saturday, March 21, 2009 - 10:37 am
Instead of "ve1" etc, you put the values that are printed for them.
 Mr Mo DANG-ARNOUX posted on Wednesday, July 27, 2011 - 2:28 am
Dear Prof Muthen,

By the Residual Variances do you mean the so named column in the R-SQUARE section of the output file, for item1 to item5 ?

This information seems to be printed whether the Standardized option is requested or not.

BTW, why is the R-SQUARE section not printed when PARAMETERIZATION = THETA?

Thank you in advance for your attention,

Intern in Public Health
Grenoble Medical School, France.
 Linda K. Muthen posted on Wednesday, July 27, 2011 - 11:00 am
Yes. This is printed both with and without the STANDARDIZED option.

There is an R-square with the Theta parametrization.
 Mr Mo DANG-ARNOUX posted on Thursday, July 28, 2011 - 9:14 am
Dear Linda,

Thank you very much for your prompt answer.

Actually, with the THETA parametrization, the R-square section seems to be printed only if STANDARDIZED is requested in OUTPUT (I am using Mplus version 6.1).

Anyway, I managed to compute Raykov's composite reliability formula by starting from Tina's syntax and applying Bengt's advice. Since the line
item1-item5 (ve1-ve5);
is removed, the THETA parametrization is no longer needed.

However, it remains somewhat unclear to me how to interpret Raykov's reliability in the case of ordinal items. In my understanding, it somehow measures the share of true variance in the underlying responses y*_i. How do we relate that to the internal consistency of the actual ordinal responses y_i? I have asked the question on SEMNET and am waiting for an answer there.

Best wishes,

 Bengt O. Muthen posted on Thursday, July 28, 2011 - 10:33 am
Tenko Raykov just emailed this response:

I am afraid I don't have a direct answer to the question of composite reliability with highly discrete items unless they're all binary (in which case our paper with Tihomir and Dimitrov, SEM, 2010, outlines a method of point and interval estimation, as well as of the change in it due to revision; see also below in this message).

A procedure for approximate point and interval estimation of the reliability of the sum score of discrete items (with up to, say, 5 levels/values possible) is outlined in our recent book with Marcoulides, Intro to Psychometric Theory, 2011, NY: Taylor & Francis. That procedure is not exact and has a potential limitation of not giving a single estimate for the scale's reliability (the simple sum score's reliability), as discussed there. With 5 or more levels/values on each item, there's a better procedure outlined in the book for point and interval estimation of the scale's reliability, which uses ML

 Mr Mo DANG-ARNOUX posted on Friday, July 29, 2011 - 3:35 am
Dear Bengt and Tenko,

Many thanks for your prompt response to an actually rather complex issue!

Do you authorize me to forward your response to SEMNET in order to complement the discussion I started there?

Tenko, having browsed your 2011 book with Marcoulides on Amazon, it seems to fit my current needs very well indeed. It should provide me with a solid reference for understanding fundamental issues in psychometric studies and models, especially from the latent variable viewpoint, as well as a detailed treatment of advanced topics such as the reliability of ordinal items. I am definitely ordering it!

I think works such as your book should help toward a more widespread adoption of better-suited alternatives to Cronbach's alpha. Still, given most non-statistician researchers' familiarity with alpha, I wonder whether these alternatives will be readily understood when submitting research to non-methodologically oriented journals.

Best regards,
 Brandon Earl Fleming posted on Monday, August 08, 2011 - 5:50 am
I am new to Mplus but I want to calculate the confidence interval for AVE and CR.

There is a paper: A Comparison of Three Confidence Intervals of Composite Reliability of A Unidimensional Test (YE Bao-Juan 2011) that mentions in the abstract (English) that "...results could be directly obtained by using SEM software Mplus that automatically calculates the confidence interval with Delta method and presents the confidence interval." However, the paper itself is written in Chinese, which I am unfamiliar with.

How do I calculate these statistics with their corresponding confidence intervals?
 Linda K. Muthen posted on Monday, August 08, 2011 - 1:33 pm
This question is for a more general discussion forum like SEMNET. Once you determine how to calculate these statistics, you can use the CINTERVAL option to obtain confidence intervals.
 Brandon Earl Fleming posted on Wednesday, August 10, 2011 - 9:18 am
Oh, thank you for the reply. I have already calculated the statistics (AVE and CR) in Excel. But from my understanding, to use CINTERVAL I would need to calculate them directly in Mplus. Is this correct?

What are the steps for reading the parameter estimates and error variances into a formula in Mplus?

Could you direct me to a page in the User's Guide, or to an example online?

 Linda K. Muthen posted on Wednesday, August 10, 2011 - 10:35 am
Yes, to obtain confidence intervals, the parameters need to be part of the model. See MODEL CONSTRAINT in the user's guide.
 Brandon Earl Fleming posted on Tuesday, August 16, 2011 - 6:48 pm
Thank you, that was incredibly helpful. I was able to get the factor loadings into MODEL CONSTRAINT, but how do you get the outputted values for the error variances into MODEL CONSTRAINT?
 Linda K. Muthen posted on Wednesday, August 17, 2011 - 1:09 pm
I think you have a residual variance in your model but want to use the variance in MODEL CONSTRAINT. If that is so, you must define the variance as a new parameter using the components from your model.
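A minimal sketch of what that might look like, with all names (f, y1-y3, psi, theta1) hypothetical: the model labels the estimated components, and MODEL CONSTRAINT rebuilds the model-implied total variance of an indicator as a new parameter:

```
! Sketch only: defining a model-implied indicator variance
! from labeled model components (illustrative names).
MODEL:
  f BY y1          ! first loading fixed at 1 by default
       y2-y3;
  y1 (theta1);     ! residual variance of y1
  f  (psi);        ! factor variance
MODEL CONSTRAINT:
  NEW(v1);
  v1 = psi + theta1;   ! model-implied Var(y1), since its loading is 1
```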
 Stephen Teo posted on Thursday, March 14, 2013 - 9:28 pm

Is it possible to compute AVE on mplus?

 Linda K. Muthen posted on Friday, March 15, 2013 - 9:02 am
There is no option for AVE, but people have computed it. You may want to ask on SEMNET to see if someone has the input for that.
 Gwo-Bao Liou posted on Tuesday, June 25, 2013 - 3:31 pm
I am new to this and am trying to calculate composite reliability and average variance extracted.

The ways to estimate them are:

Composite Reliability = (Sum of standardized loadings)**2 / ((Sum of standardized loadings)**2 + Sum of indicators' residual variances)

Average Variance Extracted = Sum of squared standardized loadings / (Sum of squared standardized loadings + Sum of indicators' residual variances)

My questions are:
1) If the ways estimating Composite Reliability and Average Variance Extracted have anything incorrect, please let me know.

2) Should I use ‘standardized’ residual variances of the indicators in the processes? Or should I use ‘raw’ residual variances of the indicators in the processes?

3) I set seven indicators as categorical items in the CFA analysis (the default estimator for categorical data analysis is WLSMV in Mplus). Mplus outputs showed no residual variance for these categorical items. Is there a way to obtain the residual variances for these categorical items. Or should I use the other ways to estimate composite reliability and average variance extracted for these categorical items?

Thank you very much!!
 Bengt O. Muthen posted on Tuesday, June 25, 2013 - 6:04 pm
1) That looks correct. The reliability formula assumes that the factor variance is set to 1. You can also check these general issues which are not Mplus specific on SEMNET. See also the Raykov-Marcoulides book "Introduction to Psychometric Theory".

2) Use raw estimates

3) I don't think this reliability formula is for categorical items. Again, ask on SEMNET.
 Gwo-Bao Liou posted on Tuesday, June 25, 2013 - 9:58 pm
Dear Dr. Muthén:

A lot of thanks for your rapid response!!
Your valuable comments are very helpful to me.
I just got the book that you suggested from Amazon four days ago. :D

Thank you very much again!!
 Gwo-Bao Liou posted on Wednesday, June 26, 2013 - 11:12 am
Dear Dr. Muthén:

Sorry to bother you again with the same question!!
I discussed with my friend (who also uses Mplus) the choice between "standardized" and "raw" estimates when calculating composite reliability and average variance extracted. My friend suggested that we should use either all raw estimates or all standardized estimates for both the factor loadings and the indicators' residual variances. Is my friend right? Or should I use "standardized" estimates for the factor loadings but "raw" estimates for the indicators' residual variances?
 Bengt O. Muthen posted on Wednesday, June 26, 2013 - 2:21 pm
Yes, use raw estimates everywhere.
 Gwo-Bao Liou posted on Wednesday, June 26, 2013 - 3:01 pm
Dear Dr. Muthén:

Thank you very much for your rapid interpretation!!

I deeply appreciate your teaching!!
 Tyler Moore posted on Thursday, September 04, 2014 - 9:36 am
Hi Bengt/Linda,

I'm estimating a straightforward bifactor model with continuous indicators, and was wondering if the latest version of Mplus can output coefficient omega. Also, I heard it is fairly simple to get confidence intervals for omega using Mplus output, but am having trouble finding specific instructions for that. How do I get those CIs?

 Bengt O. Muthen posted on Thursday, September 04, 2014 - 2:33 pm
There is a FAQ on our website for estimating omega with a single factor:

Omega coefficient in Mplus

This follows the formulas in the Raykov-Marcoulides book.

With a bi-factor model you have to decide what the "true score variable" is - is it the general factor or the general plus the specific? Not sure if the book covers that.
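One hedged way to make that decision explicit, sketched with hypothetical names (general factor g over six items y1-y6, a single specific factor s over y4-y6, all factor variances fixed at 1, factors orthogonal): counting only the general factor in the numerator gives an omega-hierarchical-type coefficient, while also adding the specific factor's contribution treats general-plus-specific as the true score. This is an illustration, not syntax from the FAQ:

```
! Sketch only: two omega variants for a simple bifactor model.
MODEL:
  g BY y1-y6* (g1-g6);
  s BY y4-y6* (s4-s6);
  y1-y6 (e1-e6);
  g@1; s@1;
  g WITH s@0;
MODEL CONSTRAINT:
  NEW(omega_h omega_t);
  ! general factor only as "true score"
  omega_h = (g1+g2+g3+g4+g5+g6)**2 /
            ((g1+g2+g3+g4+g5+g6)**2 + (s4+s5+s6)**2
             + e1+e2+e3+e4+e5+e6);
  ! general plus specific as "true score"
  omega_t = ((g1+g2+g3+g4+g5+g6)**2 + (s4+s5+s6)**2) /
            ((g1+g2+g3+g4+g5+g6)**2 + (s4+s5+s6)**2
             + e1+e2+e3+e4+e5+e6);
```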
 Madison Aitken posted on Wednesday, October 15, 2014 - 12:03 pm

I'm attempting to compute the composite reliability for the factors in a CFA. Each factor has 5 categorical items (0, 1, 2). I used the input kindly provided above by Tina Freyburg but get results that differ from my hand calculations according to the Raykov formula. Following Dr. Muthen's suggestion above, I have used the values from my CFA output under the R-SQUARE Residual Variance heading rather than ve1, etc., in computing the values.

My input is as follows


F1 by Y1* (Lam1)
Y2 Y3 Y4 Y5 (Lam2-Lam5);
F2 by Y6* (Lam6)
Y7 Y8 Y9 Y10 (Lam7-Lam10);


NEW (Rel_1 Rel_2);

Rel_1=((Lam1+Lam2+Lam3+Lam4+Lam5)**2)/(((Lam1+Lam2+Lam3+Lam4+Lam5)**2) + (0.861 + 0.374 + 0.393 + 0.598 + 0.496));

Rel_2 = ((Lam6+Lam7+Lam8+Lam9+Lam10)**2)/
(((Lam6+Lam7+Lam8+Lam9+Lam10)**2)+ (0.690 + 0.333 + 0.612 + 0.503 + 0.322));

I would be so grateful for any insight into why this might be happening and how I can fix it. Thanks!
 Bengt O. Muthen posted on Wednesday, October 15, 2014 - 2:02 pm
You don't mention what "why this might be happening" refers to.

Note that the formula you use here is for continuous items, not for categorical items. The latter case is more complex and studied in

Raykov, T., Dimitrov, D. M., & Asparouhov, T. (2010). Evaluation of scale reliability with binary measures using latent variable modeling. Structural Equation Modeling, 17, 265-279.
 Cheng posted on Saturday, April 04, 2015 - 11:36 pm
Hi Muthen,
I have 3 different measures (e.g., belief, behavior, and knowledge). I conducted CFA on these three measures using Mplus. My first objective is to check validity using CFA (measurement models); the next stage is to run an SEM model (objective 2) based on the results from the measurement models (objective 1). For objective 1, is it necessary to calculate the composite reliability and AVE (average variance extracted)? Fit indices from the Mplus output indicate that the measurement model fits. However, the AVE is quite low for some subscales/factors. Can I proceed to objective 2 (the SEM model) even though the AVE is quite low for some factors (say 0.40 < the recommended 0.50)? I have seen many articles using CFA that report only the fit indices; very rarely does an author report the composite reliability and AVE. My question is: are the fit indices provided in Mplus sufficient to judge whether a measurement model is valid?
 Cheng posted on Saturday, April 04, 2015 - 11:38 pm
Hi Muthen,

Sorry, I have another 2 questions:

(1) Regarding the knowledge scale, which is binary (yes/no): in Mplus, I ran the CFA using the WLSMV estimator. The standardized correlation between the two factors is 1.012, which is more than 1. What can I do; should I delete more items? Does it mean that these two factors are highly correlated? Can one use the word "correlated" for a binary measure?

(2)If I want to calculate the composite reliability and AVE, what formula should I use for knowledge which is a binary measure? From your previous discussion, composite reliability should refer to Raykov et al 2010. How about AVE? Can we calculate AVE for binary measure?
 Cheng posted on Sunday, April 05, 2015 - 8:00 pm
Dear Muthen,
I have read Raykov's paper "Evaluation of scale reliability with binary measures using latent variable modeling". Can the scale reliability coefficient formula stated in the paper (page 269, formula (14)) be applied if the WLSMV estimator is used in the CFA? Is there any way I can compute AVE (average variance extracted) for a binary measure (yes/no responses)?
 Cesar Daniel Costa Ball posted on Monday, April 06, 2015 - 9:00 am
How does one write the syntax to calculate reliability in a CFA?
 Bengt O. Muthen posted on Monday, April 06, 2015 - 5:00 pm
Cheng, you had many posts. I will order the answers time-wise.

1. Sat 11:36

Q1: No.

Q2.: Yes.

Q3: Yes.
 Bengt O. Muthen posted on Monday, April 06, 2015 - 5:03 pm
Cheng, Sat 11:38

(1) Factor correlation greater than 1 implies that only one factor is needed. This can also happen when the CFA structure is too strict.

(2) I would not encourage using composite reliability or AVE with FA for binary items.
 Bengt O. Muthen posted on Monday, April 06, 2015 - 5:04 pm
Cheng, Sun 8

Please email Raykov to see what he thinks.
 Bengt O. Muthen posted on Monday, April 06, 2015 - 5:06 pm
Answer to Cesar Ball

See the FAQ "Omega coefficient in Mplus" on our website.
 Cheng posted on Monday, April 06, 2015 - 6:08 pm
Thank you very much for all your answers. Really appreciate it.
 Cesar Daniel Costa Ball posted on Monday, April 06, 2015 - 6:58 pm
I found the following:

"Estimating coefficient omega in Mplus for a 1-factor model with continuous items"

Is the same formula used for categorical and continuous data?
 Bengt O. Muthen posted on Tuesday, April 07, 2015 - 7:31 am
That formula is for continuous outcomes. Raykov has also written about this for categorical outcomes. See the 2010 SEM article with Mplus code:

Evaluation of Scale Reliability With Binary
Measures Using Latent Variable Modeling
 Cesar Daniel Costa Ball posted on Tuesday, April 07, 2015 - 9:25 am
thank you very much
 Alexander Tokarev posted on Sunday, September 25, 2016 - 9:48 am
Dear Drs. Muthen,

I wanted to ask you a few questions regarding reliability.

1) From reading earlier comments and your responses, I have noted an Mplus code for calculating omega reliability for a latent factor with continuous indicators. I was wondering, if I have ordinal indicators measured on a 5-point Likert scale, would it be OK to use the same code, or do I necessarily have to use the code you suggested for categorical/binary indicators from "Evaluation of Scale Reliability With Binary Measures Using Latent Variable Modeling"?

2) In your experience, would the procedures for ordered categorical (e.g., 5-point Likert scale) and continuous indicators give very different or somewhat similar results?

3) Given the omega code for continuous indicators, how would one adapt this formula to compute omega for a second-order latent factor that has 3 first-order latent factors and 4 indicators per factor? I thought I could first calculate omegas for the 3 first-order factors, but then I am not sure how to proceed, i.e., how they should be combined and the second-order factor taken into account.

Thank you

Kind regards

 Bengt O. Muthen posted on Monday, September 26, 2016 - 12:47 pm
1) You will need special code for categorical outcomes.

2) I don't have experience with this but I doubt it.

3) Check articles by Tenko Raykov or email him. One question is how one defines reliability here - for one of the factors only or all factors jointly.
 Alexander Tokarev posted on Tuesday, September 27, 2016 - 4:55 am
Thank you for your prompt reply