Message/Author 

Daniel posted on Thursday, February 20, 2003  6:47 am



After standardizing, there are two kinds of parameters, Std and StdYX, based on different standardizing methods. Should I use StdYX to calculate composite reliability? What is your suggestion? Definition of composite reliability: (L1+...+Lk)**2 / [(L1+...+Lk)**2 + (Var(E1)+...+Var(Ek))], where Li = the standardized factor loading on the factor and Var(Ei) = the error variance associated with the individual indicator variable. 
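As a numeric illustration of the formula in this post (not Mplus output; the loadings below are invented), a small Python sketch:

```python
def composite_reliability(loadings, error_vars):
    """(sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    true_var = sum(loadings) ** 2
    return true_var / (true_var + sum(error_vars))

# Hypothetical standardized loadings for 4 indicators; with a standardized
# solution, each error variance is 1 minus the squared loading.
loadings = [0.7, 0.8, 0.6, 0.75]
error_vars = [1 - l ** 2 for l in loadings]

print(round(composite_reliability(loadings, error_vars), 3))
```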

bmuthen posted on Thursday, February 20, 2003  7:42 am



I am not familiar with the formula for composite reliability, but it would seem that the summation of item loadings without weighting them would require that they are standardized, and probably using StdYX. 

Anonymous posted on Tuesday, May 11, 2004  7:41 am



Prof. Muthen, I want to compute the reliability of 4 factors following your method (Multilevel CSA, 1994). How can I get each level 2 and level 1 factor variance and the related error variance from the Mplus 3.0 output? For the level 2 factor reliability, is the formula above the right one? 


If you are asking how to obtain model estimated variances for your factors, ask for TECH4 in the OUTPUT command. 

Anonymous posted on Tuesday, May 11, 2004  11:41 pm



Can TECH4 also give the error variance related to a specific factor, or do I have to compute the total/error variance for each factor? Thanks for your guidance! By the way, how can I register for this discussion list? 


TECH4 gives the model estimated variances for the latent variables in the model. Go to Getting Started under Documentation in the left margin to get an account. 

Kayhan posted on Wednesday, December 28, 2005  8:27 am



How should we compute composite reliability in SPSS, and what is the rule for choosing an item for weighting? 


You should ask SPSS support this question. I don't use SPSS. 

Ramin Azad posted on Monday, January 23, 2006  3:27 am



Dear Sir/Madam, I am wondering how I can compute composite reliability and Average Variance Extracted in SPSS or AMOS? Kind regards, Ramin 


I am not sure how these are defined in SPSS and AMOS. You should check their user's guides. 


Hi, Bengt. Good to see you at AERA a few weeks ago. I am interested in computing the composite reliability of a number of survey items that I have analyzed in a CFA model fitted via Mplus. In a series of SEM journal articles, Raykov demonstrated that the composite reliability for a set of continuous items is expressible as var(true) / [var(true) + var(residual)] within a CFA model, where var(true) is the sum across items of the variance explained for each item and var(residual) is the sum of the residual error variances. This is essentially an intraclass correlation coefficient. Of note, if the CFA model postulates a single factor with equal loadings and equal residual values, the value of this expression is identical to Cronbach's coefficient alpha. When loading values are unequal, this measure will yield more accurate reliability estimates than alpha (alpha will be underestimated).

In my present situation, however, I am analyzing ordered categorical outcomes. It is possible using MODEL CONSTRAINT in Mplus to compute var(true) and var(residual) for ordered categorical outcomes, just as one can do for continuous outcomes. I asked a colleague at AERA for his opinion of the usefulness of this practice, and he expressed doubt that the resulting statistic would be interpretable as a measure of composite reliability of the observed items. He thought it would instead be an index of reliability of the underlying latent y* values rather than the y values themselves. I am curious about this conclusion, though, given that direct modeling of the y's is mathematically identical to modeling of the y* values, per the discussion in the Mplus user's guide technical appendix.

I'm just getting started researching this issue in the literature, but did come across the following article: Roberts, Chris & McNamee, Roseanne. Assessing the reliability of ordered categorical scales using kappa-type statistics. Statistical Methods in Medical Research, 2005, 14: 493-514. On page 498, they present what they refer to as an intraclass kappa coefficient for ordered categorical data and express it as the ratio of between-subject to total variance. Do you have any thoughts/opinions you can share about the usefulness of the approach I proposed above (computing explained variance / [explained variance + residual variance] in Mplus) and the appropriateness of interpreting the resulting coefficient as a reliability estimate of the composite of the y values? Thanks so much, Tor 
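Tor's claim that this ratio reduces to Cronbach's alpha under equal loadings, and exceeds it otherwise, can be checked with a small numeric sketch. This is illustrative Python with made-up loadings, computing alpha from the model-implied covariance matrix of a one-factor model:

```python
def implied_cov(loadings, error_vars):
    """Model-implied covariance matrix of a one-factor model (factor variance 1)."""
    k = len(loadings)
    return [[loadings[i] * loadings[j] + (error_vars[i] if i == j else 0.0)
             for j in range(k)] for i in range(k)]

def alpha(cov):
    """Cronbach's alpha from a covariance matrix."""
    k = len(cov)
    total = sum(sum(row) for row in cov)
    diag = sum(cov[i][i] for i in range(k))
    return (k / (k - 1)) * (1 - diag / total)

def composite_rel(loadings, error_vars):
    s = sum(loadings) ** 2
    return s / (s + sum(error_vars))

equal = [0.7] * 4
unequal = [0.9, 0.8, 0.6, 0.3]
ev_equal = [1 - l * l for l in equal]
ev_unequal = [1 - l * l for l in unequal]

# Equal loadings: alpha equals the composite reliability.
print(alpha(implied_cov(equal, ev_equal)), composite_rel(equal, ev_equal))
# Unequal loadings: alpha is smaller (it underestimates).
print(alpha(implied_cov(unequal, ev_unequal)), composite_rel(unequal, ev_unequal))
```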


Big topic. In factor analysis with categorical outcomes, the logit or probit regression of an item on the factor(s) is a regular logistic or probit regression and can therefore be expressed in terms of a y* dependent variable that has a linear regression on the factor(s) and that is crudely observed through a categorical y. While those two formulations are identical for regression relations, reliability is another matter. The y* variables have residual variances, although these are not identifiable parameters, but instead remainders adding up to a y* variance of one. So one can formulate reliability in terms of y*, and this was done by Linda Muthen in her 1983 dissertation. Others have picked up that idea too; I think Marcoulides wrote something on it later on. This type of reliability is indirectly relevant for the observed items. But you have to ask what kind of aggregation of the items you are interested in getting the reliability for. Is it for the sum? Or shouldn't you simply ask: what is the reliability of the factor scores that you get? The answer to that question is in the IRT literature and points to information curves, graphics for which will be included in the forthcoming Mplus Version 4.1. 


Thank you, Bengt. This is very helpful. Do you have any favorite references that discuss the use/interpretation of information curves as they will be implemented in Mplus 4.1? It would be helpful to read up on these in advance of the release of v4.1 of Mplus so that I can hit the ground running when 4.1 becomes available. Thanks again, Tor 


Two good sources are the books: Hambleton & Swaminathan (1985). Item Response Theory. du Toit (2003). IRT from SSI. 


I'm trying to replicate with Mplus Raykov's procedure for evaluating reliability, as shown in http://www.ssicentral.com/lisrel/techdocs/reliabil.pdf The (more or less) direct translation from Lisrel to Mplus of his model for a non-congeneric scale is: MODEL: r1 BY y1@1; r2 BY y2@1; r3 BY y3@1; r4 BY y4@1; r5 BY y5@1; r6 BY y6@1; y1-y6@0; f1 BY r1* (l1) r2-r4 (l2-l4); f2 BY r3* (m1) r4-r6 (m2-m4); f1-f2@1; f3 BY; f3@0; f3 ON r1-r6@1; f4 BY; f4@0; f4 ON f1 (o1) f2 (o2); f3 WITH f4@0; Model constraint: o1=l1+l2+l3+l4; o2=m1+m2+m3+m4; Output: TECH4; where the first factor loads on items y1-y4 and the second factor loads on items y3-y6. The reliability of the sum score of the observed variables is estimated by the quotient between the estimate of the true composite variance (F4) and the variance of the composite (F3), both reported in TECH4. Now I want to use this model with ordinal data (and the WLSMV estimator) and a very similar non-congeneric scale with 6 items and two factors. My questions are: what adjustments do I need to make? Is it possible to run this model under the Delta parameterization? And if it is necessary to use the Theta parameterization, what other adjustments do I need to make? Thanks in advance, Fernando. 


Without me getting into the tech doc you refer to, here are some reactions, first regarding the setup you show for continuous outcomes and then for categorical outcomes. Do you really need to complicate the input by defining the factors r1-r6 behind each observed outcome? Why not have f1 and f2 measured by y1-y6 directly? I don't think statements of the kind F3 BY; work. You can define the factor by any observed variable and not adversely impact the model by fixing the loading @0. When switching to categorical outcomes, the question is whether you want to work with ML or WLSMV estimation, which is also a choice between a logit and a probit model. I guess in this situation it makes a difference if you put a factor behind each of the outcomes, because the regression of f3 on r1-r6 is a regression on continuous latent response variables, whereas the regression of f3 on y1-y6 is a regression on categorical outcomes with ML (continuous latent response variables with WLSMV). Which to choose is a research question that I don't get into here. With WLSMV, I don't see that the Delta/Theta choice makes a difference. 


Thank you very much for your quick and enriching answer and … wonderful! I did a direct translation from Lisrel to Mplus (and it worked), but I forgot that Mplus is more flexible, so in Mplus the r1-r6 aren't needed. The code reduces to: MODEL: f1 BY y1* (l1) y2-y4 (l2-l4); f2 BY y3* (m1) y4-y6 (m2-m4); f1-f2@1; f3 BY; f3@0; f3 ON y1-y6@1; f4 BY; f4@0; f4 ON f1 (o1) f2 (o2); f3 WITH f4@0; Model constraint: o1=l1+l2+l3+l4; o2=m1+m2+m3+m4; Output: Standardized; TECH4; And it gives the same reliability results as Raykov's. The statements f3 BY; and f4 BY; do their work perfectly (and they give the same results as, e.g., f3 BY y1@0). When trying this model with ordinal data (y1-y6) and the WLSMV estimator, I receive the message: "The model is not supported by DELTA parameterization. Use THETA parameterization." I have done so, and then the model works. Nevertheless, when using the ML estimator you get an error: "Internal Error Code: PR1004 - Parameter restriction split problem". Many thanks, Fernando. 


Thanks for exploring; I learned something new. I see now that the Delta parameterization won't work, because the model falls into the category described in the User's Guide as "categorical dependent variable is both influenced by and influences either another observed dependent variable or a latent variable." The ML issue is one of Model constraint implementation, where currently parameters of certain different types cannot be part of the same constraint. 


When using Mplus, AVE is calculated by taking the standardized estimated loadings for each item within its respective construct. The value of each estimate is squared and then summed to create the numerator of the AVE statistic. The same estimates are used in the denominator, only this time both the squared estimate and 1 minus the squared estimate are used. You can refer to Gefen, Straub and Boudreau (2000), who use the symbol lowercase lambda, l, to represent the value of the estimate. I'll use "E" to represent sigma (summation). The formula for generating the AVE statistic is then AVE = (E li^2) / ((E li^2) + (E (1 - li^2))). You then take the square root of the resulting statistic and place it within the correlation table of the latent constructs, which is generated using the "tech4" option in the Mplus OUTPUT command. That value is compared to the correlations of that construct with the other constructs in the model. If the square root of the AVE for a construct is above .50 and larger than its correlations with the other constructs, convergent and discriminant validity are said to be shown (Gefen et al. 2000). Hope this helps! 
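A small numeric sketch of this AVE computation in Python, with invented standardized loadings. Note that with standardized loadings the denominator equals the number of items, so AVE is just the mean squared loading:

```python
def ave(std_loadings):
    """AVE = sum(l^2) / (sum(l^2) + sum(1 - l^2)), per the post above."""
    num = sum(l ** 2 for l in std_loadings)
    den = num + sum(1 - l ** 2 for l in std_loadings)  # equals len(std_loadings)
    return num / den

# Hypothetical standardized loadings for one construct.
loadings = [0.7, 0.8, 0.6, 0.75]
value = ave(loadings)
print(round(value, 3), round(value ** 0.5, 3))  # AVE and its square root
```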


Dear all, I) With the help of your posts, I transferred Raykov's scale reliability into this command: ANALYSIS: parameterization=theta; MODEL: F1 BY item1* (l1) item2-item5 (l2-l5); item1-item5 (ve1-ve5); F1@1; MODEL CONSTRAINT: NEW (RELIABF1); RELIABF1 = (l1 + l2 + l3 + l4 + l5)**2 / ((l1 + l2 + l3 + l4 + l5)**2 + (ve1 + ve2 + ve3 + ve4 + ve5)); OUTPUT: Standardized tech4; However, I get an error message: THE MODEL ESTIMATION TERMINATED NORMALLY. THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 25. Parameter 25 refers to THETA ITEM1 ITEM1 25. Still, values for the estimates, residual variances and RELIABF1 are given. If I calculate the reliability by hand (squared sum of unstandardized estimates / (squared sum of unstandardized estimates + sum of residual variances)), the result is .80 instead of the given .040 (?!) II) I wasn't sure whether the first line in the MODEL command needs to be F1 BY item1* (l1) or simply F1 BY item1. If I do the latter, I get the same error message but a lower reliability (.76) calculated by hand. Thanks & warm wishes 


It sounds like you have categorical outcomes in which case free residual variance parameters cannot be identified, but are fixed at 1. 


Thank you very much for your answer, Bengt. It is true that I have categorical items measured on 5-point Likert scales. If I understood correctly, the produced factors are then continuous latent variables, right? I am very sorry, but I didn't understand in what way I can use your answer to deal with the error message. I would appreciate it very much if you explained it to me. Thank you very much! 


I forgot to say, that parameter 25 is theta of the first item. It doesn't matter whether I conduct the analysis separately for my three subscales or altogether. It is always the same error message that occurs, and the corresponding parameter is always the first item. 


You say: item1-item5 (ve1-ve5); But I assume you have stated: categorical = item1-item5; in which case those residual variances are not identified. You should remove the line: item1-item5 (ve1-ve5); I also assume that you are using the WLSMV estimator, in which case these residual variances are deduced (not estimated as free parameters) from the model and are printed when requesting the Standardized solution in the OUTPUT command. So your MODEL CONSTRAINT section will have to use those printed values instead of ve1-ve5. 
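As a numeric illustration of the "deduced" residual variances Bengt describes: assuming the Delta parameterization with a single factor fixed at variance 1, each latent response variable y* has total variance 1, so the printed residual variance is simply 1 minus the squared loading. The loadings below are made up:

```python
def deduced_residual_variances(loadings):
    """Residual variances implied by unit y* variance and a unit-variance factor."""
    return [1 - l ** 2 for l in loadings]

loadings = [0.55, 0.70, 0.62, 0.48, 0.66]  # hypothetical y* loadings
resid = deduced_residual_variances(loadings)
print([round(v, 4) for v in resid])

# These printed values can then be plugged into the reliability formula
# in place of the ve1-ve5 labels (this is reliability on the y* scale).
s = sum(loadings) ** 2
print(round(s / (s + sum(resid)), 3))
```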


Bengt! Great, thanks! This clarifies a lot. I indeed defined the items as categorical and I also used the WLSMV estimator. Unfortunately, I do not know how to use the printed values for the residual variances in the MODEL CONSTRAINT command. I tried it by removing the line that provides labels for the error variances of each item and replacing (ve1 + ve2 + ve3 + ve4 + ve5) with (y1 + y2 + y3 + y4 + y5) in the MODEL CONSTRAINT command. However, I receive the message that parameter label y1 is unknown. How do I refer to the residual variances? It would be great if you helped me once more. 


Instead of "ve1" etc, you put the values that are printed for them. 


Dear Prof Muthen, By the Residual Variances do you mean the so named column in the RSQUARE section of the output file, for item1 to item5 ? This information seems to be printed whether the Standardized option is requested or not. BTW, why is the RSQUARE section not printed when PARAMETERIZATION = THETA? Thank you in advance for your attention, Mo  Mr Mo DANGARNOUX Intern in Public Health Grenoble Medical School, France. 


Yes. This is printed both with and without the STANDARDIZED option. There is an Rsquare with the Theta parametrization. 


Dear Linda, Thank you very much for your prompt answer. Actually, with the THETA parametrization, the R-square section seems to be printed only if STANDARDIZED is requested in OUTPUT (I am using Mplus version 6.1). Anyway, I managed to compute Raykov's composite reliability formula by starting from Tina's syntax and applying Bengt's advice. Since the line item1-item5 (ve1-ve5); is removed, the THETA parametrization is no longer needed. The issue remains somewhat unclear to me, however, of how we may interpret Raykov's reliability in the case of ordinal items. In my understanding, it somehow measures the part of true variance in the underlying responses y*_i. How do we relate that to the internal consistency of the actual ordinal responses y_i? I have asked the question on SEMNET, and am waiting for an answer there. Best wishes, Mô 


Tenko Raykov just emailed this response: I am afraid I don't have a direct answer to the question of composite reliability with highly discrete items unless they're all binary (in which case our paper with Tihomir and Dimitrov, SEM, 2010, outlines a method of point and interval estimation as well as of the change in it due to revision; see also below in this message). A procedure for approximate point and interval estimation of reliability of the sum score of discrete items (with up to say 5 levels/values possible) is outlined in our recent book with Marcoulides, Intro to Psychometric theory, 2011, NY: Taylor & Francis. That procedure is not exact and has a potential limitation of not giving a single estimate for the scale's reliability (simple sum score's reliability), as discussed there. With 5 or more levels/values on each item, there's a better procedure outlined in the book for point and interval estimation of the scale's reliability, which uses ML robust. Tenko 


Dear Bengt and Tenko, Many thanks for your prompt response to an actually rather complex issue! Do you authorize me to forward your response to SEMNET in order to complement the discussion I started there? Tenko, having browsed your 2011 book with Marcoulides on Amazon, it seems to fit very well my current needs indeed. It should provide me a solid reference to understand fundamental issues in psychometric studies and models, esp. from the latent variable viewpoint. And a detailed treatment of advanced topics such as reliability of ordinal items. I am definitely ordering it! I think works such as your book should help towards a more widespread adoption of better suited alternatives to Cronbach alpha. Still, given the familiarity of most nonstatistician researchers with alpha, I am wondering whether using alternatives will be readily understood when submitting research in nonmethodologically oriented journals? Best regards, Mo 


I am new to Mplus but I want to calculate the confidence interval for AVE and CR. There is a paper: A Comparison of Three Confidence Intervals of Composite Reliability of A Unidimensional Test (YE BaoJuan 2011) that mentions in the abstract (English) that "...results could be directly obtained by using SEM software Mplus that automatically calculates the confidence interval with Delta method and presents the confidence interval." However, the paper itself is written in Chinese, which I am unfamiliar with. How do I calculate these statistics with their corresponding confidence intervals? 


This question is for a more general discussion forum like SEMNET. Once you determine how to calculate these statistics, you can use the CINTERVAL option to obtain confidence intervals. 


Oh, thank you for the reply. I have already calculated the statistics (AVE and CR) in Excel. But from my understanding, to use CINTERVAL I would need to calculate them directly in Mplus. Is this correct? What are the steps for reading the parameter estimates and error variances into a formula in Mplus? Could you direct me to a page in the User's Guide, or an example online? Thanks. 


Yes, to obtain confidence intervals, the parameters need to be part of the model. See MODEL CONSTRAINT in the user's guide. 
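For intuition about what CINTERVAL does with a NEW parameter defined in MODEL CONSTRAINT, here is a rough delta-method sketch in Python. The estimates, their sampling variances, and the diagonal (independence) simplification are all made up for illustration; Mplus uses the full covariance matrix of the parameter estimates:

```python
import math

def composite_rel(params):
    """CR from a parameter vector: first 4 entries are loadings, rest are residual variances."""
    lam, theta = params[:4], params[4:]
    s = sum(lam) ** 2
    return s / (s + sum(theta))

def delta_se(f, params, variances, h=1e-6):
    """Delta-method SE with a numeric gradient and a diagonal covariance matrix."""
    grad = []
    for i in range(len(params)):
        up = params[:]; up[i] += h
        dn = params[:]; dn[i] -= h
        grad.append((f(up) - f(dn)) / (2 * h))  # central-difference partial
    return math.sqrt(sum(g * g * v for g, v in zip(grad, variances)))

est = [0.7, 0.8, 0.6, 0.75, 0.51, 0.36, 0.64, 0.4375]  # made-up estimates
var = [0.05 ** 2] * 8                                  # made-up sampling variances
rel = composite_rel(est)
se = delta_se(composite_rel, est, var)
# Point estimate and a 95% normal-theory interval, as CINTERVAL would report.
print(round(rel, 3), round(rel - 1.96 * se, 3), round(rel + 1.96 * se, 3))
```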


Thank you, that was incredibly helpful. I was able to get the factor loadings into MODEL CONSTRAINT, but how do you get the outputted values for the error variances into MODEL CONSTRAINT? 


I think you have a residual variance in your model but want to use the variance in MODEL CONSTRAINT. If that is so, you must define the variance as a new parameter using the components from your model. 

Stephen Teo posted on Thursday, March 14, 2013  9:28 pm



Hi, is it possible to compute AVE in Mplus? Thanks 


There is no option for AVE, but people have computed it. You may want to ask on SEMNET to see if someone has the input for that. 


I am new and trying to calculate composite reliability and average variance extracted. The ways to estimate them are: Composite Reliability = (Sum of standardized loadings)**2 / ((Sum of standardized loadings)**2 + Sum of indicators' residual variances) Average Variance Extracted = Sum of squared standardized loadings / (Sum of squared standardized loadings + Sum of indicators' residual variances) My questions are: 1) If the ways of estimating Composite Reliability and Average Variance Extracted have anything incorrect, please let me know. 2) Should I use 'standardized' residual variances of the indicators in these calculations, or 'raw' residual variances? 3) I set seven indicators as categorical items in the CFA analysis (the default estimator for categorical data analysis is WLSMV in Mplus). The Mplus output showed no residual variances for these categorical items. Is there a way to obtain the residual variances for these categorical items, or should I use other ways to estimate composite reliability and average variance extracted for these categorical items? Thank you very much!! 


1) That looks correct. The reliability formula assumes that the factor variance is set to 1. You can also check these general issues, which are not Mplus specific, on SEMNET. See also the Raykov-Marcoulides book "Introduction to Psychometric Theory". 2) Use raw estimates. 3) I don't think this reliability formula is for categorical items. Again, ask on SEMNET. 


Dear Dr. Muthén: A lot of thanks for your rapid response!! Your valuable comments are very helpful to me. I just got the book that you suggested from Amazon four days ago. :D Thank you very much again!! 


Dear Dr. Muthén: Sorry to bother you again with the same question!! I discussed with my friend (who is also using Mplus) the choice between "standardized" and "raw" estimates in calculating composite reliability and average variance extracted. My friend suggested that we should use all raw estimates or all standardized estimates, for both the factor loadings and the indicators' residual variances, in the two calculations. Is my friend right? Or should I use "standardized" estimates for the factor loadings but "raw" estimates for the indicators' residual variances? 


Yes, use raw estimates everywhere. 


Dear Dr. Muthén: Thank you very much for your rapid interpretation!! I am deeply appreciated for your teaching!! 

Tyler Moore posted on Thursday, September 04, 2014  9:36 am



Hi Bengt/Linda, I'm estimating a straightforward bifactor model with continuous indicators, and was wondering if the latest version of Mplus can output coefficient omega. Also, I heard it is fairly simple to get confidence intervals for omega using Mplus output, but am having trouble finding specific instructions for that. How do I get those CIs? Thanks! 


There is a FAQ on our website for estimating omega with a single factor: "Omega coefficient in Mplus". This follows the formulas in the Raykov-Marcoulides book. With a bifactor model you have to decide what the "true score variable" is: is it the general factor, or the general plus the specific? Not sure if the book covers that. 
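The choice Bengt describes can be made concrete with a small sketch. Assuming an orthogonal bifactor model with unit-variance factors and made-up loadings, omega-hierarchical counts only the general factor as true score, while omega-total counts the general plus the specific factors:

```python
def omegas(gen, spec, resid):
    """Omega-hierarchical and omega-total for an orthogonal bifactor model.

    gen   : general-factor loadings, one per item
    spec  : list of loading lists, one per specific factor
    resid : residual variances, one per item
    """
    g = sum(gen) ** 2
    s = sum(sum(col) ** 2 for col in spec)  # one (sum of loadings)^2 per specific factor
    total_var = g + s + sum(resid)          # variance of the unweighted sum score
    return g / total_var, (g + s) / total_var

# Hypothetical loadings: 6 items, one general factor, two specific factors
# with 3 items each, all factors uncorrelated with variance 1.
gen = [0.6, 0.6, 0.5, 0.5, 0.7, 0.7]
spec = [[0.4, 0.4, 0.3], [0.35, 0.3, 0.3]]
resid = [0.3] * 6

omega_h, omega_t = omegas(gen, spec, resid)
print(round(omega_h, 3), round(omega_t, 3))
```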


Hello, I'm attempting to compute the composite reliability for factors in a CFA. Each factor has 5 categorical items (0, 1, 2). I used the input above kindly provided by Tina Freyburg but get results that differ from my hand calculations according to the Raykov formula. Following Dr. Muthen's suggestion above, I have used the values from my CFA output under the R-SQUARE Residual Variance heading rather than the ve1 etc. in computing the values. My input is as follows: ANALYSIS: type=complex; estimator=wlsmv; parameterization=theta; MODEL: F1 by Y1* (Lam1) Y2 Y3 Y4 Y5 (Lam2-Lam5); F2 by Y6* (Lam6) Y7 Y8 Y9 Y10 (Lam7-Lam10); F1@1; F2@1; MODEL CONSTRAINT: NEW (Rel_1 Rel_2); Rel_1 = ((Lam1+Lam2+Lam3+Lam4+Lam5)**2) / (((Lam1+Lam2+Lam3+Lam4+Lam5)**2) + (0.861 + 0.374 + 0.393 + 0.598 + 0.496)); Rel_2 = ((Lam6+Lam7+Lam8+Lam9+Lam10)**2) / (((Lam6+Lam7+Lam8+Lam9+Lam10)**2) + (0.690 + 0.333 + 0.612 + 0.503 + 0.322)); I would be so grateful for any insight into why this might be happening and how I can fix it. Thanks! 


You don't mention what "why this might be happening" refers to. Note that the formula you use here is for continuous items, not for categorical items. The latter case is more complex and is studied in: Raykov, T. (Michigan State University), Dimitrov, D. M. (George Mason University), & Asparouhov, T. (2010). Evaluation of Scale Reliability With Binary Measures Using Latent Variable Modeling. Structural Equation Modeling, 17, 265-279. 

Cheng posted on Saturday, April 04, 2015  11:36 pm



Hi Muthen, I have 3 different measures here (e.g., belief, behavior and knowledge). I conducted CFA on these three measures using Mplus. My first objective is to check validity using CFA (measurement models), and the next stage is to run an SEM model (objective 2) based on the results from the measurement models (objective 1). In objective 1, is it necessary to calculate the composite reliability and AVE (average variance extracted)? Fit indices from the Mplus output for the measurement model indicate fit. However, the AVE is quite low for some subscales/factors. I just wonder, can I proceed to objective 2 (the SEM model) even though my AVE is quite low for some factors (e.g., 0.40 < the recommended 0.50)? I have seen many articles using CFA analysis that reported only the fit indices; very rarely do authors report the composite reliability and AVE. My question is: are the fit indices provided in Mplus sufficient to judge whether a measurement model is valid? 

Cheng posted on Saturday, April 04, 2015  11:38 pm



Hi Muthen, Sorry, I have another 2 questions: (1) Regarding the knowledge scale, which is binary (yes/no): in Mplus, I ran CFA using the WLSMV estimator. The standardized correlation between two factors is 1.012, which is more than 1. What can I do? Should I delete more items? Does it mean that these two factors are highly correlated? Can you use the word "correlated" for a binary measure? (2) If I want to calculate the composite reliability and AVE, what formula should I use for knowledge, which is a binary measure? From your previous discussion, composite reliability should refer to Raykov et al. 2010. How about AVE? Can we calculate AVE for a binary measure? 

Cheng posted on Sunday, April 05, 2015  8:00 pm



Dear Muthen, I have read Raykov's paper "Evaluation of scale reliability with binary measures using latent variable modeling". Can the scale reliability coefficient formula stated in the paper (page 269, formula (14)) be applied if the WLSMV estimator is used in the CFA? Is there any way I can compute AVE (average variance extracted) for a binary measure (yes and no responses)? 


How to write the syntax to calculate the reliability in a CFA? 


Cheng, you had many posts; I will answer them in time order. 1. Sat 11:36: Q1: No. Q2: Yes. Q3: Yes. 


Cheng, Sat 11:38: (1) A factor correlation greater than 1 implies that only one factor is needed. This can also happen when the CFA structure is too strict. (2) I would not encourage using composite reliability or AVE with FA for binary items. 


Cheng, Sun 8:00: Please email Raykov to see what he thinks. 


Answer to Ceasar Ball: See the FAQ "Omega coefficient in Mplus" on our website. 

Cheng posted on Monday, April 06, 2015  6:08 pm



Thank you very much for all your answers. Really appreciate it. 


I found the following: "Estimating coefficient omega in Mplus for a 1-factor model with continuous items". Is the same formula used for categorical and continuous data? 


That formula is for continuous outcomes. Raykov has also written about this for categorical outcomes. See the 2010 SEM article with Mplus code: Evaluation of Scale Reliability With Binary Measures Using Latent Variable Modeling 


thank you very much 


Dear Drs. Muthen, I wanted to ask you some questions regarding reliability. 1) From reading earlier comments and your responses, I have noted an Mplus code for calculating omega reliability for a latent factor with continuous indicators. I was wondering, if I have ordinal indicators measured on a 5-point Likert scale, would it be OK to use the same code, or do I necessarily have to use the code you suggested for categorical/binary indicators from "Evaluation of Scale Reliability With Binary Measures Using Latent Variable Modeling"? 2) From your experience, would the procedures for ordered categorical (e.g., 5-point Likert scale) and continuous indicators give very different or somewhat similar results? 3) Given the omega code for continuous indicators, how would one adapt this formula for measuring omega for a second-order latent factor that has 3 first-order latent factors and 4 indicators per factor? I thought I could first calculate omegas for the 3 first-order factors, but then I am not sure how to proceed next, i.e., how they should be combined and the second-order factor taken into account. Thank you. Kind regards, Alex 


1) You will need special code for categorical outcomes. 2) I don't have experience with this but I doubt it. 3) Check articles by Tenko Raykov or email him. One question is how one defines reliability here  for one of the factors only or all factors jointly. 


Thank you for your prompt reply 
