Message/Author 

Naomi Dyer posted on Thursday, July 06, 2006  7:32 pm



Hello, does Mplus have a way to estimate the reliability of a factor at the between level? I have to report the reliability for the factor and wasn't sure if there was a way to do it with multilevel data, and in particular in Mplus. Thank you for any assistance. 


See Prof. Muthén's multilevel CFA paper on the Mplus website. 

Naomi Dyer posted on Friday, July 07, 2006  12:21 pm



Thanks; however, there are quite a few papers on the website. Would you mind being more specific? 


Here is the reference: Muthén, B. (1991). Multilevel factor analysis of class and student achievement components. Journal of Educational Measurement, 28, 338-354. (#37) 

Naomi Dyer posted on Monday, July 17, 2006  6:00 pm



Thank you. I read the article, but my understanding is that the reliability formula discussed in the article is for a single item (or a single composite score), unlike Cronbach's alpha, which is the reliability of x items making up a composite/factor. Can I just compute Cronbach's alpha using group means? 


Your statement is correct. I don't know the answer to your question, but perhaps that works. 


I am also interested in this question, although I don't really have an answer. If you compute Cronbach's alpha using group means, you are ignoring within-group variation. You are assuming consistency over people within each group. I am not sure what the purpose of your analysis is, but it seems like the reliability of a between-level factor would depend on within-person variation (typical Cronbach's alpha), within-group variation (consistency of people within each group), and between-group variation. Could you do this as a 3-level HLM where you specify level 1 as the measurement model? 


I think the SEM literature shows how to express Cronbach's alpha in CFA form, so I suppose one can compute Cronbach's alpha on both levels simultaneously (I am not a Cronbach's alpha user myself). 
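For reference, Cronbach's alpha is just a function of the item variances and the total-score variance. A minimal sketch with simulated (hypothetical) data, ignoring any multilevel structure:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 3 parallel items = true score + noise, so alpha should be high
rng = np.random.default_rng(0)
true_score = rng.normal(size=500)
items = np.column_stack([true_score + 0.5 * rng.normal(size=500) for _ in range(3)])
print(round(cronbach_alpha(items), 2))
```

With loadings of 1 and error SD 0.5 the inter-item correlation is 0.8, so the Spearman-Brown value for 3 items is about 0.92; the sample estimate should land close to that.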

Naomi Dyer posted on Friday, July 21, 2006  1:35 pm



I was recently directed to two articles that speak to this. The first is Miller, M. B. (1995). Coefficient alpha: A basic introduction from the perspectives of classical test theory and structural equation modeling. Structural Equation Modeling, 2(3), 255-273; the second is Raykov, T. (1997). Estimation of composite reliability for congeneric measures. Applied Psychological Measurement, 21(2), 173-184. In the Raykov article they propose a way to estimate reliability in a SEM by creating a "phantom" factor, which seemingly is a causal factor of the factor items, with no error. The correlation between the phantom factor and the real factor would provide the reliability. I would be interested to know your opinion on this technique if, say, I modeled a MCFA with a phantom variable at the between level to get the reliability. Thank you! 
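Raykov's composite reliability for congeneric measures reduces to a simple ratio once the CFA is estimated: true-score variance (sum of loadings, squared, times the factor variance) over total variance. A minimal sketch with hypothetical loading and residual-variance estimates (the phantom-factor trick is one way to get this quantity and its standard error out of SEM software; the arithmetic itself is just this):

```python
import numpy as np

def composite_reliability(loadings, resid_vars, factor_var=1.0):
    """Raykov's (1997) composite reliability for congeneric measures:
    rho = (sum lambda)^2 * psi / ((sum lambda)^2 * psi + sum theta)."""
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(resid_vars, dtype=float)
    true_var = lam.sum() ** 2 * factor_var
    return true_var / (true_var + theta.sum())

# Hypothetical estimates from a fitted single-level CFA
print(round(composite_reliability([0.8, 0.7, 0.6], [0.36, 0.51, 0.64]), 3))  # prints 0.745
```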

Naomi Dyer posted on Friday, July 21, 2006  7:41 pm



To add: is there a way of doing this in Mplus? I have tried to make a causal indicator at the between level per the other discussion postings, and I keep getting an error. 


We will answer this early next week after looking at the articles. 

Naomi Dyer posted on Monday, July 31, 2006  11:53 am



Great, thanks. I look forward to your response. 


I have a question: is it possible to separate unique reliable variance from error variance when using multilevel modeling? Let's suppose I am examining whether children's aggression varies across different relationship types. Thus, relationships (within level) would be nested within individuals (between level). So, when I get the variance estimates (at both levels), is it possible to know the extent of error vs. reliable variance? I know it is possible to do so when using social relations modeling. 


It sounds like you are asking a question that comes up in the context of factor analysis. So if you have a 2-level factor analysis model where for some reason individual is level 2 (as in growth modeling with data in long format) and level 1 is some nesting within person (like multiple time points or multiple indicators of a factor), then I can imagine thinking of the level-2 factor value as a unique reliable component (or the same for level-2 item residuals), whereas the level-1 counterparts are unreliable sources. 

Ben Saville posted on Wednesday, May 13, 2009  4:38 pm



Dr. Muthen, I have a 2-level data structure (teachers nested in schools) with 72 items (6 factors with 12 items each), and I'm interested in assessing whether the proposed factor structure is appropriate. In other words, I'd like to determine whether the data support the 6 factors, or whether they support a smaller number of factors, i.e., 1 or 2 factors. The overall Cronbach's alpha for the 72 items ignoring clustering is 0.99, which if I understand correctly would suggest all 72 items are measuring the same thing, or there is only 1 factor. (By ignoring clustering, I mean I treat all teacher observations as independent, so there are 20 teachers * 70 schools = 1400 observations.) I know that if I fit a CFA ignoring the clustering, I will get biased standard errors. However, a colleague has suggested that the correlations (and therefore Cronbach's alphas) will be unbiased regardless of whether I take the clustering into account. Is this true? I have attempted to fit a multilevel CFA model in Mplus, but I'm having a difficult time getting it to converge, which I think is due either to the high correlations or to the small number of clusters relative to the number of parameters. What other procedures exist in Mplus that can help me determine the best factor structure for these data? Thanks in advance. 


Correlations will be different if you take complex survey features into account. You should do a TYPE=TWOLEVEL EFA where you ask for only one factor in the between part of the model. See Example 4.5. 


Hello, I am trying to calculate the composite scale reliability of two-level data according to the paper by Raykov and Penev: Raykov, T., & Penev, S. (2009). Estimation of maximal reliability for multiple-component instruments in multilevel designs. British Journal of Mathematical and Statistical Psychology, 62(1), 129-142. In their example of the Mplus source code, they have the following lines (variables adapted):

MODEL:
%BETWEEN%
SuppBTW BY SuppD2
    SuppD7 (1) (P1)
    SuppD9 (2) (P2);
SuppBTW (P7);
SuppD2 (P8);
SuppD7 (P9);
SuppD9 (P10);
%WITHIN%
SuppWTN BY SuppD2
    SuppD7 (1) (P1)
    SuppD9 (2) (P2);
SuppD2 (P3);
SuppD7 (P4);
SuppD9 (P5);
SuppWTN (P6);

When running the input file (Mplus 6.1), I get an error message that states:

*** ERROR in MODEL command
Unknown variable(s) in a BY statement: (1)

It seems that I can only put in the equality constraint (1) or the parameter label (P1), but not both. Could you help me on how to solve this problem? Many thanks in advance! 


P1 and P2 are both parameter labels and equality constraints; you don't need both. Just use P1 and P2. 


Many thanks for your prompt help! 


Consequences of Reliability (or the lack of it!) in Dependent Variables

I am looking for a reference in the literature about the consequences of poor reliability for DVs. Typically, the consequences of poor reliability are discussed in the context of the reliability of IVs and the attenuation of observed relations with DVs. However, I want to consider the case where the IVs are shown to be reliable, but the DVs' reliability is in question. I think I know, from one of Bengt's web talks, that DVs with poor reliability result in inflated standard errors, as opposed to attenuated parameter estimates, but I don't think he gave a citation, and if he did I cannot find it now. You wouldn't happen to have one or more citations for this? Alan R. Johnson johnson@emlyon.com 


I think you will find this in most regression/econometrics books. I see it on page 316 of the Wooldridge Introductory Econometrics text, for instance. Measurement error (if well-behaved) ends up adding to the residual, and increasing residual variance increases the slope SE. No slope estimate bias. 
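The point above is easy to check by simulation: adding well-behaved measurement error to the DV leaves the OLS slope unbiased but inflates its standard error. A minimal sketch with simulated (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0
x = rng.normal(size=n)
y_true = beta * x + rng.normal(size=n)             # structural residual, SD = 1
y_noisy = y_true + rng.normal(scale=1.0, size=n)   # add measurement error to the DV

def ols_slope_and_se(x, y):
    """Simple-regression slope and its conventional standard error."""
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    resid = y - y.mean() - b * (x - x.mean())
    se = np.sqrt(resid.var(ddof=2) / (x.var(ddof=1) * (len(x) - 1)))
    return b, se

b_clean, se_clean = ols_slope_and_se(x, y_true)
b_noisy, se_noisy = ols_slope_and_se(x, y_noisy)
# Both slope estimates sit close to the true value 2.0, but the SE for the
# noisy DV is larger by roughly sqrt(2), since the residual variance doubles.
```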


Thank you Bengt, perhaps foolishly, I had been focusing my search on the Psychometric literature. I have my copy of Wooldridge open now, and I am seeing exactly the kind of explanation that I was looking for, with an authoritative citation to boot! 


This thread did not yet discuss whether multilevel reliability can be used to estimate interrater reliability. I want to analyze interrater reliability for a design of three observers rating subjects (CLUSTER variable) on several continuous items assumed to assess one latent variable r. I want to assess the interrater reliability of the latent variable as ICCs of a multilevel model: MODEL: %WITHIN% rw BY r1 r2 (p2) r3 (p3) r4 (p4) r5 (p5) r6 (p6) r7 (p7) r8 (p8) r9 (p9) r10 (p10) r11 (p11); %BETWEEN% rb BY r1 r2 (p2) r3 (p3) r4 (p4) r5 (p5) r6 (p6) r7 (p7) r8 (p8) r9 (p9) r10 (p10) r11 (p11); Then I calculated the true (error-free) ICC = B/(B+W) of the latent factor r (from the variances of rw and rb) as in Muthén (1991) and interpreted this ICC as the error-free interrater reliability. Is this a valid approach? Can you recommend further citations for this? Thank you a lot! 
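The ICC described above is just a ratio of the two estimated latent variance components. A minimal sketch with hypothetical variance estimates:

```python
def latent_icc(between_var: float, within_var: float) -> float:
    """Error-free ICC of the latent factor, as in Muthen (1991):
    between-level factor variance over total latent variance."""
    return between_var / (between_var + within_var)

# Hypothetical factor variance estimates from the two-level model
print(latent_icc(0.30, 0.70))  # prints 0.3
```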


What is your level 2? How many level2 units do you have? Note that you cannot have several labels on a row without semicolons separating them. 

Melvin C Y posted on Thursday, March 08, 2012  2:21 am



Hi, I'm trying to estimate maximal reliability for a 4-item factor. The unstandardized residual of one item (EEF2) at the between level is 0.000, SE = 0.004 (p = .962). The standardized residual is 0.004. I believe it is advisable to set the residual to zero in this case. But I can't seem to set it in the presence of parameter labels. I tried placing the command "EEF2@0" before (P9), but it didn't change anything. My syntax is below. Thank you.

%BETWEEN%
BEF BY EEF1
    EEF2 (P1)
    EEF3 (P2)
    EEF4 (P3);
BEF (P9);
EEF1 (P10);
EEF2 (P11); !How do I set this to zero?
EEF3 (P12);
EEF4 (P13);
%WITHIN%
WEF BY EEF1
    EEF2 (P1)
    EEF3 (P2)
    EEF4 (P3);
EEF1 (P4);
EEF2 (P5);
EEF3 (P6);
EEF4 (P7);
WEF (P8);


Say EEF2@0; I don't see why you need a label for this parameter. 

Melvin C Y posted on Friday, March 09, 2012  6:21 am



I included the label P11 to calculate maximal reliability under MODEL CONSTRAINT, which takes into account both the within- and between-level item and factor parameters. But if the residual labeled P11 is zero or close to zero, then excluding it from the calculation of reliability by fixing it to zero should not matter. Hope I have understood it correctly. Thanks. 


Regarding my post on Wednesday, February 22, 2012, 9:27 am: Level 2 is raters. I have three raters. The r2 (p2)... labels are one line each in my input; I removed line breaks to be post-friendly. (Sorry for the very late reply.) 


Melvin: Correct. 


Gregor: You cannot do multilevel modeling with only three raters. A minimum of 30-50 is recommended. You might consider a single-level multitrait-multimethod model where trait is rater and method is what is rated. The multivariate analysis takes into account any lack of independence of observations. 


I have a question about correcting for unreliability due to measurement error in a multilevel context. I want to limit the number of estimated parameters due to marginal sample size, so I will not use multiple indicators. In a single-level model, one would correct for unreliability, for example, as follows:

XL BY XMean@.982;
XMean@0.025;

where XL is the latent variable, XMean is the mean composite of items measuring X, .982 is the square root of the alpha reliability, and .025 is (1 - reliability) * (variance). My question is: how would one do this correction with XMean when it exists on the within and between levels? Thanks! Reference: Coffman, D. L., & MacCallum, R. C. (2005). Using parcels to convert path analysis models into latent variable models. Multivariate Behavioral Research, 40, 235-259. 
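The two fixed values in the single-level correction above follow directly from the reliability estimate and the observed variance. A minimal sketch (the reliability of 0.964 and observed variance of 0.694 are hypothetical values chosen to roughly reproduce the .982 and .025 quoted in the post):

```python
import math

def single_indicator_fix(reliability: float, observed_var: float):
    """Values to fix in the single-indicator correction described above:
    loading = sqrt(reliability),
    residual variance = (1 - reliability) * observed variance."""
    loading = math.sqrt(reliability)
    resid_var = (1 - reliability) * observed_var
    return loading, resid_var

loading, resid = single_indicator_fix(0.964, 0.694)
# loading is about .982 and resid about .025, matching the quoted syntax
```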


We show the unreliability correction in our FAQ "Measurement error in a single indicator." I think you are saying that the XMean variable exists on both levels via the Mplus latent variable decomposition. If so, I think you want to do the correction on only the within level, because that is where the measurement error shows up. On between you would just say

XLB BY XMean;
XMean@0;

making XLB identical to the latent between-level part of XMean. 


Yes that is accurate. Thanks Bengt, I will give that a try! Regards, Ben 
