Multilevel reliability
 Naomi Dyer posted on Thursday, July 06, 2006 - 1:32 pm
Hello - does Mplus have a way to estimate the reliability of a factor at the between level? I have to report the reliability for the factor and wasn't sure if there was a way to do it with multilevel data - and in particular in Mplus.
Thank you for any assistance.
 Boliang Guo posted on Friday, July 07, 2006 - 12:52 am
See Prof. Muthén's multilevel CFA paper on the Mplus website.
 Naomi Dyer posted on Friday, July 07, 2006 - 6:21 am
Thanks, but there are quite a few papers on the website. Would you mind being more specific?
 Bengt O. Muthen posted on Friday, July 14, 2006 - 5:17 pm
Here is the reference:

Muthén, B. (1991). Multilevel factor analysis of class and student achievement components. Journal of Educational Measurement, 28, 338-354. (#37)
 Naomi Dyer posted on Monday, July 17, 2006 - 12:00 pm
Thank you. I read the article, but my understanding is that the reliability formula discussed there is for a single item (or a single composite score), unlike Cronbach's alpha, which is the reliability of x items making up a composite/factor. Can I just compute Cronbach's alpha using group means?
 Bengt O. Muthen posted on Monday, July 17, 2006 - 5:51 pm
 Sharyn L. Rosenberg posted on Wednesday, July 19, 2006 - 9:43 am
I am also interested in this question, although I don't really have an answer. If you compute Cronbach's alpha using group means, you are ignoring within-group variation. You are assuming consistency over people within each group. I am not sure what the purpose of your analysis is, but it seems like the reliability of a between-level factor would depend on within-person variation (typical Cronbach alpha), within-group variation (consistency of people within each group), and between-group variation. Could you do this as a 3-level HLM where you specify level 1 as the measurement model?
 Bengt O. Muthen posted on Wednesday, July 19, 2006 - 6:51 pm
I think the SEM literature shows how to do Cronbach's alpha in CFA form, so I suppose one can do Cronbach on both levels simultaneously (I am not a Cronbach alpha user myself).
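For concreteness, alpha itself is just a function of the item covariance matrix (the CFA-based alternatives replace this with model-implied quantities). A minimal Python sketch with made-up numbers; in principle the same formula could be applied to the between-level covariance matrix from a two-level model:

```python
import numpy as np

def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix:
    alpha = k/(k-1) * (1 - sum of item variances / total score variance)."""
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

# Three parallel items: variances 1.0, all covariances 0.5 (illustrative only)
cov = np.full((3, 3), 0.5)
np.fill_diagonal(cov, 1.0)
print(round(cronbach_alpha(cov), 3))  # 0.75
```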
 Naomi Dyer posted on Friday, July 21, 2006 - 7:35 am
I was recently directed to two articles that speak to this:

Miller, M. B. (1995). Coefficient alpha: A basic introduction from the perspectives of classical test theory and SEM. Structural Equation Modeling, 2(3), 255-273.

Raykov, T. (1997). Estimation of composite reliability for congeneric measures. Applied Psychological Measurement, 21(2), 173-184.

In the Raykov article, they propose a way to estimate reliability in an SEM by creating a "phantom" factor - seemingly a causal factor of the factor's items, with no error. The correlation between the phantom factor and the real factor would provide the reliability.

I would be interested to know your opinion on this technique if say I modeled a MCFA with a phantom variable at the between level to get the reliability.

Thank you!
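The quantity the phantom-factor trick recovers is equivalently a direct function of the CFA estimates: (sum of loadings)^2 * psi / ((sum of loadings)^2 * psi + sum of residual variances). A small Python sketch with hypothetical loadings and residual variances (not taken from either article):

```python
import numpy as np

def composite_reliability(loadings, resid_vars, factor_var=1.0):
    """Raykov's (1997) composite reliability for congeneric measures:
    rho = (sum lambda)^2 * psi / ((sum lambda)^2 * psi + sum theta)."""
    true_var = np.sum(loadings) ** 2 * factor_var
    return true_var / (true_var + np.sum(resid_vars))

# Made-up estimates for a three-indicator factor
rho = composite_reliability([0.8, 0.7, 0.6], [0.36, 0.51, 0.64])
print(round(rho, 3))
```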
 Naomi Dyer posted on Friday, July 21, 2006 - 1:41 pm
To add to this: is there a way of doing this in Mplus? I have tried to make a causal indicator at the between level per the other discussion postings, and I keep getting an error.
 Linda K. Muthen posted on Saturday, July 22, 2006 - 1:43 pm
We will answer this early next week after looking at the articles.
 Naomi Dyer posted on Monday, July 31, 2006 - 5:53 am
Great thanks, I look forward to your response.
 Kätlin Peets posted on Sunday, June 08, 2008 - 11:31 am
My question is whether it is possible to separate unique reliable variance from error variance when using multilevel modeling. Let's suppose I am examining whether children's aggression varies across different relationship types. Relationships (within level) would then be nested within individuals (between level). When I get the variance estimates at both levels, is it possible to know the extent of error vs. reliable variance? I know it is possible to do so when using social relations modeling.
 Bengt O. Muthen posted on Monday, June 09, 2008 - 9:07 am
It sounds like you are asking a question that comes up in the context of factor analysis. If you have a 2-level factor analysis model where, for some reason, individual is level 2 (as in growth modeling on long-format data) and level 1 is some nesting within person (like multiple time points or multiple indicators of a factor), then I can imagine thinking of the level-2 factor value as a unique reliable component (same for the level-2 item residuals), whereas the level-1 counterparts are unreliable sources.
 Ben Saville posted on Wednesday, May 13, 2009 - 10:38 am
Dr. Muthen,

I have a 2-level data structure (teachers nested in schools) with 72 items (6 factors with 12 items each), and I'm interested in assessing whether the proposed factor structure is appropriate. In other words, I'd like to determine whether the data support the 6 factors or a smaller number of factors, i.e., 1 or 2. The overall Cronbach's alpha for the 72 items, ignoring clustering, is 0.99, which if I understand correctly would suggest that all 72 items are measuring the same thing, i.e., there is only 1 factor. (By ignoring clustering, I mean treating all teacher observations as independent, so there are 20 teachers * 70 schools = 1400 observations.) I know that if I fit a CFA ignoring the clustering, I will get biased standard errors. However, a colleague has suggested that the correlations (and therefore Cronbach's alphas) will be unbiased regardless of whether I take the clustering into account. Is this true? I have attempted to fit a multilevel CFA model in Mplus, but I'm having a difficult time getting it to converge, which I think is due either to the high correlations or to the small number of clusters relative to the number of parameters. What other procedures exist in Mplus that can help me determine the best factor structure for these data? Thanks in advance.
 Linda K. Muthen posted on Thursday, May 14, 2009 - 9:45 am
Correlations will be different if you take complex survey features into account. You should do a TYPE=TWOLEVEL EFA where you ask for only one factor in the between part of the model. See Example 4.5.
 Franziska Zuniga posted on Friday, May 20, 2011 - 1:22 pm
Hello
I am trying to calculate the composite scale reliability of two-level data according to the paper by Raykov and Penev:

Raykov, T., & Penev, S. (2009). Estimation of maximal reliability for multiple-component instruments in multilevel designs. British Journal of Mathematical and Statistical Psychology, 62(1), 129-142.

In their example Mplus code, they have the following lines (variable names adapted):

MODEL: %BETWEEN%
SuppBTW BY SuppD2
SuppD7 (1) (P1)
SuppD9 (2) (P2);
SuppBTW (P7);
SuppD2 (P8);
SuppD7 (P9);
SuppD9 (P10);

%WITHIN%
SuppWTN BY
SuppD2
SuppD7 (1) (P1)
SuppD9 (2) (P2);
SuppD2 (P3);
SuppD7 (P4);
SuppD9 (P5);
SuppWTN (P6);

When I run the input file (Mplus 6.1), I get an error message that states:
*** ERROR in MODEL command
Unknown variable(s) in a BY statement: (1)
It seems that I can only put in the equality constraint (1) or the parameter label (P1), but not both. Could you help me solve this problem?
 Linda K. Muthen posted on Friday, May 20, 2011 - 1:55 pm
P1 and P2 are both parameter labels and equality constraints, so you don't need both. Just use (P1) and (P2).
 Franziska Zuniga posted on Friday, May 20, 2011 - 9:37 pm
Many thanks for your prompt help!
 Alan Johnson posted on Sunday, November 20, 2011 - 12:34 pm
Consequences of Reliability (or the lack of it!) in Dependent Variables

I am looking for a reference in the literature about the consequences of poor reliability for DVs.

Typically, the consequences of poor reliability is discussed in the context of the reliability of IVs, and the attenuation of observed relations with DVs. However, I want to consider the case where the IVs are shown to be reliable, but the DVs reliability is in question.

I think I know, from one of Bengt's web talks, that DVs with poor reliability result in inflated standard errors, as opposed to attenuated parameter estimates, but I don't think he gave a citation, and if he did I cannot find it now.

You wouldn’t happen to have one or more citations for this?

Alan R. Johnson
johnson@emlyon.com
 Bengt O. Muthen posted on Sunday, November 20, 2011 - 5:52 pm
I think you will find this in most regression/econometrics books. I see it on page 316 of the Wooldridge Introductory Econometrics text, for instance.

Measurement error (if well-behaved) ends up adding to the residual, and increasing residual variance increases the slope SE. No slope estimate bias.
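That point is easy to check by simulation; a quick Python sketch (made-up data, not from the thread) that adds classical measurement error to the DV only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 200_000, 2.0
x = rng.normal(size=n)
y = true_slope * x + rng.normal(size=n)      # DV measured without error
y_err = y + rng.normal(scale=2.0, size=n)    # same DV plus measurement error

def ols(x, y):
    """Slope and its standard error from simple OLS regression."""
    xc = x - x.mean()
    slope = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    resid = (y - y.mean()) - slope * xc
    se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (xc ** 2).sum())
    return slope, se

b_clean, se_clean = ols(x, y)
b_noisy, se_noisy = ols(x, y_err)
# Both slopes recover ~2.0; only the SE for the noisy DV is inflated
print(f"clean: b={b_clean:.3f}, se={se_clean:.4f}")
print(f"noisy: b={b_noisy:.3f}, se={se_noisy:.4f}")
```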
 Alan Johnson posted on Monday, November 21, 2011 - 1:03 am
Thank you, Bengt. Perhaps foolishly, I had been focusing my search on the psychometric literature.

I have my copy of Wooldridge open now, and I am seeing exactly the kind of explanation that I was looking for, with an authoritative citation to boot!
 Gregor Kappler posted on Wednesday, February 22, 2012 - 9:27 am
This thread has not yet discussed whether multilevel reliability can be used to estimate inter-rater reliability.

I want to analyze inter-rater reliability for a design in which three observers rate subjects (the CLUSTER variable) on several continuous items assumed to assess one latent variable r. I want to assess the inter-rater reliability of the latent variable via ICCs from a multilevel model:

MODEL:
%WITHIN%
rw BY r1 r2 (p2) r3 (p3) r4 (p4) r5 (p5) r6 (p6) r7 (p7) r8 (p8) r9 (p9) r10 (p10) r11 (p11);
%BETWEEN%
rb BY r1 r2 (p2) r3 (p3) r4 (p4) r5 (p5) r6 (p6) r7 (p7) r8 (p8) r9 (p9) r10 (p10) r11 (p11);

I then calculated the true (error-free) ICC = B/(B+W) of the latent factor r (from the variances of rw and rb), as in Muthén (1991), and interpret this ICC as the error-free inter-rater reliability.
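The arithmetic of that latent ICC is simple; a tiny Python sketch with hypothetical variance estimates for rb and rw (not actual output):

```python
def latent_icc(between_var, within_var):
    """ICC of a latent factor: between-level variance over total latent
    variance, B/(B+W), as in Muthen (1991)."""
    return between_var / (between_var + within_var)

# Hypothetical Mplus estimates: Var(rb) = 0.30, Var(rw) = 0.45
print(round(latent_icc(0.30, 0.45), 3))  # 0.4
```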

Is this a valid approach? Can you recommend further citations for this?

Thank you a lot!
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 3:07 pm
What is your level 2? How many level-2 units do you have?

Note that you cannot have several labels on a row without semicolons separating them.
 Melvin C Y posted on Wednesday, March 07, 2012 - 8:21 pm
Hi,

I'm trying to estimate maximal reliability for a 4-item factor. The unstandardized residual of one item (EEF2) at the between level is 0.000, SE = 0.004 (p = .962). The standardized residual is -0.004. I believe it is advisable to set the residual to zero in this case, but I can't seem to set it in the presence of parameter labels. I tried placing the command "EEF2@0" before (P9), but it didn't change anything. My syntax is below. Thank you.

%BETWEEN%
BEF BY EEF1
EEF2 (P1)
EEF3 (P2)
EEF4 (P3);

BEF (P9);
EEF1 (P10);
EEF2 (P11); !How do I set this to zero?
EEF3 (P12);
EEF4 (P13);

%WITHIN%
WEF BY EEF1
EEF2 (P1)
EEF3 (P2)
EEF4 (P3);

EEF1 (P4);
EEF2 (P5);
EEF3 (P6);
EEF4 (P7);
WEF (P8);
 Linda K. Muthen posted on Thursday, March 08, 2012 - 12:38 pm
Say EEF2@0; I don't see why you need a label for this parameter.
 Melvin C Y posted on Friday, March 09, 2012 - 12:21 am
I included the label P11 to calculate maximal reliability under MODEL CONSTRAINT, which takes into account both the within- and between-level item and factor parameters. But if the residual labeled P11 is zero or close to zero, then excluding it from the reliability calculation by fixing it to zero should not matter. Hope I have understood this correctly.
Thanks.
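For reference, the usual congeneric-measures formula for maximal reliability (of the optimally weighted composite) is psi*S / (1 + psi*S) with S = sum(lambda_i^2 / theta_i); a small Python sketch with made-up values. Note that each item enters only through lambda_i^2 / theta_i, so an item whose residual is fixed at zero cannot appear in the sum as written:

```python
def maximal_reliability(loadings, resid_vars, factor_var=1.0):
    """Maximal reliability of the optimally weighted composite of
    congeneric measures: rho = psi*S / (1 + psi*S),
    where S = sum(lambda_i^2 / theta_i)."""
    s = sum(l * l / t for l, t in zip(loadings, resid_vars))
    return factor_var * s / (1 + factor_var * s)

# Purely hypothetical loadings and residual variances for a 4-item factor
mr = maximal_reliability([0.8, 0.7, 0.6, 0.5], [0.36, 0.51, 0.64, 0.75])
print(round(mr, 3))
```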
 Gregor Kappler posted on Monday, April 23, 2012 - 4:39 am
Regarding my post on Wednesday, February 22, 2012 - 9:27 am:
Level 2 is raters. I have three raters.

The r2 (p2)... labels are one line each in my input - I removed line-breaks to be post-friendly.

(sorry for the very late reply)
 Linda K. Muthen posted on Monday, April 23, 2012 - 12:53 pm
Melvin:

Correct.
 Linda K. Muthen posted on Monday, April 23, 2012 - 1:39 pm
Gregor:

You cannot do multilevel modeling with only three raters. A minimum of 30-50 clusters is recommended. You might consider a single-level multitrait-multimethod model where trait is rater and method is what is rated. The multivariate analysis takes into account any lack of independence of observations.
 Benjamin Walsh posted on Thursday, May 28, 2015 - 10:21 am
I have a question about correcting for unreliability due to measurement error in a multilevel context. I want to limit the number of estimated parameters due to marginal sample size, so I will not use multiple indicators.

In a single-level model, one would correct for unreliability, for example, as follows:

XL by XMean@.982;
XMean@0.025;

where XL is the latent variable, XMean is the mean composite of the items measuring X, .982 is the square root of the alpha reliability, and .025 is (1 - reliability) * (variance of XMean).
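Those two fixed values are straightforward to compute; a tiny Python sketch with a hypothetical reliability of .90 and observed variance of 1.20 (the Mplus fixings would then be @.949 and @0.12):

```python
import math

def single_indicator_constants(reliability, observed_var):
    """Fixed values for a single-indicator latent variable:
    loading = sqrt(reliability),
    residual variance = (1 - reliability) * observed variance."""
    return math.sqrt(reliability), (1 - reliability) * observed_var

# Hypothetical reliability and observed variance, not from the post above
loading, resid = single_indicator_constants(0.90, 1.20)
print(round(loading, 3), round(resid, 3))  # 0.949 0.12
```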

My question is: how would one do this correction with XMean when it exists on the within and between levels?

Thanks!

Reference:
Coffman, D. L., & MacCallum, R. C. (2005). Using parcels to convert path analysis models into latent variable models. Multivariate Behavioral Research, 40, 235-259.
 Bengt O. Muthen posted on Thursday, May 28, 2015 - 1:38 pm
We show the unreliability correction in our FAQ "Measurement error in a single indicator."

I think you are saying that the XMean variable exists on both levels via the Mplus latent variable decomposition. If so, I think you want to do the correction on the Within level only, because that is where the measurement error shows up. On Between you would just say

XLB BY XMean; XMean@0;

making XLB identical to the latent between-level part of XMean.
 Benjamin Walsh posted on Friday, May 29, 2015 - 6:53 am
Yes that is accurate. Thanks Bengt, I will give that a try!

Regards,

Ben