Naomi Dyer posted on Thursday, July 06, 2006 - 1:32 pm
Hello - does Mplus have a way to estimate the reliability of a factor at the between level? I have to report the reliability for the factor and wasn't sure if there was a way to do it with multilevel data - and in particular in Mplus. Thank you for any assistance.
Muthén, B. (1991). Multilevel factor analysis of class and student achievement components. Journal of Educational Measurement, 28, 338-354. (#37)
Naomi Dyer posted on Monday, July 17, 2006 - 12:00 pm
Thank you. I read the article but my understanding is that the reliability formula discussed in the article is for a single item (or a single composite score) - unlike Cronbach's alpha which is the reliability of x items making up a composite/factor. Can I just compute Cronbach's alpha using group means?
I am also interested in this question, although I don't really have an answer. If you compute Cronbach's alpha using group means, you are ignoring within-group variation. You are assuming consistency over people within each group. I am not sure what the purpose of your analysis is, but it seems like the reliability of a between-level factor would depend on within-person variation (typical Cronbach alpha), within-group variation (consistency of people within each group), and between-group variation. Could you do this as a 3-level HLM where you specify level 1 as the measurement model?
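Purely as an illustration of that point (simulated data, not Mplus output; all names hypothetical), alpha computed on group means will typically look far better than alpha on the raw scores, because averaging within groups shrinks the person-level and error variance while leaving the between-group variance intact:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_observations x k_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n_groups, n_per_group, k = 50, 20, 6
group_effect = rng.normal(size=(n_groups, 1, 1))          # shared between-level factor
person_effect = rng.normal(size=(n_groups, n_per_group, 1))
items = group_effect + person_effect + rng.normal(size=(n_groups, n_per_group, k))

raw = items.reshape(-1, k)      # all observations, clustering ignored
means = items.mean(axis=1)      # group means: within-group variation averaged away

alpha_raw = cronbach_alpha(raw)
alpha_means = cronbach_alpha(means)
```

Here alpha on the group means comes out noticeably higher than alpha on the raw data, even though the items and people are identical, which is one way of seeing that alpha on group means answers a different question.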
I think the SEM literature shows how to do Cronbach's alpha in CFA form, so I suppose one can do Cronbach on both levels simultaneously (I am not a Cronbach alpha user myself).
Naomi Dyer posted on Friday, July 21, 2006 - 7:35 am
I was recently directed to two articles that speak to this:
Miller, M. B. (1995). Coefficient alpha: A basic introduction from the perspective of classical test theory and structural equation modeling. Structural Equation Modeling, 2(3), 255-273.
Raykov, T. (1997). Estimation of composite reliability for congeneric measures. Applied Psychological Measurement, 21(2), 173-184.
In the Raykov article, they propose a way to estimate reliability in an SEM by creating a "phantom" factor - seemingly a causal factor of the factor's items, with no error. The correlation between the phantom factor and the real factor would provide the reliability.
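For a single-level congeneric factor with factor variance fixed at 1 and uncorrelated errors, the quantity the phantom-factor approach estimates reduces to a simple function of the loadings and error variances. A minimal sketch with made-up estimates (not from any fitted model):

```python
def composite_reliability(loadings, error_vars):
    """Raykov's (1997) composite reliability for congeneric measures:
    rho = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    assuming factor variance 1 and uncorrelated errors."""
    true_var = sum(loadings) ** 2
    return true_var / (true_var + sum(error_vars))

# hypothetical loading and residual-variance estimates,
# e.g. from the between level of a multilevel CFA
rho = composite_reliability([0.8, 0.7, 0.6, 0.75], [0.36, 0.51, 0.64, 0.44])
```

The phantom-factor setup in the article is one way of getting this ratio, and a standard error for it, directly out of the SEM software rather than computing it by hand afterwards.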
I would be interested to know your opinion on this technique if say I modeled a MCFA with a phantom variable at the between level to get the reliability.
Naomi Dyer posted on Friday, July 21, 2006 - 1:41 pm
To add: is there a way of doing this in Mplus? I have tried to specify a causal indicator at the between level per the other discussion postings, but I keep getting an error.
I have a question concerning whether it is possible to separate unique reliable variance from error variance when using multilevel modeling? Let’s suppose I am examining whether children’s aggression varies across different relationship types. Thus, relationships (within level) could be nested within individuals (between level). So, when I get the variance estimates (at both levels), is it possible to know the extent of error vs. reliable variance? I know it is possible to do so when using social relations modeling.
It sounds like you are asking a question that comes up in the context of factor analysis. If you have a 2-level factor analysis model where, for some reason, individual is level 2 (as in growth modeling in the long format) and level 1 is some nesting within person (such as multiple time points or multiple indicators of a factor), then I can imagine thinking of the level-2 factor value as a unique reliable component (and the same for level-2 item residuals), whereas the level-1 counterparts are unreliable sources.
Ben Saville posted on Wednesday, May 13, 2009 - 10:38 am
I have a 2-level data structure (teachers nested in schools) with 72 items (6 factors with 12 items each), and I'm interested in assessing whether the proposed factor structure is appropriate. In other words, I'd like to determine whether the data support the 6 factors, or whether they support a smaller number of factors, i.e., 1 or 2 factors. The overall Cronbach's alpha for the 72 items ignoring clustering is 0.99, which, if I understand correctly, would suggest all 72 items are measuring the same thing, i.e., there is only 1 factor. (By ignoring clustering, I mean treating all teacher observations as independent, so there are 20 teachers * 70 schools = 1400 observations.)

I know that if I fit a CFA ignoring the clustering, I will get biased standard errors. However, a colleague has suggested that the correlations (and therefore Cronbach's alphas) will be unbiased regardless of whether I take the clustering into account. Is this true? I have attempted to fit a multilevel CFA model in Mplus, but I'm having a difficult time getting it to converge, which I think is due either to the high correlations or to the small number of clusters relative to the number of parameters. What other procedures exist in Mplus that can help me determine the best factor structure for these data? Thanks in advance.
Hello, I am trying to calculate the composite scale reliability of two-level data according to the paper by Raykov and Penev: Raykov, T., & Penev, S. (2009). Estimation of maximal reliability for multiple-component instruments in multilevel designs. British Journal of Mathematical and Statistical Psychology, 62(1), 129-142. In their example Mplus source code, they have the following lines (variables adapted):
When running the input file (Mplus 6.1), I get an error message that states: *** ERROR in MODEL command Unknown variable(s) in a BY statement: (1) It seems that I can only put in the equality constraint (1) or the parameter label (P1), but not both. Could you help me solve this problem? Many thanks in advance!
Consequences of Reliability (or the lack of it!) in Dependent Variables
I am looking for a reference in the literature about the consequences of poor reliability for DVs.
Typically, the consequences of poor reliability are discussed in the context of the reliability of IVs and the attenuation of observed relations with DVs. However, I want to consider the case where the IVs are shown to be reliable, but the DVs' reliability is in question.
I think I know, from one of Bengt's web talks, that DVs with poor reliability result in inflated standard errors, as opposed to attenuated parameter estimates, but I don't think he gave a citation, and if he did I cannot find it now.
You wouldn’t happen to have one or more citations for this?
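The claim itself is easy to check by simulation (a sketch, not a citation): adding measurement error to the DV leaves the OLS slope essentially unbiased but inflates its standard error, whereas error in an IV would attenuate the slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)        # reliable DV
y_noisy = y + rng.normal(size=n)        # DV with added measurement error

def ols_slope_se(x, y):
    """OLS slope and its standard error for a simple regression."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    s2 = resid @ resid / (len(x) - 2)
    return b, np.sqrt(s2 / ((len(x) - 1) * np.var(x, ddof=1)))

b_clean, se_clean = ols_slope_se(x, y)
b_noisy, se_noisy = ols_slope_se(x, y_noisy)
# both slopes are near 0.5; the noisy-DV standard error is about sqrt(2) times larger
```

The intuition: error in Y is absorbed into the regression residual, so the residual variance (and hence the standard errors) grows, but the expected slope is unchanged because the added noise is uncorrelated with X.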
This thread has not yet discussed whether multilevel reliability can be used to estimate inter-rater reliability.
I want to analyze inter-rater reliability for a design in which three observers rate subjects (the CLUSTER variable) on several continuous items assumed to assess one latent variable. I want to assess the inter-rater reliability of the latent variable as the ICC from a multilevel model:
What is your level 2? How many level-2 units do you have?
Note that you cannot have several labels on a row without semicolons separating them.
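For reference, the variance ratio the poster describes corresponds, in ANOVA terms, to ICC(1). A minimal sketch outside Mplus (hypothetical data, function name made up):

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC(1) for an (n_subjects x k_raters) matrix:
    the proportion of total variance attributable to subjects
    (the between-level variance in multilevel terms)."""
    n, k = ratings.shape
    grand = ratings.mean()
    msb = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# three raters scoring the same four subjects on one item
ratings = np.array([[4.0, 5.0, 4.0],
                    [2.0, 2.0, 3.0],
                    [5.0, 4.0, 5.0],
                    [1.0, 2.0, 1.0]])
icc = icc1(ratings)
```

With only three raters (level-1 units per cluster) this ANOVA estimator is still defined; the small-sample concern raised later in the thread applies to fitting a full multilevel model with raters as the clustering dimension.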
Melvin C Y posted on Wednesday, March 07, 2012 - 8:21 pm
I'm trying to estimate maximal reliability for a 4-item factor. The unstandardized residual of one item (EEF2) at the between level is 0.000, SE = 0.004 (p = .962). The standardized residual is -0.004. I believe it is advisable to set the residual to zero in this case, but I can't seem to set it in the presence of parameter labels. I tried placing "EEF2@0" before (P9), but it did not change anything. My syntax is below. Thank you.
%BETWEEN%
BEF BY EEF1
    EEF2 (P1)
    EEF3 (P2)
    EEF4 (P3);
BEF (P9);
EEF1 (P10);
EEF2 (P11); !How do I set this to zero?
EEF3 (P12);
EEF4 (P13);
%WITHIN%
WEF BY EEF1
    EEF2 (P1)
    EEF3 (P2)
    EEF4 (P3);
Say EEF2@0; I don't see why you need a label for this parameter.
Melvin C Y posted on Friday, March 09, 2012 - 12:21 am
I included the label P11 to calculate maximal reliability under MODEL CONSTRAINT, which takes into account both the within- and between-level item and factor parameters. But if the P11 residual is zero or close to zero, then excluding it from the reliability calculation by fixing it to zero should not matter. I hope I have understood this correctly. Thanks.
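For what it's worth, the single-level version of the maximal-reliability coefficient (the multilevel extension is in Raykov & Penev, 2009) makes clear why a zero residual is a special case. A sketch of the formula, not Mplus code, with hypothetical estimates:

```python
def maximal_reliability(loadings, error_vars, eps=1e-12):
    """Maximal reliability of an optimally weighted composite for congeneric
    measures (factor variance 1, uncorrelated errors):
        rho_max = s / (1 + s),  where s = sum(lambda_i**2 / theta_i).
    An item with (near-)zero residual variance is a perfect indicator of the
    factor, so the composite can be made perfectly reliable: rho_max = 1."""
    if any(theta < eps for theta in error_vars):
        return 1.0
    s = sum(lam * lam / theta for lam, theta in zip(loadings, error_vars))
    return s / (1 + s)

# hypothetical between-level estimates; fixing one residual at 0 drives rho_max to 1
rho = maximal_reliability([0.8, 0.7, 0.6, 0.75], [0.36, 0.51, 0.64, 0.44])
rho_perfect = maximal_reliability([0.8, 0.7, 0.6, 0.75], [0.36, 0.0, 0.64, 0.44])
```

Because each item contributes lambda^2/theta, an item whose residual variance is fixed at zero dominates the optimal weights, which is why it can simply be dropped from the MODEL CONSTRAINT expression once fixed.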
You cannot do multilevel modeling with only three raters. A minimum of 30-50 is recommended. You might consider a single-level multitrait-multimethod model where trait is rater and method is what is rated. The multivariate analysis takes into account any lack of independence of observations.