Message/Author 

Anonymous posted on Thursday, September 23, 1999  4:17 pm



My intraclass correlations are very small. Do I really need to use multilevel modeling with my data? 


It is really not the size of the intraclass correlation that is the issue. It is the size of the design effect, which is a function of the intraclass correlation and the average cluster size. A design effect greater than 2 indicates that the clustering in the data needs to be taken into account during estimation. The design effect is approximately equal to 1 + (average cluster size - 1)*intraclass correlation. So if the average cluster size is 50, an intraclass correlation of .03 would yield a design effect of approximately 2.47. 
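That approximation is easy to sketch in code (a minimal illustration using the values from the reply above, not Mplus output):

```python
# Approximate design effect for cluster sampling:
# deff ~ 1 + (average cluster size - 1) * intraclass correlation
def design_effect(avg_cluster_size, icc):
    return 1 + (avg_cluster_size - 1) * icc

# The example above: average cluster size 50, intraclass correlation .03
print(round(design_effect(50, 0.03), 2))  # -> 2.47
```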

Joop Hox posted on Tuesday, November 16, 1999  1:15 am



Linda, do you have a source for your rule of thumb that a design effect > 2.0 is worth taking into account? 


My source is personal communication, Bengt Muthen. Seriously, however, Bengt has no reference for this rule of thumb besides his paper with Albert Satorra in Sociological Methodology in 1995. In Table 2, results are reported for a Monte Carlo study. Although design effects are not presented, the approximate design effects can be computed. It seems that approximate design effects of less than two do not result in overly exaggerated rejection proportions at the 5 percent level using Method 1. For example, with an intraclass correlation of .05, the approximate design effects for cluster sizes of 7, 15, and 30 are 1.3, 1.7, and 2.45, respectively. As shown in the table, the 5 percent rejection proportions are satisfactory for 7 and 15 but not for 30. Table 3 gives corresponding results for standard errors. It would be interesting to hear other people's experiences about what size of design effect should be considered important. 


I have an inquiry regarding the use of a longitudinal study (NLSY79) that employs a complex sample design (DEFF estimated at 1.5 to 1.3 at various time points). I am testing a model that has a continuous outcome variable, but categorical exogenous variables (indirect and direct effects) and "mediating" endogenous variables that are categorical. The exogenous variables are time-bound, i.e., they occur prior to my outcome (net worth in 1996). Does Mplus handle data from complex sample designs, and is SEM the best way to proceed? 


Mplus does handle data from some complex sample designs for continuous outcome variables. So if you have categorical mediating variables, this would be a problem. However, with your small DEFF's, perhaps you could estimate the model as if it is a random sample. Then Mplus could handle the categorical mediating variables. If needed, you could still use weights in the analysis. Regarding the appropriateness of SEM for your problem, a growth model of the type you describe can be estimated using SEM. The latent variables are used to represent the growth factors. See the examples section of the website for some examples of growth models. 

SUNGWORN posted on Friday, April 07, 2000  9:33 pm



I am trying to study the multilevel topic proposed by Muthen, but I do not understand the exact meaning of this equation: SigmaT = SigmaW + c*SigmaB. 1. Why does he scale SigmaB with c, and what does c*SigmaB represent? 2. How does this equation differ from SigmaT = SigmaW + SigmaB in ANOVA? 3. How does the intraclass correlation from ANOVA (ICC = SSB/SST) differ from the HLM program? 


These quantities are discussed on page 299 of the Mplus User's Guide. In equation (156) it says that SigmaW + s*SigmaB is the expected value of SB, that is the expected value of the between covariance matrix in the sample. In this equation SigmaB needs to be scaled with the average cluster size s (for further details, see, e.g. Muthen, 1990). SigmaT = SigmaW + SigmaB as usual. The intraclass correlation is defined in equation (160) as the ratio of between and total variances; I would expect this is the same as in HLM. 
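The scaling of SigmaB by the cluster size can be seen numerically. Here is a rough sketch (my own simulation with made-up values SigmaW = 2 and SigmaB = 0.5, one variable, balanced clusters, not taken from the User's Guide): the pooled within matrix estimates SigmaW, while the scaled between matrix estimates SigmaW + s*SigmaB.

```python
import numpy as np

rng = np.random.default_rng(0)
G, s = 2000, 10                  # number of clusters, common cluster size
sigma_b, sigma_w = 0.5, 2.0      # true between and within variances

# Simulate y_ij = mu_j + e_ij for one variable, balanced clusters
mu = rng.normal(0.0, np.sqrt(sigma_b), G)                  # cluster effects
y = mu[:, None] + rng.normal(0.0, np.sqrt(sigma_w), (G, s))

ybar_j = y.mean(axis=1)                                    # cluster means
S_PW = ((y - ybar_j[:, None]) ** 2).sum() / (G * s - G)    # pooled within
S_B = s * ((ybar_j - y.mean()) ** 2).sum() / (G - 1)       # scaled between

# E[S_PW] = SigmaW and E[S_B] = SigmaW + s*SigmaB, so:
sigma_b_hat = (S_B - S_PW) / s
print(S_PW, sigma_b_hat)         # close to 2.0 and 0.5
```

With unbalanced data the constant is close to, but not exactly, the average cluster size; sampling error can also make sigma_b_hat negative when the true between variation is small, which comes up later in this thread.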

Anonymous posted on Tuesday, May 15, 2001  11:57 am



I am interested in looking at correlations between variables for data that have a multilevel structure (individuals within communities). From my rudimentary understanding of Mplus, I should be able to use Mplus to obtain correlations within communities and estimates of correlations between communities. If this assumption is correct, I have two questions: 1) How does Mplus handle dichotomous variables at the aggregate level? If these variables were coded 1/0, would Mplus sum these values at the aggregate (community) level or average these values? 2) How would I compute the degrees of freedom and test the significance of the correlations at each level (within and between)? Thanks... 

bmuthen posted on Wednesday, May 16, 2001  8:30 am



Mplus multilevel modeling currently only handles continuous dependent variables. Dichotomous variables are, however, allowed if they are not dependent variables. Individual-level variables are essentially community-level means at the community level. Significance of correlations on either level can be computed using t ratios of parameter estimates to standard errors. 

Anonymous posted on Tuesday, August 20, 2002  2:23 pm



I'm confused about the relationship between the Mplus hierarchical SEM and the Mplus options for the analysis of complex sample survey data. Doesn't hierarchical modeling already take design effects into account, or does it only adjust the parameter estimates but not the SEs of the estimates for clustered data? I thought conventional HLM analyses inherently take clustering into account. Thanks. 


TYPE=COMPLEX adjusts the standard errors and chi-square fit statistic for clustering but not the parameter estimates. Yes, hierarchical modeling, i.e., TYPE=TWOLEVEL, takes clustering into account. TYPE=TWOLEVEL models the clustering and therefore adjusts the parameter estimates, standard errors, and chi-square test statistic. 

Anonymous posted on Tuesday, August 20, 2002  4:26 pm



To follow up on your reply: does this mean that the CLUSTER option is redundant in Mplus and does not need to be specified when doing a hierarchical SEM analysis? Thanks again. 


No, this is how Mplus knows which clusters individuals belong to so that multilevel modeling can be done. 

Anonymous posted on Thursday, September 30, 2004  6:44 am



I am doing a 2-level SEM; the latent factor INFOR is measured by five observed indicators (X1-X5). I find that once I change which observed indicator's loading is fixed (simply by switching the order of the measurement indicators, i.e., INFOR BY X1-X5 changed to INFOR BY X2 X1 X3 X4 X5), all model estimates seem identical except for the variance of the latent factor at both levels. As a result, the intraclass correlation of the latent factor moves from 27% to 91%. So my question is: how do I solve this problem? Which indicator's loading should be fixed to one? Is the difference in ICC values of the latent factor related to the scale of the observed indicators? Thanks a lot! 


I can understand that the variance of the factor will change depending on the item selected for setting the factor's metric, but I don't understand why the intraclass correlation would change so dramatically. Can you please send the two outputs and the data to support@statmodel.com? 


This invariance of the latent ICC can only be expected when the within loadings are held equal to the between loadings. Otherwise, different factors are considered. With loading invariance, you will find latent ICC invariance. 

Jim McMahon posted on Monday, October 11, 2004  11:42 am



The rule of thumb value of 2 or greater for the design effect to justify a multilevel model [Oct 29, 1999] is very interesting to me. I am working on multilevel models with dyadic data (2 subjects per group). The ICC would have to be 1.0 in order for the design effect to be 2, according to 1 + (average cluster size - 1)*intraclass correlation. This implies that one never needs to use HLM with dyadic data, although there is a growing literature on such models. Any thoughts on this? Thanks, Jim 


The rule of thumb is intended to give a flavor of when one would go wrong in terms of standard errors and chisquare when ignoring the multilevel structure of the data. Multilevel modeling can be done even when there are smaller design effects when the multilevel structure itself is of interest, as it clearly is with dyadic data. 

Tom Munk posted on Thursday, November 17, 2005  11:48 am



I have been trained to compute the ICC as: between variance/(between variance + within variance). For example, I have a model with:

between variance = 140
within variance = 900
my ICC calculation = .135

But Mplus provides an ICC of .300. How does Mplus calculate the ICC? How should it be interpreted? 


Yes, this is how we compute an intraclass correlation. See formula (203) in Technical Appendix 10. I would have to see where you get your information to comment further. Please send your input, data, output, and license number to support@statmodel.com. 

Marco posted on Wednesday, December 14, 2005  8:18 am



Hello Linda, hello Bengt, in the 1994 paper about MCA, it says that the ICC is a rough indication of between-variation (step 2 of the recommended 5 steps) and that the need for a multilevel analysis could be tested by restricting SigmaB = 0 in an MCA. How would I do the latter, and is it worth the effort (compared to simply looking at the ICCs to determine the need for a multilevel strategy)? Many thanks! 

bmuthen posted on Wednesday, December 14, 2005  8:25 am



You set Sig_B = 0 simply by saying (assuming 2 variables):

%between%
y1-y2@0;
y1 with y2@0;

Probably not worth the effort. Use the ICCs instead. 


I obtain changing ICCs as well. When fitting an empty 2-group model on a data file I obtain two ICCs (one for each group), but when I specify the regression models on both levels, the dependent variable's ICCs change. Why is that? The regression model should refer to the level-specific variances of the dependent variable without changing them. I thought the ICC of a dependent variable was independent of the explanatory variables. When doing a kind of preliminary analysis checking the ICCs, I obtain higher values compared to the ICCs from the regression model. Which of those should I report? Kind regards and thanks for your help. Florian Fiedler. 


Please send the outputs from the models with and without covariates, your data, and your license number to support@statmodel.com. 


Ok, I just did. Hope this will help. 

Noona Kiuru posted on Thursday, June 15, 2006  2:27 am



I have a question concerning relations between intraclass correlations and the statistical significance of between-level and within-level variance estimates in the multilevel context. I would like to test whether clusters become more homogeneous across time (two time points) or, alternatively, more differentiated from other clusters. I thought that this could be done by comparing estimates of between-variance in the outcome variable between Time 1 and Time 2, on one hand, and estimates of within-variance between Time 1 and Time 2 on the other. This comparison could be made by constructing a model for the observed variables at Time 1 and Time 2 in which the between- and within-level variances of the observed outcome variables are set equal across time. However, one referee suggested that this is not a proper way to study whether the clusters become more homogeneous (or similar). He/she suggested that I should instead test the statistical significance of the difference between the intraclass correlations at the two time points (these are .18 and .34, respectively). Do you have any ideas what would be the best way to study this issue (i.e., whether clusters become more homogeneous across time)? And if it should be done by comparing intraclass correlations, how should I do this? Thanks 


The intraclass correlation is VB/(VB + VW), where VB is the between-cluster and VW the within-cluster variance. One way to handle the test of the ICC being the same across time is to use parameter labels in MODEL and then use MODEL CONSTRAINT:

Model:
%Within%
y1-y2 (vw1-vw2);
%between%
y1-y2 (vb1-vb2);
Model constraint:
new(icc1 icc2 diff);
icc1 = vb1/(vb1+vw1);
icc2 = vb2/(vb2+vw2);
diff = icc2-icc1;

"diff" gives you the estimate and SE for the ICC difference. For the model where you set the ICCs equal, you simply add diff=0. You can also compare the loglikelihoods for these 2 models in a standard 1 df chi-square test. 
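As a numerical sketch of what a derived "diff" parameter computes: Mplus obtains its SE by the delta method. The sketch below is illustrative only; the variance estimates are made-up values chosen so the ICCs match the .18 and .34 in the question, and the sampling variances of the estimates (which in reality come from the fitted model's covariance matrix, and would not be uncorrelated) are invented.

```python
import math

def icc(vw, vb):
    """Intraclass correlation: between variance over total variance."""
    return vb / (vb + vw)

# Hypothetical within/between variance estimates at two time points,
# chosen so the ICCs match the .18 and .34 mentioned in the question:
vw1, vb1 = 0.82, 0.18
vw2, vb2 = 0.66, 0.34

icc1, icc2 = icc(vw1, vb1), icc(vw2, vb2)
diff = icc2 - icc1

# Delta-method SE for diff.  Gradient of diff w.r.t. (vw1, vb1, vw2, vb2):
g = [ vb1 / (vb1 + vw1)**2,   # -d(icc1)/d(vw1)
     -vw1 / (vb1 + vw1)**2,   # -d(icc1)/d(vb1)
     -vb2 / (vb2 + vw2)**2,   #  d(icc2)/d(vw2)
      vw2 / (vb2 + vw2)**2]   #  d(icc2)/d(vb2)
# Assumed (made-up) sampling variances of the four estimates, treated
# as uncorrelated here for simplicity:
v = [0.004, 0.002, 0.004, 0.003]
se = math.sqrt(sum(gi * gi * vi for gi, vi in zip(g, v)))
print(round(diff, 2), round(se, 3))  # -> 0.16 0.057
```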

David Bard posted on Friday, August 11, 2006  12:04 pm



1) I'm aware of kappa coefficients for this, but I'm wondering if it's also appropriate to calculate an ICC for two-level regression models where the dependent variable is categorical. Can you simply divide the sum of the between-level residual variances and twice their covariances (i.e., the between variability) by the standardized variance of the probit (=1) or logit (=pi/sqrt(3), right?) link functions, i.e., the total variance? I suspect I'm inaccurate, because the ICCs I've calculated in this fashion change substantially depending on the link function I use. 2) Again for a categorical-DV two-level regression model, I've tried modeling the covariance between a random threshold and a random level-1 slope. I get an estimate, but the total between variance changes quite a bit when I include this term (the variance doubles without the covariance modeled: .052 to .122). Does it make sense to have this term in the model in the categorical DV case, and should this change in the between-level variability be of concern? 

David Bard posted on Friday, August 11, 2006  2:16 pm



I've just realized I was using the standard deviation of the standardized logistic distribution when calculating the ICC described above. When I change it to a variance, pi^2/3, the "alleged" ICCs are much more similar across link functions. So I guess my question 1) now is simply whether this calculation is appropriate? Question 2) still stands as is. 


1) This is a somewhat reasonable approach I think, much like one can say it is somewhat reasonable to work with an R-square for a continuous latent response variable underlying a categorical DV. 2) Yes, it makes sense to have this term included. If the variance of the slope is significant, I would include it. If that makes the results change a lot, perhaps that is because the likelihood is a lot better. 


Dear Bengt and Linda Muthen, I have data on adolescents' and their best friends' delinquency at five waves and want to look at associations between the growth curve of the target adolescents and the growth curve of their best friends. So I want to look at whether the slope of delinquency of the target adolescent is predicted by the intercept of delinquency of the best friend, and vice versa (and the intercept-intercept correlation and slope-slope correlation). What would be the best way to model such associations? I am aware that I am dealing with dyadic data which should be treated with a multilevel method, and therefore should look at intraclass correlations, but at the same time my aim is to look at the strength of the association between the intercepts, the slopes, and the intercept-slope correlations. 


What comes to mind when I read your message is the type of analysis suggested in the following paper: Khoo, S.T. & Muthén, B. (2000). Longitudinal data on families: Growth modeling alternatives. Multivariate Applications in Substance Use Research, J. Rose, L. Chassin, C. Presson & J. Sherman (eds.), Hillsdale, N.J.: Erlbaum, pp. 43-78. (#79) You can request paper 79 from bmuthen@ucla.edu if you can't get the paper. 

Anup posted on Saturday, February 24, 2007  2:28 pm



What is the appropriate formula to calculate the design effect for a 3-level design? Is the formula presented above still applicable? Since there are 2 ICCs in such a design, for level 2 and level 3, is there any adjustment needed to the design effect formula 1 + (average cluster size - 1)*intraclass correlation? 


I don't know offhand what the formula is. It is probably covered in the classic Cochran book Sampling Techniques. Note that the formula you cite is for the special case of estimating means and with equal cluster sizes. 

elisa posted on Tuesday, March 13, 2007  3:37 am



Hello, I'm carrying out a multilevel analysis, just an empty model for categorical variables. How is it possible to obtain the ICC with its p-value, and the p-values of the estimates (thresholds and variances)? Thanks a lot. Regards, Elisa 

Boliang Guo posted on Tuesday, March 13, 2007  3:56 am



ICC (logistic regression) = (level-2 variance)/(level-2 variance + pi^2/3). See Snijders's book. 
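That formula uses pi^2/3, the variance of the standard logistic distribution, as the level-1 residual variance on the latent-response scale. A quick sketch (the level-2 variance of 0.5 is just an illustrative value):

```python
import math

def logistic_icc(level2_var):
    """Latent-response ICC for a two-level logistic random-intercept model:
    level-2 variance over (level-2 variance + pi^2/3), where pi^2/3 is the
    variance of the standard logistic distribution."""
    return level2_var / (level2_var + math.pi ** 2 / 3)

# e.g., an illustrative level-2 (between-cluster) variance of 0.5:
print(round(logistic_icc(0.5), 3))  # -> 0.132
```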

elisa posted on Wednesday, March 14, 2007  2:35 am



Thanks a lot, but I would like to know if Mplus provides the ICC estimate "automatically", and if Mplus also provides the ICC p-value in order to have a direct test. (I saw in another message posted here that Mplus provides an ICC, but I'm not able to find the command, so I calculated it.) If Mplus does not provide the ICC p-value, how is it possible to test its significance?

CATEGORICAL ARE responce;
CLUSTER = group;
TYPE = twolevel random;
ESTIMATOR = ml;
MODEL:
%WITHIN%
%BETWEEN%
responce;

And how is it possible (if possible) to automatically obtain the p-values of the other estimates (thresholds and variances)? Thanks Elisa 

Boliang Guo posted on Wednesday, March 14, 2007  2:50 am



As far as I know, statistically significant level-2 variance is necessary for multilevel analysis; the ICC provides information on the proportion of level-2 variance over the total variance. I am using MLwiN, HLM, Mplus and WinBUGS; only WinBUGS can give the uncertainty of the ICC. Well, I think you get no more information from the ICC than from the significance of the level-2 variance. I do not know a command for the ICC, but I think you can use Model Constraint to get the ICC in Mplus. Maybe Prof. Muthen can give more information on this topic. 


Yes, you would use MODEL CONSTRAINT to create the ICC by dividing the estimated between variance by the sum of the estimated between and within variances. The program then gives a standard error for this parameter. 


I have a multilevel path model that includes the variables X1b (measured at the group level) and Y1 and Y2 (measured at the individual level). The between-group model suggests that X1b affects Y1b (average Y1 in groups) (X1b -> Y1b). The within-group portion of the model suggests that Y1 affects Y2 (both at the individual level) (Y1 -> Y2). I also want to test whether Y1 mediates the relationship between X1b (group level) and Y2 (individual level) (X1b -> Y1 -> Y2). Is this possible in Mplus? If so, how can I develop the syntax, or is there any example that I can use? Thanks, 

Mike Tobak posted on Thursday, June 28, 2007  1:10 pm



I wonder if Mplus can automatically produce intraclass correlations as well as the between- and within-group covariance matrices without estimating any specific model. I just want to check if my data are appropriate for multilevel analysis (I am afraid that the variance of a certain variable is close to zero at the second level), and I don't need to test any model at this time. I just tried a simple model and Mplus couldn't give the estimates due to a non-positive definite matrix. So I want to know if there is any option in Mplus to produce the intraclass correlations and between/within-group covariance matrices alone. Thanks. 


Re: Metin See Example 9.3 in the Mplus User's Guide. I think this is very close to what you want. 


Yes, you can use TYPE=BASIC TWOLEVEL; to do this. You can use the SAMPLE and SIGB options of the SAVEDATA command to save the matrices. See the Mplus User's Guide for more information. 

Mike Tobak posted on Thursday, July 12, 2007  9:26 pm



I have a question about the between- and within-groups matrices. I think the scaled between-group matrix should be larger than the pooled within-group matrix, since S(between) = Cov(within) + c*Cov(between). However, the scaled between-group matrix calculated from my data (I programmed it in Matlab using the given formula) is less than the pooled within-group matrix, producing a negative (though each covariance/variance component is very small, close to zero) between-groups matrix. Is there any possibility of having a negative (but close to zero) estimated variance for between groups? It sounds weird, but I checked my code, and it works well for other data. Can I say that my data have no multilevel structure? Thanks. 


The formula you give is equation (199) in Appendix 10 on the Mplus web site. Note that this formula is for the expected value of SB, not for the sample SB, where the sample SB is defined in (197). Using this expected-value formula, the estimated SigmaB is given in (202), and yes, it can take on negative values, indicating no between variation. 

Mike Tobak posted on Sunday, July 15, 2007  8:21 pm



Prof. Muthen, Thank you! I used the correct formula as you said. I noticed that you mentioned (Muthen, 1994, multilevel covariance analysis, Sociological Methods & Research, vol. 22, no. 3, 376-398) that "the ML estimator of SigmaB is frequently not positive definite and might not even have positive variance estimates. This means that, in practice, we might have to resort to analyzing SB to get a notion of the SigmaB structure. Fortunately, experience shows that when it is possible to analyze both matrices, similar results are obtained." (on page 389). Does that mean that even if the ML estimator (given in (202)) has negative variance estimates, there could still be between variation and we can get similar results by analyzing SB? I am a little confused here. According to your answer to my last question, there should be no between variation if the estimated SigmaB from (202) has negative values. Thanks for your time and help! 


We know too little of these situations to say anything with certainty. If the SigmaB estimate from (202) has negative values it is likely that there is no between variation in the population, but on the other hand there may be some. The quote from my paper indicates similar results when analyzing Sb and estimated SigmaB  "when it is possible to analyze both matrices". I would say that precludes the case of negative variances for estimated SigmaB. The best way to find out if there is between variation is to fit the full twolevel model using FIML. 

Mike Tobak posted on Saturday, July 28, 2007  1:17 pm



Prof. Muthen, THANK YOU! I wonder if we can test it by multilevel regression (with random slopes). For instance, we fit a level-one model with variable y on two other level-one variables x1 and x2, and a level-two model of the random intercepts/slopes on a level-two variable W. We found that the intercepts/slopes from the level-one model are indeed random effects (the variance is significant). By doing this, can we say that the level-one units are not independent since the intercepts/slopes are random effects, even though we got a negative (but close to 0) estimated SigmaB? In fact, that is what happened to my data. The calculated intraclass correlations are close to 0, but the estimated SigmaB is negative (though close to 0). Then I used multilevel regression (in HLM or SAS). If I specify an ANOVA model (without any explanatory variables at level one or level two), the intercept is not a random effect (hence agreeing with the small intraclass correlations); but if I include some level-one explanatory variables, the slopes of those explanatory variables are random effects (having significant variance tested in the level-two model). Does this make sense to you? Small intraclass correlations, but the slopes of explanatory variables are random across different groups? Many thanks!! 


Yes, ICC's can be close to zero while slopes are random. An ICC is similar to a random intercept. 

Mike Todd posted on Tuesday, September 25, 2007  3:43 pm



Hi all: I am having trouble reconciling the variance components and the ICC value reported in the output from my two-level linear regression model. The values reported in the output are as follows:

ICC = 0.241
Within variance (Sw) = 2.769 (ML estimated sample value)
Between variance (Sb) = 1.140 (ML estimated sample value)
Sb/(Sb + Sw) = 0.292

Am I looking at the wrong parts of the output or am I missing a piece of the calculation? Thanks for any guidance you can provide. Mike 


We compute the intraclass correlations using the unbiased estimates obtained via SAMPSTAT or TYPE=BASIC not the consistent ML estimates from the model. 

Mike Todd posted on Tuesday, September 25, 2007  5:04 pm



Thanks for getting back to me so quickly, Linda. It looks like the discrepancy was due to a misinterpretation of the output on my part. The values I presented above were from a Monte Carlo run, where the sample stats from the first data replication were reported in the output. I (incorrectly) assumed that the accompanying ICC was based on these statistics. When I took the raw data from the first replication and used them in a non-Monte Carlo two-level run, I obtained an ICC that is consistent with the sample statistics (i.e., ICC = 0.292). Thanks again for your help. Mike 

Mike Todd posted on Wednesday, September 26, 2007  9:50 am



After sleeping on this, I realized I should have asked what exactly the ICC reported in the Monte Carlo output represents. Is this described in any of the documentation or online? Thanks again. 


They are the intraclass correlations for the first replication in Version 4.21. In earlier versions, they were for the last replication. Is this what you are asking? 

Mike Todd posted on Wednesday, September 26, 2007  11:56 am



Thanks, Linda. That's exactly what I was asking. The output I found confusing was generated under version 4.1. I just reran the Monte Carlo analysis under 4.21 and got the expected result. Thanks again! 


I just started using Mplus to see if some models require multilevel CFA, and am new to multilevel analysis. As I put different numbers of dependent variables into a 2-level model, I find that the ICCs of the variables change. For instance, when I put in 5 DVs, the ICCs for these 5 DVs are different from when I put in one more DV. My data contain no missing values. May I ask the mechanism behind this difference? Using other software does not seem to result in such a difference. 


Please send the two outputs and your license number to support@statmodel.com. 

Sharon posted on Saturday, March 29, 2008  11:02 am



I have a question relevant to a posting on May 15, 2001. Like that individual, I am interested in looking at correlations in a multilevel context. To do this, I have been examining the between and within correlation matrices using twolevel analyses. (a) Is it appropriate to interpret correlations involving two continuous variables in the within correlation matrix as the equivalent of standard rs, but corrected for the multilevel nature of the data (e.g., can I square them to get estimates of effect size)? (b) are there any special considerations in interpreting these correlations? (c) I'd like to report these correlations in articles I'm submitting; do you know of any examples where this has been done in the literature (or any problems people have had)? Thanks for any guidance you can provide. 


The within covariance is the covariance between the variables excluding the covariance due to clustering. In terms of correlations, you should decide if you want to calculate the correlations using total variances or within and between variances. Mplus uses within and between variances. I know of no special considerations in the interpretation of the correlations. I am not familiar with squaring a correlation to get effect size. 


I have a simple question which I'm having trouble getting an answer to concerning the interpretation of ICCs. Take the following hypothetical example: 100 high schools are randomly sampled, from each of which 100 seniors are randomly sampled. SAT scores of the seniors are recorded. The ICC is found to be a statistically significant .01. Only 1 percent of the total variance in SAT scores is between schools. Based on this small ICC, would it be wrong to interpret this as follows: "The school a student attends is not particularly relevant to the student's performance on the SAT." thanks for any help. 


That sounds reasonable. 


Hello! How can I test the difference in ICCs between two groups (in this case gender)? Lotta 


You can use MODEL CONSTRAINT. See the user's guide for more information. 


Hi, my understanding of the intraclass correlation for multilevel analysis is that it is like the R-square in linear regression (i.e., how much variance is explained by the between-cluster variance). However, if the dependent variable is count data, does the intraclass correlation have substantive meaning? And can I obtain it from an Mplus analysis? Thanks! 


Your understanding is correct. We don't currently know how to define a variance/residual variance for a count variable. 


Hello Am I right to assume that it is not possible to calculate intraclass correlations with (dependent) categorical variables only? Thank you! 


See short course Topic 7A slides 44-49. The ICC for categorical outcomes is shown on slide 49. 


Thank you Linda. But it is still not clear to me whether I need to specify a model first to get these correlations? 


You can use TYPE=TWOLEVEL BASIC to obtain the information to compute the ICC's for categorical outcomes. We do not do this in Mplus. 

Boliang Guo posted on Wednesday, August 27, 2008  7:18 am



I remember there is a formula in Prof. Muthen's Utrecht lecture. For a random-intercept-only model, the ICC for 2-level logistic regression is ICC (logistic regression) = (level-2 variance)/(level-2 variance + pi^2/3). See Snijders's book, and please check the Mplus discussion for some answers. 


Hi, I'm running a simple path analysis in which adolescent and parental negative emotional expression predict adolescent self-regulation. To account for the non-independence of my data, I used TYPE=COMPLEX and clustered by family (805 families). In my model, there is a significant path between my exogenous variables of adolescent negative emotional expression and parental negative emotional expression. Since I've clustered my data by family, is this correlation the same as an intraclass correlation? This is my first time using the TYPE=COMPLEX command, so I'm a little hesitant in interpreting my output. Thank you for your help! 


My first impression is that you account for non-independence twice; once is enough. Type=Complex would treat family members as members of the cluster=family. If a family member is observed on p variables, there are p observed variables in the model. If on the other hand you have a parent and an adolescent in each family and each is observed on p variables, there are 2*p variables in the model. When you say that parental expression is regressed on adolescent expression, this is the situation that comes to my mind. In this case, Type=Complex is superfluous because the intrafamily correlation between parent and adolescent is modeled, as you say. For related work in a growth model framework, see Khoo, S.T. & Muthén, B. (2000). Longitudinal data on families: Growth modeling alternatives. Multivariate Applications in Substance Use Research, J. Rose, L. Chassin, C. Presson & J. Sherman (eds.), Hillsdale, N.J.: Erlbaum, pp. 43-78, at my UCLA web site (#79). 


Hi Bengt, Thank you for the quick reply and reference suggestion! May I double-check that I understand your response? My model looked like this:

selfreg on mom adol;
mom with adol;

Since I already clustered my "mom" and "adol" predictors by family using TYPE=COMPLEX, are you saying that allowing "mom" and "adol" to be correlated in my model is redundant? If that is true, should I assign a meaning to that correlation in my model, given that it is significant? I reran my analyses removing this correlation, and none of my fit indices or estimates changed. 


Your statement mom with adol; is the default in a regression with these 2 as independent variables, so that's why your fit indices don't change when you remove the term. I now see your model more clearly  you have only 1 dependent variable, which is selfreg for the adolescent. I was responding as if you had several dependent variables such as one for the adolescent and one for the adult. Your model accounts for the intraclass correlation between the variables mom and adol, and Type=Complex accounts for the intraclass correlation for the dependent variable selfreg (adolescents clustered within family). So you are doing things right as far as I can tell. 


Hello, could you tell me how (or whether) the size of bivariate correlations among the lower-order units influences intraclass correlations? Let's say that I have couples as higher-order units, and individuals within these dyads as lower-level units. Does the intraclass correlation depend on what the correlation is between the members of the dyads on variable x? Or does the intraclass correlation depend only on means and variances? Thank you! 


The intraclass correlation is computed using variances. It is affected by other model parameters only indirectly if model misspecification causes the variances to be misestimated. 


I've seen several people suggest that the intraclass correlation for a binary outcome can be calculated by using (pi^2/3) for the estimate of the level-1 variance. What I'm not clear on is whether the "pi" is just the constant 3.14... or whether this is a value that has to be calculated from the data (e.g. the probability of the outcome in the null model). I would greatly appreciate it if someone could clarify this for me. 


Pi is the number 3.14... 
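To make the computation concrete: under the logistic latent-response formulation, the level-1 residual variance is the constant pi^2/3, so only the between-level variance comes from the data. A minimal sketch (the between-level variance of 0.5 is a made-up value for illustration):

```python
import math

def binary_icc(between_var):
    """ICC for a binary outcome under the logistic latent-response
    formulation: the level-1 variance is the constant pi^2/3 (~3.29),
    where pi is just the number 3.14159..., not a data-derived quantity."""
    return between_var / (between_var + math.pi ** 2 / 3)

# Hypothetical between-level (random intercept) variance of 0.5:
print(round(binary_icc(0.5), 3))  # about 0.13
```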


Thank you so much for your quick response. 


I simply want to calculate the ICC with three variables: (1) my outcome, (2) my ID for individuals, and (3) my ID for schools. All of the examples I've looked at include other school-level variables for the between command. How can I just find out how much variation I have in my outcome at the individual and school level, with the ICC? 


Use TYPE=TWOLEVEL BASIC with no MODEL command. 


Thank you. However, my outcome is categorical. Is it still the case that MPlus does not calculate an ICC with a categorical outcome? If so, is it possible to get the coefficients that would be necessary for me to calculate the ICC? 


Mplus does not give ICC's for categorical outcomes. See Slide 66 in the short course handout for Topic 7 for the formula. 


Hi! I wanted to calculate ICCs for the binary measures of a two-part LGM. However, Mplus provides no variances for the categorical outcomes. I guess this is because I'm using MLR? 


See the Topic 7 course handout starting with slide 60; slide 66 gives the ICC formula. This can be done only with maximum likelihood. 


Hi, I am about to set up a (single-level) SEM based on a clustered sample (kids in school classes) by using cluster = codecl; and ANALYSIS: TYPE=COMPLEX; In the paper, I would like to report the intraclass correlation coefficients of the outcome variables. Following the Mplus Discussion, I saw that it is possible to obtain the ICC directly in the Mplus output, but I don't know how to do this. Unfortunately, there is no information on the ICC in the User's Guide. 


Emmanuel, the ICC is obtained through TYPE=TWOLEVEL, since TYPE=COMPLEX only corrects the standard errors. If your outcome(s) are continuous, you will get the ICC values directly in your output. /Amir 


Intraclass correlations are given with TYPE=TWOLEVEL. You can do a TYPE=TWOLEVEL BASIC to get them. 


Hi, I have a data set with clustering (4 schools and 45 classrooms). When I inspect the ICCs and accompanying design effects across classrooms, the highest design effect is around 1.9. But the highest design effect across schools is 2.8, and 4 out of 10 model variables have design effects higher than 2. Meanwhile, I didn't measure any variable at the classroom or school level. I have several questions regarding this model: 1. Would Type=Complex provide sufficient correction for clustering in such a case? 2. If I should use Type=Complex, what should be the clustering variable: school or classroom? 3. Is it possible to estimate the scaled chi-square and robust SEs for both school and classroom in a single model? 4. Is it possible to include school or classroom as a level-2 variable and estimate variation in the outcomes across these clusters in Mplus? 


Four schools is not enough for TYPE=COMPLEX or TYPE=TWOLEVEL. You need between 30 and 50 schools for this. I suggest creating three dummy variables that represent the four schools and using those in the analysis to control for nonindependence of observations. 

xusihua posted on Friday, February 26, 2010  6:53 pm



Hello, how does one compare intraclass correlations within a single group? 


If you mean testing the equality of ICCs across variables, you can do that by defining the ICCs in terms of model parameters in MODEL CONSTRAINT and using MODEL TEST to test their equality. 

M Hamd posted on Friday, April 09, 2010  4:28 pm



In team research, team-level analysis is generally done by aggregating variables to the group level, after showing a high ICC (>.8). However, this type of analysis does not look at the individual level and just conducts group-level analysis. For example, social cohesion and group performance will be studied at the group level: team members provide perceptions of group bonding, these are aggregated (referent-shift consensus model, Chan 1998), and the analysis is done at the group level. I am viewing these relationships using multilevel SEM; my ICC for cohesion is .34 (and the design effect > 2) as given by Mplus. I am using cohesion at both the team and individual level. Now a regular team researcher would also ask me to show a high level of aggregation for cohesion at the group level. Do you think this expectation is justifiable? Because if I understand correctly, if the ICC is too low (let us say .01) then the variable is essentially an individual-level variable; likewise, if the ICC is too high (let us say .9) then the variable is essentially a group-level variable; and in both cases we don't need a multilevel model. I hope you can tell me whether, in MSEM, I need to show a high level of aggregation (ICC > .8) for a variable that exists at both levels? Many thanks. 


It is not necessary for the ICC to be high to model a variable on both levels. Typically ICC's are fairly low, around .1 or .2. 

M Hamd posted on Saturday, April 10, 2010  10:27 am



Thank you Dr. Linda. 


I have repeated observations of individuals within families. I am doing CFA with clustering on family and individual, using TYPE=COMPLEX. I wanted to get ICCs of the indicators for descriptive purposes. (I have the ICC for each indicator computed outside of Mplus, but I want ICCs under FIML handling of missing data.) TYPE=TWOLEVEL COMPLEX is needed for more than one cluster variable, and requires MLR, which does not give any ICCs. Can I calculate these? Do I need to get between variances for family and subject from separate TWOLEVEL models? Is there some way to use the MLR scaling correction instead? Is it like a design effect? 


You can get the ICC's using TYPE=TWOLEVEL BASIC; You don't need to include COMPLEX for the ICC's. 


When using TYPE=TWOLEVEL, I am informed that only 1 cluster variable is permitted and that the number of cluster variables (i.e., individual, family) is exceeded. The error message says to use TYPE=TWOLEVEL COMPLEX: "*** ERROR in VARIABLE command. Two cluster variables are allowed for TYPE=TWOLEVEL COMPLEX. Only one cluster variable is allowed for TYPE=COMPLEX (single level). Limit on the number of cluster variables reached." I took from this that you can have one cluster variable with TWOLEVEL and one with COMPLEX, and can only get two cluster variables with TYPE=TWOLEVEL COMPLEX. Am I missing something or making a bad assumption? 


Only one cluster variable can be used for TWOLEVEL or COMPLEX. Only when they are used together can two cluster variables be used. 


Yes, as I gathered; I think we are on the same page now. I did not have a problem in the main factor model because I only clustered on individual, and did not model the variance by level or want between and within factors. But I did want to model the levels in an auxiliary (empty) model to get the ICCs under FIML. (I have them from a standard package based on nonmissing data.) Or does FIML preserve the observed ICCs? Maybe the effort is pointless. So is the answer simply that there is no way to get what I want in Mplus, or is there some way, by estimating between-family and between-individual variance in separate two-level models, to calculate the ICC after FIML, or some way to approximate it from the MLR scaling correction? For example, divide the correction by the average cluster size or something? This is what I was originally asking. 


If you use TYPE=TWOLEVEL BASIC with the cluster variable that goes with TWOLEVEL, you will obtain ICC's computed with FIML. 


Well, I will make one last try; I must be expressing things poorly. Apologies. Since Mplus can only do 2-level and not 3-level models (except when the lowest level is wide), is there any way, hand calculation, trick, mushing together of information from various models, etc., that will allow me to figure out 3-level ICCs after FIML? Your advice about TWOLEVEL BASIC and only 1 cluster variable only gives 2-level ICCs. I am after 3-level ICCs. 


You cannot obtain ICCs for the third level in Mplus. 

Anonymous posted on Tuesday, August 24, 2010  11:04 pm



Dear Prof. Muthén, I'm running a multilevel SEM with random slopes. My model output does not include the ICC. Am I right to assume that it is not possible to calculate the ICC for a multilevel SEM with random slopes? If Mplus can calculate the ICC for a multilevel SEM with random slopes, how is it calculated in my model? Please help me. My model is as follows: usevariables ARE mbo1-mbo3 jc1-jc5 clus subTI1g subTI4g situum; CLUSTER = clus; WITHIN = mbo1-mbo3 jc1-jc5; BETWEEN = subTI1g subTI4g situum; ANALYSIS: TYPE = twolevel random; algorithm = integration; estimator = ML; MODEL: %WITHIN% fwmbo by mbo3@1 mbo2(1) mbo1(2); fwjc by jc5@1 jc4(3) jc3(4) jc2(5) jc1(6); s1 | fwjc ON fwmbo; %BETWEEN% s1 ON subTI1g subTI4g situum; subti4g WITH subti1g; subti1g on situum; subti4g on situum; OUTPUT: tech1 tech8; 


We do not compute ICC's with TYPE=RANDOM. It is not clear how this would be done. 

Anonymous posted on Wednesday, August 25, 2010  10:01 pm



Thank you so much for your quick response. 

Jan Stochl posted on Friday, August 27, 2010  7:05 am



Hello, I would like to ask again about the design effect computation. As mentioned in this thread, the formula is 1 + (average cluster size - 1)*intraclass correlation. My question is whether the median cluster size can be used instead of the mean cluster size. I ask because in my data about 95% of the cluster sizes are quite small (up to 10), but a few of them are much larger (around 60 or 70). These few clusters distort the mean cluster size a lot, due to the sensitivity of the mean, and thus inflate the design effect. Would the median cluster size be more appropriate in this case? Thank you. 


The formula above is the DEFF for a mean when all cluster sizes are the same, so the average cluster size is actually the cluster size. It is an approximation in other cases, but it seems like a useful one. See the following article, which is available on the website: Muthén, B. & Satorra, A. (1995). Complex sample data in structural equation modeling. Sociological Methodology, 25, 267-316. Using the median may yield better information in your case. 
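As a quick check on the numbers quoted in this thread, the approximate design effect can be computed directly (a sketch; `design_effect` is just a helper name):

```python
def design_effect(avg_cluster_size, icc):
    """Approximate design effect: 1 + (average cluster size - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

# The Muthen & Satorra (1995) Table 2 setup discussed above, ICC = .05:
for size in (7, 15, 30):
    print(size, round(design_effect(size, 0.05), 2))  # 1.3, 1.7, 2.45

# The earlier example: average cluster size 50, ICC = .03:
print(round(design_effect(50, 0.03), 2))  # about 2.47
```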


Hello, I'm trying to calculate ICCs for a set of level-1 variables by modeling an empty two-level model using TYPE IS TWOLEVEL BASIC. I calculated this in HLM as well (by dividing the level-2 variance by the level-1 + level-2 variance) and found that some of the ICCs were similar numbers but different in magnitude by 2 decimal places. For example (ICCs calculated from HLM -> ICCs in the Mplus output): 0.00045 -> 0.041; 0.00029 -> 0.029; 0.00031 -> 0.031; 0.00073 -> 0.053. Other variables with larger ICCs were more similar across the 2 methods: 0.322 -> 0.329; 0.463 -> 0.453. Since Mplus reports only 3 significant figures for ICCs, is it possible that numbers smaller than .001 are reported on a larger scale? Is there any way to change the way this is reported? Any insight into this would be greatly appreciated! 


Please send the HLM and Mplus outputs and your license number to support@statmodel.com. 


In a syntax such as the one below, in which our covariates (e.g., gender, ses1, ses2) and the latent factor indicators (ques1-ques30) are categorical variables, if we want to calculate the ICC values should we indicate these variables in the VARIABLE command with "CATEGORICAL ARE"? TITLE: Intraclass Correlation Coefficients DATA: FILE IS icc.dat; VARIABLE: NAMES ARE class gender gpa ses1 ses2 ques1-ques30; USEVARIABLES ARE gender gpa ques1-ques30 ses1 ses2; CLUSTER IS class; ANALYSIS: TYPE IS TWOLEVEL BASIC; ESTIMATOR IS WLS; OUTPUT: SAMPSTAT STANDARDIZED; Thanks... 


Only dependent variables should be placed on the CATEGORICAL list. 


So (1) we do not place the covariates on the CATEGORICAL list? And (2) do we also leave ques1-ques30 off the CATEGORICAL list because they are independent observed variables, even though they are categorical indicators in "factor1 BY ques1-ques30"? Thanks... 


You do not place covariates on the CATEGORICAL list. In a factor model, the factor indicators, ques1-ques30, are dependent variables, so they should be placed on the CATEGORICAL list. 

Anne Chan posted on Thursday, April 07, 2011  6:45 pm



Hello! I ran an intercept-only model to get the ICC. The ICC looks ok (0.1, average cluster size: 49), but the between-level variance is nonsignificant. Does the significance of the between-level variance of the intercept-only model relate to the significance of the ICC? Thanks! 


You should run TYPE = TWOLEVEL BASIC to get ICCs. Whether your model gets significant between-level variances depends on the model and may or may not agree with the ICCs. For instance, if a 1-factor model fits well, you might get significant between-level factor variance even with small ICCs. 

Joe King posted on Thursday, November 17, 2011  11:05 pm



When running a zero-inflated negative binomial model, is it possible to get an intraclass correlation or some analog to it? 


Good question. There is clearly a between-level variance. The issue is whether there is a within-level variance to be used for the ICC computation. That leads to the question of whether one can view negbin as having a continuous latent response variable formulation, like logistic regression, where it can be viewed as having a residual with variance pi^2/3. One formulation of negbin expresses it as a generalization of the Poisson, capturing heterogeneity by adding a residual to the log(mean) linear regression, where exp(residual) has a Gamma distribution. So if one can deduce the exp(residual) variance (presumably as a function of the negbin variance parameter), I guess there is a chance for this ICC. Which is a long way of saying I don't know. 

Joe King posted on Friday, November 18, 2011  6:18 pm



Well, I think this is similar to what has been stated in a recent article titled "Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists" by Shinichi Nakagawa and Holger Schielzeth, Biological Reviews, 85, 935-956 (2010). For a zero-inflated negative binomial they give an equation that looks at this; they call it repeatability, but it seems like a similar concept of capturing the variability between groups. So in your response you are referring to the Poisson-Gamma mixture, where the Gamma captures the overdispersion that the Poisson does not? When Mplus runs the NB, is this how it calculates the dispersion parameter? Thank you for your willingness to discuss this. 


I think you are saying it right. Mplus uses the "negbin2" parameterization and estimates its dispersion parameter alpha, the details of which are given in Hilbe's negbin book (now in its second edition). I have not seen papers on this topic; I have never seen an ICC with any count model. That, of course, doesn't mean that you can't estimate the magnitude of the level-2 variance, that is, the numerator of the usual ICC. 
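For reference, the "negbin2" variance function mentioned above can be written down directly; a minimal sketch (the mean and alpha values are made up):

```python
def nb2_variance(mu, alpha):
    """Variance under the "negbin2" parameterization with dispersion
    parameter alpha: Var(y) = mu + alpha * mu^2.
    With alpha = 0 this reduces to the Poisson, Var(y) = mu."""
    return mu + alpha * mu ** 2

print(nb2_variance(3.0, 0.0))  # 3.0 (Poisson case)
print(nb2_variance(3.0, 0.5))  # 7.5 (overdispersed)
```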

Joe King posted on Friday, November 18, 2011  6:30 pm



I have Hilbe's 2nd ed. book; it's very good, and NB2 is what I was thinking about. I haven't seen it with any count models either but wanted to make sure I wasn't missing anything. Thank you Dr. Muthen. 


Linda et al., regarding the ICC reported in the output for a TWOLEVEL regression: the ICC value for the dependent variable changes depending on the predictors conditioned on (especially after adding variables at level 2). I recognize that you have told us to use TWOLEVEL BASIC to obtain ICC values. But what do these other ICCs represent? They don't appear to be the ML estimates of the between residual variance relative to the total residual variance (but they are close). Thank you, Laura 

Antti Kärnä posted on Friday, December 09, 2011  9:44 am



Hello, I have been planning a 2-level CFA with ordinal indicators. When I fit the model using the WLSMV estimator, I get the following ICCs for the 12 items: U1 0.036 U2 0.012 U3 0.000 U4 0.020 U5 0.039 U6 0.023 U7 0.009 U8 0.007 U9 0.018 U10 0.025 U11 0.026 U12 0.029. Are these ICCs too small as such to justify a between-level model? Thanks in advance! 


Linda, one additional question. When using Mplus Version 6 TYPE=TWOLEVEL BASIC, is the ICC still calculated as shown in formula 203 (using the variance components determined with formulas 197-202) in Technical Appendix 10? Calculating by hand using both MSb and MSw from an ANOVA (and a scaling factor), or using components from PROC CANDISC in SAS, I get fairly different estimates. For one of my variables, I get an ICC of .17 and Mplus gives me .23. I would be glad to send the data if you want to test. Laura 


Formula (203) applies for the MUML estimator. For the rest of the estimators, ML/MLR/MLF, the unrestricted H1 model is estimated to get the within and between variance. Usually the ICC changes with the covariates due to the misspecification where a covariate is on the within list but is actually not a within-level variable because it hasn't been centered. You can use this command to fix this misspecification: centering = GROUPMEAN(x); for all x variables that are on the WITHIN= list. When the sample is not large, the ICC can also change when you add covariates because the covariates carry more information, in particular about the between-level component, which would then be measured more precisely. In that case you should just use the estimates with the covariates included. If this doesn't help, send the example to support@statmodel.com. 
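What CENTERING = GROUPMEAN(x) does is subtract each cluster's own mean from x, so that x carries only within-cluster variation. A minimal sketch of that operation (the cluster IDs and x values are made up):

```python
# Sketch of group-mean centering, i.e. what CENTERING = GROUPMEAN(x)
# does to a within-level covariate (cluster IDs and values are made up).
cluster = [1, 1, 1, 2, 2, 2]
x = [2.0, 4.0, 6.0, 1.0, 2.0, 3.0]

# Collect values per cluster, then average them.
means = {}
for c, v in zip(cluster, x):
    means.setdefault(c, []).append(v)
means = {c: sum(vs) / len(vs) for c, vs in means.items()}

# Subtract each observation's own cluster mean.
x_centered = [v - means[c] for c, v in zip(cluster, x)]
print(x_centered)  # each cluster now has mean zero
```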


Answer to Antti of December 9: these ICCs do look a bit small. But the real test is doing the 2-level model and seeing (1) if the results are different from what you get when ignoring clustering, and more importantly (2) if you get any significant between-level parameter estimates. 


Tihomir, thank you for the explanation. My data included the group mean of the variable in question on the USEVARIABLES list, so including that in the unrestricted H1 model for the BASIC run resulted in an inappropriate ICC calculation (or at least inappropriate in terms of what I personally was expecting). Thanks, Laura 


In Mplus there are two estimates of the Sigmas: one is based on the sample statistics and is reported as covariances (within and between), and the second is reported in the model results. These two estimates are not equivalent, and the ICC estimation is based on the first. Why is that? 


We base the ICC's on the sample statistics because we want to see the intraclass correlations in the data. 

Emil Coman posted on Wednesday, February 01, 2012  1:26 pm



I have tried to obtain the significance of the ICC using Bengt's suggestion above (Thursday, June 15, 2006, 8:41 am), so I wrote: Usevariables are cidibtot; cluster is educ; ANALYSIS: TYPE=TWOLEVEL BASIC; Model: %Within% cidibtot (vw1); %between% cidibtot (vb1); Model constraint: new(icc1); icc1 = vb1 / (vb11+vw1); BUT there is no icc1 parameter listed in the output. What am I messing up here? Thanks, Emil 


Please send the output and your license number to support@statmodel.com. 


Can you refer me to a source or an annotated output which explains how Mplus calculates the sample statistics in a TWOLEVEL analysis? At the moment, the most confusing thing to me is that the sample statistics of the dependent variable (e.g. variances, ICC) differ depending on the specifications for the independent variable (e.g. centering, adding/deleting a predictor, using x and xw vs. decomposing x into a latent within and a latent between variable, etc.). How do the sample statistics in a TWOLEVEL analysis relate to those obtained by TWOLEVEL BASIC? Thank you, Katrin 


Please send two outputs that illustrate this issue and your license number to support@statmodel.com. 


Hello, I'm doing a two-level analysis with pupils clustered in classes. Is it possible that the ICC is quite small (0.012, average cluster size = 19) but the explained variance at level 2 is 86%? I just want to know if this is a contradiction or if it might be possible from a statistical point of view. Thank you so much! 


I think it is common to see large R-squares on between. Don't forget that the 86% explains only the between-level variance, which may be rather small. 


I find that growth models using different approaches (multilevel or not, ML or Bayes) do not give consistent results. In particular, using two parallel growth models I now get a "significant" result with Bayes (diffuse priors) for a cross-lagged path between the intercept and growth factor at the individual level when using a multilevel model with correlated intercept/growth factors at the between level. Single-level analysis gives no "significant" path, and a multilevel model imposing a (questionable?) structural model at the between level also gives no significant path at the within level (nor at the between level). Multilevel modeling will often result in the opposite of what I get (in my case, I go from nonsignificant to significant coefficients when moving to multilevel). The ICCs in the current analysis are moderate, varying from less than .03 up to over .07 (three-wave data). The average cluster size is 6.29. I would normally tend to trust results from multilevel modeling more, but it appears to me that the results are not robust. Moreover, the design effect seems not particularly large? 


Please send two key outputs that show the differences and your license number to support@statmodel.com. 

Agnes Szabo posted on Tuesday, March 05, 2013  6:13 pm



Hello Linda and Bengt, I am doing a CFA with a sample of 9000 students from 96 schools. The intraclass correlations are very small (lower than 0.01) and the DEFF values are lower than 2. Do I need to use multilevel modelling with my data? (The multilevel structure is not of interest; I am just interested in the psychometric properties of the scale.) Thank you very much, Agnes 


It sounds like you would not need to take nonindependence of observations into account in your case. You could run the model with and without TYPE=COMPLEX to see if there are any differences. 

Melvin C Y posted on Wednesday, March 13, 2013  5:05 am



My data consist of students (n=1000), classes (n=100), and schools (n=25). On average I have 10 students per class and 4 classes per school. I am trying to obtain ICCs for a class-level variable (teacher experience) nested in schools. I thought this should be quite simple, but I'm confused about the cluster sizes that Mplus generates. The syntax is below: usevar = exp; cluster = school; analysis: type = basic twolevel; The summary of data correctly displayed 25 clusters (schools). But the average cluster size was 40, which seemed to reflect the number of students rather than classes per school. The ICC is decent at .14. In another run, I removed all student variables from my data, keeping only the school ID and the class-level variable (exp). The average cluster size reported (per school) was 4, which is correct. But the ICC is now .06. Should a three-level dataset be set up differently if I'm only interested in class- and school-level analysis? Or can I use the first procedure, which perhaps accounts for the student-level structure (hence, more information?) even if it is not explicitly modelled. Thank you. Melvin 


You would need to create a data set where the unit of analysis is class. I think this is what you did when you got an ICC of .06. 


Or, you can keep your data as they are, request three-level analysis, and add a student-level variable to your USEV list in addition to exp, where exp is declared a level-2 variable (see the UG examples). This gives you ICCs on two levels. 

Melvin C Y posted on Wednesday, March 13, 2013  5:43 pm



Thanks Linda. As a follow-up question, Hox (2010, p. 34) wrote in his multilevel analysis book that there are two methods to compute 3-level ICCs at the class and school level: L2/(L1+L2+L3) or (L2+L3)/(L1+L2+L3). The second formula assumes that two students from the same class must come from the same school. When the school variance is large, the values produced by the two formulas will be quite different. My questions are: 1) Assuming that I have a student variable varying over class and school, which formula does Mplus use? 2) In my question above, where the unit of analysis is class, what actually happens when I don't remove all student cases? 3) Is creating a new dataset (without students) the only way to get the ICC for class nested in school? Thank you. 


If you are running a three-level analysis, the ICCs that Mplus produces are L2/(L1+L2+L3) and L3/(L1+L2+L3). Note that there are two ICCs for each student-level variable. If a variable is defined at the class level, the ICC is computed the same way but with L1=0. 
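A small sketch of these two formulas (the variance components below are made-up values, not estimates from this thread):

```python
def three_level_iccs(l1, l2, l3):
    """Level-2 and level-3 ICCs from three variance components:
    L2/(L1+L2+L3) and L3/(L1+L2+L3).
    For a variable defined at the class level, pass l1 = 0."""
    total = l1 + l2 + l3
    return l2 / total, l3 / total

# Hypothetical components: student 2.0, class 0.5, school 0.5
icc_class, icc_school = three_level_iccs(2.0, 0.5, 0.5)
print(round(icc_class, 3), round(icc_school, 3))
```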

deana desa posted on Monday, April 08, 2013  6:02 am



Hi, I would like to compare ICCs between two models with different specifications of the between- and within-level weights. Here is one part of the specification for model 1: CLUSTER = idsc; WEIGHT = tw; and here is model 2: CLUSTER = idsc; WEIGHT = L1w; BWEIGHT = L2w; WTSCALE = CLUSTER; BWTSCALE = SAMPLE; The ICCs derived from Mplus are as follows: Model 1: Intraclass Correlation 0.417; Model 2: Intraclass Correlation 0.420. How can I verify these two values from the Mplus output? I tried to compute them from the estimated values, but I know that it's not right to do so. I also saved the SIGB and SWM matrix files, but how do I read these files? 


Deana, we don't have a way to compare these two estimates. However, you can get a confidence interval for the ICC by specifying the unrestricted model in the MODEL command and using a new parameter in MODEL CONSTRAINT to define the ICC as the new parameter. Tihomir 


Is it possible to get ICCs for clustered data using Type = Complex? The data used to create my latent variables are categorical. I have 61 clusters. Thank you. 


You need to use TYPE = TWOLEVEL BASIC to get ICCs. You can do this for categorical variables. The ICC is for the underlying continuous latent response variable. 

deana desa posted on Tuesday, April 16, 2013  3:37 am



Thanks, Tihomir! I tried the suggested approach as follows: MODEL: %within% math1 (vw1); %between% math1 (vb1); MODEL CONSTRAINT: NEW(icc1); icc1 = vb1 / (vb1 + vw1); Now, the ICC given from the sample is .420 and the estimated ICC1 is: New/Additional Parameters ICC1 0.428. So these ICC and ICC1 values are different intraclass correlations? I think my question is how I can verify the ICC computed from the sample as output by Mplus (i.e. .420)? 


When you get the ICC from TYPE=BASIC, is math the only variable used? 

deana desa posted on Wednesday, April 17, 2013  2:46 pm



Linda, Yes, math is the only variable for this null model. 


Can you please send the output to support for the run where you show the 2 values of 0.428 and 0.420. 


Hi. Is there a simple way to confirm, when computing intraclass correlations, if the default result is ICC(1,1) vs ICC(1,k)? 


What do you mean by those two expressions? 

rwstew posted on Tuesday, July 30, 2013  4:13 pm



By ICC(1,1) I mean a calculation based on a single rating to help determine how accurate a single rater may be, and by ICC(1,k) I mean a calculation based on the average of all raters within each group/cluster. With the calculation described as BetweenVariance/(WithinVariance + BetweenVariance), I suspect the ICC given represents averages within each group. Thank you for your help. 


Right. 
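For reference, in the usual variance-component notation these two reliability quantities can be written as B/(B+W) for a single rating and B/(B+W/k) for the average of k ratings, related through the Spearman-Brown step-up. A minimal sketch (the variance values and k are made up):

```python
def icc_single(var_between, var_within):
    """Single-rating ICC: B / (B + W)."""
    return var_between / (var_between + var_within)

def icc_average(var_between, var_within, k):
    """ICC of the average of k ratings: B / (B + W/k),
    the Spearman-Brown step-up of the single-rating ICC."""
    return var_between / (var_between + var_within / k)

print(icc_single(1.0, 3.0))                # 0.25
print(round(icc_average(1.0, 3.0, 6), 3))  # 0.667
```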


Dear Drs. Muthén, in my multilevel model I have an observed DV, 2 observed IVs, and two latent IVs. To calculate the ICCs for the DV and IVs, I ran TWOLEVEL BASIC on one model with the regression equation (including the latent factor part as well as the regression part "DV on IV1 IV2") and one model without the regression equation (including only the latent factor part "IV1 by x1 x2 x3" on both levels). The average sample size as well as the ICCs vary between the two models. Which one should be reported: the ICC from the model with the regression equation or from the model without it? Thank you very much for your feedback. 


You should report the ICC's from TWOLEVEL BASIC. See the following FAQ on the website for further information: Icc changes from one model to another 


Dear Prof. Muthén, thank you very much for your quick response. Sorry for my misleading question; I already used TWOLEVEL BASIC, as I read in the statmodel threads about the ICC. What I would like to know is which ICC should be used (and reported) when testing whether multilevel analysis should be applied (as a first step before conducting multilevel SEM): a) the ICC from the model excluding all hypothesized regression paths (e.g. %within% fw by x1 x2 x3; y; %between% fb by x1 x2 x3; w;) or b) the ICC from a model including all hypothesized paths (%within% fw by x1 x2 x3; fw on y; %between% fb by x1 x2 x3; fb on w;)? Thank you very much. 


You should not use the ICC's from either of these models. You should use the ICC's from TYPE=BASIC based on the data not a model. 


Thank you very much. Sorry for adding a follow-up question. I think your answer refers to the ICC for observed variables; for this part of the model it helps me a lot. However, I also would like to calculate the ICC for a latent factor, and I think that I have to model this latent factor to get its within variance and between variance for the computation of its ICC (between variance / total variance). As I have to model the latent factor: should I model it with or without covariates? 


I would use the model without the covariates. To do this, you should hold the factor loadings equal across between and within. 


Great, thank you very much. This was very helpful. 


Dear Prof. Muthén, may I ask whether there is a standard ICC value for multilevel (two-level) analysis in Mplus? What is the justification for the ICC at the within and between levels? Can you please recommend some references that give a cutoff value for the ICC when analysing with Mplus? Thank you! 


There is no "standard" ICC value and no cutoff. Please study the paper on our website: Muthén, B. & Satorra, A. (1995). Complex sample data in structural equation modeling. Sociological Methodology, 25, 267-316. 


Thanks, prof. 


Dear Drs. Muthén, I am testing a 1-2 model with a level-1 IV (employee work-family balance), a level-1 moderator (employee job autonomy), and a level-2 DV (academic reputation of the department). The average cluster size is 25 and the number of clusters is 44. However, the ICCs of the IV and the moderator are low (around .03). I understand the issues involving low ICCs in, for example, 2-1 models, but is it also problematic in 1-2 models? And if so, what could be a solution here? Aggregate the individual-level variables to level 2? Thank you! Jeroen 


An ICC of 0.03 can be important when you have large cluster sizes. Try out the 2-level modeling. 


Hello, I was reading that for 2-level modeling, a traditional estimator such as ML can produce downwardly biased estimates when the cluster size is small or when the ICC departs from .50, and that an alternative estimator such as MVU should be used to prevent such bias (see http://www.biomedcentral.com/1471-2288/12/126/). Is this true for 2-level modeling in Mplus? Should I use an alternative estimator if the ICC departs from .5 or the cluster size is small? Thank you. 


I would not be concerned about that unless your specific focus is on estimating ICCs. Check the Hox multilevel book to see if he expresses concern about this; the book also discusses simulations that show performance at different cluster sizes, etc. 


Dear Drs. Muthén, firstly, thank you so much for your help so far! I have a quick question about ICCs that I am hoping you could answer. I am wondering why Mplus does not provide the ICC when producing an LGC model for varying times of observation (TYPE = RANDOM)? Is there any way to calculate this from the output given by Mplus? Please feel free to direct me to a resource that explains this if there is one. Thank you in advance! 


You can get ICCs with TYPE = TWOLEVEL BASIC and no MODEL command. See the FAQ on ICCs on the website. 


Dear professors, in my thesis with a two-level analysis the ICC comes out to 0.0000000005. Shall I proceed with it, or should I change the paper to a multivariate logistic concept and analysis? Thank you! 


It sounds like you don't need two-level modeling. 


Dear Drs. Muthén, I am trying to use the design-effect-adjusted standard errors approach to account for clustering in my data as an alternative to MLM, as I only have eight clusters (Huang, 2015). I am primarily interested in effects at level 1; my sample size is 108 and my ICC is .3710. I am manually computing the test statistics for each of the coefficients in my model. I was instructed to use the df for the adjusted N, which is N/(1 + [ICC*{average cluster size – 1}]). So my adjusted N = 18.85. Would you please advise me on how to determine the critical value to test for significance of each of the coefficients in my models? For example, how would I calculate degrees of freedom for the model coefficients in a simple path model with two observed outcome variables and several predictors? 
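For readers who want to compute the effective sample size described in this question, the adjustment is simply N divided by the design effect. A sketch with hypothetical helper names (note that the reply recommends a fixed-effects approach instead with so few clusters):

```python
def effective_n(n, icc, avg_cluster_size):
    """Effective sample size: N / design effect = N / (1 + ICC*(m - 1))."""
    return n / (1 + icc * (avg_cluster_size - 1))

# With N = 108, ICC = .371, and 108/8 = 13.5 cases per cluster on average
print(round(effective_n(108, 0.371, 13.5), 1))  # 19.2
```

This gives roughly 19 rather than the 18.85 reported in the question; the small difference may reflect a slightly different average cluster size being used.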


I wouldn't choose that path. With only 8 clusters I would instead take a fixed-effects approach and regress on 7 dummy variables. 


In the Monte Carlo examples for multilevel models in Chapter 9, how are clustering/ICCs taken into account, and what ICC is assumed? I was just curious how one could alter the Monte Carlo model input to reflect different ICCs. Thanks! Susan 


You can simply change the between-level variance of the random intercept/random slope, keeping in mind the general formula ICC = (between variance)/(within + between variance). 
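That variance-ratio formula is trivial to encode; a sketch (illustrative names):

```python
def icc(between_var, within_var):
    """Intraclass correlation: between variance / total variance."""
    return between_var / (between_var + within_var)

# Raising the between-level variance of the random intercept raises the ICC
print(icc(0.25, 0.75))  # 0.25
print(icc(0.50, 0.75))  # 0.4
```

So to target a given ICC in a Monte Carlo setup with the within variance fixed, solve between = ICC * within / (1 - ICC).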


Perfect! Thank you so much! Best, Susan 


Hello, I am analyzing longitudinal achievement data from 22 classrooms. In an unconditional two-level growth model there is significant variation in the "between" intercepts (ib) but not the slopes (iw), and ICCs of around .25. When I add covariates on both levels (e.g., sex, SES), they predict quite well and remove the variance in the between intercept (ib). Is it fine to just do a single-level analysis with these predictors if this is the case (no variance in between intercepts and slopes) and I'm not interested in classroom effects? Thank you very much! 


You could use TYPE = COMPLEX for a single-level analysis with CLUSTER = classroom. 


Thank you. Could you please explain a bit why and when to use TYPE = COMPLEX? As far as I know, it takes the non-independence of observations into account and gives more accurate standard errors. So can I use it in general in growth models if I'm not interested in classroom (higher-level) effects/predictors? Because Linda K. Muthen once said: "A growth model is what is called a disaggregated model. You should use TWOLEVEL not COMPLEX for disaggregated models." 


Yes, you can use COMPLEX for two-level growth if you are not interested in getting the different components of variation. The growth model is aggregatable when you use the standard two-level growth modeling of using the same time scores on within and between. But it's a long story, so see also our handout and video for Topics 7 and 8. 

Ads posted on Monday, October 19, 2015 - 9:46 am



If the ICC relates to a random intercept, is there an equivalent to the design effect for determining whether a random slope is also needed? Also, I have unequal cluster sizes in a dataset I am analyzing, and it was mentioned in this thread that the design effect [1 + (average cluster size - 1)*intraclass correlation] is for the special case of equal-size clusters. Is there an equivalent way to quantify the influence of clustering with unequal cluster sizes (and are there references/rules of thumb for interpreting its magnitude)? Thank you! 


Q1. No, but you can for instance see if BIC improves. Q2. No, but you can check how the SEs change from ignoring clustering to taking it into account. 

Ads posted on Monday, October 19, 2015 - 3:05 pm



Thanks! Following up: if your random slopes model won't converge/run because the random slopes might be negligible (and thus you can't look at changes in parameters/model fit such as BIC and SEs), would there be a way to show reviewers that there was basically no random slope and that this precluded model estimation? The only way I know to address this is to use TECH1 output to identify that the parameter causing problems is the random slope. But I am not sure how you confirm that the reason the random slope is problematic is that it is very small, thus precluding model estimation (i.e., you can see there is a problem with estimating that parameter, but how do you justify what the problem is)? 


TECH8 can help you see why you have non-convergence. Perhaps you have negative ABS changes, or perhaps you have many dimensions of integration, in which case you may need more integration points. Otherwise, send output, data, and license number to Support. 

Ads posted on Tuesday, October 20, 2015 - 3:20 am



Thank you  this is very helpful. 


Dear Mplus team, I would intuitively assume that the ICCs of a three-level (school, class) model could be calculated from two two-level models: ICC (class, 3-level) = ICC (2-level, cluster = class) - ICC (2-level, cluster = school). But this seems not to be correct. What is the reason for this? Thanks, Christoph 


The way Mplus computes 3-level ICCs is shown in the FAQ on our website: ICCs for 3-level. 


Thanks, I know the formula, but what I am wondering is why the school-level variance of a two-level model is not equal to the school-level variance of a three-level model. Christoph 


If a 3level model is appropriate, I would expect a 2level model to give distorted estimates. 


I have two inquiries regarding the use of the intraclass correlation coefficient (ICC) and the design effect to estimate the need for a multilevel analysis. I'm using a sample of 9,937 children distributed across 61 schools, with 13 categorical variables loading on a given factor model. The average cluster size is 131.34, which is the value recommended for calculating the design effect (1 + (average cluster size - 1) * ICC). However, I noticed that, in this sample, cluster size is not normally distributed, and maybe an average value is not a good central tendency estimate. Should I use the median cluster size instead of the mean cluster size? Besides, in order to calculate the ICC for a factor "P", which is loaded by 13 categorical variables, I used the variance (BV) of the factor "P" from the between-level analysis to estimate the ICC for the factor "P", as follows: ICC = BV/(π^2/3 + BV). This must be computed manually, and there is no output that provides the ICC for a factor loaded by categorical variables, right? Thank you very much for your help. 


Correction: ICC = BV/(phi^2/3+ BV) 


That formula for the design effect is only approximate because it assumes equal cluster sizes and estimation of the mean, so I don't know that mean versus median matters much. Your within variance of phi^2/3 is a residual variance for a binary item modeled by logit. I don't think that is the within-level factor variance. 
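For a single binary item under a logit link, the latent-response ICC uses the logistic residual variance π²/3 ≈ 3.29 (the "phi^2/3" above) as the within part. A sketch, assuming that convention:

```python
import math

def icc_logit(between_var):
    """Latent-response ICC for a binary item with a logit link:
    B / (B + pi^2/3), where pi^2/3 ~ 3.29 is the logistic residual variance."""
    return between_var / (between_var + math.pi ** 2 / 3)

print(round(icc_logit(1.0), 3))  # 0.233
```

As the reply notes, this within part is an item residual variance, not a within-level factor variance, so it should not be used directly for a factor's ICC.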


Yes, I used the wrong within variance and did not notice the specification of that formula. However, since the design effect is only an approximation which assumes equal cluster sizes, and my cluster sizes are far from equal, what should I use to decide whether or not to use a clustered factor analysis of categorical variables? Or is there an alternative formula for the design effect based on an assumption of unequal cluster sizes? Thank you very much again. 


I think the design effect is less important to get at than actually doing an analysis with and without taking clustering into account and comparing the resulting SEs. 


Dear Dr. Muthén, continuing this discussion, I ran all analyses with and without taking clustering into account. In fact, there were small differences (in both directions) in the factor loadings (around 0.050) and SEs (around 0.005). The ICCs for our models were all less than 0.025. How shall I proceed? 1) Include the cluster variable to follow the design of the study? 2) Or keep the model simple and perhaps more parsimonious? 


You can use TYPE = COMPLEX, which keeps the same model but allows for clustering in the SEs. 

Miel Ann posted on Tuesday, February 16, 2016 - 2:37 am



I want to simulate a three-factor multilevel CFA model with three indicators per latent variable. In the within model, the variances of the factors (FW1-FW3) are fixed to 1, and the correlations between the factors are set to 0.3. Most of the factor loadings are 0.8, and two cross-loadings (F2 → y3, F3 → y6) are set to 0.4. The between model has the same factor structure as the within model. To set the ICC equal to 0.3, the residual variances of y1 to y9 are set to 0.51 (within model) and 0.219 (between model). In the output, I get ICCs for the Y variables; however, Mplus provides 0.394-0.489. How does Mplus calculate the ICC? How can I develop syntax for this, or is there an example that I can use? Your response is much appreciated. 


The ICC is a variance ratio, ICC = B/(B+W). The total y variance is about 1.15 for within and 0.86 for between, which gives an ICC of approximately 0.4. 

Miel Ann posted on Friday, February 19, 2016 - 12:47 am



Thank you for your reply. Following it, I got an ICC value of .427, but the Mplus results give 0.394-0.489 (attached below), even though all items are set to have the same factor loading and residual variance. I wonder why that difference occurs. Do I have to use the result from Mplus or my own calculation for further analysis?

Estimated Intraclass Correlations for the Y Variables
Y1 0.449  Y2 0.471  Y3 0.472
Y4 0.449  Y5 0.489  Y6 0.488
Y7 0.395  Y8 0.394  Y9 0.420


There might be an error in your calculations. To be able to answer, we would have to see your output and your calculations. Send them to Support along with your license number. 

Miel Ann posted on Friday, February 26, 2016 - 10:02 am



I have one more question. The Mplus ICCs reported in the output are for the first replication only. How do I verify the accuracy of the ICC values I have set? And when calculating the ICC according to the formula, can the factor correlations or intercept values affect the ICC? 


Q1. Run one replication with a very large sample. Q2. No, only variances. 

Miel Ann posted on Sunday, March 27, 2016 - 2:59 am



I would like to calculate the variance of an indicator for the ICC value. My understanding is that the indicator (y1) variance is (factor loading)^2 * factor variance + residual variance. In my study, y1 cross-loads on F1 and F2:

fw1 BY y1@0.6;
fw2 BY y1@0.5;
y1@0.79; fw1@1; fw2@1;
fw1 WITH fw2@0.5;

In this situation, how do I calculate the variance of y1? Is (0.6)^2*(1) + (0.5)^2*(1) + 0.79 = 1.4 right? 


You want to ask this general question on SEMNET. 
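For reference, the general model-implied variance of an indicator is λ'Ψλ + θ; with correlated factors, the cross-loading case picks up a covariance term that the calculation in the question omits. A sketch using the numbers above (assuming fw1 WITH fw2@0.5 is a covariance, which equals the correlation here since both factor variances are 1):

```python
def implied_variance(loadings, factor_cov, residual_var):
    """Model-implied indicator variance: lambda' * Psi * lambda + theta."""
    total = residual_var
    for i, li in enumerate(loadings):
        for j, lj in enumerate(loadings):
            total += li * factor_cov[i][j] * lj
    return total

# y1 loads 0.6 on FW1 and 0.5 on FW2; factor variances 1, covariance 0.5
psi = [[1.0, 0.5], [0.5, 1.0]]
print(round(implied_variance([0.6, 0.5], psi, 0.79), 2))  # 1.7
```

The cross term 2*0.6*0.5*0.5 = 0.30 is what lifts the total from the 1.4 computed in the question to 1.7.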


Hey, we're wondering how to test the significance of the ICC. We're running two-level models in Mplus to test the effects of level 2 predictors on level 1 variables (sample: 660 pupils, 59 classes). We took the ICC(1) from the null model and calculated the ICC(2) according to the formula ICC(2) = (k*ICC(1))/(1 + (k-1)*ICC(1)), where k is the average group size of the level 2 units. Now we want to estimate the significance of ICC(1) and ICC(2). According to Snijders & Bosker (1999), the significance of ICC(1) can be tested by an F-test for a group effect in the ANOVA. They give the formula F = (k*S²between)/S²within, where k is the average group size, S²between the observed between-group variance, and S²within the observed within-group variance. The degrees of freedom are N–1 and M–N, with N the number of level 2 units and M the total number of level 1 units. A significant F value means a significant ICC(1). The values from the null model enable us to calculate the F value and the degrees of freedom manually, but that doesn't give us the critical values to interpret the F value. Is it possible to do an F-test as described above in Mplus to get the values we need? Or is there another way to test the significance of ICC(1)? Do you know how to test the significance of ICC(2)? Thanks! 


Mplus does not do an F-test, although you can express the ICC(1) and ICC(2) formulas in MODEL CONSTRAINT and get a non-symmetric confidence interval for them using bootstrapping; that should be good. 
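The ICC(2) and F formulas from the question are straightforward to compute outside Mplus; a sketch with hypothetical helper names (the critical value or p-value would still come from an F table or a stats library, e.g. scipy.stats.f.sf(F, N-1, M-N)):

```python
def icc2(icc1, k):
    """Group-mean reliability: ICC(2) = k*ICC(1) / (1 + (k - 1)*ICC(1))."""
    return k * icc1 / (1 + (k - 1) * icc1)

def f_group(s2_between, s2_within, k):
    """F statistic for the group effect (Snijders & Bosker, 1999):
    F = k * S2_between / S2_within, with df = (N - 1, M - N)."""
    return k * s2_between / s2_within

k_avg = 660 / 59  # average class size from the question, about 11.2
print(round(icc2(0.10, k_avg), 3))  # ICC(2) for a hypothetical ICC(1) of .10
```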


Thank you so much! 


An ICC(1) question: I have a model with 1 DV, Y; 3 lower-level IVs X1, X2, X3, which are group-mean centered; and clustered data with clustering variable COMP. If I specify a model with just Y, the ICC given in the output correctly equals between var/(within + between var). But if I add X1, X2, X3 to USEVARIABLES and the WITHIN = part of the model, without connecting them to Y, the ICC given in the output does not equal between var/(within + between var); the latter is unchanged from the previous model, but the Mplus-calculated ICC has changed. Any idea why this is? 


Perhaps this FAQ is useful here: Icc changes from one model to another 

Chris Stride posted on Wednesday, October 05, 2016 - 3:03 am



Cheers! I hadn't spotted that, but yes, that explains it perfectly. 

May Lee posted on Sunday, November 13, 2016 - 4:29 am



I recently purchased Mplus and so am quite naive about its use. My question is: my hypotheses are at the individual level of analysis; however, supervisors rated multiple employees' outcomes, causing nesting in my data. Which model in the Mplus User's Guide is suitable for me? Thanks! 


TYPE = TWOLEVEL analysis. See the UG for examples. 

May Lee posted on Sunday, November 13, 2016 - 6:46 pm



Thank you for your reply. What does UG mean? Could you tell me the page in the user's book? Thanks again! 


UG means User's Guide. Look at the index. 

May Lee posted on Monday, November 14, 2016 - 10:59 pm



Dear Professors, I have read the UG, and example 9.1 seems suitable for my model (my hypotheses are at the individual level of analysis; however, supervisors rated multiple employees' outcomes, causing nesting in my data). I tried to write syntax; is it right?

TITLE: individual level moderated model, nested data
DATA: FILE = ex9.1a.dat;
VARIABLE: NAMES = y x w xm clus;
CLUSTER = clus;
WITHIN = x w xw;
DEFINE: CENTER x w (GRANDMEAN); xw = x*w;
ANALYSIS: TYPE = TWOLEVEL;
MODEL: %WITHIN% y ON xw x w;
OUTPUT: SAMPSTAT; CINTERVAL;

And I have another three questions confusing me:
1. TYPE = TWOLEVEL gives a random intercept; why not choose TYPE = TWOLEVEL RANDOM for both a random intercept and a random slope? Which one is more suitable?
2. Why grand-mean center (not group-mean center) the within-group variables?
3. I cannot find the intercept term in the output. Why? Shall I report the intercept in the tables?
Thanks for your time, and sorry for my poor English and limited methodology knowledge. 


Go by UG ex 9.2 instead, which gives you a random slope. Also see our Topic 7 video and handout on our website to study the basics of multilevel modeling in Mplus. 

May Lee posted on Wednesday, November 16, 2016 - 3:26 am



Bengt, thank you for your reply. Another question: how should I deal with control variables in a TYPE = TWOLEVEL RANDOM analysis? Do the control variables need to be given a random intercept and a random slope too? What about dummy control variables? 


No. 

May Lee posted on Wednesday, November 16, 2016 - 8:27 pm



Ok. Thank you, Bengt. 


With a two-level path analysis, everything seemed to be normal except for the ICC. Why does the ICC become zero? 


A primary reason is that there is almost zero between-level variance. 

João Maroco posted on Thursday, April 13, 2017 - 3:34 am



Dear Linda, I am using PISA 2015 data with the *new* 10 plausible values (I am just using PVSCIE, with DATA: FILE IS imp.txt; ! a list of the 10 data files, one for each PV; TYPE IS IMPUTATION;). My question concerns the calculation of the ICC. Reading through the forum, I found the answer to my puzzle about how the ICC given by Mplus differs from the one calculated from the model estimates (Mplus uses the sample statistics, not the model estimates). My question now is: how can they be so different? The ICC given by Mplus is 0.041, and I get this value from the sample statistics: ICC = 354.683/(354.683 + 8163.298) = 0.0416, which is the value Mplus gives as the intraclass correlation for PVSCIE. However, if I use the model estimates, my ICC is about 10 times larger: ICC(model) = 4217.786/(4217.786 + 8150.979) = 0.341. My model has 0 degrees of freedom (it's just an intercept-only model), so I guess I can't really talk about goodness of fit. Thus, could the difference between the ICC calculated from the sample and the ICC calculated from the model be seen as badness of fit of the intercept-only model? Any thoughts on this would be greatly appreciated. Best regards, João Marôco 


See the FAQ on our website: Icc changes from one model to another 

Mirela Bilc posted on Thursday, September 28, 2017 - 7:01 am



Hello! When specifying a model with a binary outcome, I get the ICC (using a model constraint) but no estimates for the within and between levels. Is there any possibility of obtaining these estimates for binary data? Thank you, Mirela 


Please send the full output to Support along with your license number. 


Since I started using Mplus 8, I always get an ICC of 0.000 when I use TWOLEVEL with CLUSTER as schools. The analysis results are good, but the ICC for different variables in different data sets always comes out as 0.000. Why? 


Dear Bengt & Linda, I have a 3-level model: items nested within judges nested within subjects. There are 42 items, each rated by the same 2 raters, for 25 participants. I aim to calculate the variance of the items across the two judges for the same subject. I have set up the following three-level model to find the ICC at level 2:

VARIABLE: NAMES ARE PID RATER ITEM SCORE;
CATEGORICAL = SCORE;
WITHIN = ITEM;
CLUSTER = PID RATER;
ANALYSIS: TYPE = THREELEVEL BASIC;

But I get the warning: *** WARNING Clusters for RATER with the same IDs have been found in different clusters for PID. These clusters are assumed to be different because clusters for RATER are not allowed to appear in more than one cluster for PID. Why can't I cluster the data (at level 2) by RATER? Many thanks, Matthew 


Answer for Dogan: Send your output and data to Support along with your license number. 


Answer to Constantinou: It is just a warning. Mplus goes on, assuming that RATER 1 in PID 1 is not the same as RATER 1 in PID 2, etc. 


Hi Bengt, many thanks for your response. That's the problem: I don't want Mplus to make that assumption, because Rater 1 in PID 1 is the same as Rater 1 in PID 2, and so on! Is there a way to cluster by rater, which is a repeated-measures variable? 


Because you have only two raters (two observations at level 3), we generally do not recommend having a separate "level". Instead we recommend using multiple groups (separate groups for the two raters) or bivariate modeling, so you would have ITEM1 and ITEM2 represent the two raters. We generally recommend at least 10 units for random effect modeling (two would not be enough). It also sounds like the nesting is actually cross-classified rather than 3-level, which is what the Mplus message indicates. Because of that, I think the multiple-group method will not work for you, and instead you should use the bivariate approach. 


Just to be clear, I would recommend a model like this:

NAMES ARE PID ITEM1 ITEM2 SCORE;
CATEGORICAL = SCORE;
WITHIN = ITEM1 ITEM2;
CLUSTER = PID;
ANALYSIS: TYPE = TWOLEVEL BASIC; 


Hi Tihomir, many thanks for your thoughtful comments. Indeed, our data is not nested and has a multiple-membership design (perhaps this is what you were referring to with the multiple groups, although it is not clear to me why you suggest that this won't work). I checked the Utrecht lectures and saw that Mplus handles cross-classified/multiple-membership models using the MODEL syntax, which, if I am not mistaken, rules out the possibility of calculating ICCs. We have instead opted for SPSS's VARCOMP function under generalizability theory. In terms of your suggested syntax, it's very helpful, but our data is in long format, such that 'score' holds the ratings and 'item' lists the item label (e.g., 144). Would you thus suggest placing the ratings for each rater in two separate variables and including a dummy variable coding for rater? 


Yes, you have to reformat the data so that column one contains all the values given by rater 1 and column two has all the values for rater 2. If an item is rated by only one rater, you have to enter a missing value for the other rater. 


Dear Bengt & Linda, I'm struggling with the ICCs using type = twolevel basic. usevar = clid dmot emot gmot; cluster = clid; analysis: type = twolevel basic; The ICC differ (a) when I list the three motvariables simultaneously (see above) or (b) when I run three different analyses listing each motvariable only (e.g., usevar = clid dmot;) Which ICC should I use? Thank you! 


See the FAQ on our website: ICC changes from one model to another. It would be fine to use the ICC from analyzing only one variable at a time. 


Dear Drs. Muthén and Muthén, we are conducting a multilevel exploratory factor analysis of a 34-item categorical instrument using Mplus 8. When running the code we received the following warning message: "One or more individual-level variables have no variation within a cluster for the following clusters". We have investigated this a little online and have read the FAQ on this warning, which indicates that "it would seem important to be aware if a large number of clusters have this warning for a key outcome variable." A quick diagnostic of our data revealed that each of the items has no variation within at least one cluster. At the extreme, one item has no variation within 40 of the 100 clusters. Not surprisingly, the item-level ICCs are also large (the average ICC is .52, with a range of .24 to .82). Do you have any thoughts on whether or not it is appropriate to proceed with an MEFA? Or perhaps resources we could read related to this type of situation? Thank you! 


I am not aware of any literature on this. We just thought it would be a useful piece of information for knowing your data better, particularly when level 1 is time. I think your situation is fine because there are rather few instances of no within-cluster variation. 

EvavdW posted on Tuesday, June 19, 2018 - 3:36 am



Dear professors Muthén, I ran a two-level model to obtain the ICC for several variables. As you can see below, the ICC for the last variable in the list (READ) is extremely high:

GENDER 0.006
CITO1 0.042
CITO2 0.054
CITO3 0.053
CITO4 0.058
CITO5 0.066
MONKEY 0.090
LION 0.075
TTR1 0.122
TTR2 0.093
TTR3 0.113
TTR4 0.076
TTR5 0.070
READ 0.991

I was wondering whether this is okay? Or is it an artefact? Or does it indicate an error in my data? I have already checked for data entry errors and whether missings are defined correctly, and this does not seem to be the problem. With kind regards, Eva 


It looks like the READ variable has very little within-level variation and is better treated as a between-level variable. Confirm this in a TWOLEVEL BASIC run and see if that's the case in the data and why. 
