We have repeatedly come across a finding that we do not fully understand, and would like to have an expert's opinion about.
The situation is the following: we have a survey data set in which individuals are grouped within higher-level units (e.g. regions, countries, time points). We want to analyze a dependent variable, operationalized by means of multiple indicators, using a multilevel regression approach. This multilevel model is to be estimated outside the Mplus framework, e.g. in Stata or SPSS.
Before conducting the multilevel analysis, however, we perform a multigroup CFA (in Mplus) to test whether the measurements are equivalent across groups (scalar equivalence - equality of intercepts and loadings). Our tests show that the assumptions of equivalence are fulfilled. Therefore, we export the factor scores.
It is our observation, however, that these factor scores have a much larger intraclass correlation (about double) compared to the original sum-score indicators (or a composite score based on these indicators). This has happened again and again with different data sets and examples.
What is the reason for this? Is this a methodological artefact? Does this mean that our approach of exporting factor scores to use in a multilevel analysis is not appropriate?
Intraclass correlations have the within-group variation in the denominator, and this variation includes measurement error, which attenuates the ICC. The factor scores don't contain the measurement error and therefore get a higher ICC.
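The attenuation can be seen in a small simulation. Below is a minimal sketch (not from the thread; all variance values are assumptions chosen for illustration): true scores have ICC = sigma_b^2 / (sigma_b^2 + sigma_w^2), while a composite of k error-laden indicators has ICC = sigma_b^2 / (sigma_b^2 + sigma_w^2 + sigma_e^2/k), which is necessarily smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n, k = 200, 50, 4            # groups, individuals per group, indicators
sb2, sw2, se2 = 1.0, 1.0, 2.0   # between-group, within-group, per-indicator error variance

u = rng.normal(0, np.sqrt(sb2), G)                        # group effects
eta = u[:, None] + rng.normal(0, np.sqrt(sw2), (G, n))    # true individual scores
# each indicator = true score + its own measurement error
items = eta[..., None] + rng.normal(0, np.sqrt(se2), (G, n, k))
composite = items.mean(axis=2)                            # mean of the k indicators

def icc(y):
    """One-way ANOVA estimator of the intraclass correlation."""
    m = y.mean(axis=1)                                    # group means
    msb = n * m.var(ddof=1)                               # between-group mean square
    msw = ((y - m[:, None]) ** 2).sum() / (G * (n - 1))   # within-group mean square
    return (msb - msw) / (msb + (n - 1) * msw)

# True scores: ICC ~ sb2/(sb2+sw2) = 0.50
# Composite:   ICC ~ sb2/(sb2+sw2+se2/k) = 0.40 (error inflates the denominator)
print(f"ICC of true scores: {icc(eta):.3f}")
print(f"ICC of composite:   {icc(composite):.3f}")
```

With noisier indicators (larger se2) or fewer of them (smaller k), the gap widens, which is consistent with factor scores showing roughly double the ICC of sum scores in some data sets.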
Note that you can do 2-level multi-group factor analysis within Mplus, taking into account clustering, weights, and stratification.
Thank you for the prompt reply. Your explanation makes perfect sense.
I am aware of Mplus's multilevel analysis capabilities, of course, and use them extensively. It just so happens that when I need to work with data sets with a large number of variables (like ANES or GSS), it is typically easier to do the analysis in other packages, like Stata.