I am running a multilevel model with random intercepts and random slopes. The dependent variable is math achievement and the independent variable is SES.
The syntax is the following:

----------------------------
VARIABLE:   NAMES = idc math ses;
            CLUSTER = idc;
            USEVARIABLES = math ses;
            WITHIN = ses;
DEFINE:     CENTER ses (GROUPMEAN);
ANALYSIS:   TYPE = twolevel random;
MODEL:      %WITHIN%
            slope | math ON ses;
            %BETWEEN%
            math WITH slope;
----------------------------
If I calculate the correlation between intercepts and slopes from the intercept-slope covariance, the variance of the intercepts, and the variance of the slopes (all taken from the regular Mplus output), my result is r = 0.214.
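The computation from the model output is just the covariance divided by the product of the standard deviations. A minimal sketch, with illustrative numbers (the covariance and variances below are hypothetical, chosen only so the result comes out to 0.214; substitute the estimates from your own Model Results):

```python
import math

# Hypothetical between-level estimates (replace with your own output):
cov_int_slope = 0.107   # MATH WITH SLOPE
var_intercept = 1.00    # residual variance of MATH
var_slope = 0.25        # variance of SLOPE

# r = cov / (sd_intercept * sd_slope)
r = cov_int_slope / math.sqrt(var_intercept * var_slope)
print(round(r, 3))  # 0.214
```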
If I save the estimated intercepts and slopes (using SAVE = FSCORES) and then calculate the correlation between them, my result is r = 0.357.
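For the second number, the saved cluster-level scores are simply correlated with an ordinary Pearson correlation. A minimal sketch of that step, assuming the scores have already been read in from the SAVEDATA file (the values below are made up for illustration, not the poster's data):

```python
import statistics

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

# Hypothetical factor scores, one pair per cluster, as saved by
# SAVEDATA: SAVE = FSCORES;
intercepts = [0.8, -0.3, 1.1, 0.2, -0.9]
slopes     = [0.4, -0.1, 0.6, 0.0, -0.5]
print(round(pearson(intercepts, slopes), 3))
```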
What is the reason for the different values of the correlation?
If I understood your point correctly, your explanation refers to the latent modeling of the intercept and slope in Mplus. To check this, I ran the same model in HLM, also saved the values of the intercepts and slopes, and correlated them. The results are quite similar: the model-implied correlation is r = 0.221, while the correlation of the saved values is r = 0.376.
Therefore, I assume the difference between the two correlations is not due to the difference between true and estimated factor scores. Rather, I assume the reason is connected with multilevel models in general.
I don't see how you reach that conclusion, or what you mean by the difference being connected with multilevel models in general. The difference between model-estimated correlations and correlations computed from estimated factor scores is well known in the literature, as our FAQ points out. That's the whole story.