I am trying to run a two-level random-intercept and random-slope analysis in Mplus. The model I am trying to test is as follows:
%WITHIN%
s1 | intq ON demo9c;
%BETWEEN%
s1 intq ON tech3;
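For reference, here is a sketch of the full input these statements would sit in (the cluster-ID name "clus" is a placeholder; TYPE = TWOLEVEL RANDOM is needed for the random slope):

```
VARIABLE:  NAMES = clus intq demo9c tech3;
           CLUSTER = clus;      ! placeholder cluster ID variable
           WITHIN = demo9c;     ! level-1 predictor
           BETWEEN = tech3;     ! level-2 predictor
ANALYSIS:  TYPE = TWOLEVEL RANDOM;
MODEL:
  %WITHIN%
  s1 | intq ON demo9c;
  %BETWEEN%
  s1 intq ON tech3;
  s1@0;    ! fixes the residual variance of the random slope at zero
```

The s1@0 statement is the constraint I describe below; dropping it gives the unconstrained model that fails to converge.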
The issue I am facing is that unless I fix the residual variance of the random slope (s1) at zero, the model fails to converge (the intraclass correlation is relatively low). Once I do that, I get a solution in which the residual variance of the random intercept also approximates zero.
My question, then, is the following: If the residual variance of the random intercept approximates zero and I have to fix the residual variance of the slope at zero (i.e., the slope is not random), is there any advantage to testing the model in a multilevel program versus standard OLS regression? In particular, do multilevel programs handle disaggregated level-2 predictors ("tech3" in this case) any differently than OLS regression does? If so, how?
In specifying a two-level CFA, I notice that the theta_between matrix (between-group residuals) is often fixed at zero (e.g., example 9.6 in the Mplus 5 user's guide, p. 241). I'm struggling to understand the theoretical rationale for this.
Are we then assuming that the variance of the random intercepts is the same for each indicator and is given by the variance of F_between?
Residual variances are usually very small on the between level. This is why we fixed them at zero. Most multilevel programs don't estimate these parameters at all. See the following reference, which is available on the website:
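As an illustration, a two-level CFA with the between-level residual variances fixed at zero can be written as follows (the indicator names y1-y4 and factor names fw and fb are placeholders, not the exact setup of the user's guide example):

```
MODEL:
  %WITHIN%
  fw BY y1-y4;      ! within-level factor
  %BETWEEN%
  fb BY y1-y4;      ! between-level factor
  y1-y4@0;          ! between-level residual variances fixed at zero
```

Note that this does not force the between-level (random intercept) variances of the indicators to be equal; each indicator's between-level variance is its squared between-level loading times the variance of fb.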
Muthén, B. & Asparouhov, T. (2011). Beyond multilevel regression modeling: Multilevel analysis in a general latent variable framework. In J. Hox & J.K. Roberts (eds), Handbook of Advanced Multilevel Analysis, pp. 15-40. New York: Taylor and Francis.