Question on what prompts algorithm=integration to become necessary for model execution when running 2-level models with random slopes.
In short, I am running dyadic models of physiological synchrony. In what the literature calls 'nondirectional concurrent synchrony,' within-dyad standardized physio data are regressed on each partner (M on B and B on M, with M = mom, B = baby). Because the data are standardized within dyad, these estimates should be identical, and they generally are. However, in some models, depending on whether I regress M on B or B on M, one of the two runs requires ALGORITHM=INTEGRATION (MONTECARLO) while the other does not. In that case the estimates are not precisely identical, but essentially the same. I'm wondering why ALGORITHM=INTEGRATION is needed in one direction but not the other. It must be due to which variable serves as the predictor (M or B), but I'd welcome any other insight. The cases missing on all variables are the same across both runs.
More generally, in a two-level random setup with continuous indicators (some with very little variance), what might trigger the requirement for ALGORITHM=INTEGRATION?
If you have statements like

s | B ON M; B; M;

vs.

s | M ON B; B; M;

and the predictor has missing values, you will get that message (requiring montecarlo integration) because the model will contain the product of two latent variables. If the predictor doesn't have missing values, you won't get this message.
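For concreteness, a minimal sketch of one of the two specifications described above (variable names M, B, and dyad are placeholders, and the DATA/VARIABLE commands are assumed rather than taken from the actual input):

```
ANALYSIS:  TYPE = TWOLEVEL RANDOM;
           ALGORITHM = INTEGRATION;   ! needed here because M has missing values
           INTEGRATION = MONTECARLO;
MODEL:     %WITHIN%
           s | B ON M;   ! with M missing, M is treated as latent, so the
                         ! model contains s*M, a product of two latent variables
           %BETWEEN%
           B; M; s;
```

With the direction reversed (s | M ON B;) and B fully observed, the product-of-latent-variables term does not arise and ALGORITHM=INTEGRATION is not required, which would explain the asymmetry between the two runs.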
I would actually recommend that you look through this article
Thanks, Tihomir. I will look closely at the models and the paper, but this was my instinct and inclination.
To be sure, I am running separate bivariate dyadic models, so there is no separate centering step per se, since both data streams (M and B) are in the model.
In one model, I regress B on M.
In another (separate) model, I regress M on B.
One of the models requires ALGORITHM=INTEGRATION; the other does not. When you say montecarlo integration is required because the model contains the product of two latent variables, can you clarify? Is it because the missing data on the predictor are modeled via a latent variable? In other words, does the model with missing data on the predictor contain the product of two latent variables (the random slope and the latent predictor), whereas the other model contains only one latent variable (the random slope)?