No, you shouldn't model it as two-level. Just use the parallel process model you have on the within level; the four growth factors will be correlated and will account for the within-family correlations.
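As an illustration, a single-level parallel process specification of this kind might look like the following sketch (the variable names and the five linear time scores are placeholders taken from the later posts, not a definitive setup):

```
MODEL:
  ! child depression growth process
  ik sk | kdep1@0 kdep2@1 kdep3@2 kdep4@3 kdep5@4;
  ! parent depression growth process
  ip sp | pdep1@0 pdep2@1 pdep3@2 pdep4@3 pdep5@4;
  ! the four growth factors covary freely; these
  ! covariances absorb the within-family (dyadic) dependence
  ik sk WITH ip sp;
  ik WITH sk;
  ip WITH sp;
```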
Wen-Hsu Lin posted on Thursday, January 16, 2014 - 4:45 am
Thanks a lot.
Wen-Hsu Lin posted on Friday, January 17, 2014 - 7:22 pm
Sorry, a follow-up question. If I have a cross-dyad variable (family cohesion), do I just need to use ON statements to specify the impact of family cohesion on the growth factors? Like:

VARIABLE: NAMES ARE id kdep1-kdep5 pdep1-pdep5 family;
USEVARIABLES ARE kdep1-kdep5 pdep1-pdep5;
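If that is the intent, the model part might continue along these lines; note that family would also need to appear on the USEVARIABLES list before it can be used as a predictor (the growth-factor names here are hypothetical):

```
USEVARIABLES ARE kdep1-kdep5 pdep1-pdep5 family;
MODEL:
  ik sk | kdep1@0 kdep2@1 kdep3@2 kdep4@3 kdep5@4;
  ip sp | pdep1@0 pdep2@1 pdep3@2 pdep4@3 pdep5@4;
  ! cross-dyad predictor of all four growth factors
  ik sk ip sp ON family;
```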
Hello, I am receiving an error message when I run my dyadic parallel process model. It indicates that I may have negative latent variable variances or residual variances, but I cannot find the problem. I do have a correlation between two slopes that is .69. However, I'm not sure how to fix this. Would you please be able to help me figure this out?
A big correlation in combination with several small ones can give a non-positive-definite covariance matrix. Sometimes correlating the observed-variable residuals between the processes at each time point helps, in that it reduces the factor correlations.
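For instance, the time-specific residuals of the two processes can be paired up with PWITH (variable names assumed from the earlier posts):

```
! correlate the residuals of the two processes at each time point:
! kdep1 with pdep1, kdep2 with pdep2, and so on
kdep1-kdep5 PWITH pdep1-pdep5;
```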
Also, I have another question. Does it matter that my processes are not measured on the same scale? For example, two of my processes are measured on a Likert scale and the third is a sum score. I'm wondering if this is the issue, because the warnings always involve the sum-score outcome variable. Thanks,
Re: dyadic LGMs. I have a mom-baby model in which I correlate the two outcome processes' time trends to test for what is known in our literature as 'trend synchrony'. So, e.g., if an intercept-only model (i.e., the intercept is the only growth factor) characterizes the process for both partners and the correlation is significant, then we have evidence for 'trend synchrony' with respect to the average level of a given process in the dyad (the same would apply for whatever function best fits the process). Now, is it possible to regress or predict that *correlation* with an exogenous (continuous) covariate?
In short, we are not predicting the growth factor for any single partner, but rather the variance around their association in the dyad.
At first blush, I don't think this could be done in the LGM framework, but essentially this is what would sit at level 2 in a conventional MLM framework.
Could I simply use the MODEL CONSTRAINT feature along the lines of Example 5.23 to define the covariance (which is fine) and then regress that on an x variable? Is it indeed necessary to use the correlation metric? Since the LGM will already provide the relevant growth factor parameters, defining the covariance would seem relatively straightforward using MODEL CONSTRAINT, but I may be missing something.
You should construct a model that always gives consistent estimates. The reason we use Var = exp(a + b*x) instead of Var = a + b*x is that a + b*x can become negative depending on the value of x. This applies to correlations as well: you don't want to use a model that yields correlation values outside ±1. If the correlation is known to be positive, the logit function is fine. If the correlation can also be negative, the most popular choice is the hyperbolic tangent function. https://en.wikipedia.org/wiki/Hyperbolic_functions
If you think that the interpretation of your model would substantially improve by using a model that in principle violates the boundary conditions for variance/covariance parameters but works in practice, then use that model. Just a simple example: if the predictor is binary 0/1, the model Var = a + b*x is fine as long as a and a + b are both positive.
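A sketch of the tanh idea in MODEL CONSTRAINT terms, assuming two growth factors s1 and s2, a covariate x made available through the CONSTRAINT option of the VARIABLE command, and writing tanh explicitly with EXP (all names here are hypothetical):

```
VARIABLE:
  CONSTRAINT = x;       ! makes x usable in MODEL CONSTRAINT
MODEL:
  s1 (v1);              ! variance of s1
  s2 (v2);              ! variance of s2
  s1 WITH s2 (c12);     ! covariance to be modeled
MODEL CONSTRAINT:
  NEW(a b);
  ! correlation = tanh(a + b*x), which stays inside (-1, 1),
  ! rescaled to the covariance metric
  c12 = SQRT(v1)*SQRT(v2)*
        (EXP(2*(a + b*x)) - 1)/(EXP(2*(a + b*x)) + 1);
```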
So after some additional wrangling and detailed reading of the UG, I'm not sure this can be done.
In short, what I want to do is regress the parameter labeled (cov) in the above post on an x variable (SBSK). I don't think it's possible with an ON statement for this reason:
It's akin to creating a new variable that was not originally part of the model (i.e., the parameter labeled (cov) in the model is now a new Y variable, not a constraint, that I want to predict using the SBSK variable)
I would use a separate model/run for that regression. If the clusters are fairly large (> 20 observations), using the sample cluster correlation (as a variable) is an option. The Bayesian approach provides much more in that direction; you can actually use a log(variance) as a random-effect variable on the between level.