Message/Author 

WenHsu Lin posted on Tuesday, January 14, 2014  11:31 pm



Hi, my data include 5 waves of depression scores for a kid and one parent. I would like to model the growth of depression in the dyadic data. Should I use the following syntax:

variable: names are id kdep1-kdep5 pdep1-pdep5;
within = kdep1-kdep5 pdep1-pdep5;
cluster = id;
analysis: type = twolevel random;
model:
%within%
ki kp | kdep1@0 kdep2@1 kdep3@2 kdep4@3 kdep5@4;
pi pp | pdep1@0 pdep2@1 pdep3@2 pdep4@3 pdep5@4;
%between%
ki kp pi pp;

or regular parallel growth curve modeling syntax? 


No, you shouldn't model it as two-level. Just use the parallel process model you have on the within level; the four growth factors will be correlated and account for the within-family correlations. 

WenHsu Lin posted on Thursday, January 16, 2014  4:45 am



Thanks a lot. 

WenHsu Lin posted on Friday, January 17, 2014  7:22 pm



Sorry, a follow-up question. If I have a cross-dyadic variable (family cohesion), I will just need to use ON statements to specify the impact of family cohesion on the growth factors. Like:

variable: names are id kdep1-kdep5 pdep1-pdep5 family;
usevariables are kdep1-kdep5 pdep1-pdep5 family;
model:
ki kp | kdep1@0 kdep2@1 kdep3@2 kdep4@3 kdep5@4;
pi pp | pdep1@0 pdep2@1 pdep3@2 pdep4@3 pdep5@4;
ki kp pi pp on family; 


Right. 


Hello, I am receiving an error message when I run my dyadic parallel process model. It indicates that I may have negative latent variable variances or residual variances, but I cannot find the problem. I do have a correlation between two slopes that is .69. However, I'm not sure how to fix this. Would you please be able to help me figure this out? Thanks so much, Danyel 


A big correlation in combination with several small ones can give a non-positive-definite covariance matrix. Sometimes correlating the observed-variable residuals between the processes at each time point helps, in that it reduces the factor correlations. 
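To illustrate the point above, here is a small sketch in plain Python with hypothetical numbers: one large correlation (.9) paired with small ones (0) gives a 3x3 correlation matrix with a negative determinant, so it is non-positive-definite and no real data could produce it.

```python
# Hypothetical correlations: factor 1 correlates .9 with factors 2 and 3,
# but factors 2 and 3 correlate 0 with each other.
r12, r13, r23 = 0.9, 0.9, 0.0

# Determinant of [[1, r12, r13], [r12, 1, r23], [r13, r23, 1]],
# expanded along the first row.
det = 1 * (1 - r23**2) - r12 * (r12 - r13 * r23) + r13 * (r12 * r23 - r13)

# A valid correlation matrix must have a non-negative determinant;
# here it is negative, signaling the non-positive-definite problem.
print(round(det, 2))  # -0.62
```

Shrinking the large correlation (e.g. by modeling residual covariances, as suggested above) moves the determinant back toward positive territory.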


OK, thanks. I have a dyadic model with husbands' and wives' outcome variables at 4 time points with three constructs. Is this what you mean for each time point?

d1accsen with m1accsen d1deptts m1deptts d1relaq1 m1relaq1;
m1accsen with d1deptts m1deptts d1relaq1 m1relaq1;
d1deptts with m1deptts d1relaq1 m1relaq1;
m1deptts with d1relaq1 m1relaq1;
d1relaq1 with m1relaq1; 


Also, I have another question. Does it matter that my processes are not measured on the same scale? For example, two of my processes are measured on a Likert scale and the other is a sum scale. I'm wondering if this is the issue, because the warnings always involve the sum-scale outcome variable. Thanks, Danyel 


On the question in your first message: yes, but you can write your WITH statements more succinctly as:

d1accsen-m1relaq1 WITH d1accsen-m1relaq1;

On the question in your second message: no. 


Dr. Muthen, Thanks for replying. I appreciate your help! Danyel 


Re: dyadic LGMs. I have a mom-baby model in which I correlate the two outcome processes' time trends to test for what is known in our literature as 'trend synchrony'. So, e.g., if an intercept-only model (i.e., the growth factors) characterizes the process for both partners and the correlation is significant, then we have evidence for 'trend synchrony' with respect to the average level of a given process in the dyad (the same would apply for whatever function best fits the process). Now, is it possible to regress or predict that *correlation* with an exogenous (continuous) variable? In short, we are not predicting the growth factor for any single partner, but rather the variance around their association in the dyad. At first blush, I don't think this could be done in the LGM framework, but essentially this is what would be at L2 in a conventional MLM framework. 


In the ML framework you can use the Constraint= feature along the lines of User's Guide example 5.23, but use a different model for the correlation, something like the logit: Exp(a+bx) / (1 + Exp(a+bx)). In the Bayes framework you can use something like Table 1 (available for the two-level setup) in http://www.statmodel.com/download/HamakerAsparouhovBroseSchmiedek&MuthenMBR.pdf 
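The logit model for the correlation mentioned above can be sketched in plain Python (the parameter values a = 0.5 and b = 0.3 are hypothetical): whatever the predictor x is, the modeled correlation stays strictly between 0 and 1.

```python
import math

def logit_corr(a, b, x):
    """Model a correlation as Exp(a+bx) / (1 + Exp(a+bx)),
    which is bounded in (0, 1) for any value of the predictor x."""
    z = a + b * x
    return math.exp(z) / (1 + math.exp(z))

# Hypothetical parameters: even extreme x values keep the correlation bounded.
for x in (-10.0, 0.0, 10.0):
    r = logit_corr(a=0.5, b=0.3, x=x)
    assert 0 < r < 1
    print(round(r, 3))  # 0.076, 0.622, 0.971
```

A linear model a + bx would have no such guarantee, which is the motivation for the link function.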


Thanks Tihomir. Could I simply use the Constraint feature along the lines of 5.23 to define the covariance (which is fine) and then regress that on an x variable? Is it indeed necessary to use the correlation metric? As the LGM will already provide the relevant growth factor parameters, defining the covariance would seem relatively straightforward using Constraint, but I may be missing something. 


You should construct a model that always gives consistent estimates. The reason we use Var = Exp(a+bx) instead of Var = a+bx is that a+bx can become negative depending on the value of x. This applies to correlations as well: you don't want to be using a model that yields correlation values above +1. If the correlation is known to be positive, the logit function is fine. If the correlation can also be negative, the most popular choice is the hyperbolic tangent function: https://en.wikipedia.org/wiki/Hyperbolic_functions If you think that the interpretation of your model would substantially improve by using a model that in principle violates the boundary conditions for variance/covariance parameters but works in practice, then use that model. Just a simple example: if the predictor is binary 0/1, the model Var = a+bx is fine as long as a and a+b are both positive. 


Following up on my question above: if my model already provides the covariance estimate, of course, could I do something like this, assuming I define SBSK as a variable using CONSTRAINT?

M_int WITH B_Int (cov);
SBSK ON COR;
!MODEL CONSTRAINT:
!NEW(COR);
!COR = cov/SQRT(cov); 


So after some additional wrangling and a detailed reading of the UG, I'm not sure this can be done. In short, what I want to do is regress an x variable (SBSK) on the parameter labeled (cov) in the above post. I don't think it's possible with an ON statement, for this reason: it's akin to creating a new variable that was not originally part of the model (i.e., the parameter labeled (cov) in the model is now a new y variable, not a constraint, that I want to predict using the SBSK variable). Any other thoughts? 


I would use a separate model/run for that regression. If the clusters are fairly long (>20 observations), using the sample cluster correlation (as a variable) is an option. The Bayesian approach provides much more in that direction: you can actually use Log(Variance) as a random effect variable on the between level. 
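The sample-cluster-correlation option mentioned above amounts to computing an ordinary Pearson correlation within each dyad and then treating it as an observed variable in a second run. A minimal sketch in plain Python (the `mom`/`baby` series and their values are hypothetical):

```python
import math
from statistics import mean

def sample_corr(xs, ys):
    """Pearson correlation of two within-cluster series of equal length."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical dyad: mom and baby measured at the same occasions.
# (With real data you would want the >20 observations noted above.)
mom  = [1.0, 2.0, 3.0, 4.0, 5.0]
baby = [1.2, 1.9, 3.1, 4.2, 4.8]

r = sample_corr(mom, baby)
assert -1 <= r <= 1
print(round(r, 3))
```

One such `r` per cluster would then become the outcome (or predictor) in the separate regression run.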
