

Hello, I have a question about trends in data (e.g., linear, quadratic) when using RDSEM. Does the covariance structure for autocorrelations with residuals account for trends, or is there a need to remove linear/quadratic/other trends prior to analysis? Thanks in advance! 


I think you want to include a trend in the modeling, so if you have Y ON X (and an AR structure on the Y^ residual), you could add Y ON TIME in some form. See the Topic 12 and 13 videos and handouts from our 2017 Hopkins short course on our website. 
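As a rough illustration only (variable names y, x, and time are placeholders, not taken from the thread), a two-level RDSEM setup with a linear trend plus a residual AR(1) might look like:

```
VARIABLE:  NAMES = id time y x;
           CLUSTER = id;
           WITHIN = x time;
           LAGGED = y(1);
ANALYSIS:  TYPE = TWOLEVEL;
           ESTIMATOR = BAYES;
MODEL:     %WITHIN%
           y ON x time;     ! linear trend: time as a covariate
           y^ ON y^1;       ! AR(1) on the residual (RDSEM)
           %BETWEEN%
           y;
```

A quadratic trend could be added the same way by computing time-squared in DEFINE and regressing y on it as well.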

MingChi posted on Saturday, June 29, 2019 - 1:57 am



Could you please tell me where I can find the simulation syntax for this paper? I need to learn from it. Tihomir Asparouhov, Ellen L. Hamaker & Bengt Muthén (2017) Dynamic Latent Class Analysis, Structural Equation Modeling: A Multidisciplinary Journal, 24:2, 257-269, DOI: 10.1080/10705511.2016.1253479 


This is not yet available in Mplus. 

Jon Heron posted on Monday, September 16, 2019 - 8:49 am



Until I watched Bengt's cross-classified analysis presentation on Friday I had been thinking that I needed to detrend my own data to enable me to make the necessary stationarity assumption. I have 203 participants, each with 52 weeks of depression scores. My first step was to use MplusAutomation to run a series of AR(k) models (k = 0, 1, ..., 8) on each participant in turn, recording the p-value for the kth lagged effect as k was increased, to determine how many lags would be needed to describe the autocorrelation. Eyeballing the raw data for anyone who cannot be described by AR(2) shows a range of weird and wonderful curvilinear patterns. If some people display no systematic change whilst others are linear and others show something closer to cubic, do you think cross-classified is still doable? Many thanks, Jon 


Yes, it is doable. Take a look at User's Guide example 9.39. Because the random coefficients for time, time^2, and time^3 are subject-specific, you can model linear, quadratic, cubic, or no trends within the same model. If a subject has no trend, that subject's random coefficients for the powers of time should be near zero. 
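A hedged sketch of the idea (not the actual example 9.39 input; t, t2, and t3 are precomputed powers of time supplied as within-level covariates, and all names are placeholders):

```
VARIABLE:  NAMES = id time y t t2 t3;
           CLUSTER = id time;      ! cross-classified: subject x time
           WITHIN = t t2 t3;
           LAGGED = y(1);
ANALYSIS:  TYPE = CROSSCLASSIFIED RANDOM;
           ESTIMATOR = BAYES;
MODEL:     %WITHIN%
           s1 | y ON t;            ! subject-specific linear trend
           s2 | y ON t2;           ! subject-specific quadratic trend
           s3 | y ON t3;           ! subject-specific cubic trend
           phi | y ON y&1;         ! subject-specific AR(1)
           %BETWEEN id%
           y s1 s2 s3 phi;         ! random effects vary over subjects
           %BETWEEN time%
           y;
```

For a subject with no trend, the posterior estimates of that subject's s1-s3 should sit near zero.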

Jon Heron posted on Tuesday, September 17, 2019 - 8:49 am



Awesome, thanks Tihomir :) 

Jon Heron posted on Thursday, September 19, 2019 - 9:28 am



So, in Bengt's video example on smoking urge and adding a time trend, he jumps back and forth between a cross-classified model and a two-level model and shows how a linear trend can be added to both. Am I right in thinking that the (somewhat simpler) two-level approach only works if there is within-wave variation in TIME_t, such that it can be added as another time-varying covariate? In my own data I merely have 52 waves, so perhaps that's why I'm not getting anywhere when trying to replicate the two-level model. Cheers, Jon 


1. I think I was too quick in my first reply. You don't need to be using cross-classified; that is needed only if you need a random effect that is time-specific, and from your description you are not talking about that at all. You should be using TYPE=TWOLEVEL for this kind of modeling.

2. The two-level model is an RDSEM model and the cross-classified model is a DSEM model, so they won't be super easy to compare. You would have to go through equations (65-66) and (70-72) in http://www.statmodel.com/download/DSEM.pdf. The models would be easy to compare if you were using ^ (RDSEM) instead of & (DSEM), but ^ is currently not available for cross-classified (only &). This shouldn't concern you, however, if you move back to two-level models, where you can use ^ instead of &.

3. You can think of time as a time-varying covariate; there is no requirement on the time variable, and the syntax is the same. You can even run the RDSEM model with the autoregressive parameters fixed at 0, and that should give you exactly the two-level model. 
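Point 3 can be sketched as follows (placeholder names; fixing the residual autoregression at zero with @0 should reproduce the ordinary two-level model):

```
VARIABLE:  NAMES = id time y;
           CLUSTER = id;
           WITHIN = time;
           LAGGED = y(1);
ANALYSIS:  TYPE = TWOLEVEL;
           ESTIMATOR = BAYES;
MODEL:     %WITHIN%
           y ON time;       ! time as a time-varying covariate
           y^ ON y^1@0;     ! residual AR(1) fixed at 0 -> plain two-level
           %BETWEEN%
           y;
```

Freeing the @0 constraint then gives the RDSEM model for direct comparison.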

Jon Heron posted on Thursday, September 19, 2019 - 11:51 pm



brilliant, thanks Tihomir 

Jon Heron posted on Friday, September 20, 2019 - 3:02 am



No question, just an update: I estimated a "mean(y)/Phi/logv" model - I guess you'd call it a two-level model for a repeated continuous variate with a random intercept, random AR(1) autocorrelation, and random residual variance. Very catchy. This model reported problems with the AR coefficients for ~25% of the cases, which I put down to some participants exhibiting time-trends in their data. I fitted a range of polynomial regressions to each participant in turn and concluded that the most complex trend could be captured by a cubic, hence I detrended *everyone* using a cubic. Working with the detrended data (residuals plus observed means) and fitting the same mean(y)/Phi/logv model leads to no reported problems with the AR coefficients, a mean Phi which has halved, and a mean logv reduced by 75%. This feels like a good result, as data artifacts were inflating those quantities.

I then thought: I wonder if I can achieve the same outcome by working with the raw data and adding linear/quadratic/cubic terms as three additional random effects. This model shows substantial variation in these random effects but modest means (consistent with the population mean behaviour being pretty well behaved - something also borne out when I fit the simple cross-classified model Bengt describes as "quick and dirty, but not so dirty"). Thanks for all the pointers! 
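A guess at what that final raw-data model might look like in Mplus (my reconstruction, not Jon's actual syntax; t, t2, and t3 are precomputed powers of week, and all names are placeholders):

```
VARIABLE:  NAMES = id week y t t2 t3;
           CLUSTER = id;
           WITHIN = t t2 t3;
           LAGGED = y(1);
ANALYSIS:  TYPE = TWOLEVEL RANDOM;
           ESTIMATOR = BAYES;
MODEL:     %WITHIN%
           phi  | y ON y&1;   ! random AR(1) autocorrelation
           logv | y;          ! random (log) residual variance
           s1 | y ON t;       ! random linear trend
           s2 | y ON t2;      ! random quadratic trend
           s3 | y ON t3;      ! random cubic trend
           %BETWEEN%
           y phi logv s1 s2 s3;
```

The between-level means of s1-s3 correspond to the modest population-average trends Jon describes, while their variances pick up the subject-to-subject differences in trend shape.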


Great. Write it up and send for posting. 
