Jon Heron posted on Friday, July 24, 2020 - 6:36 am
We are currently fitting some DSEM models with the usual phi, logv, and a linear time trend, using 21 months of weekly data.
Following a warning about extreme AR parameters / possible non-stationarity, we undertook a series of N=1 models, each consisting of a lagged effect and a time effect.
This exercise, a time-consuming one as n=799, led to a number of failed runs. It turned out there were individuals for whom the lagged variable had no variance and/or an extremely high correlation with the original repeated measure. We also observed that either of these occurrences was more common when there was substantial missing data.
We were wondering, firstly, whether there are any rules of thumb when it comes to dropping cases with problematic lagged variance/covariance and, secondly, how useful the exercise of fitting N=1 models is likely to be in debugging N>1 models.
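For readers screening their own data for the two symptoms described above, here is a minimal per-person check in plain Python. It is only a sketch: the function name and the 0.99 correlation cutoff are illustrative choices, not a rule of thumb from the thread, and it assumes each person's series is passed with missing weeks already dropped.

```python
from statistics import mean, pvariance

def flag_lag_problems(y, corr_cutoff=0.99):
    """Flag one person's series if its lagged copy is degenerate.

    y: that person's observed values in time order (missings dropped).
    Returns (no_lag_variance, extreme_lag_correlation), matching the two
    symptoms described above. The 0.99 cutoff is an assumption for
    illustration, not a recommendation.
    """
    lagged, current = y[:-1], y[1:]
    # Too short, or a constant series: the lagged variable carries no information.
    if len(lagged) < 2 or pvariance(lagged) == 0 or pvariance(current) == 0:
        return True, False
    mx, my = mean(lagged), mean(current)
    cov = sum((a - mx) * (b - my) for a, b in zip(lagged, current)) / len(lagged)
    r = cov / (pvariance(lagged) ** 0.5 * pvariance(current) ** 0.5)
    return False, abs(r) > corr_cutoff
```

Running this over all n=799 individuals before fitting would identify the cases likely to produce failed N=1 runs, without having to trawl the output files afterwards.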
21 months of weekly data sounds excellent, but if most of these are missing values it could be a problem. We generally recommend cleaning up the data: removing individuals with only a few observations, or individuals that have no variation in the outcome variable.
Removing logv will also probably reduce the issues, as will reducing the number of subject-specific effects. It is very useful to look at the N=1 cases. Also make sure that the time trend is not more complex than just linear. Possibly try switching to RDSEM. Finally, make sure you are correctly describing the issue: users often confuse "convergence problems" with "not being able to compute standardized coefficients for some iterations" (which has nothing to do with convergence, only with how wide the SEs are).
Jon Heron posted on Monday, July 27, 2020 - 7:09 am
thanks Tihomir, that's really helpful
Out of 799 N=1 DSEM models, I ended up with 16 failed runs. I ran three sets of models: AR(1) with no time adjustment, AR(1) adjusted for linear time, and AR(1) adjusted for quadratic time. Trawling the output files, it is clear that a combination of trimming individuals with little data and/or individuals with a large span of blank data will remove those problems.
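The "large span of blank data" criterion above can be automated. A small sketch (the function name and the idea of thresholding on the longest run are mine, not from the thread; what cutoff to trim at remains a judgment call):

```python
def longest_missing_run(y, missing=None):
    """Length of the longest span of consecutive missing values.

    y: one person's weekly series in time order, with `missing`
    (None here) marking a blank week. A hypothetical helper for the
    trimming described above; the drop cutoff is left to the analyst.
    """
    longest = run = 0
    for v in y:
        run = run + 1 if v is missing else 0  # extend or reset the current gap
        longest = max(longest, run)
    return longest
```

Combined with a simple count of non-missing observations per person, this gives both trimming criteria (little data, long blank spans) in one screening pass.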
As you suggested switching to RDSEM, I ran the same 799 sets-of-three N=1 models. Rather than 16 problem IDs, we now have 571.
It's going to take a little while to decipher what distinguishes these 571 from the remaining 228, but I just wondered whether you were aware of any issues regarding RDSEM with N=1. The average number of measurements in our sample is 63, so missing data can't be the explanation. Grabbing a few outputs at random, this appears to be a common error:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY. THE POSTERIOR COVARIANCE MATRIX FOR THE PARAMETERS IS NOT POSITIVE DEFINITE, AS IT SHOULD BE.
THE PROBLEM OCCURRED IN CHAIN 1.
Any ideas? BTW, I'm running version 8.3 as I've not yet gotten round to asking IT to upgrade.
I am not aware of any issues for single-level RDSEM models, but if you send your example (a convergence problem for one person where DSEM converged) to firstname.lastname@example.org, we can provide more information.
Jon Heron posted on Tuesday, July 28, 2020 - 7:02 am
Thanks Tihomir, I'll see if we can obtain permission to share some of these data.