Thanks in advance for any pointers. We have some situations in N=1 models in which we would like to test whether a single factor or two factors make more sense. We were expecting to be able to use DIC as evidence in this determination. However, we are finding that DIC prefers a two-factor model even when the correlation between the two factors is very high (above .9, including .99). We are starting to wonder whether there is something wrong with our assumption that DIC would be informative here. (Plenty of detail is available if any of it would help; we did our best to follow the procedures outlined in the PowerPoint slides for presentations on DSEM.)
Thanks for the suggestions, Tihomir. Does your answer change at all given that we are talking about N=1 models here?
Notably, we also expected DIC to tell us whether there should be one factor or two, yet we keep finding that DIC suggests two factors, even when the correlation between the two factors is very high. That might suggest we are doing something wrong, but I also wondered whether anyone has checked that DIC behaves as expected on this front in the N = 1 case.
Thanks, Tihomir, we were thinking in terms of standard SEM/CFA and hadn't considered the autocorrelations. It looks like that may be what's at issue.
A related question from the same analyses. Some of our models, particularly some "dumb" baseline models that don't look like they should be very good, are returning negative pDs (often with smaller DICs than the "good" models!). We have found some indication in the literature that some are inclined to interpret a negative pD as suggesting a problem with the model. We are inclined to agree, both because a negative effective number of parameters makes no sense and because we tend to get this for models that we consider on the silly side. I'd appreciate any guidance on this issue (including any pointers to good discussions or simulations on this point).
My experience is that negative pDs go away with many iterations; i.e., they mostly indicate inadequate iterations (which may indirectly be a sign of a poor model or poor model estimation, for example, a variance fixed to 0 or a poorly identified model). You can try fbiter=50000; as a first step. If the problem persists, send it to firstname.lastname@example.org
Thanks--the negative pD in at least one case persists beyond fbiter = 1000000 with a thin of 100, so apparently if the model gets silly enough even lots of iterations won't make it go away! I'll do some work with that model and will send it on if problems persist. Thanks!
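For anyone else puzzling over this thread: a negative pD is easy to reproduce outside Mplus, because pD is computed as the posterior mean deviance minus the deviance at the posterior mean, pD = D̄ − D(θ̄), and D(θ̄) can exceed D̄ whenever the posterior mean is a poor summary (e.g., a bimodal posterior whose mean falls between the modes). Here is a minimal NumPy sketch of that mechanism; the toy likelihood and the stand-in "posterior draws" are illustrative assumptions, not the Mplus model or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy likelihood: y ~ Normal(theta**2, 1). With data centered near 4,
# the posterior for theta is bimodal, with modes near +2 and -2.
y = rng.normal(4.0, 1.0, size=50)

def deviance(theta, y):
    # D(theta) = -2 * log-likelihood under the toy model
    mu = theta ** 2
    loglik = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2)
    return -2.0 * loglik

# Stand-in "posterior draws": an equal mixture around +2 and -2,
# mimicking what an MCMC chain visiting both modes would return.
draws = np.concatenate([rng.normal(2.0, 0.05, 2000),
                        rng.normal(-2.0, 0.05, 2000)])

d_bar = np.mean([deviance(t, y) for t in draws])  # posterior mean deviance
theta_bar = draws.mean()                          # approximately 0
p_d = d_bar - deviance(theta_bar, y)              # pD = Dbar - D(theta_bar)
dic = d_bar + p_d

print(f"pD = {p_d:.1f}")  # negative: theta_bar sits between the two modes
```

In a real run, poor mixing or a poorly identified model can produce the same effect, which is consistent with the advice above that a persistent negative pD points to a problem with the model rather than a quantity to interpret.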
I am running a two-level DSEM model and am looking at the DIC to determine which model fits best. I tried linear, quadratic, cubic, and quartic trend models, and the DIC kept decreasing. However, the beta coefficients for the quadratic model were not significant. Does this mean that I need to stop at the linear model?
I would recommend several additional steps before you settle this.
1. Run a two-level analysis with the trend and see if this makes it easier to answer the above question.
2. Run the RDSEM model instead of the DSEM model, since that disentangles the trend from the dynamics. You might find Section 14 of http://www.statmodel.com/download/RDSEM.pdf useful. The RDSEM model would also be comparable to the two-level model.
3. Remove non-significant dynamic paths, as some of these could compromise the power to detect significance in the trend.
4. If these steps do not help you should consider these two issues:
a. Polynomial trends may in fact be useful for accommodating non-polynomial trends: even when individual coefficients in the polynomial trend lack significance, the overall DIC criterion may be making a valid point. Here I would recommend looking at the Mplus time-series plots to see whether you can visually justify a non-linear trend.
b. Significance of individual coefficients is generally more reliable in DSEM than DIC comparisons. This is particularly the case when pD is large, for example due to missing data; the DIC is then difficult to estimate well and will have some variability. You can study the variability of the DIC by changing the random seed of the MCMC with the bseed option. If the DIC difference between two models is large enough to overcome that variability, the lower-DIC model should be preferred. If the DIC differences are small compared to the variability, you should ignore the DIC.
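To make point b concrete, here is a small sketch of the kind of check I mean. It does not run Mplus; it just mimics the Monte Carlo noise in a DIC estimate by recomputing DIC from different finite sets of posterior draws, the way rerunning a chain with a different bseed would. The conjugate normal toy model is an assumption for illustration only:

```python
import numpy as np

# Toy model: y ~ Normal(theta, 1) with a flat prior, so the posterior
# for theta is Normal(ybar, 1/n). DIC is estimated from MCMC draws, so
# re-estimating it under different seeds exposes its Monte Carlo noise.
y = np.random.default_rng(123).normal(0.0, 1.0, size=30)
n, ybar = len(y), y.mean()

def deviance(theta, y):
    # D(theta) = -2 * log-likelihood of the toy normal model
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2)

def dic_estimate(seed, n_draws=500):
    # Analogous to rerunning the chain with a different bseed in Mplus.
    draws = np.random.default_rng(seed).normal(ybar, 1 / np.sqrt(n), n_draws)
    d_bar = np.mean([deviance(t, y) for t in draws])
    return 2 * d_bar - deviance(draws.mean(), y)  # DIC = Dbar + pD

dics = np.array([dic_estimate(s) for s in range(20)])
spread = dics.max() - dics.min()
print(f"DIC estimates range over {spread:.2f} across seeds")
# Rule of thumb from the advice above: only trust a DIC difference
# between two models if it clearly exceeds this seed-to-seed spread.
```

The same comparison can be done with real Mplus output by rerunning each model several times with different bseed values and comparing the between-model DIC difference to the within-model spread.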