

Thanks in advance for any pointers. We have some situations in N=1 models where we would like to test whether a single factor or two factors makes more sense. We expected to be able to use DIC as evidence in this determination. However, we are finding that DIC prefers a two-factor model even when the correlation between the two factors is very high (above .9, including .99). We are starting to wonder whether our assumption that DIC would be informative here is wrong. (Plenty of detail is available if any of it would help; we did our best to follow the procedures outlined in the PowerPoint slides for presentations on DSEM.) 


DIC should work. I am attaching a little simulation that illustrates how it works. If this doesn't help, you can send your example to support@statmodel.com.

One-factor generation, one-factor analysis:

MONTECARLO: NAMES ARE y1-y6;
NOBS = 500;
NREP = 10;
ANALYSIS: ESTIMATOR = BAYES;
MODEL MONTECARLO:
ETA1 BY Y1-Y6@1 (&1);
ETA1*1;
ETA1 ON ETA1&1*0.4;
Y1-Y6*1;
MODEL:
ETA1 BY Y1@1 Y2-Y6*1 (&1);
ETA1*1;
ETA1 ON ETA1&1*0.4;

One-factor generation, two-factor analysis:

MONTECARLO: NAMES ARE y1-y6;
NOBS = 500;
NREP = 10;
ANALYSIS: ESTIMATOR = BAYES;
MODEL MONTECARLO:
ETA1 BY Y1-Y6@1 (&1);
ETA1*1;
ETA1 ON ETA1&1*0.4;
Y1-Y6*1;
MODEL:
ETA1 BY Y1@1 Y2-Y3*1 (&1);
ETA1*1;
ETA1 ON ETA1&1*0.4;
ETA2 BY Y4@1 Y5-Y6*1 (&1);
ETA2*1;
ETA2 ON ETA2&1*0.4;

If the hypothesis testing can be summarized as testing that the factor correlation is < 1, I would recommend the simpler z-test for that parameter, as if it were in an ML estimation. 


Thanks for the suggestions, Tihomir. Does your answer change at all given that we are talking about N=1 models here? Notably, we also expected DIC to tell us whether there should be one factor or two, yet we keep finding that DIC prefers two factors even when the correlation between them is very high. That might suggest we are doing something wrong, but I also wondered whether anyone has checked that DIC behaves as expected in the N=1 case on this front. 


A correlation of 0.99 is not enough to replace two factors with one: the autocorrelations of the two factors must be the same as well. 
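To make this concrete, here is a hedged Mplus sketch (the variable names y1-y6 and the three-indicators-per-factor split are assumptions for illustration, not from the thread) of a two-factor DSEM-style model in which the two autoregressions are labeled so the posterior of their difference can be examined directly:

```
MODEL:
ETA1 BY Y1@1 Y2-Y3*1 (&1);
ETA2 BY Y4@1 Y5-Y6*1 (&1);
ETA1 ON ETA1&1 (ar1);    ! autoregression of factor 1
ETA2 ON ETA2&1 (ar2);    ! autoregression of factor 2
ETA1 WITH ETA2;          ! factor covariance
MODEL CONSTRAINT:
NEW(ardiff);
ardiff = ar1 - ar2;      ! credible interval for the difference in autocorrelations
```

If the credible interval for ardiff covers zero and the factor correlation is near 1, collapsing to one factor becomes more defensible.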


Thanks, Tihomir; we were thinking in terms of standard SEM/CFA and hadn't considered the autocorrelations. It looks like that may be what's at issue. A related question from the same analyses: some of our models, particularly some "dumb" baseline models that don't look like they should fit well, are returning negative pDs (often with smaller DICs than the "good" models!). We have found some indication in the literature that negative pDs are sometimes interpreted as a sign of a problem with the model. We are inclined to agree, both because a negative effective number of parameters makes little sense and because we tend to get this for models that we consider on the silly side. I'd appreciate any guidance on this issue (including pointers to good discussions or simulations on this point). Thanks! 


My experience is that negative pDs go away with many iterations, i.e., they mostly indicate an inadequate number of iterations (which may indirectly be a sign of a poor model or poor model estimation, for example a variance fixed to 0 or a poorly identified model). You can try fbiter=50000; as a first step. If the problem persists, send it to support@statmodel.com. 
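As a concrete illustration of this first step, the suggestion amounts to an ANALYSIS fragment along these lines (a sketch to combine with an existing input; the thinning line is an optional extra, not part of the advice above):

```
ANALYSIS:
ESTIMATOR = BAYES;
FBITER = 50000;    ! fix the number of MCMC iterations
! THIN = 10;       ! optional: thin the chain if autocorrelation is high
```

If pD is still negative at this setting, increasing FBITER further (or revisiting the model specification) is the natural next step.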


Thanks! The negative pD in at least one case persists beyond fbiter = 1000000 with a thin of 100, so apparently if the model gets silly enough, even lots of iterations won't make it go away. I'll do some work with that model and will send it on if problems persist. Thanks! 


Dear Drs. Muthen, I am running a two-level DSEM model and am looking at the DIC to determine which model has the best fit. I tried linear, quadratic, cubic, and quartic models, and the DIC kept decreasing. However, the beta coefficients for the quadratic model were not significant. Does this mean that I need to stop at the linear model? Thanks, Mary Mitchell 


I would recommend several additional steps before you settle this.

1. Run a two-level analysis with the trend and see if this makes it easier to answer the above question.

2. Run the RDSEM model instead of the DSEM model, since that disentangles the trend from the dynamics. You might find Section 14 of http://www.statmodel.com/download/RDSEM.pdf useful. The RDSEM model would also be comparable to the two-level model.

3. Remove non-significant dynamic paths, as some of these could compromise the power to detect significance in the trend.

4. If these steps do not help, you should consider these two issues:

a. Polynomial trends may in fact be useful in accommodating non-polynomial trends well, so while the individual coefficients in the polynomial trend lack significance, the overall DIC criterion may indeed be making a valid point. Here I would recommend looking at the Mplus time-series plots to see if you can visually justify a nonlinear trend.

b. Significance of individual coefficients is generally more reliable in DSEM than DIC comparison. This is particularly the case when you have a large pD, due to missing data for example. The DIC would then generally be difficult to estimate well and will have some variability. You can study the variability of DIC by changing the random seed of the MCMC using the bseed option. If the DIC difference between two models is so large that it overcomes that DIC variability, then the lower-DIC model should be preferred. If the DIC differences are small compared to the variability, then you should ignore the DIC. 
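As a sketch of the bseed-variation check described above, the same model would be re-estimated with only the seed changed across runs (the seed values here are arbitrary placeholders):

```
ANALYSIS:
ESTIMATOR = BAYES;
BSEED = 12345;    ! re-run with different values, e.g. 12345, 54321, 99999
```

Comparing the DIC values across these runs gives a rough sense of the run-to-run spread; only DIC differences between models that clearly exceed that spread are worth interpreting.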
