

Sufficient power for DIFFTEST



I have three CFA models with 32 categorical indicator items: [1] a one-factor model, [2] a second-order model with three lower-order factors and one higher-order factor, and [3] a bifactor model with one general factor (on which all items load) and three specific factors (on which only a specific constellation of items loads). With a sample of 247, I examined whether these models fitted my data using the WLSMV estimator with delta parameterization. Using the DIFFTEST function, I examined whether the bifactor model significantly degraded in fit when it was constrained to a second-order model, and whether the second-order model significantly degraded in fit when constrained to a one-factor model. As the specified models are quite large, I would like to examine whether I have sufficient power to compare these models. Is it possible to examine whether I have sufficient power for these DIFFTEST comparisons, for instance by using a Monte Carlo simulation study? 


It is not automated: each replication would have to be analyzed twice, once with each of the two models. Perhaps you can automate this using our R functions: http://www.statmodel.com/usingmplusviar.shtml 
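The workflow described above can be sketched with Mplus inputs like the following. This is a minimal sketch, not a complete setup: the item-factor assignments, population loading values, threshold specification, and file names are all hypothetical placeholders to be replaced with the values from your own models.

```
! --- Step 1: generate and save 500 replications of n = 247
!     under the bifactor population model ---
MONTECARLO:
  NAMES = u1-u32;
  GENERATE = u1-u32 (1);          ! binary items; adjust for more categories
  CATEGORICAL = u1-u32;
  NOBSERVATIONS = 247;
  NREPS = 500;
  REPSAVE = ALL;
  SAVE = rep*.dat;                ! writes rep1.dat ... rep500.dat
MODEL POPULATION:
  ! hypothetical loadings and item assignments -- substitute your own
  g  BY u1-u32*0.5;
  s1 BY u1-u10*0.4;  s2 BY u11-u21*0.4;  s3 BY u22-u32*0.4;
  g-s3@1;
  g WITH s1-s3@0;  s1 WITH s2-s3@0;  s2 WITH s3@0;
MODEL:
  g  BY u1-u32*0.5;
  s1 BY u1-u10*0.4;  s2 BY u11-u21*0.4;  s3 BY u22-u32*0.4;
  g-s3@1;
  g WITH s1-s3@0;  s1 WITH s2-s3@0;  s2 WITH s3@0;
ANALYSIS:
  ESTIMATOR = WLSMV;  PARAMETERIZATION = DELTA;

! --- Step 2: for each replication, fit the less restrictive (bifactor)
!     model and save the derivatives that DIFFTEST needs ---
DATA:      FILE = rep1.dat;
VARIABLE:  NAMES = u1-u32;  CATEGORICAL = u1-u32;
ANALYSIS:  ESTIMATOR = WLSMV;  PARAMETERIZATION = DELTA;
MODEL:     ! bifactor model as in Step 1
SAVEDATA:  DIFFTEST = deriv1.dat;

! --- Step 3: refit the same replication with the nested
!     second-order model and request the difference test ---
DATA:      FILE = rep1.dat;
VARIABLE:  NAMES = u1-u32;  CATEGORICAL = u1-u32;
ANALYSIS:  ESTIMATOR = WLSMV;  PARAMETERIZATION = DELTA;
           DIFFTEST = deriv1.dat;
MODEL:     ! second-order model
```

Steps 2 and 3 would then be looped over rep1.dat through rep500.dat (for instance with the R functions linked above), collecting the DIFFTEST p-value from each pair of runs; the estimated power is the proportion of replications with p < .05.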


Thank you for this clarification; I will take a look at this option. I also have a related question: I conducted a power analysis on the bifactor model with one additional predictor that correlates with all four factors. The model fit information tells me that, out of the 500 replications, 496 were successful, and the power for the parameter of interest seems sufficient. However, about half of the replications also show an error message, often a warning of a non-positive definite psi matrix. Could my sample size, in combination with a complex model, explain this non-positive definite matrix? 


Perhaps your population values for the factor covariance matrix (psi) make it close to non-positive definite, e.g., through a high correlation or a variance close to zero. Try running the Monte Carlo with a large sample size to learn more. 
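For intuition on why a high correlation pushes psi toward the non-positive definite boundary, consider a two-factor case with a hypothetical population correlation of .95:

```latex
% Psi for two standardized factors correlated at r = .95
\Psi = \begin{pmatrix} 1 & .95 \\ .95 & 1 \end{pmatrix},
\qquad
\det\Psi = 1 - .95^2 = .0975,
\qquad
\lambda_{\min} = 1 - .95 = .05
```

With the smallest eigenvalue this close to zero, sampling error in individual replications (especially at n = 247) can push the estimated matrix past the boundary, triggering the non-positive definite warning even though the population matrix itself is valid.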


