I have three CFA models with 32 categorical indicator items: a one-factor model, a second-order model with three lower-order factors and one higher-order factor, and a bifactor model with one general factor (on which all items load) and three specific factors (on each of which only a specific constellation of items loads). With a sample of 247, I examined whether these models fitted my data, using the WLSMV estimator with delta parameterization. Using the DIFFTEST function, I examined whether the bifactor model degraded significantly in fit when it was constrained to a second-order model, and whether the second-order model degraded significantly in fit when constrained to a one-factor model. As the specified models are quite large, I would like to examine whether I have sufficient power to compare these models. Is it possible to examine this for the DIFFTEST comparisons, for instance by means of a Monte Carlo simulation study?
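As a rough back-of-the-envelope check (not a substitute for a full Monte Carlo study), the power of an ordinary chi-square difference test can be approximated from the noncentral chi-square distribution in the Satorra-Saris style, given an assumed noncentrality parameter. Note that DIFFTEST with WLSMV uses an adjusted statistic, so this is only a sketch; the `df_diff` and `ncp` values below are hypothetical placeholders, not values from the models described above.

```python
from scipy.stats import chi2, ncx2

def diff_test_power(ncp, df_diff, alpha=0.05):
    """Approximate power of a chi-square difference test.

    ncp     : assumed noncentrality parameter under the alternative
    df_diff : difference in degrees of freedom between nested models
    """
    crit = chi2.ppf(1 - alpha, df_diff)   # critical value under H0
    return ncx2.sf(crit, df_diff, ncp)    # P(reject H0 | alternative)

# hypothetical values: df difference of 2, a few candidate ncp values
for ncp in (2.0, 5.0, 10.0):
    print(ncp, round(diff_test_power(ncp, df_diff=2), 3))
```

Power increases with the noncentrality parameter, so the hard part in practice is obtaining a defensible ncp (e.g. by fitting the restricted model to data generated from the less restricted population model), which is exactly what a Monte Carlo study automates.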
Thank you for this clarification; I will take a look at this option.
I also have a related question:
I conducted a power analysis on the bifactor model with one additional predictor that correlates with all four factors. The model fit information tells me that 496 of the 500 replications were successful, and the power for the parameter of interest seems sufficient. However, about half of the replications also produce an error message, often a warning about a non-positive definite psi matrix. Could my sample size, in combination with a complex model, explain this non-positive definite matrix?
Perhaps your population values for the factor covariance matrix psi make it close to non-positive definite, e.g. through a high factor correlation or a variance close to zero. Try running the Monte Carlo with a large sample size to learn more.
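To illustrate the point about a high correlation, a quick eigenvalue check on a hypothetical population psi matrix (these values are made up for illustration, not taken from the model above) shows how a single large factor correlation pushes the matrix toward singularity, which sampling fluctuation can then tip into non-positive definiteness:

```python
import numpy as np

# hypothetical population factor covariance matrix (psi)
# with one high correlation (0.95) between the first two factors
psi = np.array([
    [1.00, 0.95, 0.30],
    [0.95, 1.00, 0.30],
    [0.30, 0.30, 1.00],
])

eigenvalues = np.linalg.eigvalsh(psi)
print(eigenvalues.min())  # smallest eigenvalue is 0.05, barely positive
```

A smallest eigenvalue this close to zero means that in any given replication, especially at a modest sample size, the estimated psi can easily end up with a negative eigenvalue, triggering the warning.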