Power analysis for CFA of large scale
Message/Author
 Mike C Parent posted on Friday, March 21, 2008 - 5:49 pm
Hi all (first time posting here!),

A reviewer for a paper I wrote suggested that I conduct a power analysis for a CFA I did, in accordance with the MacCallum, Browne, and Sugawara paper. The trouble is, the scale is pretty large. My understanding of SEM/CFA power analysis (based on my reading of MacCallum et al. and a test using Preacher's online SEM power calculator) is that with really high df I'll hit essentially perfect power even with a tiny sample, and my model has nearly df = 1000. Am I missing something, or would a power analysis not be useful for my CFA?

Thanks!
 Bengt O. Muthen posted on Saturday, March 22, 2008 - 12:33 pm
I assume that when you say your scale is large, you mean that you have many observed factor indicators, and that df = 1000 is the df for your H0 model. If I remember correctly, the paper you refer to considers the overall power to reject the model if it is incorrect - you may ask yourself whether that is what you are interested in. I can imagine that with a highly restricted model (high df) you would have an easy time rejecting the model due to small deviations. Or are you interested in the power to reject that a certain parameter is zero? If I am not understanding you, please send the paper you refer to.
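The high-df behavior described above can be checked directly with the RMSEA-based procedure from the MacCallum, Browne, and Sugawara paper. A minimal sketch in Python, assuming SciPy is available; the df, N, and RMSEA values are illustrative conventions (0.05 "close fit" vs 0.08 "not-close fit"), not numbers taken from the thread:

```python
# Sketch of the MacCallum, Browne & Sugawara (1996) RMSEA-based power
# calculation for the test of close fit, using noncentral chi-square
# distributions. Illustrative values only.
from scipy.stats import ncx2


def rmsea_power(df, n, rmsea0=0.05, rmsea1=0.08, alpha=0.05):
    """Power to reject H0: RMSEA = rmsea0 when the true RMSEA is rmsea1."""
    ncp0 = (n - 1) * df * rmsea0 ** 2  # noncentrality under H0
    ncp1 = (n - 1) * df * rmsea1 ** 2  # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, ncp0)  # critical value under H0
    return 1 - ncx2.cdf(crit, df, ncp1)


# With df near 1000, power saturates even at a small N, while the same
# comparison at df = 50 gives far lower power - which is the behavior
# the original poster describes.
high_df_power = rmsea_power(1000, 100)
low_df_power = rmsea_power(50, 100)
```

Because the noncentrality parameters scale with df, very large models push both distributions far apart even for small n, which is why the calculator reports near-perfect power.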
 Michael Reichenheim posted on Monday, December 05, 2016 - 9:39 am
Hello,

I’m attempting a post-hoc power study on quite a high-dimensional CFA model (9 dimensions, 36 indicators), using information obtained from one of the estimated models (n = 502). In the POPULATION MODEL, I’ve specified all possible parameters (unstandardized factor loadings, factor variances, factor covariances, item residuals, and item thresholds). The MODEL follows these specifications.

The SUMMARY OF ANALYSIS indicates that, of the 10,000 ‘requested number of replications’, 9,992 were ‘completed’.

MODEL FIT INFORMATION and MODEL RESULTS seem ok at first, with average ESTIMATES very close to population ESTIMATES, S.E. averages close to the estimated standard errors, and ‘95% Cover’ / ‘% Sig Coeff’ also ok.

However, looking at TECH9, 2,231 of the 10,000 replications show ‘RESIDUAL COVARIANCE MATRIX (THETA) NOT POSITIVE DEFINITE’, which is somewhat troublesome. I know the POPULATION MODEL is quite stringent, but since I still want all parameters specified as in the estimated model, I thought it might be possible to discard these ‘unwanted’ replications and calculate ‘95% Cover’ / ‘% Sig Coeff’ from the ‘well-behaved’ replications only. Is this possible?

Thanks,
Michael R.
 Bengt O. Muthen posted on Monday, December 05, 2016 - 5:46 pm
If you discard those roughly 2,000 replications, you will get biased results for your parameters because you create a selective sample - those reps converged despite their Theta problem. There is no automatic way to remove them, and I would not recommend it.
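The selectivity point can be seen in a toy simulation (a generic sketch of conditioning on a data-dependent event, not the actual Mplus Theta situation): the estimand is a variance with true value 1, and dropping the 20% of replications with the smallest estimates shifts the Monte Carlo average upward.

```python
# Toy illustration of selection bias in Monte Carlo summaries:
# discarding replications based on a data-dependent criterion biases
# the average of the retained estimates, even though each replication's
# estimator is unbiased on its own.
import numpy as np

rng = np.random.default_rng(2016)

# 10,000 replications, each a sample variance (ddof=1) from n = 20
# standard normal draws; the true variance is 1.
reps = rng.standard_normal((10_000, 20)).var(axis=1, ddof=1)

mean_all = reps.mean()  # close to 1: unbiased across all replications

# Mimic discarding "ill-behaved" replications: drop the bottom 20%.
kept = reps[reps > np.quantile(reps, 0.20)]
mean_kept = kept.mean()  # biased upward relative to mean_all
```

The same logic applies to coverage and significance summaries: once replications are removed for a reason that depends on the generated data, the remaining reps are no longer a random sample from the sampling distribution.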
 Michael Reichenheim posted on Tuesday, December 06, 2016 - 1:21 am
Hi Bengt,

Thanks. Yes, there's selectivity, but can I then trust and safely report my results if I include these replications? I guess so, but regardless, is there a way to look at the outputs of some of these 2,000 runs just to see what's going on?
 Linda K. Muthen posted on Tuesday, December 06, 2016 - 6:19 am
You can save all of the data sets and then analyze some of the ones that failed. This should give you more information than you get from TECH9.
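One way to triage the saved replication data sets is to compute each file's sample covariance matrix and flag near-singular ones by their smallest eigenvalue. The sketch below assumes replication data have been saved to free-format text files (the file names and layout are hypothetical; adapt them to your own Monte Carlo output):

```python
# Hypothetical screening helper for replication data sets saved from a
# Monte Carlo run. A smallest eigenvalue at or below zero means the
# sample covariance matrix is not positive definite; a very small
# positive value means it is nearly singular.
import numpy as np


def smallest_eigenvalue(data):
    """Smallest eigenvalue of the sample covariance matrix of `data`
    (rows = observations, columns = variables)."""
    cov = np.cov(data, rowvar=False)
    return float(np.linalg.eigvalsh(cov)[0])  # eigvalsh sorts ascending


def screen_replications(paths):
    """Map each saved replication file to its smallest eigenvalue."""
    return {p: smallest_eigenvalue(np.loadtxt(p)) for p in paths}


# Toy check on synthetic data: a full-rank sample gives a positive
# smallest eigenvalue, while duplicating a column makes the sample
# covariance matrix singular.
rng = np.random.default_rng(0)
ok = rng.standard_normal((100, 4))
bad = np.column_stack([ok, ok[:, 0]])  # duplicated column -> singular
```

Note that a non-positive-definite Theta in a given replication reflects the estimated model, not just the raw data, so this screen is only a first look; refitting a few flagged data sets individually, as suggested above, tells you more.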