Hi, I'm trying to fit a second-order model and a bifactor model on a 14-item scale with 2 underlying (first-order) factors. A unidimensional model and a correlated 2-factor model run fine, but the second-order and bifactor models both produce an error. For instance, the second-order model:
TITLE: Second-order factor analysis TVS
DATA: FILE IS TVS.dat;
VARIABLE: NAMES ARE y1-y16;
  CATEGORICAL ARE y1-y16;
ANALYSIS: ESTIMATOR = WLSMV;
MODEL: f1 BY y1-y10;
  f2 BY y11-y16;
  f3 BY f1-f2;
gives the following error:

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 80.
The bifactor model produces a similar error message.
Do you know what the problem is with these models and how I could run these anyway?
We are testing a bifactor model as an alternative to an empirically supported traditional 3-factor model, due to high correlations between the factors in the 3-factor CFA; the correlations between the 3 factors generally range from .6 to .8.
We hypothesized, based on theory, that there would be one general factor (with 10 indicators) and three specific factors (two factors with 3 indicators and one with 4 indicators); all indicators are continuous, and the factor correlations are set to 0. However, our model with the three specific factors and one general factor would not converge. I have tried different starting values, as well as setting the covariance to 0 for one of the specific factors, since some of the other exploratory analyses I did suggested that that factor had a negative covariance.
Due to the nonconvergence, I tested a model combining two of the specific factors. This collapsing of the factors was based on theory as well as on a .76 correlation between these two factors in the three-factor CFA. Is this an accepted solution (combining the two factors)? Doing this yielded better fit statistics for the bifactor model compared to the three-factor CFA.
For us to diagnose the problem, you would have to send input, output, data, and license number to email@example.com.
The EFA version of bi-factor analysis can be very helpful in these situations, so getting V7 might be worth your while if you do a lot of bi-factor modeling.
JOEL WONG posted on Thursday, August 01, 2013 - 1:40 am
I've been reading the work of Steven Reise on bifactor models, and I have 3 questions about testing bifactor models in Mplus:
1. In a bifactor CFA, why is it important to specify that the specific factors are uncorrelated with each other, i.e., f1 WITH f2@0? Would it be a problem if we know from a regular CFA that the specific factors are in fact strongly correlated with each other?
2. Based on a bifactor model, Reise computes a coefficient omega hierarchical (omegaH), which is how much variance in summed scores can be attributed to a single general factor. Can Mplus compute OmegaH or is information available in the output to compute OmegaH?
3. Reise also computes an explained common variance (ECV) index in bifactor models: common variance explained by the general factor divided by (common variance explained by the general factor + common variance explained by the specific factors). In the Mplus output under "Model Results," there is a section on variances for the general factor and each of the specific factors. Are these the same as the common variance Reise refers to? If so, could I use them to compute the ECV?
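For reference, both indices can be computed by hand from the standardized loadings of an orthogonal bifactor solution (e.g., the STDYX loadings in the Mplus output). A minimal sketch in Python, using made-up loadings for a six-item, one-general/two-specific example — the numbers are purely illustrative, not from any real output:

```python
# Hand computation of omega hierarchical (omegaH) and explained common
# variance (ECV) from standardized bifactor loadings, following the
# formulas in Reise, Moore, & Haviland (2010).
# All loadings below are hypothetical, for illustration only.

# General-factor loadings for six items
g = [0.7, 0.6, 0.5, 0.7, 0.6, 0.5]
# Specific-factor loadings (orthogonal to g and to each other):
# items 1-3 load on s1, items 4-6 on s2
s1 = [0.4, 0.3, 0.2]
s2 = [0.3, 0.4, 0.2]

# Residual variances implied by a standardized orthogonal solution
resid = [1 - lg**2 - ls**2 for lg, ls in zip(g, s1 + s2)]

# omegaH: squared sum of general loadings over total summed-score variance
num_g = sum(g) ** 2
num_s = sum(s1) ** 2 + sum(s2) ** 2
omega_h = num_g / (num_g + num_s + sum(resid))

# ECV: general common variance over total common variance
cv_g = sum(lg**2 for lg in g)
cv_s = sum(ls**2 for ls in s1 + s2)
ecv = cv_g / (cv_g + cv_s)

print(round(omega_h, 3), round(ecv, 3))  # 0.728 0.791
```

The same arithmetic applies to real output: substitute the STDYX general- and specific-factor loadings from your own model, keeping the specific factors orthogonal as the bifactor specification requires.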
Thanks a lot.
Reise, S. P., Moore, T. M., & Haviland, M. G. (2010). Bifactor models and rotations: Exploring the extent to which multidimensional data yield univocal scale scores. Journal of Personality Assessment, 92(6), 544-559.
Try freeing the first factor loading of each factor and fixing the factor variance to one to see if perhaps the first factor loading is not estimated close to one. If that is the problem, you can choose another factor loading that is estimated close to one to set the metric.
If that is not the problem, try running the factors separately.
We are comparing second-order/bifactor models (different scales: 1-3 and 1-5). Bifactor input:

MODEL: emo BY sdq3* sdq8 sdq13 sdq16 sdq24;
  con BY sdq5* sdq12 sdq18 sdq22 revsdq7;
  wb BY swem1* swem2-swem7;
  g BY sdq3* sdq8 sdq13 sdq16 sdq24
    sdq5 sdq12 sdq18 sdq22 revsdq7 swem1-swem7;
  emo WITH con@0; emo WITH wb@0; emo WITH g@0;
  con WITH wb@0; con WITH g@0; wb WITH g@0;
  emo@1; con@1; g@1; wb@1;

With the default standardisation the model doesn't converge, even with increased iterations or different first indicators. The model does converge with the default freed variances when the correlations between the specific factors are released, but this reveals a variance of 0 for g. Does this mean the model doesn't work (and should be rejected), and that the syntax above only produces a result because we force g to have variance? We tried reversing the swem items (these are inversely related) and the model converges. We are aware of floor effects in the sdq items.

1. Why doesn't the default standardisation work?
2. Why does reversing the swem items cause the model to converge? Is it "different-method variance" as in MTMM?
3. Does the above point to problems with forcing a general or higher-order factor?
Many thanks for this. Bifactor EFA suggested the presence of only 2 specific factors (and this fitted with theory). However, a model where one of the subscales loads only onto the general factor and has no specific factor has the same problems as before: it will not converge unless we reverse the swem items or standardise by freeing the first loading/setting factor variances @1. The model that does not converge results in problematic factor loadings and variance for the swem as below, and that is why we think it has something to do with swem items being reversed (opposite) to the sdq ones:
Perhaps by "reverse" you mean that this would imply that their loadings change sign from negative to positive. If so, you can give negative starting values for the loadings - that sometimes helps when the loading estimate is negative.
If this doesn't help, send output and data to Support along with your license number.
Louise Black posted on Wednesday, February 21, 2018 - 2:27 am