When a specific factor has only 2 indicators you cannot identify the loading for the second of those indicators. Think of the specific factor as absorbing a residual correlation between those 2 indicators: there is only 1 such correlation, and therefore you can identify only 1 parameter, in this case the specific factor variance.
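As a sketch in Mplus syntax (hypothetical names y1, y2, s): one common resolution with two indicators is to fix both specific-factor loadings at 1 so that the specific factor variance is the single free parameter.

```
! hypothetical specific factor with only two indicators:
! both loadings fixed at 1, leaving the factor variance
! as the one identified parameter
s BY y1@1 y2@1;
s;          ! specific factor variance (free)
```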
Li Lin posted on Tuesday, March 02, 2010 - 12:35 pm
Hi, can a bifactor model be used in EFA? If so, could you provide an example with Mplus code? Thanks!
The bi-factor model is a CFA model, not an EFA model - it imposes more than m^2 restrictions. There is a general factor and uncorrelated specific factors. Or am I misunderstanding your question?
Li Lin posted on Tuesday, March 02, 2010 - 1:19 pm
Thanks, Dr. Muthen. You understood my question correctly. I saw that a bifactor model was included in the "Results of different exploratory factor models" table in the paper "The role of the bifactor model in resolving dimensionality issues in health outcomes measures" (http://www.springerlink.com/content/jh561175967n4503/), which is why I wondered.
Kätlin Peets posted on Wednesday, February 23, 2011 - 8:25 am
We are interested in testing the degree of subject-specificity versus subject-generalizability of motivational constructs. We are comparing second-order factor models with bifactor models. We are not sure about the interpretation though. What can we say about subject-specificity of a construct if the second-order factor model does not worsen the fit compared to the bifactor solution?
We also have another question. If we find that a bi-factor model fits the data best but subject-specific factors do not have significant variance, should we still prefer this model over the others? In addition, as our sample size is not very high (less than 200), can the low variance estimates be influenced by sample size?
I don't know how much power there is to reject a second-order model in favor of a bifactor model under various circumstances. For one thing, it depends on how many items you have per specific factor - at some point the models are not distinguishable. You may want to read the literature on this, such as the Yung, Thissen & McLeod (1999) Psychometrika article. Or do a simulation study.
You would want the specific factor variances to be substantial relative to their SEs, but you are right that a small sample may not produce that in which case you can still argue for the model. Again, a simulation study might shed more light.
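For instance, a minimal Monte Carlo sketch in Mplus (all names and population values below are hypothetical: nine items, a general factor with loadings of .5, and three specific factors with loadings of .4). Replacing the analysis MODEL with the competing second-order specification would show how often that model is rejected at n = 200:

```
MONTECARLO:
  NAMES = y1-y9;
  NOBSERVATIONS = 200;
  NREPS = 500;
MODEL POPULATION:
  g BY y1-y9*.5;
  s1 BY y1-y3*.4;  s2 BY y4-y6*.4;  s3 BY y7-y9*.4;
  g@1;  s1-s3@1;
  g WITH s1-s3@0;  s1-s3 WITH s1-s3@0;
  y1-y9*.59;       ! residual variances (1 - .25 - .16)
MODEL:
  g BY y1-y9*.5;
  s1 BY y1-y3*.4;  s2 BY y4-y6*.4;  s3 BY y7-y9*.4;
  g@1;  s1-s3@1;
  g WITH s1-s3@0;  s1-s3 WITH s1-s3@0;
  y1-y9*.59;
```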
I am running a bifactor CFA with categorical indicators; this is to examine whether PTSD items load onto both a general factor and specific symptom factors. I have fixed the correlations of the general factor with the symptom factors at zero. Do I need to fix the correlations of the symptom factors with each other at zero? Conceptually, it makes sense that they would be correlated. However, in all the examples the lower-order factors are fixed to be uncorrelated.
MODEL: f1 BY u1-u5; f2 BY u6-u7; f3 BY u8-u12; f4 BY u13-u17; f5 BY u1-u17; f5 WITH f1-f4@0;
Could you shed some light on this? Thank you very much for your continued help! Sheila
In general, I don't think the correlations among the specific factors are identified. If you ask for Modindices when they are fixed at zero, you can see if any MIs are non-zero which would indicate that they could be identified.
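For example, continuing the model above, you would fix the specific-factor correlations at zero and request modification indices:

```
f1 WITH f2-f4@0;
f2 WITH f3-f4@0;
f3 WITH f4@0;
OUTPUT: MODINDICES;
```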
We are working on a bifactor CFA and received a warning that the model may not be identified; the parameter involved is in the PSI matrix between anger and psych. Is there an issue with the input syntax?
In general, the correlations between the specific factors need to be fixed at zero, as do the correlations between the specific factors and the general factor. In a model where all factor correlations are therefore fixed at zero, you can specify this conveniently with list statements, for example g WITH s1-s3@0; and s1-s3 WITH s1-s3@0; (substituting your own factor names).
Joshua Isen posted on Thursday, October 25, 2012 - 5:39 pm
I'm implementing a bifactor model where the correlations amongst all factors are fixed at zero. (There are no other variables/covariates in the model besides the factor indicators.) The Mplus output indeed confirms that all factor correlations are zero. However, when I save the data as Fscores, and then simply use these factor scores in a follow-up analysis, the correlations amongst factor scores are non-zero. This seems puzzling to me. Why is this happening?
The correlations among the factors in the model and the correlations using factor scores are not the same unless factor determinacy is one.
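A sketch of why, using the regression method for factor scores (with loading matrix \Lambda, factor covariance matrix \Psi, and model-implied covariance \Sigma = \Lambda\Psi\Lambda' + \Theta):

```latex
\hat{\eta} = \Psi \Lambda' \Sigma^{-1} y,
\qquad
\operatorname{Cov}(\hat{\eta}) = \Psi \Lambda' \Sigma^{-1} \Lambda \Psi
```

This covariance matrix is generally not diagonal even when \Psi is, and it approaches \Psi only as the factor determinacies approach one.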
Joshua Isen posted on Saturday, October 27, 2012 - 1:03 pm
Thank you for the reference. I assume the "quality" of my estimated factor scores is based on the factor determinacies. Since there seems to be a rule of thumb that determinacies < .80 are unreliable, this suggests that I shouldn't use my estimated factor scores for further analysis.
Drs. Muthen, We ran a bifactor analysis to specify a new measurement model. The bifactor model fits better than any other plausible model we have compared it to, but we get an error that the residual covariance matrix is not positive definite.
When I check the residual variances alongside the estimated R-squares, there is one small negative residual variance (-.09 in the 17-item model and -.14 in the 15-item model). Also, the item with this problem has a loading on its content factor of .94 and .96, respectively, which seems questionably high. Should we be concerned about this? If so, what are some possible solutions?
Some other models we have tried that eliminate this message include: 1) Using THETA parametrization, which gets rid of this error message, and also reduces the factor loading on the content factor to .87. However, we are unsure about the implications of this switch and whether it is a legitimate "fix." 2) Freeing the item factor loadings and fixing the factor variances to 1 and factor means to 0, as well as allowing the content factors to correlate. This reduces the standardized content factor loading for the item in question to .76. This, however, complicates multiple group analyses.
Also, we did run the model as continuous data and got the same warning. In this model, the same item's residual variance was not estimated, and again the residual variance for the estimated R^2 was negative and small.
Perhaps you want to try bi-factor EFA to see if there should be modifications to your bi-factor CFA model.
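A sketch of bi-factor EFA syntax, assuming a recent Mplus version (the BI-GEOMIN rotation was added in version 7.1) and hypothetical item names u1-u17; TYPE = EFA 5 5 requests one general plus four specific factors to parallel the CFA described above:

```
VARIABLE:  NAMES = u1-u17;
           CATEGORICAL = u1-u17;
ANALYSIS:  TYPE = EFA 5 5;
           ROTATION = BI-GEOMIN;
```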
Li Lin posted on Wednesday, January 23, 2013 - 12:17 pm
I am trying to fit a bifactor model using the following model specifications:

model:
VULD by SFVULb SFVULj SFVULc SFVULk SFVULd SFVULl SFVULh SFVULm SFVULi SFVULn SFVULe SFVULo SFVULf SFVULp SFVULg SFVULq;
LABD by SFVULb SFVULj SFVULc SFVULk SFVULd SFVULl SFVULh SFVULm SFVULi SFVULn;
CLID by SFVULe SFVULo SFVULf SFVULp SFVULg SFVULq;
VULD with LABD-CLID@0;
LABD with CLID@0;
However, an error message appeared in the output: "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED."
I have a couple of questions about the bifactor model rather than a problem per se. I am using the factors in my bifactor model as predictors in a survival analysis. Given that the factors are all constrained to be orthogonal, it seems to me that there is no need to run a model with all of them entered simultaneously as predictors: for a set of orthogonal predictors, the results should not differ from the zero-order results for each predictor entered by itself. Does that make sense? I hope so, because with several factors (13, actually) in my bifactor model, there are too many integration points for memory capacity when I try to enter all the factors simultaneously to confirm my intuition.

When I do use Monte Carlo integration with 5000 integration points to run the full model, the results do differ from the zero-order results for each predictor entered by itself. Is the Monte Carlo integration responsible for this pattern going contrary to my intuition? Or is my intuition just wrong in the first place? Thanks!