Tor Neilands posted on Tuesday, December 07, 2004 - 10:39 pm
Mplus permits the computation and display of a factor score determinacy coefficient for continuous latent factors via the FSDETERMINACY option on the OUTPUT line. The Mplus User's Guide states that values for this coefficient range from 0 to 1, with larger values indicating better measurement of the factor by the observed indicators.
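For readers who want to see where the coefficient comes from: for a single continuous factor with the factor variance fixed at 1, the determinacy is the correlation between the factor and its regression-method factor score estimate, sqrt(lambda' Sigma^-1 lambda). A minimal numpy sketch with hypothetical loadings (not from any real dataset):

```python
import numpy as np

# Hypothetical standardized loadings for a one-factor model, 4 indicators
lam = np.array([0.8, 0.7, 0.6, 0.5])   # loadings (factor variance fixed at 1)
theta = np.diag(1.0 - lam**2)          # unique variances, standardized metric

# Model-implied covariance matrix of the indicators
sigma = np.outer(lam, lam) + theta

# Factor score determinacy: correlation between the factor and its
# regression-method factor score estimate, sqrt(lambda' Sigma^-1 lambda)
rho = np.sqrt(lam @ np.linalg.solve(sigma, lam))
print(round(rho, 3))
```

With these illustrative loadings the determinacy comes out around 0.89; higher loadings (or more indicators) push it toward 1.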
Is there any published literature describing cutoffs, rules of thumb, or other considerations for what constitutes satisfactory, good, excellent, etc. measurement of factors by observed items as reflected by the value of the determinacy coefficient?
With many thanks,
bmuthen posted on Tuesday, December 07, 2004 - 10:47 pm
This should be in most factor analysis texts. For example, Mulaik's? Lawley and Maxwell?
Is there a particular reason why factor score determinacy is not computed with TYPE=COMPLEX? Can you please refer me to any relevant references? Can you recommend any other methods that I can use for a complex EFA with categorical data in Mplus?
If you have categorical outcomes you don't get factor determinacy because that is a concept valid only for continuous outcomes. With categorical outcomes you would instead consider "item information" which Mplus provides.
See Technical Appendix 8 on the website. It is under Fisher Information Matrix. See pages 494-495 of the Mplus Version 5 User's Guide. A brief description of the three methods is given along with a table that shows which information matrices can be used with different estimators.
In a multiple-indicator growth model I have only 3 indicators for one of the time points. A CFA for that time point only implies a nonsignificant negative residual variance for one indicator. The scale of the factor is chosen so that another indicator is the reference. I get a factor determinacy of 1. Is the factor determinacy of 1 a consequence of one of the indicators having a nonsignificant negative residual variance? Is a factor determinacy of 1 a bad thing? I have no reason to suspect zero measurement error for that indicator. However, removing that time point from the analysis "cripples" the rest of my analysis: I end up with only 2 time points, so I cannot fit a growth model. Thank you.
How do people interpret zero residual variance estimates? My interpretation would be that the indicator with zero residual variance is not really free of measurement error, but whatever error is present in that indicator is also present in the rest of the indicators used, and thus is "absorbed" by the common factor. Did you encounter this interpretation? Does it make any sense? Thank you.
That could be one valid interpretation, I think. In factor analysis a distinction is made between errors and uniquenesses. An item may have a unique part, relative to other items, that isn't an error. These two components of the residual cannot be distinguished (their variances cannot be separately identified) except in special models.
A more typical explanation, however, is that the model is either misspecified or that this item really is almost error-free and almost the same as the factor. I would think misspecification is the more probable cause.
Hi Linda and/or Bengt, A student and I recently used the FSDETERMINACY option for the first time to produce factor score determinacies. We feel we need more information than what we've requested in order to interpret the output with confidence. Specifically, the output includes determinacies based on those with complete data as well as determinacies based on each missing data pattern. Is there an option to request more information on the missing data patterns (i.e., what each pattern is and how many participants fall into each pattern)? Thanks in advance!
On May 7, 2008, Linda replied to an earlier post by saying that factor score determinacies had not yet been added to CFA for categorical indicators. This seems to imply that there is a plan to include these at some point, and we were wondering if you knew when that would be (we could really use them for an analysis that we are running now). Thanks!
Thanks very much, Linda! Re: factor score determinacy for categorical indicators, is it possible to state why it doesn't make sense in this forum, or is there some relevant reading you could point us to? Thanks!
Because the quality of factor score estimation for categorical items is not a single number, as it is for continuous items, but depends on the value of the factor itself. See IRT books under information functions.
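To illustrate why there is no single determinacy number for categorical items, here is a small sketch of the standard 2PL item information function from IRT, I(theta) = a^2 * P(theta) * (1 - P(theta)), with hypothetical item parameters:

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical item: discrimination a = 1.5, difficulty b = 0.0
thetas = np.linspace(-3, 3, 7)
info = item_information(thetas, a=1.5, b=0.0)

# Information peaks at theta = b and falls off in the tails, so the
# precision of the factor (theta) estimate varies across the scale.
print(info.round(3))
```

The point of the sketch: measurement precision is a function of the factor value, which is why Mplus reports information functions rather than one determinacy coefficient for categorical outcomes.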
1) Does it make sense to report the squared factor score determinacy as an estimate of factor score reliability?
2) Factor score determinacy is not available in Mplus for multilevel factor models. When interested in the determinacy of a between level factor score, does it make sense to specify the multilevel factor model and obtain the correlation of the between factor of interest with the estimated between level factor scores?
1) Loosely stated, I think of factor determinacy as a validity matter. It relates to the bias in the factor score estimates. When you say factor score reliability I think of the precision with which they can be estimated, that is, their standard errors. Those two concepts are different to me.
2) I think that is awkward to specify. If you are concerned about bias in the estimated factor scores due to having few items per factor or small loadings, I would use a Bayesian plausible value approach instead.
Thanks a lot, this is very helpful. It seems I had some misconceptions there. Interestingly, though, in the (applied) literature factor score determinacy is referred to both as a validity coefficient and as a measure of internal consistency.
May I add a follow-up to question 2? While I appreciate your suggestion to use plausible values, I have to add some measure of reliability for between-level factor scores to a paper that is already submitted. One reviewer suggests reporting Cronbach's alpha, but I'd assume that alpha depends on both the within-level and between-level intercorrelations of the items. Since you mention the standard errors, I wonder whether one might estimate reliability comparable to "separation reliability" in IRT, which is given as the variance accounted for by the model divided by the variance of the estimated scores, where the variance accounted for by the model is the difference between the variance of the estimated scores and the mean square of their standard errors. (Anyway, it seems that Mplus does not report SEs for factor scores from multilevel models.)
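The separation-reliability formula described in the post above can be sketched numerically. This assumes you have factor score estimates and their standard errors from some source; the scores and SEs below are simulated placeholders, since (as noted) Mplus does not output SEs for multilevel factor scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimated between-level factor scores and their SEs
scores = rng.normal(loc=0.0, scale=1.0, size=200)
se = np.full(200, 0.4)

# IRT-style "separation reliability":
# (variance of estimated scores - mean squared SE) / variance of estimated scores
obs_var = scores.var(ddof=1)
rel = (obs_var - np.mean(se**2)) / obs_var
print(round(rel, 3))
```

With these made-up values the reliability lands somewhere around 0.8; with real scores the mean squared SE term is what penalizes imprecisely estimated scores.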