

Greetings, Mplus permits the computation and display of a factor score determinacy coefficient for continuous latent factors via the FSDETERMINACY option on the OUTPUT line. The Mplus User's Guide states that values for this coefficient range from 0 to 1, with larger values indicating better measurement of the factor by the observed indicators. Is there any published literature describing cutoffs, rules of thumb, or other considerations for what constitutes satisfactory, good, excellent, etc. measurement of factors by observed items as reflected by the value of the determinacy coefficient? With many thanks, Tor Neilands 

bmuthen posted on Tuesday, December 07, 2004 - 4:47 pm



This should be in most factor analysis texts. For example, Mulaik's? Lawley-Maxwell? 


Is there any method to compute factor determinacy scores when using the ANALYSIS: TYPE=COMPLEX option? 


Mplus does not provide factor score determinacy values for TYPE=COMPLEX. 


Is there a particular reason why factor score determinacy is not computed with TYPE=COMPLEX? Can you please refer me to any relevant references? Can you recommend any other methods that I can use for a complex EFA with categorical data in Mplus? Thanks, Alison 


If you have categorical outcomes you don't get factor determinacy because that is a concept valid only for continuous outcomes. With categorical outcomes you would instead consider "item information" which Mplus provides. 


Hello Bengt, I have Mplus Version 5, and in an EFA with categorical outcomes and Type=Complex the output prints the FACTOR DETERMINACIES. But you said it "is a concept valid only for continuous outcomes."

  CATEGORICAL ARE d110CIT1-d920CIT1;
  CLUSTER = cluster;
  ANALYSIS: TYPE IS COMPLEX EFA 2 5 MISSING H1;
  (...)
  RESULTS FOR EXPLORATORY FACTOR ANALYSIS
  (...)
  FACTOR DETERMINACIES
                1        2
         ________ ________
      1     0.994    0.989

Nevertheless, FSDETERMINACY doesn't work in CFA models. What is the "item information"? 


We generalized the factor score determinacy to categorical outcomes and added it to EFA. We have not yet added it to CFA. Item information includes the item characteristic curves and information functions which are available with the PLOT command. 

Stephan posted on Thursday, August 07, 2008 - 6:29 pm



Hello, after screening the handbook and technical appendices, I was wondering if there is a formula available that shows how factor score determinacy is calculated in Mplus? Thanks, Stephan 


If you send a fax number to support@statmodel.com, I can fax you the formulas. 
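For readers without a fax machine: this is not necessarily Mplus's exact formula, but a standard textbook expression (going back to Guttman) gives the determinacy of regression-method factor scores as the correlation between the factor and its estimate. A minimal sketch in Python, assuming a standardized one-factor model with illustrative (made-up) loadings:

```python
import numpy as np

# Hypothetical standardized loadings for a one-factor model
lam = np.array([0.7, 0.6, 0.8, 0.5])

# Model-implied covariance matrix: Sigma = lam lam' + diag(uniquenesses),
# with uniquenesses 1 - lam^2 under standardization
theta = 1.0 - lam**2
Sigma = np.outer(lam, lam) + np.diag(theta)

# Determinacy of the regression-method factor score, with the factor
# variance fixed at 1: rho = sqrt(lam' Sigma^{-1} lam)
rho = np.sqrt(lam @ np.linalg.solve(Sigma, lam))
print(rho)  # about 0.886 for these loadings
```

Higher loadings and more indicators push rho toward 1; the coefficient is the correlation between the true factor and the estimated score, so 1 - rho^2 is the share of factor variance the scores cannot recover.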


Hello, Where might I find documentation on the INFORMATION option in the ANALYSIS section? I see that it can be one of three types, but I didn't find an explanation of the types. I understand that it can provide additional information for an EFA in a CFA framework with categorical variables. I'm guessing that certain estimators must go along with the INFORMATION option; if so, which ones? Thank you 


See Technical Appendix 8 on the website. It is under Fisher Information Matrix. See pages 494-495 of the Mplus Version 5 User's Guide. A brief description of the three methods is given along with a table that shows which information matrices can be used with different estimators. 


In a multiple growth model I have only 3 indicators for one of the time points. A CFA for that time point only implies a nonsignificant negative residual variance for one indicator. The scale of the factor is chosen so that another indicator is the reference. I get a factor determinacy of 1. Is the factor determinacy of 1 a consequence of one of the indicators having a nonsignificant negative residual variance? Is a factor determinacy of 1 a bad thing? I have no reason to suspect zero measurement error for that indicator. However, removing that time point from the analysis "cripples" the rest of my analysis: I end up with only 2 time points, so I cannot have a growth analysis. Thank you. 


Please send the full output and your license number to support@statmodel.com. 


How do people interpret zero residual variance estimates? My interpretation would be that the indicator with zero residual variance is not really free of measurement error, but whatever error is present in that indicator is also present in the rest of the indicators used, and is thus "absorbed" by the common factor. Have you encountered this interpretation? Does it make any sense? Thank you. 


That could be one valid interpretation, I think. In factor analysis a distinction is made between errors and uniquenesses. An item may have a unique part, relative to other items, but that part isn't an error. These 2 components of the residual cannot be distinguished (the variances cannot be separately identified) except in special models. A more typical explanation, however, is that the model is either misspecified or that this item is really almost error-free and almost the same as the factor. I would think misspecification is the more probable cause. 


Hi Linda and/or Bengt, A student and I recently used the option for the first time to produce factor score determinacies. We are feeling like we need some more information than what we've requested to interpret the output with confidence. Specifically, the output includes determinacies based on those with complete data as well as determinacies based on each missing data pattern. Is there an option to request more information on the missing data patterns (i.e., what each pattern is and how many participants fall into each pattern)? Thanks in advance! 


P.S. to my last post. On May 7, 2008, Linda replied to an earlier post by saying that factor score determinacies were not yet added to CFA for categorical indicators. This seems to imply that there is a plan to include these at some point, and we were wondering if you knew when that would be (we could really use them for an analysis that we are running now). Thanks! 


Use the PATTERNS option in the OUTPUT command. After reflection, we decided factor score determinacy did not make sense for categorical factor indicators. Instead you should look at the information functions that are part of the PLOT command. 


Thanks very much, Linda! Re: factor score determinacy for categorical indicators, is it possible to state why it doesn't make sense in this forum, or is there some relevant reading you could point us to? Thanks! 


Because the quality of the factor score estimation for categorical items is not a single number, as it is for continuous items, but depends on the value of the factor itself. See IRT books under information functions. 
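To make this theta-dependence concrete: in a two-parameter logistic IRT model, the information an item contributes is I(theta) = a^2 P(theta)(1 - P(theta)), which peaks near the item's difficulty and falls off elsewhere. A small illustrative sketch (the parameter values are made up, not Mplus output):

```python
import numpy as np

def item_information(theta, a, b):
    """2PL item information: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# An item with discrimination a = 1.5 and difficulty b = 0.0 measures
# respondents near theta = 0 well, but poorly at theta = -2 or +2.
thetas = np.array([-2.0, 0.0, 2.0])
info = item_information(thetas, a=1.5, b=0.0)
```

This is why no single determinacy number summarizes measurement quality for categorical indicators: precision varies across the range of the factor, which is exactly what the Mplus PLOT information functions display.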


Hello, 1) Does it make sense to report the squared factor score determinacy as an estimate of factor score reliability? 2) Factor score determinacy is not available in Mplus for multilevel factor models. When interested in the determinacy of a between level factor score, does it make sense to specify the multilevel factor model and obtain the correlation of the between factor of interest with the estimated between level factor scores? Thank you! 


1) Loosely stated, I think of factor determinacy as a validity matter. It relates to the bias in the factor score estimates. When you say factor score reliability I think of the precision with which they can be estimated, that is, their standard errors. Those two concepts are different to me. 2) I think that is awkward to specify. If you are concerned about bias in the estimated factor scores due to having few items per factor or small loadings, I would use a Bayesian plausible value approach instead. 


Thanks a lot, this is very helpful. It seems I have some misconceptions there. Interestingly, though, in the (applied) literature FS determinacy is referred to as both a validity coefficient and a measure of internal consistency. May I add a follow-up to question 2? While I appreciate your suggestion to use plausible values, I have to add some measure of reliability for between-level factor scores to a paper already submitted. One reviewer suggests reporting Cronbach's alpha, but I'd assume that alpha depends on both the within- and between-level intercorrelations of the items. Since you mention the standard errors, I wonder whether one might estimate a reliability comparable to "separation reliability" in IRT, which is given as the variance accounted for by the model divided by the variance of the estimated scores, where the variance accounted for by the model is the difference between the variance of the estimated scores and the mean square of their standard errors. (Anyway, it seems that Mplus does not report SEs for factor scores from multilevel models.) 
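The separation-reliability idea described in the post can be written down directly. A sketch, assuming you have exported estimated factor scores and their standard errors; the arrays below are made-up illustrations, not Mplus output:

```python
import numpy as np

# Hypothetical estimated between-level factor scores and their SEs
scores = np.array([-0.8, -0.3, 0.1, 0.4, 0.9, 1.2])
se = np.array([0.30, 0.28, 0.25, 0.27, 0.31, 0.29])

obs_var = np.var(scores, ddof=1)   # variance of the estimated scores
error_var = np.mean(se**2)         # mean squared standard error
true_var = obs_var - error_var     # variance accounted for by the model

separation_reliability = true_var / obs_var
```

As the formula makes clear, the estimate shrinks toward 0 when the scores' standard errors are large relative to their spread, and can go negative if the SEs exceed the observed variance, which signals that the scores carry essentially no reliable between-level information.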


You should be able to get SEs for estimated between-level factor scores in Mplus. 
