

Greetings, Mplus permits the computation and display of a factor score determinacy coefficient for continuous latent factors via the FSDETERMINACY option on the OUTPUT line. The Mplus User's Guide states that values for this coefficient range from 0 to 1, with larger values indicating better measurement of the factor by the observed indicators. Is there any published literature describing cutoffs, rules of thumb, or other considerations for what constitutes satisfactory, good, excellent, etc. measurement of factors by observed items as reflected by the value of the determinacy coefficient? With many thanks, Tor Neilands 

bmuthen posted on Tuesday, December 07, 2004  4:47 pm



This should be in most factor analysis texts. For example, Mulaik's? Lawley-Maxwell? 


Is there any method to compute factor determinacy scores when using the ANALYSIS: TYPE=COMPLEX option? 


Mplus does not provide factor score determinacy values for TYPE=COMPLEX. 


Is there a particular reason why factor score determinacy is not computed with TYPE=COMPLEX? Can you please refer me to any relevant references? Can you recommend any other methods that I can use for a complex EFA with categorical data in Mplus? Thanks, Alison 


If you have categorical outcomes you don't get factor determinacy because that is a concept valid only for continuous outcomes. With categorical outcomes you would instead consider "item information" which Mplus provides. 


Hello Bengt, I have Mplus Version 5, and in an EFA with categorical outcomes and Type=Complex the output prints the FACTOR DETERMINACIES. But you said it "is a concept valid only for continuous outcomes."

CATEGORICAL ARE d110CIT1-d920CIT1;
CLUSTER = cluster;
ANALYSIS: TYPE IS COMPLEX EFA 2 5 MISSING H1;
(...)
RESULTS FOR EXPLORATORY FACTOR ANALYSIS
(...)
FACTOR DETERMINACIES
              1        2
       ________ ________
  1      0.994    0.989

Nevertheless, the FSDETERMINACY option doesn't work in CFA models. What is the "item information"? 


We generalized the factor score determinacy to categorical outcomes and added it to EFA. We have not yet added it to CFA. Item information includes the item characteristic curves and information functions which are available with the PLOT command. 

Stephan posted on Thursday, August 07, 2008  6:29 pm



Hello, after screening the handbook and technical appendices I was wondering if there's a formula available which shows how factor score determinacy is calculated in Mplus? Thanks, Stephan 


If you send a fax number to support@statmodel.com, I can fax you the formulas. 


Hello, Where might I find documentation on the INFORMATION option in the ANALYSIS section? I see that it can be one of three types, but I didn't find an explanation of the types. I understand that it can provide additional information for an EFA in a CFA framework with categorical variables. I'm guessing that certain estimators must go along with the INFORMATION option; if so, which ones? Thank you 


See Technical Appendix 8 on the website. It is under Fisher Information Matrix. See pages 494-495 of the Mplus Version 5 User's Guide. A brief description of the three methods is given along with a table that shows which information matrices can be used with different estimators. 


In a multiple growth model I have only 3 indicators for one of the time points. A CFA for that time point only, implies a nonsignificant negative residual variance for one indicator. The scale of the factor is chosen so that another indicator is the reference. I get factor determinacy 1. Is the factor determinacy=1 a consequence of one of the indicators having a nonsignificant negative residual variance? Is factor determinacy=1 a bad thing? I have no reason to suspect a zero measurement error for that indicator. However removing that time point from the analysis, "cripples" the rest of my analysis. I end up with 2 time points only, thus I cannot have a growth analysis. Thank you. 


Please send the full output and your license number to support@statmodel.com. 


How do people interpret zero residual variance estimates? My interpretation would be that the indicator with zero residual variance is not really free of measurement error, but whatever error is present in that indicator is also present in the rest of the indicators used, and thus is "absorbed" by the common factor. Did you encounter this interpretation? Does it make any sense? Thank you. 


That could be one valid interpretation, I think. In factor analysis a distinction is made between errors and uniquenesses. An item may have a unique part, relative to other items, but it isn't an error. These 2 components of the residual cannot be distinguished (the variances cannot be separately identified) except in special models. A more typical explanation, however, is that the model is either misspecified or that this item is really almost error-free and almost the same as the factor. I would think the misspecification cause is more probable. 


Hi Linda and/or Bengt, A student and I recently used the option for the first time to produce factor score determinacies. We are feeling like we need some more information than what we've requested to interpret the output with confidence. Specifically, the output includes determinacies based on those with complete data as well as determinacies based on each missing data pattern. Is there an option to request more information on the missing data patterns (i.e., what each pattern is and how many participants fall into each pattern)? Thanks in advance! 


P.S. to my last post. On May 7, 2008, Linda replied to an earlier post by saying that factor score determinacies were not yet added to CFA for categorical indicators. This seems to imply that there is a plan to include these at some point, and we were wondering if you knew when that would be (we could really use them for an analysis that we are running now). Thanks! 


Ask for the PATTERNS option in the OUTPUT command. After reflection, we decided factor score determinacy did not make sense for categorical factor indicators. Instead you should look at the information functions that are part of the PLOT command. 


thanks very much Linda! Re: factor score determinacy for categorical indicators, is it possible to state why it doesn't make sense in this forum or is there some relevant reading you could point us to? Thanks! 


Because the quality of the factor score estimation for categorical items is not a single number as for continuous items but depends on the factor value itself. See IRT books under information functions. 
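To illustrate the point above, here is a minimal sketch of a 2PL item information function from IRT; the discrimination (a) and difficulty (b) values are hypothetical, chosen only for illustration:

```python
import numpy as np

# 2PL item information: I(theta) = a^2 * P(theta) * (1 - P(theta)),
# where P(theta) is the item characteristic curve.
# The a and b values are made up for this sketch.
def item_information(theta, a=1.5, b=0.0):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

# Information peaks at theta = b and falls off away from it,
# which is why precision is not a single number for categorical items.
for theta in (-2.0, 0.0, 2.0):
    print(theta, item_information(theta))
```

The same item measures respondents near theta = b much more precisely than respondents far from it, so no single determinacy-style coefficient summarizes the quality of the score.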


Hello, 1) Does it make sense to report the squared factor score determinacy as an estimate of factor score reliability? 2) Factor score determinacy is not available in Mplus for multilevel factor models. When interested in the determinacy of a between-level factor score, does it make sense to specify the multilevel factor model and obtain the correlation of the between factor of interest with the estimated between-level factor scores? Thank you! 


1) Loosely stated, I think of factor determinacy as a validity matter. It relates to the bias in the factor score estimates. When you say factor score reliability I think of the precision with which they can be estimated, that is, their standard errors. Those two concepts are different to me. 2) I think that is awkward to specify. If you are concerned about bias in the estimated factor scores due to having few items per factor or small loadings, I would use a Bayesian plausible value approach instead. 


Thanks a lot, this is very helpful. Seems I have some misconception there. Interestingly, though, one finds that in the (applied) literature FS determinacy is referred to as both a validity coefficient and a measure of internal consistency. May I add a follow-up to question 2? While I appreciate your suggestion to use plausible values, I have to add some measure of reliability for between-level factor scores to a paper already submitted. One reviewer suggests reporting Cronbach's alpha, but I'd assume that alpha is dependent on both the within- and between-level intercorrelation of the items. Since you mention the standard errors, I wonder whether one might estimate reliability comparable to "separation reliability" in IRT, which is given as variance accounted for by the model divided by the variance of the estimated scores, where variance accounted for by the model is the difference between the variance of the estimated scores and the mean square of their standard errors. (Anyway, it seems that Mplus does not report SEs for factor scores from multilevel models.) 
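The separation-reliability formula described in the post above can be sketched as follows; the scores and standard errors are simulated purely for illustration (real values would come from the model output):

```python
import numpy as np

# Separation reliability as described above:
# (variance of estimated scores - mean squared SE) / variance of estimated scores.
# Scores and SEs are simulated; they stand in for model output.
rng = np.random.default_rng(0)
scores = rng.normal(0.0, 1.0, size=500)   # estimated factor scores
ses = np.full(500, 0.4)                   # their standard errors

var_est = scores.var(ddof=1)
separation_reliability = (var_est - np.mean(ses**2)) / var_est
print(separation_reliability)
```

With these simulated values the result lands near 0.84: the larger the SEs are relative to the spread of the estimated scores, the lower the coefficient.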


You should be able to get SEs for estimated between-level factor scores in Mplus. 


Similar to another person in August 2008, I'm curious how factor score determinacy is calculated. Is there documentation on this formula? Thanks for your help! 


It's a long formula but what it boils down to is the correlation between model estimated factor scores and true factor scores. 
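As a sketch of that correlation for a one-factor model with regression-method factor scores (the loading and residual-variance values here are invented for illustration, not taken from any Mplus appendix):

```python
import numpy as np

# Hypothetical one-factor model: 3 indicators with made-up loadings.
Lambda = np.array([[0.8], [0.7], [0.6]])          # factor loadings
Psi = np.array([[1.0]])                           # factor variance
Theta = np.diag([1 - 0.64, 1 - 0.49, 1 - 0.36])   # residual variances

# Model-implied covariance matrix of the indicators
Sigma = Lambda @ Psi @ Lambda.T + Theta

# Cov(estimated scores, true factors) = Psi Lambda' Sigma^{-1} Lambda Psi;
# for regression scores this also equals Var(estimated scores).
C = Psi @ Lambda.T @ np.linalg.inv(Sigma) @ Lambda @ Psi

# Determinacy: correlation between estimated and true factor scores
rho = np.sqrt(np.diag(C) / np.diag(Psi))
print(rho)
```

With these example values the determinacy works out to about 0.88; adding indicators or raising loadings pushes it toward 1.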


Thanks for your prompt response! I'd love to see the formula if possible even though it's long. I'm working on a project involving factor scores and fully understanding how the determinacy is calculated will help move it forward. 


You should look it up in a good factor analysis book where you will get a full explanation. 

John C posted on Wednesday, October 18, 2017  9:07 pm



Are there restrictions on estimating factor score determinacy? In particular, can it be estimated for a just identified model, e.g., a single factor with three continuous indicators? (even though unidimensionality in that case cannot be tested). 


Q1: I don't think so. Q2: I think so. 

Tyler Moore posted on Thursday, December 12, 2019  4:16 pm



Hi all, I'm calculating factor score determinacies using the equation (multiplying matrices), and I'm wondering if calculating score determinacy for the higher-order factor in a higher-order model is as simple as using the loadings of the lower-order factors on it as though it were a unidimensional model. I guess what I'm asking is, how does one calculate determinacy of a higher-order factor? Thanks! 


The first-order factors are regressed on the second-order factors using a matrix called Beta in Mplus (and traditional SEM language). So instead of using only the Lambda matrix in the determinacy formula, you have to use Lambda*(I-Beta)^(-1). 

Tyler Moore posted on Saturday, December 14, 2019  2:54 pm



Thank you, this makes sense, and I see I can get Beta by requesting TECH1. However, the only Beta output provided by TECH1 is the parameter numbers and starting values. Which of those corresponds to the Beta you used in the above comment? Also, if estimating a 3-factor model (with 1 second-order), the Lambda has dimensions items-by-3, but the Beta matrix appears to be 4-by-4 (appears to include the general factor). Since those aren't conformable, how would I multiply them? Thanks again! 


Lambda is observed variables by latent variables and you have 4 latent variables. The secondorder factor influences the observed variables by slopes (loadings) of zero. The beta matrix is latent variables by latent variables. 
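Putting the two replies together, here is a small sketch of the Lambda*(I-Beta)^(-1) computation for a hypothetical second-order model (9 items, 3 first-order factors, 1 second-order factor; all parameter values invented):

```python
import numpy as np

# Lambda: observed variables by latent variables (9 x 4).
# The second-order factor (column 4) has zero loadings on the items,
# since it reaches them only through the first-order factors.
Lambda = np.zeros((9, 4))
Lambda[0:3, 0] = [0.7, 0.6, 0.8]   # items 1-3 on F1
Lambda[3:6, 1] = [0.7, 0.6, 0.8]   # items 4-6 on F2
Lambda[6:9, 2] = [0.7, 0.6, 0.8]   # items 7-9 on F3

# Beta: latent variables by latent variables (4 x 4);
# the first-order factors are regressed on the second-order factor.
Beta = np.zeros((4, 4))
Beta[0:3, 3] = [0.6, 0.5, 0.7]

# Total loadings of all four factors on the items: Lambda (I - Beta)^{-1}
total = Lambda @ np.linalg.inv(np.eye(4) - Beta)
print(total[:, 3])   # implied loadings of the items on the second-order factor
```

This `total` matrix then takes the place of Lambda in the usual determinacy formula; note that the first three columns come out unchanged, while the fourth column holds each item's indirect loading on the second-order factor (e.g., 0.7 x 0.6 = 0.42 for the first item).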

Tyler Moore posted on Monday, December 16, 2019  4:39 pm



Perfect, makes sense now. One last question: The score determinacy is coming out to be less for the second-order than for the bifactor (estimated separately, obviously). Is that what you would expect? I can see how that would be the case given the second-order factor has only 3 indicators, whereas the bifactor general has [items] indicators. Is it really that simple? Thanks again! 


In principle not only the number of indicators matters but also the actual parameters. You can use formulas (225-226) in http://statmodel.com/download/techappen.pdf to compute the determinacy. Note, however, that the second-order model is nested in the bifactor model. So the bifactor model is expected to extract more information from the data (or at least not get worse) and therefore have better determinacy. You might find the bifactor discussion on page 304 useful: http://www.statmodel.com/download/NET.pdf 
