

Hi, how do you set the factor variance to 1 in Mplus when running a CFA with categorical data?


If the factor is named f, then you say f@1; 

RA Sterling posted on Thursday, January 05, 2006  7:17 pm



Thanks for your reply. My previous post was a bit vague. I'd like to set the factor variances to 1 and get a factor loading for each observed variable on a construct. I followed your advice (setting f@1), which set the factor variance to 1 but did NOT free the loading of the first variable of each construct; as a result, I did not receive a factor loading for the first variable of each construct. Can you tell me how to obtain a factor loading for every observed variable while setting the variance of each factor to 1? Thanks again for your help.


If you read about the BY command in the user's guide, you will find out how to set the metric by fixing the factor variance to one and having all factor loadings free. There is an example of this. 
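As a minimal sketch of that parameterization (the factor and indicator names f and y1-y4 here are placeholders, not from the user's guide example):

```
MODEL: f BY y1* y2-y4;   ! the * frees the first loading (fixed to 1 by default)
       f@1;              ! fixes the factor variance to 1
```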

Wei Chun posted on Saturday, August 29, 2009  5:05 am



Dear Linda, I would like to estimate an SEM model with single-indicator latent variables, in which factor loadings and error variances need to be fixed. How do we fix the error variances in Mplus? Many thanks.


Variances are referred to by the name of the variable. For example, to fix the residual (error) variance of y to zero, say: y@0; See Chapter 16 of the user's guide for more information.
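For a single-indicator latent variable, both the loading and the indicator's error variance can be fixed in this way (the names f and y and the value .30 below are hypothetical, for illustration only):

```
MODEL: f BY y@1;   ! loading fixed to 1
       y@.30;      ! residual (error) variance of y fixed to .30
```

The fixed residual value is typically chosen as (1 - reliability) times the variance of the indicator.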


Hi. I set the metric of my continuous latent variable by fixing its variance to one (and allowing all factor loadings to be freely estimated). Should my parameter estimates (factor loadings) in the Model Results output be identical to those presented in the STDYX Standardization output when I use the Mplus default (setting the first factor loading to 1 and letting the variance of the latent variable be freely estimated)?


I think you are asking if your raw estimates when (1) fixing factor variance to one and having all loadings free should be the same as the STDYX estimates when (2) using the Mplus default. If that's the question, the answer is no because in (1) your observed indicators are not standardized. 
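Concretely, the STDYX loading rescales the raw loading by both standard deviations, so the two sets of estimates agree only if each indicator also has unit variance:

```
stdyx_loading = raw_loading * SD(f) / SD(y)
```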


Using the example on the website for multiple group analysis (MIMIC model; cont4), may I say that both sets of code below will produce the same fit measures?

!Default.
MODEL: f BY y6-y9;
       f ON x1-x3;
MODEL gB: [y8];

!Fixing the variance of f to 1.
MODEL: f BY y6* y7-y9;
       f@1;
       f ON x1-x3;
MODEL gB: [y8];

How could I compare the factor loadings from these outputs?


Setting the metric by fixing one factor loading to one versus fixing a factor variance to one will result in the same fit if you do it correctly. Compare the standardized factor loadings. 


Is it possible that by fixing a factor variance to 1 instead of fixing a marker indicator to one on that factor (i.e., to scale the latent variable) one could get better or worse model fit? Although I see the post above, an adviser ran the same model in LISREL by freeing the marker indicator and got better fit. Assuming the estimator is the same and all else being equal, why might this be the case? In the current situation, all observed variables are on the same metric (forced responses to a Q-sort). Which brings up my second question: with 100 items on a forced-choice Q-sort as the indicators, would ML still be the best estimator option? Or would one have to look at the normality of each individual card response across the sample (-4 to +4, rescored to a 1-9 metric). Thanks for any input. JD


If you set the metric by fixing a factor variance to one and freeing all factor loadings, you will get the same fit as if you have a free factor variance and fix one factor loading to one. If not, you have made another change, for example, leaving the first factor loading fixed to one while also fixing the factor variance to one. If you are treating the items as continuous, I would use MLR, which is robust to non-normality. If you are treating them as categorical, this is not an issue.
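In other words, assuming a factor f with indicators y1-y4 (placeholder names), these two specifications are equivalent reparameterizations and should yield identical fit:

```
! (1) Default: first loading fixed to 1, factor variance free
MODEL: f BY y1-y4;

! (2) Factor variance fixed to 1, all loadings free
MODEL: f BY y1* y2-y4;
       f@1;
```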


Thanks. When I use MLM (Satorra-Bentler), the model does not converge (it does with ML). Why might this be the case? Ironically, MLR does allow the model to converge.


Ironically, after freeing the marker loadings and fixing the latent factor variances to 1, the MLM estimation did converge. Does something of this nature, which allows a model to converge (vs. not), indicate something problematic with the indicators themselves?


The first factor loading is fixed to one as the default. If when estimated it is negative or not close to one, this can cause convergence problems. You can fix another factor loading to one. Choose one that is positive and large. 
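For example, to fix the second indicator's loading to one instead of the first (y1-y4 are placeholder names):

```
MODEL: f BY y1* y2@1 y3-y4;   ! free y1's loading, fix y2's loading to 1
```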

Jo Brown posted on Monday, May 28, 2012  10:24 am



What is the advantage of setting the latent variables' variances to 1 over using the default approach?


There is no advantage. It is just a reparametrization. Model fit is identical. 

Jo Brown posted on Tuesday, May 29, 2012  2:23 am



Thanks Linda. 


Hi, If I want to use my latent variable as a dependent variable in a regression (e.g., f BY y1-y4; f ON X;), can I freely estimate all loadings and set the variance to 1, or do I need to set a loading to 1 and let the variance be freely estimated, since X will be explaining the variance of f? Thanks, L


You can do either one. 
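A sketch of the first option (placeholder names, mirroring the question above); note that once f is regressed on X, the @1 applies to the residual variance of f rather than its total variance:

```
MODEL: f BY y1* y2-y4;
       f@1;              ! residual variance of f fixed to 1
       f ON X;
```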


I keep getting error messages when trying to run a type=twolevel basic command. I am trying to get a correlation matrix. I have specified the within variables in the VARIABLE command and that seems fine. However, MPlus is saying that I have some variance within certain clusters of some of my between variables even though I do not (I am looking back at the raw data). So I have tried setting the variance of those variables to 0, hoping that would run. But I am not doing that correctly, since I am getting an error statement that doesn't recognize Os@0; (Os is one of the problem variables) in the VARIABLE command. Can you please assist? Thank you! 


There must be a problem in your reading of the data. If you can't see it, send the output, data, and your license number to support@statmodel.com. 

SABA posted on Tuesday, December 22, 2015  7:34 am



Hi, I am doing confirmatory factor analysis and my model is as follows:

ANALYSIS: TYPE IS missing;
          ESTIMATOR IS ML;
MODEL: F1 BY ed0028a ed0028b ed0028c ed0028d ed0028e ed0028f;
       F2 BY ed0028g ed0028h ed0028i ed0028j;
OUTPUT: SAMPSTAT STDYX;

In the output, both standardized and unstandardized loadings are either 1.00 or 0.99 for all items, and the standard errors are 0.00. Could you please tell me what the problem is? Thank you


Please send the output and your license number to support@statmodel.com. 


Hi, I am testing a measurement model over 3 time points in which I constrain the factor loadings to be equal over time. As mentioned, there are 2 ways of identifying the model: fixing the loading of one item of each latent factor to 1, or fixing the variance of the factors to 1; model fit should be the same. This is the case in my basic model. In a multigroup model in which I did not impose any additional constraints, the two methods result in different numbers of parameters estimated, as indicated by the DFs. Could you tell me where it goes wrong when I add the multigroup part to the model?

grouping = gender (0=m 1=f);
ANALYSIS: TYPE IS general;
          ESTIMATOR IS mlr;

Factor loading 1:
MODEL: dT1 BY d1_t1 (1)
           d2_t1 (2)
           d3_t1 (3);
       dT3 BY d1_t3 (1)
           d2_t3 (2)
           d3_t3 (3);
       dT4 BY d1_t4 (1)
           d2_t4 (2)
           d3_t4 (3);

Variance factors 1:
MODEL: dT1 BY d1_t1*
           d2_t1 (2)
           d3_t1 (3)
           d4_t1 (4);
       dT3 BY d1_t3*
           d2_t3 (2)
           d3_t3 (3)
           d4_t3 (4);
       dT4 BY d1_t4*
           d2_t4 (2)
           d3_t4 (3)
           d4_t4 (4);
       dT1@1; dT3@1; dT4@1;

Thanks in advance! Maurits


It seems that in one model you have three factor indicators. In the other you have four. 


Sorry, I shortened the model in this post because it exceeded the maximum post length and forgot to delete the fourth indicator in the second model. In the actual models the number of indicators is equal. Do you have any suggestions as to what it might be then?


Please send the two outputs and your license number to support@statmodel.com. 
