The usual factor scores obtained by the "regression method" give unbiased slopes when used as covariates (IVs), but not when used as DVs. There are other factor score methods that give scores that are unbiased for DV use. See
Skrondal, A., & Laake, P. (2001). Regression among factor scores. Psychometrika, 66, 563-575.
Another and probably better approach is to use Plausible Values, obtained in a Bayesian analysis. See
Asparouhov, T. & Muthén, B. (2010). Plausible values for latent variables using Mplus. Technical Report.
This paper is on our web site under Papers, Bayesian Analysis.
Junyan Luo posted on Thursday, May 19, 2011 - 11:43 am
I am new to this forum, and my question might have been answered already. I did a CFA analysis and wanted to save the factor scores. Here is my syntax:

TITLE: Moderation with latent variables and saving of factor scores;
DATA: FILE IS Modération.dat;
VARIABLE: NAMES ARE x1 x2 x3 y1 y2 y3 z1 z2 z3;
MODEL:
  f1 BY x1-x3;
  f2 BY y1-y3;
  f3 BY z1-z3;
SAVEDATA: FILE IS CFA_Factors.sav;
  SAVE = FSCORES;
When I open the file with the saved factor scores, I find two columns for each factor. The first one contains factor scores (centered but not standardized). The second contains a fixed number, identical for each observation. Can you please tell me the meaning of that second column, and is there a way of not saving it? Furthermore, is it possible to save standardized factor scores?
The second number is the standard error of the factor score. This value is the same for each observation when factor indicators are continuous. There is no way to avoid saving it. You cannot save standardized factor scores.
Gabriela R posted on Tuesday, August 23, 2011 - 3:55 am
Hello, I specified a multiple-group LGM (3 groups), and my solution in "Model Results" differs from the solution in the "STDYX" section. I chose to present the standardized findings.
The slopes in 2 groups are non-significant and the slope in the third group is significant. I now have to discuss whether the 3 slopes are significantly different from each other. If I save factor scores and run an ANOVA in SPSS, I obtain values that correspond to the Model Results section. Is there a way to find out whether the slopes are significantly different from each other based on standardized scores?
Use MODEL CONSTRAINT to create the standardized coefficients as new parameters. Use MODEL TEST to test the significance.
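A minimal sketch of that approach for a 3-group LGM (the time scores, group labels g1-g3, and all parameter names here are placeholders, not taken from the original post); the standardized slope mean is formed as the mean divided by the slope's standard deviation:

```
MODEL:
  i s | y1@0 y2@1 y3@2;
MODEL g1:
  [s] (m1);
  s (v1);
MODEL g2:
  [s] (m2);
  s (v2);
MODEL g3:
  [s] (m3);
  s (v3);
MODEL CONSTRAINT:
  NEW(std1 std2 std3);
  std1 = m1 / SQRT(v1);   ! standardized slope mean, group 1
  std2 = m2 / SQRT(v2);
  std3 = m3 / SQRT(v3);
MODEL TEST:                ! joint Wald test of equality
  0 = std1 - std2;
  0 = std2 - std3;
```

MODEL TEST then reports a Wald chi-square test of the two constraints jointly, i.e., of all three standardized slopes being equal.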
Theodor Tes posted on Tuesday, August 14, 2012 - 11:59 am
I was considering the following approach:
- CFA (from a standardized questionnaire)
- extract factor scores (non-refined method & refined, regression method)
- use them as DVs in a regression framework (possibly SUR).
After reading the forums here (an excellent read, thank you!) I found out that I cannot do that, but that I could perhaps use them as independent variables (one variable as dependent, regressed on all factor scores). Am I wrong here?
However, somewhere in an older post there was a remark suggesting (in a similar setup) to estimate the "full model". Does it make sense to estimate the CFA model with the items as usual, plus adding several variables (which I was considering for the regression analysis) into the CFA framework by allowing them to load on several (or each) of the latent variables?
For example, suppose I had

f1 BY a1 a2;
f2 BY a3 a4;

and let x be the variable of interest. Normally, I would regress x on the extracted factor scores, say f1' and f2'.
Or could I estimate the "full model", the "new CFA":

f1 BY a1 a2 x;
f2 BY a3 a4 x;

If it makes sense, what is the name of such an analysis? (What should I look for to know how to interpret such results?)
Look at our User's Guide under "MIMIC" modeling and you can see how you do this "full model" analysis in a single step.
You want to avoid regressions with factor scores - they generally don't behave the way the true values do.
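For reference, a single-step setup along those lines might look like this (the file name and variable list are placeholders, not from the post). Note that the regression of interest is expressed with ON rather than by letting x load on the factors:

```
TITLE: Full model in a single step;
DATA: FILE IS mydata.dat;
VARIABLE: NAMES ARE a1 a2 a3 a4 x;
MODEL:
  f1 BY a1 a2;
  f2 BY a3 a4;
  x ON f1 f2;    ! x regressed on the latent variables directly
```

If x is instead treated as a predictor of the factors, the classic MIMIC direction would be "f1 f2 ON x;". Either way, the measurement and structural parts are estimated jointly, avoiding the bias of two-step factor score regressions.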
Theodor Tes posted on Tuesday, August 14, 2012 - 11:18 pm
Thank you! I will take a look.
Y Chang posted on Friday, February 01, 2013 - 2:19 pm
I am doing a CFA model for categorical variables (an IRT model). I have 7 factors in the model. I am trying to find the standard errors of the factor scores. The first part of the output file is the same as the response file, and the second part is 7 columns, which I assume are the factor scores. I am wondering if I can get the standard errors of the factor scores. Thanks.
I am running a factor mixture model with three first-order factors obtained from 22 categorical (5-point Likert scale) items. If I run the FMM with either the 3 first-order factors or a second order factor describing the three first-order factors, the FMM takes an unreasonable amount of time (~19h) to produce output for 2 classes. To address this issue, I am thinking of two different options:
1) perform a CFA on the factor scores of the three first-order factors to obtain one second-order factor
2) run an EFA/CFA on the 22 items, then use the means of the items loading on each of the (three) factors as indicators for a 1-factor CFA; e.g., the pseudo-code would be:

EFA/CFA: f1 BY it1-it12; f2 BY it13-it19; f3 BY it20-it22;
CFA: f BY mean(it1-it12) mean(it13-it19) mean(it20-it22);
I think option 2 makes more sense, but I was wondering if you had any thoughts on the feasibility of option 1 and the drawbacks of both options in Mplus. Thanks for your help.
I don't love either of the two approximate approaches. Why not instead try to get the correct analysis to run faster? I assume you have 3 dimensions of integration; if so, instead of using 15*15*15 = 3375 points, you could use INTEGRATION = MONTECARLO(500) - or 1000. Also, remove TECH11 and TECH14 from the OUTPUT command until you have found the best solution, to speed things up.
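That suggestion might translate into an ANALYSIS command along these lines (a sketch; the processor count is an added assumption, not part of the advice above):

```
ANALYSIS:
  TYPE = MIXTURE;
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO(500);   ! or MONTECARLO(1000); replaces the
                                   ! default 15 points per dimension
  PROCESSORS = 4;                  ! parallelize if cores are available
```

With Monte Carlo integration the likelihood is approximated with 500 (or 1000) random points in total rather than 3375 quadrature points, which is usually much faster for three or more dimensions.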
anonymous posted on Tuesday, October 23, 2018 - 6:38 am
I am testing an interaction between two LGM slopes. The interaction is significant, and now I want to plot the interaction. The DV is a latent variable.
I was originally going to export factor scores for each variable to make a plot in Excel. However, I read that factor scores for DVs are biased.
You should handle this just like you would an observed-variable regression with an interaction between two observed predictors. So, label the parameters in MODEL and use MODEL CONSTRAINT in line with our Mediation web page:
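A sketch of that labeling approach for two LGM slopes predicting a latent DV (the variable names, time scores, and XWITH interaction setup are assumptions about the poster's model, not taken from the thread):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
MODEL:
  i1 s1 | y1@0 y2@1 y3@2;
  i2 s2 | z1@0 z2@1 z3@2;
  f BY w1 w2 w3;          ! latent DV
  s1s2 | s1 XWITH s2;     ! latent slope-by-slope interaction
  f ON s1 (b1)
       s2 (b2)
       s1s2 (b3);
MODEL CONSTRAINT:
  NEW(losimp hisimp);
  losimp = b1 - b3;       ! simple slope of s1 at s2 = -1
  hisimp = b1 + b3;       ! simple slope of s1 at s2 = +1
```

The NEW parameters give the simple slopes at chosen values of the moderator, with standard errors, which can then be plotted without exporting factor scores.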