Message/Author 

Anonymous posted on Monday, August 26, 2002  6:38 pm



My thesis topic is confirmatory factor analysis. Can you suggest the best type of data to apply it to, and in what field of interest? 

bmuthen posted on Tuesday, August 27, 2002  10:15 am



There are so many application areas. I think you should study the literature and explore the area that you are most interested in and that is also agreeable to your mentor and your department. 

Anonymous posted on Tuesday, August 27, 2002  10:12 pm



Thank you very much! 

Anonymous posted on Monday, September 02, 2002  11:11 am



I have conducted a multigroup factor analysis in Mplus (using categorical indicator variables). I want to output the Mplus factor scores (FSs) to a file and then match them to my original data set. I'm having a great deal of difficulty because Mplus does not save a case ID to its output files. Furthermore, Mplus appears to re-sort the input data by group ID and by other criteria before producing the FS output file. I know this because even after I re-sort my input data by group ID and case ID, the weights in the input file are ordered differently than in the Mplus output file. Is there any way to sort the Mplus FS output file so that I can reliably merge the FSs back into my original data set? 


Mplus Version 2.0 and up does allow the inclusion of an ID variable. The IDVARIABLE option is part of the VARIABLE command. 
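For example, an input file that both keeps the case ID and saves factor scores might be sketched as follows. This is a minimal illustration only; the data file name, variable names, and grouping values are hypothetical.

```
DATA:
  FILE = mydata.dat;
VARIABLE:
  NAMES = id group u1-u10;
  CATEGORICAL = u1-u10;
  GROUPING = group (1 = g1 2 = g2);
  IDVARIABLE = id;        ! carries the case ID into saved output
MODEL:
  f BY u1-u10;
SAVEDATA:
  FILE = fscores.dat;
  SAVE = FSCORES;         ! writes factor scores to the file
```

With IDVARIABLE specified, the saved file contains the ID alongside the factor scores, so the scores can be merged back into the original data set regardless of how Mplus orders the cases.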

Anonymous posted on Tuesday, September 03, 2002  10:50 am



Perfect. I'd consulted the wrong part of the manual. Works great. 

Anonymous posted on Tuesday, September 10, 2002  12:03 am



Is it possible to include an indirect effect when examining measurement invariance of a single-factor measure in a multiple-group model? Thanks! 

bmuthen posted on Tuesday, September 10, 2002  8:02 am



Yes. I assume you mean that you have an x variable that influences the factor and therefore the indicators indirectly. 
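In Mplus syntax, such a model could be sketched as follows (a minimal single-group fragment; the variable names y1-y5 and x are hypothetical):

```
MODEL:
  f BY y1-y5;   ! measurement part
  f ON x;       ! x influences the factor, and thereby
                ! the indicators indirectly
```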

Hervé CACI posted on Monday, February 24, 2003  2:13 am



In some recent exchanges on SEMNET, Stan Mulaik argued that his parsimony ratio should be taken into consideration for fit testing. I don't see how it can work with WLSMV, since the number of degrees of freedom reflects both the number of parameters to be estimated and the data. Neither Stan nor anybody else on the list answered my question. Is it a worthless thought? Thanks. 

bmuthen posted on Tuesday, February 25, 2003  9:42 am



I think you might want to use WLS for this. 

Anonymous posted on Tuesday, June 07, 2005  6:44 am



What is the maximum number of dichotomous items Mplus can handle when doing a CFA? When I ran 147 dichotomous items, it kept running. Thanks! 


The maximum number of variables allowed in Mplus is 500. With categorical outcomes, the analysis can take some time with 147 items depending on the speed of your computer. 

Eric Buhi posted on Tuesday, January 31, 2006  10:38 am



According to the APA manual, I need to report means/SDs for all the variables I include in my modeling. I get variable means with SAMPSTAT, but how do I produce the standard deviations? Thanks! 


Take the square root of the variances that are also reported in the sample statistics. 

Eric Buhi posted on Tuesday, January 31, 2006  11:16 am



Thank you for your reply. Do you mean the covariances on the diagonal (following the means results)? 


The variances are on the diagonal of a variance/covariance matrix. The off-diagonal elements are covariances. 
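The conversion from SAMPSTAT variances to standard deviations is a one-liner. A minimal sketch, where the variances are hypothetical values read off the diagonal of the sample covariance matrix:

```python
import math

# Variances as reported on the diagonal of the SAMPSTAT covariance
# matrix; the values here are made up for illustration only.
variances = [4.0, 2.25, 0.81]

# The standard deviation of each variable is the square root of its variance.
sds = [math.sqrt(v) for v in variances]
print(sds)  # approximately [2.0, 1.5, 0.9]
```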

anonymous posted on Wednesday, January 10, 2007  11:42 am



Hi, I have performed a CFA with 7 factors. In the output, is it possible to get eigenvalues for each of these factors? Something similar to an SPSS or Stata output for factor analyses? Thanks 


The short answer is no. A longer answer is as follows. Mplus gives eigenvalues for exploratory factor analysis, and these eigenvalues are for the sample correlation matrix, used to guide the choice of the number of factors. Many researchers in the past have used the amount of variance explained in the observed variables by a factor as a descriptive of the quality of the factor solution. This amount of variance is the sum of the squared loadings in a column (for a factor) when the factors are uncorrelated. This amount is related to the eigenvalue; it would be the eigenvalue if the estimation method were principal component analysis (which is not a great estimator for factor models). Also, one could compute the eigenvalues for the model-estimated correlation matrix. However, I would question the value of eigenvalue information for factor analysis beyond the EFA purpose of guiding the choice of the number of factors. To decide on a well-fitting model in CFA we have better fit measure alternatives (and eigenvalues are not fit measures anyhow). And since factor analysis is not designed to maximize variance explained (but to capture correlation structure), the descriptive value of an eigenvalue is also not clear. 
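The "sum of squared loadings in a column" calculation mentioned above can be sketched in a few lines. The loading matrix below is hypothetical, chosen only to illustrate the arithmetic for two uncorrelated factors:

```python
# Hypothetical standardized loading matrix (rows = indicators,
# columns = factors) for two uncorrelated factors; values are made up.
loadings = [
    [0.8, 0.0],
    [0.7, 0.0],
    [0.6, 0.0],
    [0.0, 0.5],
    [0.0, 0.9],
]

# With uncorrelated factors, the variance a factor explains in the
# indicators is the column sum of squared loadings.
explained = [sum(row[j] ** 2 for row in loadings) for j in range(2)]
print(explained)  # approximately [1.49, 1.06]
```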

anonymous posted on Tuesday, January 16, 2007  8:21 pm



Does this mean that the value of the variance for each factor in the output is the variance explained by the factor? I am a little confused as to what it represents. Thanks 


No, the factor variance is how much variability there is in the factor. Variance explained refers to how much variance of the factor indicators is explained by the factor. You can find this by looking at the R-square values of the factor indicators. 

Reetu Kumra posted on Monday, February 26, 2007  10:53 am



Hello, I have a few questions: 1. In a confirmatory factor analysis output, the column that is labeled StdYX (last column)...how is this interpreted? Is this the correlation between the latent construct and the actual variable? Please help. 2. When doing a CFA on two groups within a sample, what is the difference in doing a multigroup analysis and doing a CFA on these two groups separately? Thanks, 


1. This is a raw coefficient standardized using both latent variable and observed variable variances. 2. If you analyze both groups together with all parameters free across groups, you will obtain the same estimates as if you analyzed the two groups separately. Usually, the two groups are analyzed together so that equality constraints can be used to test for measurement invariance. 
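For a factor loading, the standardization described in point 1 can be written out explicitly. A sketch with hypothetical parameter values (a loading, a factor variance, and a model-estimated indicator variance):

```python
import math

# Hypothetical parameter estimates, made up for illustration.
lam = 1.2      # raw (unstandardized) factor loading
psi = 0.64     # factor variance
var_y = 2.25   # model-estimated variance of the indicator

# StdYX standardizes the loading using both the latent and the observed
# standard deviations: lambda_stdyx = lambda * SD(factor) / SD(y).
lam_stdyx = lam * math.sqrt(psi) / math.sqrt(var_y)
print(round(lam_stdyx, 2))  # 1.2 * 0.8 / 1.5 = 0.64
```

In a simple CFA without covariates, this StdYX loading can be read as the correlation between the factor and the indicator.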

Reetu Kumra posted on Tuesday, February 27, 2007  11:09 am



Thanks Linda! One last question: Once the CFA is complete, is there a way to make the latent constructs created into a measurable variable? (i.e. Can we somehow get something equivalent to data for the latent constructs?) 


Are you asking if you can obtain factor scores? If so, you can do this using the FSCORES option of the SAVEDATA command. 

Reetu Kumra posted on Tuesday, February 27, 2007  12:08 pm



Hi Linda. I have two more questions: 1. How exactly is the StdYX derived? When you say standardized, please clarify how the raw coefficients are standardized. 2. How exactly are the factor scores created? Is this an overall measure of the raw data that go into the factor? Thanks! 


1. See Technical Appendix 3 which is on the website. 2. See Technical Appendix 11 which is on the website. 

Derek Kosty posted on Wednesday, August 20, 2008  10:25 am



Hello, I have noticed that the number of free parameters differs between Mplus Version 4 and Version 5.1. When running the model: MODEL: intern by LMDD4 LDYS4 LDPD4 LGOA4 LPTS4 LSPE4 LSOC4 LPAN4 LOBC4; Version 4 counts 9 free parameters and Version 5.1 counts 18. What is the reason for this? Thanks! 


With Version 5, TYPE=MEANSTRUCTURE became the default. This is the cause. You can add MODEL=NOMEANSTRUCTURE; to the ANALYSIS command to override this default. 
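For reference, the override mentioned above would be specified like this in the input file (a minimal fragment):

```
ANALYSIS:
  MODEL = NOMEANSTRUCTURE;
```

With the Version 5 mean-structure default in effect, the nine additional parameters counted here would be the intercepts of the nine indicators.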


I was trying to save the standardized output in configural, metric, scalar, and complete invariance tests. But Mplus does not save the standardized output of the factor loadings in the metric invariance test; the intercepts and factor loadings in the scalar invariance test; or the intercepts, factor loadings, and residual variances in the complete invariance test. Instead of saving the standardized output of the parameters mentioned above, Mplus saves 999 (missing). Any help on how to save these standardized outputs would be appreciated. 


Mplus does not save standardized parameter estimates that are constrained to be equal. 

ehsan malek posted on Wednesday, July 14, 2010  11:17 am



Hi, I have a CFA model with two latent variables. I calculated the average variance extracted (AVE) for each of the two variables, and it is around .3 for each. Composite reliability is around .7 for each of the two latent variables. I have around 500 cases. Model fit indices are OK (almost OK; chi-square is not, and I think that is because of the big sample size). What can I do about the AVE (its recommended value is > .5)? Does it have something to do with the sample size? As other things are OK with the model, can I accept it? Thanks! 


I would look at factor determinacy. It is probably correlated with AVE. Can you give a reference for AVE? I would also not discount chi-square with a sample size of 500. This is not large. 


Linda, AVE is average variance extracted in factor analysis. (It would be great if Mplus could compute AVE...) Chris B. 
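Although Mplus does not report AVE directly, it is easy to compute by hand from the standardized loadings: the mean of the squared standardized loadings of a factor's indicators. A sketch with hypothetical loadings in the range that would yield an AVE around .3, as in the question above:

```python
# AVE for one factor, computed from the standardized loadings of its
# indicators: the mean of the squared standardized loadings.
# The loadings below are hypothetical.
std_loadings = [0.50, 0.60, 0.55, 0.45]

ave = sum(l ** 2 for l in std_loadings) / len(std_loadings)
print(round(ave, 3))  # approximately 0.279
```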


Hello Dr. Muthen, Is there a reason why a model would run without errors in one sample and not in another, irrespective of sample size? I am trying to run a four-factor model in four independent samples of N = 234, 296, 334, and 568. It returned errors for the samples of 296 and 334. F1 by sgl3 sgl17 sgl25 sgl68; F2 by sgl41 sgl42 sgl67 sgl76 sgl100; F3 by sgl5 sgls8 sgl78 sgl84 sgl94 sgl96 sgl98; F4 by sgl30 sgl40 sgl55 sgl83 sgl92 sgl97 sgl102; Output: Sampstat standardized mod tech4; WARNING: The latent variable covariance matrix (psi) is not positive definite. This could indicate a negative variance/residual variance for a latent variable, a correlation greater or equal to one between two latent variables, or a linear dependency among more than two latent variables. Check the tech4 output for more information. Problem involving variable F2. I did observe a correlation greater than 1 for two latent variables (F2 & F4). Is there any way of fixing this problem? Thank you 


The same model might not be correct for different data sets. It sounds like that is the case. A correlation greater than one means the model is inadmissible. You need to change the model. 
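The inadmissibility can be seen directly by converting the latent covariance to a correlation. A sketch with made-up values chosen to mimic the situation described above, where the F2/F4 covariance is too large relative to the factor variances:

```python
import math

# Hypothetical latent (co)variances, made up to mimic an inadmissible
# solution in which two factors correlate above 1.
var_f2 = 0.90
var_f4 = 0.80
cov_f2_f4 = 0.95

# correlation = covariance / (SD of F2 * SD of F4)
corr = cov_f2_f4 / (math.sqrt(var_f2) * math.sqrt(var_f4))
print(round(corr, 3))  # greater than 1, so psi cannot be positive definite
```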
