Anonymous posted on Monday, August 26, 2002 - 6:38 pm
My thesis topic is confirmatory factor analysis. Can you suggest the best type of data to apply it to, and in what field of interest?
bmuthen posted on Tuesday, August 27, 2002 - 10:15 am
There are so many application areas. I think you should study the literature and explore the area that you are most interested in and that is also agreeable to your mentor and your department.
Anonymous posted on Tuesday, August 27, 2002 - 10:12 pm
Thank you very much!
Anonymous posted on Monday, September 02, 2002 - 11:11 am
I have conducted a multigroup factor analysis in Mplus (using categorical indicator variables).
I want to output the Mplus factor scores (FSs) to a file, and then match them to my original data set.
I'm having a great deal of difficulty because Mplus does not save a case ID to its output files. Furthermore, Mplus appears to re-sort the input data by group ID and by other criteria before producing the FS output file. I know this because even after I re-sort my input data by group ID and case ID, the weights in the input file are ordered differently than in the Mplus output file.
Is there any way to sort the Mplus FS output file so that I can reliably patch the FSs back into my original data set?
Mplus Version 2.0 and up does allow the inclusion of an ID variable. The IDVARIABLE option is part of the VARIABLE command.
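Once an ID variable is saved along with the factor scores, the merge back into the original data no longer depends on row order. A minimal sketch of that merge in Python, with hypothetical file layouts and column names (check the end of the Mplus output for the actual order of saved variables):

```python
# Merge Mplus-saved factor scores back into an original dataset by ID.
# File names, column order, and variable names here are hypothetical.

def load_table(path, columns):
    """Read a whitespace-delimited file into a list of dicts."""
    rows = []
    with open(path) as f:
        for line in f:
            rows.append(dict(zip(columns, line.split())))
    return rows

def merge_by_id(original, scores, id_col="ID"):
    """Attach factor-score columns to the original rows, matching on ID
    rather than on row order, so any re-sorting by Mplus is harmless."""
    score_lookup = {row[id_col]: row for row in scores}
    merged = []
    for row in original:
        combined = dict(row)
        combined.update(score_lookup.get(row[id_col], {}))
        merged.append(combined)
    return merged
```

For example, `merge_by_id(load_table("original.dat", ["ID", "y1"]), load_table("fscores.dat", ["ID", "F1"]))` would attach each case's factor score regardless of how the saved file is sorted.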
Anonymous posted on Tuesday, September 03, 2002 - 10:50 am
Perfect. I'd consulted the wrong part of the manual. Works great.
Anonymous posted on Tuesday, September 10, 2002 - 12:03 am
Is it possible to include an indirect effect when examining measurement invariance of a single-factor measure in a multiple group model? Thanks!
bmuthen posted on Tuesday, September 10, 2002 - 8:02 am
Yes. I assume you mean that you have an x variable that influences the factor and therefore the indicators indirectly.
Hervé CACI posted on Monday, February 24, 2003 - 2:13 am
In some recent exchanges on SEMNET, Stan Mulaik argued that his parsimony ratio should be taken into consideration in fit testing. I don't see how it can work with WLSMV, since the number of degrees of freedom reflects both the number of parameters to be estimated and the data. Neither Stan nor anybody else on the list answered my question. Is it a worthless thought?
bmuthen posted on Tuesday, February 25, 2003 - 9:42 am
I think you might want to use WLS for this.
Anonymous posted on Tuesday, June 07, 2005 - 6:44 am
What is the maximum number of dichotomous items Mplus can handle when doing CFA? When I ran 147 dichotomous items, it kept running. Thanks!
The short answer is no. A longer answer is as follows.
Mplus gives eigenvalues for exploratory factor analysis, and these eigenvalues are for the sample correlation matrix, used to guide the choice of the number of factors. Many researchers in the past have used the amount of variance explained in the observed variables by a factor as a description of the quality of the factor solution. This amount of variance is the sum of the squared loadings in a column (for a factor) when the factors are uncorrelated. This amount is related to the eigenvalue - it would be the eigenvalue if the estimation method were principal component analysis (which is not a great estimator for factor models). Also, one could compute the eigenvalues for the model-estimated correlation matrix.
However, I would question the value of eigenvalue information for factor analysis beyond the EFA purpose of guiding the choice of the number of factors. To decide on a well-fitting model in CFA, we have better fit measure alternatives (and eigenvalues are not fit measures anyhow). And since factor analysis is not designed to maximize the variance explained (but rather to capture the correlation structure), the descriptive value of an eigenvalue is also not clear.
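The relationship described above can be made concrete: for uncorrelated factors, the variance explained by a factor is the column sum of its squared loadings, and the model-implied correlation matrix can be built from the loadings and uniquenesses. A sketch with made-up standardized loadings (not from any post in this thread):

```python
import numpy as np

# Illustrative standardized loading matrix: 6 indicators, 2 uncorrelated
# factors. The numbers are invented for demonstration only.
L = np.array([
    [0.8, 0.0],
    [0.7, 0.0],
    [0.6, 0.0],
    [0.0, 0.8],
    [0.0, 0.7],
    [0.0, 0.6],
])

# Variance explained by each factor: column sums of squared loadings.
var_explained = (L ** 2).sum(axis=0)

# Model-implied correlation matrix: Lambda Lambda' plus diagonal
# uniquenesses (1 minus each indicator's communality).
uniqueness = 1.0 - (L ** 2).sum(axis=1)
R = L @ L.T + np.diag(uniqueness)

# Eigenvalues of the model-implied correlation matrix are related to,
# but not equal to, the sums of squared loadings.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
```

Here `var_explained` is 1.49 for each factor, while the leading eigenvalues of `R` differ from that value, which is the distinction the answer draws between factor analysis and principal component analysis.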
anonymous posted on Tuesday, January 16, 2007 - 8:21 pm
Does this mean that the value of the variance for each factor in the output is the variance explained by the factor? I am a little confused as to what it represents.
No, the factor variance is how much variability there is in the factor. Variance explained refers to how much variance of the factor indicators is explained by the factor. You can find this by looking at the R-square values of the factor indicators.
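In a standardized (StdYX) solution, each indicator's R-square is simply its squared standardized loading, and the residual variance is the remainder. A tiny sketch with illustrative loadings:

```python
# For a standardized one-factor solution, an indicator's R-square equals
# its squared StdYX loading; residual variance is 1 minus R-square.
# The loading values are illustrative, not taken from any post above.

stdyx_loadings = [0.75, 0.60, 0.50]
r_square = [round(l ** 2, 4) for l in stdyx_loadings]
residual_var = [round(1 - r, 4) for r in r_square]
```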
Reetu Kumra posted on Monday, February 26, 2007 - 10:53 am
I have a few questions:
1. In a confirmatory factor analysis output, how is the column labeled StdYX (the last column) interpreted? Is this the correlation between the latent construct and the observed variable? Please help.
2. When doing a CFA on two groups within a sample, what is the difference in doing a multi-group analysis and doing a CFA on these two groups separately?
1. This is a raw coefficient standardized using both latent variable and observed variable variances.
2. If you analyze both groups together with all parameters free across groups, you will obtain the same estimates as if you analyzed the two groups separately. Usually, the two groups are analyzed together so that equality constraints can be used to test for measurement invariance.
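The standardization in point 1 can be written out: the StdYX coefficient multiplies the raw loading by the standard deviation of the latent variable and divides by the standard deviation of the observed variable. A sketch (variance values are illustrative):

```python
import math

# StdYX standardization of a raw factor loading: scale by the SD of the
# latent variable over the SD of the observed indicator. All numeric
# inputs below are illustrative.

def stdyx_loading(raw_loading, factor_variance, indicator_variance):
    return raw_loading * math.sqrt(factor_variance) / math.sqrt(indicator_variance)
```

For a single-factor model with uncorrelated residuals, the StdYX loading of an indicator does equal its correlation with the factor, which is why it is often read that way.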
Reetu Kumra posted on Tuesday, February 27, 2007 - 11:09 am
One last question: Once the CFA is complete, is there a way to make the latent constructs created into a measurable variable? (i.e. Can we somehow get something equivalent to data for the latent constructs?)
I was trying to save the standardized output in the configural, metric, scalar, and complete invariance tests. But Mplus does not save the standardized output for the factor loadings in the metric invariance test; for the intercepts and factor loadings in the scalar invariance test; or for the intercepts, factor loadings, and residual variances in the complete invariance test. Instead of saving the standardized output for the parameters mentioned above, Mplus saves 999 (missing). Any help on how to save these standardized outputs would be appreciated.
Mplus does not save standardized parameter estimates that are constrained to be equal.
ehsan malek posted on Wednesday, July 14, 2010 - 11:17 am
I have a CFA model with two latent variables. I calculated the average variance extracted (AVE) for each of the two variables, and it is around .3 for each. Composite reliability is around .7 for each of the two latent variables. I have around 500 cases. The model fit indices are OK (almost OK; chi-square is not, and I think that is because of the large sample size). What can I do about the AVE (its recommended value is > .5)? Does it have something to do with the sample size? Since other things are OK with the model, can I accept it?
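For reference, both quantities in this question can be computed directly from the standardized loadings, following the usual Fornell-Larcker formulas. A sketch with illustrative loadings (not the poster's actual estimates); note that AVE averages squared loadings, so it penalizes weak loadings more heavily than composite reliability does, which is why AVE near .3 can coexist with CR near .7:

```python
# Average variance extracted (AVE) and composite reliability (CR) from
# standardized loadings, per the usual Fornell-Larcker formulas.
# The loading values used below are illustrative.

def ave(loadings):
    """Mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """(sum of loadings)^2 over that plus the summed residual variances."""
    num = sum(loadings) ** 2
    return num / (num + sum(1 - l ** 2 for l in loadings))
```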
Is there a reason why a model would run without errors in one sample and not in another, irrespective of sample size? I am trying to run a four-factor model in four independent samples of N = 234, 296, 334, and 568. It returned errors for the samples of 296 and 334.
F1 by sgl3 sgl17 sgl25 sgl68;
F2 by sgl41 sgl42 sgl67 sgl76 sgl100;
F3 by sgl5 sgls8 sgl78 sgl84 sgl94 sgl96 sgl98;
F4 by sgl30 sgl40 sgl55 sgl83 sgl92 sgl97 sgl102;
Output: Sampstat standardized mod tech4;
WARNING: The latent variable covariance matrix (psi) is not positive definite. This could indicate a negative variance/residual variance for a latent variable, a correlation greater or equal to one between two latent variables, or a linear dependency among more than two latent variables. Check the tech4 output for more information. Problem involving variable F2.
I did observe a correlation greater than 1 between two latent variables (F2 & F4). Is there any way of fixing this problem?