Student posted on Sunday, July 13, 2008 - 2:01 am

I am doing a study to develop a new instrument. The instrument started with 73 items rated on 5-point Likert scales, which I treat as continuous. Using principal component analysis (PCA) in SPSS, I removed poorly performing items and ended up with 53 items forming 15 factors. My concern is that this factor structure is biased by the large amount of missing data: because SPSS uses listwise deletion, only 474 of the 644 cases were used in the PCA. I thought the ML estimator in an Mplus EFA might solve the missing data problem, and I wanted to compare the factor structures from PCA in SPSS and from ML in Mplus. So I used the following commands:

Analysis: estimator = ml; Type = efa 1 15 missing;

But the program has been running for hours without finishing. I suspect something is wrong in the model, but I am stuck here. Please help!

ML involves heavy computations with 53 variables and 15 factors, and you are doing 15 analyses with 1-15 factors. I would use a simpler approach such as ULS to home in on a more limited range of factors, and then run ML for that range.
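For concreteness, a minimal input for such a ULS screening run might look like the sketch below. The title, file name, variable names, and missing-value code are placeholders rather than details from the original post; recent Mplus versions handle missing data by default, while older versions required the MISSING keyword on the TYPE option, as in the input quoted above.

TITLE: ULS EFA to screen 1 through 15 factors;
DATA: FILE IS items.dat;             ! placeholder file name
VARIABLE: NAMES ARE y1-y53;          ! placeholder variable names
          MISSING ARE ALL (-99);     ! placeholder missing-value code
ANALYSIS: ESTIMATOR = ULS;
          TYPE = EFA 1 15;

Once the ULS fit results narrow the plausible range, say to 12 through 15 factors, the same input can be rerun with ESTIMATOR = ML; and TYPE = EFA 12 15; for the final comparison.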
Student posted on Monday, July 14, 2008 - 8:57 am

Thank you for your answer. I am sorry, but I am not familiar with ULS. Could you give me some more direction so that I can look up materials on it? Thanks!
Student posted on Monday, July 14, 2008 - 9:07 am

Oh, I tried "estimator = uls" and the program finished with convergence. Is that what you meant? I see only RMSEA as a goodness-of-fit index in the output. Is there a way to get chi-square values and other fit statistics? Thanks.
|
Student posted on Monday, July 14, 2008 - 9:13 am
|
|
|
Sorry about posting separately. It seems that I cannot edit my previous message... I wish so. I have only RMSEA info in the output, but anyways based on that, 15 factor model gives bad RMSEA like 0.5. I tried even 20 factors, but RMSEA goes down to 0.2. Does this mean that my principal component analysis in SPSS with listwise deletion of missing values resulted in a wrong factor structure? What would you suggested as a next step? Thanks so much! |
It's hard to tell without more information. Please send your input, data, output, and license number to support@statmodel.com.

PCA is not a great approach to doing factor analysis, and listwise deletion may hurt as well. But the most likely source of your misfit is that you are exploring a "new instrument," as you say. Not all of the items may follow a neat factor model, no matter how many factors you add. This is the time to explore by deleting, modifying, and adding items until what you hypothesize to measure is well measured. For doing better EFA, you may also want to take a look at Fabrigar, L.R., Wegener, D.T., MacCallum, R.C., & Strahan, E.J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
Student posted on Tuesday, July 15, 2008 - 9:28 am

Thank you so much for your help and references. I will try more and get back to you. Thanks.
I did an EFA for one of my study variables with four factors, but for every factor solution the chi-square is always significant. What should I do?

How many factors were the items developed to measure? Look at solutions starting from two factors fewer and going to two factors more than that number.
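As an illustration, if the items were written to measure four factors, the corresponding analysis request would cover the 2- through 6-factor solutions; the four-factor assumption here is only an example:

ANALYSIS: TYPE = EFA 2 6;   ! two factors below and two above the intended four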
Hi, I am doing an EFA with 12 continuous indicators that are not normally distributed. What is my best choice for the ESTIMATOR? Thanks a lot.

I think the MLR estimator is your best choice.
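A minimal sketch of such an input, assuming the 12 indicators are named y1-y12 and live in a placeholder data file; the factor range is illustrative:

TITLE: EFA with MLR for non-normal continuous indicators;
DATA: FILE IS data.dat;         ! placeholder file name
VARIABLE: NAMES ARE y1-y12;     ! placeholder variable names
ANALYSIS: ESTIMATOR = MLR;      ! ML with robust standard errors
          TYPE = EFA 1 4;       ! illustrative range of factor solutions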
Mahdi posted on Saturday, March 29, 2014 - 12:27 pm

Don't we need a multivariate normal distribution for the MLR approach?
ML parameter estimates are robust to multivariate non-normality, as are MLR standard errors.
toby hopp posted on Monday, October 10, 2016 - 11:45 am

I'm using a fairly simple set of models to explore a large dataset (n = ~83,000). I have ten variables, all of which are count measures. I'm looking at 3-, 4-, and 5-factor models. If I treat these measures as continuous, the analyses complete very quickly. However, if I declare the variables as count measures and use MLR estimation, the analyses take an extraordinarily long time to run (8 hours and counting). My computational resources seem adequate, so I'm wondering if the large n is simply overwhelming Mplus.

Your model requires numerical integration, and with numerical integration the entire data set must be read at each iteration. This is why it takes so long. You can try INTEGRATION = MONTECARLO(5000); instead of the default. You could also consider a random sample of your full data set.
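A sketch of an input along these lines, with placeholder file and variable names; the COUNT option is what triggers numerical integration, and the INTEGRATION option swaps the default integration method for Monte Carlo integration with 5000 integration points, as suggested above:

DATA: FILE IS counts.dat;            ! placeholder file name
VARIABLE: NAMES ARE u1-u10;          ! placeholder variable names
          COUNT = u1-u10;            ! declare all ten variables as counts
ANALYSIS: ESTIMATOR = MLR;
          TYPE = EFA 3 5;            ! the 3- through 5-factor solutions
          INTEGRATION = MONTECARLO(5000);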