Binary data and factor analysis
 Wong Su Luan posted on Wednesday, January 12, 2000 - 2:47 pm
I am looking for software that can help with the problem below; I am not
sure if your product suits my needs. Please advise.

I am constructing an instrument to measure IT skills and knowledge. I want
to find out the common factors that
might exist.

The IT skills are measured as follows:
Each participant is given a set of tasks to execute for 5 productivity tools
(word processor, database, spreadsheet, presentation and internet).

Example of tasks:
1). Change the text of paragraph one to bold.
2). Insert page numbering for page one of your document.
...................etc.

To measure knowledge about IT:
A thirty-question multiple-choice test will be administered to each
participant.

Example:
1. What does PC mean?
a. personal computer b. personal contact c. personal connection d.
personal cache


For skills, one point is awarded if the participant is able to execute the
task and zero points otherwise. For knowledge, each correct answer is awarded
one point and each incorrect answer zero points.

Is it possible to conduct factor analysis for both skills and knowledge
using your product, given that my data will be in ones and zeroes (dichotomous)?
Regards,
Su Luan
 Linda K. Muthen posted on Thursday, January 13, 2000 - 9:55 am
Mplus does exploratory and confirmatory factor analysis of dichotomous items. So it sounds like it would be suitable for the analysis that you have in mind.
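For example, a minimal EFA input for binary items might look like this, where the file and variable names are just placeholders for your own:

TITLE: EFA of binary skill and knowledge items
DATA: FILE IS itskills.dat;
VARIABLE: NAMES ARE u1-u30;
          CATEGORICAL ARE u1-u30;
ANALYSIS: TYPE = EFA 1 4;

TYPE = EFA 1 4 requests the 1- through 4-factor solutions.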
 Matthew Schulz posted on Friday, February 25, 2000 - 12:34 pm
I'm wondering if Mplus will be adequate for exploratory f.a. of ordinal categorical data with missing values. I want to avoid artifacts such as 'difficulty factors'. How does Mplus compare to NOHARM in solving the problem of nonlinearity in the relationship between latent and indicator variables when the indicator variables are dichotomous or ordered polytomous?
 Bengt O. Muthen posted on Saturday, February 26, 2000 - 9:06 am
Mplus does not handle missing data for categorical outcomes. Mplus uses a probit regression of the item on the factor, thereby allowing for the non-linear relationship. I don't recall how NOHARM models this.
 Eric van Schooten posted on Wednesday, September 13, 2000 - 6:04 am
I'm a researcher at the University of Amsterdam and used Mplus in a study requested by the government, aimed at verifying the equivalence over two different years of nationwide secondary education exams for reading comprehension in the foreign languages German, English and French. For part of this research I conducted exploratory and confirmatory factor analyses with Mplus in order to check whether two exams for the same language and type of education from two different years may be seen as unidimensional or not, and if not, whether it was possible to distinguish different interpretable ability factors. In each analysis I analysed the factorial structure of 100 (efa) or 50 (cfa) m.c. items (right or wrong answer, 4 choices on average) coming from two exams from two different years (94 & 97), for instance 100 or 50 items from the exams in French reading comprehension of 1994 and 1997 for higher general education.
The results of the efa's (100 items, sample sizes 126-730) show that the exams have 34 to 39 factors with an eigenvalue greater than 1. First factors for the nine different (combinations of) exams analysed explain 12 to 18% of the variance, second factors 5% or less.
The cfa's (18 analyses on 50 items each) show that 9 out of 18 times the chi-square for the one-factor model is significant at 5% (same sample sizes), but the ratio of chi-square to df is never greater than 2.
Based on these results we decided to analyse the population data (sample sizes of 10,000 to 30,000 students) to detect differences in reading comprehension in the foreign languages between the populations of examinees from 1994 and 1997. To do this we used an item response model (the OPL model, an extension of the Rasch model) and used the sample data to link the population data. This OPL model presumes unidimensional data. The fit of the OPL model was very good (and often almost perfect) in all 9 cases (three types of education and three different languages). The Rasch model fitted only one out of nine times (at 10%).
The government asked independent methodologists and statisticians to verify the validity of our conclusions about trends in the proficiency of the populations of 1994 and 1997. One of these reviewers claims that factor analyses with categorical data are not reliable because of the 'use' of tetrachoric correlation coefficients. Reading the Mplus manual I could not find an answer, although I get the impression Mplus does not use tetrachoric correlation coefficients with dichotomous data. My question is: how does Mplus handle categorical data, and is this criticism of the reviewer valid?

A second question, although not directly related to Mplus, is whether guessing the right answer could distort the factor structure we found and, if so, whether there is a solution to the guessing problem (for instance, would the guessing result in a guessing factor in an efa on which the more difficult items have higher loadings than the easy items, or could we model a guessing factor in a cfa)?

I would be very grateful if you could give me your opinion on these questions.

Sincerely,

Eric
 bmuthen posted on Thursday, September 21, 2000 - 8:30 am
Mplus uses tetrachoric correlations with binary outcomes in an EFA. With multiple-choice items and only 4 answer alternatives, the guessing probabilities can be quite high and the tetrachorics are then to some extent distorted. Psychometricians have written about this and tried corrections of the tetrachorics (work by Carroll?), but these are seldom used now and Mplus does not incorporate them. I don't know how guessing would likely distort the solution.

It is probably better to use the RMSR measure to decide on the number of factors rather than using number of eigenvalues greater than one.
 Anonymous posted on Tuesday, October 31, 2000 - 3:57 pm
I have read some of Bengt's work on EFA with binary and ordered categorical data in Psychometrika and elsewhere and am preparing to analyze a large data set with a modest (~10) number of 5-level indicator items. My questions concern the ways in which one should best prepare ordered categorical data for EFA and CFA in Mplus as well as model fit and violations of assumptions in the Muthen/Mplus framework.

In one article, Bengt's recommendation seems to be that frequency %'s for each of the indicator variables' levels should be greater than 5%. Elsewhere I get the impression that problems with the procedure are minimized if the skews of the variables are similar (same direction and magnitude). Which, if either, of these procedures is correct?

Is the chi-square test provided by the WLSMV estimator preferable to either the RMSR or Muthen's suggested Descriptive Fit Value (DFV) in judging model fit?

I am concerned about violating the assumptions underlying EFA for categorical data. In an SMR article from 1989 Bengt suggests that LISCOMP has a feature to test for the non-normality of underlying indicator variables. Does Mplus have a similar feature, or is the validity of these assumptions easily verified using the Mplus output?

Finally, is higher order factor analysis generally used to "correct" for the non-normality of underlying categorical variables or to account for the strong correlation between factors (or both)?
 bmuthen posted on Wednesday, November 01, 2000 - 10:02 am
My 1989 SM&R paper (see Mplus reference list) gives some advice for binary outcomes, pointing to the need to have sufficient numbers of observations in each 2 x 2 table for pairs of outcomes. With samples of say 5,000 one can even allow bivariate proportions as low as 0.01. Your situation, however, is for 5-category outcomes, and here I don't know how many observations are needed in each category. Also, parameter standard errors and chi-square usually require more observations than estimates do. Your reference to skewness is more relevant when analyzing categorical data as if they were continuous; skewness in opposite directions for positively correlated variables gives the strongest correlational attenuation. To get more specific knowledge for your situation, a simulation study could be conducted within Mplus.

The WLSMV chi-square test is rather strict (i.e. has high power to reject a false model). RMSR is a good descriptive measure although I haven't yet seen simulation studies of it for categorical outcomes. DFV is more ad hoc.

Mplus does not include the "triplet" test for underlying normality that LISCOMP had, nor do we test for underlying normality in bivariate tables with polytomous outcomes. We felt that these normality tests are perhaps unnecessarily strict. Our approach builds on the idea that underlying normality may not hold exactly, but that it is better to act as if it were true than to take the alternative of ignoring the categorical nature of the outcomes and treating them as continuous.

Higher-order FA does not correct for non-normality as I see it, but to do that you would need non-normal covariates. Higher-order FA can be used to explain strong correlation between factors.
 Anonymous posted on Wednesday, November 01, 2000 - 4:50 pm
After I read your comment above that the normality assumptions of categorical indicators might be unnecessarily strict, I noticed that you state in your 1989 SMR article that the assumption is a crucial limitation of the EFA/CFA with categorical indicator approach. I assume therefore that if nothing else one would want to be able to at least comment on the appropriateness / limitations of a given model or exclude offending items during the EFA stage, hence my continued interest in the normality assumption issue.

Am I correct in understanding that performing simultaneous probit regressions on the categorical items using the raw score as the x-variable can provide insight into the validity of the normality assumption? If one then wants to gauge the validity of the normality assumption for various N-factor models, could one proceed by performing N simultaneous probit regressions on the categorical items using the individual factor items' raw scores as x-variables (I assume in the SMR piece that you used N factors suggested by an EFA using "normal tetrachorics")? Is the ultimate convergence / nonconvergence of solutions (simultaneous raw score probits vs. EFA on indicator variables) a worthwhile "test" of underlying normality?

Is it possible to use any information contained in the residuals from a "normal tetrachoric" EFA to gauge the validity of the underlying normality assumptions?

In the 1989 SMR piece you reference a paper entitled "A Simple Approach for Estimating and Testing Non-normal Latent Variable Correlations" that had a "forthcoming" publication date. I've performed searches of various on-line archives and have been unable to locate the piece. Would you provide a citation?

Would you clarify your statement: "Higher-order FA does not correct for non-normality as I see it, but to do that you would need non-normal covariates"? In general, is the specification of higher order factors preferable to imposing a correlation term in the SEM only if it contributes to model clarity?

Lastly, am I correct in interpreting your comment above about performing "simulation studies in Mplus" to mean that I can bootstrap my sample using Mplus' Monte Carlo feature? Consistent estimates of model parameters would provide evidence that skewness is likely not a concern in my situation.

My apologies for the lengthy message. I appreciate your help.
 bmuthen posted on Thursday, November 02, 2000 - 10:59 am
Considering the plausibility of underlying non-normality is appropriate. As with the use of tetrachorics, non-normality is often not considered in the related field of IRT, where single-factor models are used to analyze binary items. As in IRT, underlying normality comes about with a normal factor and normal residuals, where non-normality is typically thought of as a function of a non-normal factor (this reasoning also relates to second-order FA below).

The "Simple Approach.." paper was never finished. Its idea is given on page 30 of my SM&R article. It results in "non-normal tetrachorics" that can be used instead of the regular ones. Convergence or not of the probit regressions is not the concern here, but the use of an alternative set of correlations and seeing if that makes a difference in the interpretations.

I don't see how residuals from a normal tetrachoric EFA would be useful for what you are interested in.

Higher-order FA also involves the question of whether the (second-order) factor distribution is normal or not, so including such factors doesn't avoid the normality assumption. And, yes on your question about model clarity.

No, by simulation studies I did not mean bootstraps, but simulating new data with features similar to yours.
 ziv shkedy posted on Thursday, November 09, 2000 - 1:56 am
Hello,
I'm using Mplus to conduct exploratory factor analysis for 45 binary indicators, each of which represents the presence (or absence) of a different medical or stress symptom.
I saw in Muthen 1989 (Dichotomous FA of symptom data) two figures of eigenvalues obtained from the tetrachoric correlation matrix. Is it possible to obtain these eigenvalues from the Mplus output? If not, the eigenvalues of the sample correlation matrix are reported in the Mplus output; how different are these eigenvalues from the tetrachoric eigenvalues?
Similar to Muthen 89, I observed negative factor loadings for the 3- and 4-factor solutions. What is the interpretation of the negative values? Can one conclude that the indicator is negatively correlated with the factor? Should I use the same criterion with positive and negative loadings, i.e., indicators are associated with the factor for which they have a very large positive or negative loading?
In order to select the appropriate number of factors, which criterion is more reliable, the RMSR or chi-square/d.f. (obtained using the WLSMV method)?
I'll highly appreciate any help.
Best wishes, Ziv.
 Linda K. Muthen posted on Thursday, November 09, 2000 - 3:38 pm
The eigenvalues that you get in Mplus when binary indicators are analyzed are the eigenvalues of the sample tetrachoric correlation matrix. This is what we use. Whether the factor loading is positive or negative, the interpretation is the same. A negative loading indicates that the item correlates negatively with the factor. RMSR is probably better than chi-square/df. An important factor in deciding on the number of factors is also a non-statistical one--their interpretability.
 dwasch posted on Thursday, April 04, 2002 - 6:30 am
Hi,

In one of the above postings, you stated: "It is probably better to use the RMSR measure to decide on the number of factors rather than using number of eigenvalues greater than one."

I am new to this area and would like to find out more. Could you explain a bit further how to use RMSR to determine number of factors in EFA (or provide a reference where you do the same)?

Many thanks.
Dan Waschbusch
 Linda K. Muthen posted on Thursday, April 04, 2002 - 10:03 am
We recommend looking at several things to decide on the number of factors in EFA: eigenvalues (number greater than one and a scree plot), chi-square (p greater than .05), RMSEA (less than .06), RMSR (less than .05), checking for negative residual variances (there shouldn't be any), the pattern of factor loadings (do indicators load on several factors contrary to what is expected), and interpretability (are the factors what they were expected to be). I don't believe any one reference states all of these.
 Andrew F posted on Tuesday, May 14, 2002 - 5:41 pm
Hi,

I'm a new Mplus user and am just testing EFA on some sample data with 60 4-level ordinal indicators and about 1500 subjects. I am getting a sample tetrachoric correlation matrix that is non-positive definite (as described in an earlier post), with about 10 negative eigenvalues. My question is how to report the percentage of total variance explained by the first (e.g.) 4 factors when there are negative eigenvalues. Does it lose meaning entirely (as the sum of eigenvalues no longer equals the trace of the matrix when it is not pos def)? Is there an alternative? The % variance explained is, I'm sure, something that will be asked about the model.

Best wishes,

Andrew
 bmuthen posted on Wednesday, May 15, 2002 - 8:34 am
As you say, the non-pos def message is for the sample correlation matrix. Your estimated factor model, however, gives a pos def correlation matrix as long as the factor covariance matrix is pos def and the residuals are positive. The usual percentage variance explained that is used with orthogonal solutions should therefore be applicable.
 Anonymous posted on Sunday, June 16, 2002 - 2:50 am
Hello,

A followup to a previous message from andrewf and response from mid-May. You mention that for non-positive-definite sample tetrachoric correlation matrices the "usual percentage of variance explained with orthogonal solutions" is applicable when the *estimated* correlation matrix from the model is positive definite.

Could you possibly outline how this percentage explained is computed?

Thanks in advance.
 bmuthen posted on Sunday, June 16, 2002 - 11:33 am
The sum of squared loadings in a column (that is, for a factor) is the total variance in all variables explained by that factor.
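As a hypothetical illustration: with loadings of .8, .6, and .5 on a factor, that factor explains .64 + .36 + .25 = 1.25 units of variance; dividing by the number of variables, say 10, gives 12.5% of the total variance.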
 PhilWood posted on Tuesday, January 28, 2003 - 12:04 pm
I have a question about EFA extraction of more than one factor. I have a data set composed of 421 observations on 20 variables, which are categorical with three response options. Specifying these as categorical and using the EFA option in Mplus (with loss function equal to LS), I find that there is one negative eigenvalue, four eigenvalues greater than 1, and a general pattern of probably three factors in the data.
EFA 1 2
gives me one factor, but then says that it can't continue on to consider 2 factors because:
THE PROGRAM HAD PROBLEMS WHEN MINIMIZING THE FUNCTION
TOO MANY FACTORS HAVE PROBABLY BEEN USED

This puzzles me - why would it not be able to extract any more factors?

thanks for any insight!
 bmuthen posted on Tuesday, January 28, 2003 - 12:21 pm
This is usually a result of a Heywood case, where one residual variance (which is computed as 1- communality) goes to a large negative value. So there may be more factors, but the algorithm doesn't get past this inadmissible section of the parameter space. You can try the WLSMV estimator. Or, try to pinpoint if there is a specific item causing the Heywood case - this may be difficult to sort out, however. Or, do the analysis as "an EFA within a CFA framework", where you have control over each parameter.
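As a sketch of the "EFA within a CFA" idea for, say, two factors measured by ten hypothetical items u1-u10 (the anchor choices are purely illustrative):

MODEL: f1 BY u1* u3-u10;   ! u2 omitted, so its f1 loading is fixed at zero (anchor for f2)
       f2 BY u2* u3-u10;   ! u1 omitted, so its f2 loading is fixed at zero (anchor for f1)
       f1-f2@1;            ! factor variances fixed at one; the factor correlation stays free

The asterisks free the first loadings, which Mplus would otherwise fix at one.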
 MinHua Huang posted on Friday, June 13, 2003 - 9:59 pm
I am using five survey items to test whether the dimensionality confirms my expectation (one latent trait can explain all of them). All of the five items are responded to on a 5-point ordinal scale. There are 1282 observations.

When I use the polytomized data form to do EFA, the results suggest that a two-factor model describes the data better than a one-factor model.

When I recode the data and use the dichotomized data form (positive or negative) to do EFA, no result is available since there is a Heywood case.

However, when I use the same data and apply SYSTAT to do binary-data factor analysis, the result suggests that a two-factor model is better. This result is in accord with Mplus when I use the polytomized data form to do EFA.

My question is: How come SYSTAT can do EFA with the binary data in this case but Mplus can't?
Is the difference due to different methodology? Can I trust the result from SYSTAT?
 Linda K. Muthen posted on Saturday, June 14, 2003 - 7:43 am
If I understand you correctly, you have analyzed your items as polytomous and dichotomous, in both cases including the CATEGORICAL statement in your input file. If you are comparing to STATA, I do not believe you should put the CATEGORICAL statement in. I don't believe that STATA has a special factor analysis for categorical outcomes like Mplus does. I believe that the variables are being treated as if they are continuous in STATA.

It seems strange to me that the polytomous case would run and the dichotomous case would not. It would be more likely for the opposite to occur. I suspect that when you changed your data, you might have done something wrong. You can send your input and the polytomous data to www.statmodel.com and I can look at it.
 MinHua Huang posted on Saturday, June 14, 2003 - 8:12 am
Well, probably I didn't explain fully what I did. I would like to restate my question in a little more detail.

The other program I run for factor analysis with binary data is "SYSTAT", not STATA. In SYSTAT, there is a function for you to calculate tetrachoric correlations and save this tetrachoric correlation matrix as a file. Then you can open this file to do factor analysis. Some scholars in other disciplines might use this way to do factor analysis with binary data without using Mplus.

As to the recoding, what I do is combine two categories (e.g., very important, somewhat important) into a "Positive Response" category and code the neutral response like "neither agree nor disagree" as missing. In this way, the original five-point categorical variable becomes a binary variable with only a "Positive" or "Negative" response to the survey items.

So, I still wonder: if Mplus is using tetrachoric correlation coefficients to do factor analysis, SYSTAT is doing the same thing. I use the same dataset, but the two programs give me different results. I just want to know why.
 Linda K. Muthen posted on Saturday, June 14, 2003 - 9:03 am
I'm sorry. I just saw the S and thought STATA.

What SYSTAT does is what we do with the ULS estimator. With our WLS, WLSM, and WLSMV you cannot use just the tetrachoric correlations. You also need a weight matrix.

The default estimator for EFA in Mplus is ULS. I assume that this is what you are using. So you should get the same results with SYSTAT or Mplus.

I still think you are doing something wrong with the recoding of your data. You should recode the data such that neither agree nor disagree is zero and very important and somewhat important are one. I worry when you say you recoded neither agree nor disagree as missing. There is really not much more I can say without looking at your input/output and data, which I am more than happy to do.
 Minhua Huang posted on Saturday, June 14, 2003 - 10:29 am
Thanks for your quick reply. I very much appreciate your explanation since I hadn't noticed which estimator is used in SYSTAT. (I use WLSMV when running Mplus.)

Yes, re-coding "neither disagree nor agree" as missing loses information. But the reason for doing it is that I have "strongly agree" and "agree" on one end (coded as 1), and "strongly disagree" and "disagree" on the other end (coded as 0). So what I want to do is differentiate whether the attitude toward each item is positive or negative. Then I can do classical item analysis with TESTFACT (or SYSTAT), and also 2- or 3-parameter IRT analysis (PARSCALE 4), to decide which items should be included in measuring the latent traits, and finally use these good-enough items to do the measurement work, by CTT and IRT respectively.

Therefore, I have several options.

1. Using the dichotomous data form
a. Use Mplus and SYSTAT to test dimensionality.
b. Use TESTFACT to do classical item analysis and so-called "full information item factor analysis", formulating the latent trait as a continuous variable.
c. Also use SYSTAT to do classical item analysis, formulating the latent trait as a continuous variable.
d. Use Mplus to do latent class analysis and formulate the latent-trait variable as ordinally categorical.

2. Using the polytomous data form
a. Use Mplus to test dimensionality. (SYSTAT can't do it!)
b. Use PARSCALE to do polytomous IRT analysis (TESTFACT can't do it), formulating the latent trait as a continuous variable.
c. Use Mplus to do latent class analysis and formulate the latent-trait variable as ordinally categorical.

That is the reason I want to use both dichotomous and polytomous data forms to have multiple measurements. The purpose is to increase the reliability of my measurements since it (the latent trait I measure) is the major dependent variable in my research. In fact, 5 items to measure a latent trait is too few, especially when the unidimensionality assumption is not met.

But some latent concepts in my framework may have 25 items for measurement. So I can afford to lose the intensity information in each item (the way each respondent perceives the concept of intensity may be different), and trade it for the power of quantity (more items) and quality (less measurement error regarding intensity). I don't know whether my treatment makes sense to you. I hope it's reasonable and defendable.

Thanks again for your reply. I really appreciate it.

Min-Hua
 Linda K. Muthen posted on Saturday, June 14, 2003 - 10:59 am
Okay, so Mplus and SYSTAT are using different estimators. I think we have now concluded that. Do they get the same tetrachoric correlations for the same set of observations? If yes, then the problem you are having in Mplus is because of different estimators. If no, then this may be due to the programs handling missing data differently.

In case you are not aware of this, the Mplus model is the same as the two-parameter normal ogive IRT model for dichotomous items. Only the estimators differ. Version 3 of Mplus will have the maximum likelihood estimator used in IRT for both unidimensional and multidimensional models and for both binary and polytomous outcomes, also with missing data. Therefore, Version 3 of Mplus will be able to do all of the analyses that you suggest above.
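For reference, with a standardized factor and the delta parameterization, the Mplus probit model is P(u=1 | f) = Phi((lambda*f - tau)/sqrt(1 - lambda^2)), which matches the normal ogive form Phi(a*(f - b)) with discrimination a = lambda/sqrt(1 - lambda^2) and difficulty b = tau/lambda.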
 Minhua Huang posted on Saturday, June 14, 2003 - 10:00 pm
Just updating some information for the above discussion.

When I use the same dichotomous dataset to compare the results that Mplus and SYSTAT produce, I find the result from Mplus with the ULS estimator is equivalent to the result from SYSTAT with the Iterated Principal Axis (IPA) estimator. Indeed, as I understand it, IPA is a kind of least squares method, basically the same as ULS in Mplus, though in my case it takes 33 iterations to converge (the default is 25 iterations in SYSTAT).

However, this result raises another question for me. On page 38 of the Mplus User's Guide, it says the appropriate estimators for categorical EFA are WLS, WLSM, WLSMV, and ULS. What is the reason behind this statement? When I use SYSTAT to do factor analysis with binary data, are PCA and MLA not the right methods to estimate the factor loadings? In fact, in this case, if you use PCA to do dichotomous EFA, you will find the factor loadings are significantly inflated, and that might mislead you into thinking the assumption of unidimensionality holds.

Can you briefly address the above question? Or could you give me some references about how to choose appropriate estimators in different situations and why?

Thanks again!
 Linda K. Muthen posted on Sunday, June 15, 2003 - 7:17 am
The table on page 38 shows which estimators are available for EFA with categorical outcomes in Mplus. I don't believe appropriateness is discussed there. There are other estimators that one could use. In fact, we will be adding maximum likelihood for categorical outcomes in Version 3. Some of the other estimators are good and some may not be so good. PCA is known to give inconsistent estimates of a factor analysis model because it assumes that the residual variances are zero. This is described in: Joreskog, K.G., & Sorbom, D. (1979). Advances in factor analysis and structural equation models. Cambridge, MA: Abt Books. As you say, IPA is a reasonable method.
 Jennie Jester posted on Monday, July 28, 2003 - 1:55 pm
Hello Bengt and Linda,
I am working on an instrument we used to measure expectancy for young children of alcoholics. I want to do a factor analysis, to form scales so that I can look at the relationship of these scales with other things about the kids. We have around 340 interviews with the kids when they are aged 9-11. The data are very skewed, as few of the children endorse a lot of the items. Specifically, there is less than 5% endorsement for over half of the items (counting "Agree completely" and "Agree somewhat" as endorsing that item). The original scale was 5 categories, "Agree completely" to "Disagree completely". I was planning to collapse "Agree completely" and "Agree somewhat" as well as "Disagree completely" and "Disagree somewhat". I'm not sure what to do with "Not sure". I have run factor analysis, both using the items as continuous and using them as categorical. I do find factors that generally make some sense. However, in reading your 1989 SMR paper, I feel that the data I have is ill-suited for a factor analysis with tetrachoric correlations. Do you have any advice as to how to proceed with this analysis?

Thanks,

Jennie
 bmuthen posted on Monday, July 28, 2003 - 3:47 pm
I'm not sure.

Seems like you can try a 3-category approach with the middle category being Not sure - if that category is used. Otherwise, dichotomizing seems fine. Treating the variables as continuous would attenuate correlations given the strong skewness.
 Anonymous posted on Thursday, September 25, 2003 - 9:44 pm
Hi, I have a question about the limit on the number of items when I do EFA with categorical data. I have 200 MC items (coded 0 and 1 as wrong and right), but I can't get it to work because of a problem of not enough memory. Also, how many subjects are appropriate for this analysis?
 Linda K. Muthen posted on Friday, September 26, 2003 - 6:12 am
Mplus has a limit of 500 variables for an analysis as long as you have enough memory on your computer. I have 1Gbyte of RAM and have run an analysis with 150 items that I think were polytomous. All binary will take less memory. I suspect you are using the WLSMV estimator. You should use ULS as a first step.
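For example, only the estimator line needs to change (the number of factors is just illustrative):

ANALYSIS: TYPE = EFA 1 5;
          ESTIMATOR = ULS;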

Regarding sample size, only a simulation study can really guide you here. It depends on so many things. See the Muthen and Muthen paper on power and sample size referred to on our homepage.
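As a rough sketch of such a simulation for ten binary items and one factor (all population values below are purely illustrative):

MONTECARLO: NAMES ARE u1-u10;
            GENERATE = u1-u10 (1);
            CATEGORICAL = u1-u10;
            NOBSERVATIONS = 500;
            NREPS = 500;
MODEL POPULATION: f BY u1-u10*.7;
                  f@1;
                  [u1$1-u10$1*0];
MODEL: f BY u1-u10*.7;
       f@1;
       [u1$1-u10$1*0];

Varying NOBSERVATIONS shows how the parameter estimates, standard errors, and coverage behave at different sample sizes.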
 Dustin posted on Tuesday, November 11, 2003 - 6:33 am
I have data representing Likert-scale ratings of post-concussive symptoms for a number of athletes. I am interested in understanding whether there are symptoms that tend to hang together. Because the data for several of the variables are extremely skewed (not many people experience the symptom), I was considering using binary variables (0 = symptom not present, 1 = symptom present at any level) and doing an EFA with categorical variables. A colleague suggested that latent class analysis might be a better way to look at the data because the construct underlying each symptom is continuous and I am artificially making them dichotomous. I am interested in hearing opinions regarding this issue.
 Linda K. Muthen posted on Tuesday, November 11, 2003 - 6:41 am
Because you want to look at the relationships among variables, I would use factor analysis. And I would do the EFA on the Likert-scaled items without dichotomizing them. If you run into computational problems, you can consider collapsing categories at that time. LCA, which groups people, can be considered a complementary analysis, but I don't think that is your primary concern from what you say.
 paul-valentin posted on Monday, February 16, 2004 - 6:30 pm
Hi
I would like to know if it is possible to represent graphically the factors coming out of an exploratory factor analysis (with numerical and/or categorical data), just as in traditional EFA. I want to graphically show the association between different products of the company but don't know how to get it.
Thanks in advance
 Linda K. Muthen posted on Tuesday, February 17, 2004 - 7:25 am
The current version of Mplus does not have a graphics module. Version 3 will. You cannot get factor scores for EFA. You would need to do an EFA in a CFA framework to get factor scores to plot.
 paul-valentin posted on Tuesday, February 17, 2004 - 12:31 pm
We've just bought the current version. When do you expect to launch version 3 and how will we be able to update to version 3?
Thanks in advance
 Linda K. Muthen posted on Tuesday, February 17, 2004 - 2:43 pm
You can find information about Version 3 at www.statmodel.com. The launch date is looking like late March if all continues to go well.
 Anonymous posted on Tuesday, March 23, 2004 - 6:33 am
I have data on 18 dichotomous variables with 303 observations. I conducted an exploratory factor analysis in Mplus using WLSMV for the estimator and obtained some factor loadings greater than 1. What causes factor loadings greater than 1? When should WLSMV be used instead of WLS? Thank you.
 bmuthen posted on Tuesday, March 23, 2004 - 7:04 am
Factor loadings greater than one can happen if a Heywood case has been encountered (negative error variance). WLSMV has better performance than WLS when samples are not large.
 Anonymous posted on Tuesday, March 23, 2004 - 7:25 am
Thanks for your quick response! Is there a way to deal with Heywood cases in Mplus so I can obtain factor loadings of 1 or less?
 Linda K. Muthen posted on Tuesday, March 23, 2004 - 8:33 am
When you have a Heywood case, you need to change your model. A Heywood case is a reason to reject the two factor solution, for example, if that is the one with a Heywood case.
 Anonymous posted on Monday, July 12, 2004 - 11:51 am
Hi,

As I'm a novice in using Mplus, I have a few questions concerning exploratory factor analysis when the indicators are all dichotomous:

1-How do you choose the best estimator? The ULS estimator is the default, but what would make you decide to select the WLS or WLSMV estimator?
2-I noticed that when using WLSMV, the chi-square test, RMSR and RMSEA are available. Which one do I have to choose to know if the model fit is good?
3-As you said here, it seems that the RMSR is quite a good measure to decide about model fit (in the case of EFA for dichotomous indicators). You also said that RMSR must be <.05. If I don't have RMSR<.05, does that mean that the model is really bad? I have tried higher and lower numbers of factors, and I never get RMSR<.05.
4-I compared results from ULS and WLSMV. Why is there so much difference between the RMSR in ULS and the RMSR in WLSMV? (For example, in ULS I get RMSR=0.4696, and in WLSMV I get RMSR=0.6385.)
5-If I need to use a weight variable, does that mean I can only select the WLS or WLSMV estimator? Or can I select the ULS estimator, because it seems to use a 'full weight matrix' (see p. 402 in the Mplus User's Guide)?


My apologies if my questions seem trivial to you, but as I told you, I'm a novice in using Mplus. Thank you for your help.
 Linda K. Muthen posted on Monday, July 12, 2004 - 3:54 pm
1. ULS is the default because it is very fast. WLSMV is the default for CFA and has the advantage that it gives fit statistics.
2. You can read about how to assess model fit in the Yu dissertation which you can find on our website under Mplus Papers.
3. Once again, you can read model fit recommendations in the Yu dissertation.
4. I would have to look at the two outputs to answer that. You can send them to support@statmodel.com.
5. The weight matrix has nothing to do with using a sampling weight. You can use a sampling weight with any of the three estimators that you mention.
 sadia haider posted on Saturday, October 23, 2004 - 1:03 pm
Hi, I am learning about EFA with binary data using the demo version and am writing to ask if you have any examples that go step-by-step through the output i.e. describe the output and what it means? This would really help my learning. Many thanks in advance.
 Linda K. Muthen posted on Sunday, October 24, 2004 - 10:00 am
I'm afraid we don't have anything that does what you are asking. The Day 1 handout from our short courses describes EFA for continuous outcomes. You might also look at an introductory reference for EFA. As far as categorical outcomes, you can look at Web Note 4.
 Anonymous posted on Thursday, February 24, 2005 - 3:37 pm
I am new to Mplus and am exploring the use of it for my analysis. Does Mplus v.3 have a limit on the number of dichotomous variables and number of cases for EFA?
 Thuy Nguyen posted on Friday, February 25, 2005 - 9:16 am
Mplus has a limitation of 500 observed variables in the analysis (this can be any type of variables). There is no limitation on the number of cases.
 Holmes Finch posted on Friday, April 01, 2005 - 5:17 am
Hi,

I'm using Mplus to do exploratory fa with dichotomous items with the wlsmv estimation method. I'd like to get thresholds for each item, but can't seem to find a way to get them in this context. I've searched the discussion archives and can't find any reference to being able to do that there. So, my question is, can I get the item thresholds in the EFA context? Thanks.

Holmes
 Holmes Finch posted on Friday, April 01, 2005 - 6:08 am
I ran into a second question related to the first. I'm trying to do repeated execution of a program with the RUNALL.BAT file, and getting the warning that the SAVEDATA command can't be used with EFA. I just wanted to verify that this is so, and that there's no way to save the info. from an EFA from Mplus. Thanks.

Holmes
 Thuy Nguyen posted on Friday, April 01, 2005 - 9:42 am
This is correct. There is currently no way to save the results and get the thresholds from an EFA run. You would have to set up the EFA in a CFA framework.
 Holmes Finch posted on Friday, April 01, 2005 - 11:25 am
Thuy,

Thanks very much for the info. I appreciate that you all are very helpful and very quick to respond.

Holmes
 Cintia posted on Saturday, January 27, 2007 - 10:33 am
Hello,

I have data on 31 dichotomous variables. After performing an EFA using Mplus (ULS estimator) I obtained a 5-factor solution. I would like to know if there is a way to estimate the correlation of these factors with other variables: gender, age, educational level and scores on another test.

Thank you.
 Linda K. Muthen posted on Sunday, January 28, 2007 - 9:01 am
You can do this only within a CFA or an EFA in a CFA framework.
 Thomas Olino posted on Sunday, March 04, 2007 - 6:43 pm
First, I am interested in examining a number of measures using IRT/CFA with categorical indicators, which includes an assumption of unidimensionality. From some things that I have seen, a ratio of 4 to 1 of the first to the second eigenvalue would demonstrate unidimensionality. More recent IRT papers have utilized parallel analysis. Would a parallel analysis be implemented in Mplus by doing a small Monte Carlo simulation?

Second, in examining differential item functioning in a CFA framework, could one take the same approach of examining measurement invariance with covariates in a MIMIC model?

Thanks!
 Linda K. Muthen posted on Monday, March 05, 2007 - 8:44 am
I need to know more about what parallel analysis is. I am not familiar with the term.

You can use a MIMIC model to test measurement invariance of intercepts/thresholds but not factor loadings and residual variances. You need multiple group analysis to test the measurement invariance of factor loadings and residual variances.
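A sketch of the MIMIC approach, with hypothetical items u1-u10 and a covariate x:

MODEL: f BY u1-u10;
       f ON x;     ! covariate effect on the factor
       u3 ON x;    ! direct effect; tests threshold noninvariance (DIF) for item u3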
 Rudolf Uher posted on Monday, April 16, 2007 - 3:26 am
I have also been trying to run a parallel analysis in Mplus. Parallel analysis is a series of exploratory factor analyses on simulated data to guide the number of factors to be extracted. Mean eigenvalues from the simulated data are compared to eigenvalues from the real data: the number of factors is determined as the number of real eigenvalues that are significantly larger than the simulation eigenvalues.
It seems that my data simulations have produced adequate datasets, but I have not been able to save the eigenvalues from the series of 100 EFAs on the simulated datasets. Is there a way to do this? I would really prefer not to run the EFA 100 or 1000 times one at a time to do this.

Thanks in advance for any suggestions!
 Linda K. Muthen posted on Monday, April 16, 2007 - 7:55 am
We don't save results for EFA.
 Bruce A. Cooper posted on Wednesday, November 07, 2007 - 4:24 pm
Great set of questions and responses above -- Thank you!

But, I didn't find anything about the basis for deciding between ULS or (say) WLSMV as the estimator for an EFA with all dichotomous items. Could you suggest a reference also?

Thanks!
bac
 Linda K. Muthen posted on Wednesday, November 07, 2007 - 5:55 pm
It depends on the number of items and factors. If you have a lot of both, I would start with ULS and go to WLSMV once I have settled on a few solutions.
 Jon Elhai posted on Wednesday, November 07, 2007 - 6:50 pm
Bruce,
A couple of relatively recent papers on using WLSMV-related estimation with binary and other forms of categorical data in factor analysis are:

Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9, 466-491.

Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: Current approaches and future directions. Psychological Methods, 12, 58-79.
 Bruce A. Cooper posted on Thursday, November 08, 2007 - 4:27 pm
Thanks Linda and Jon -

I downloaded and read the two articles you recommended, Jon, and found them useful for knowing more about the benefits of robust WLS vs full WLS, but both articles focused on ordinal indicators, and neither addressed ULS at all as a first or only step in estimation. (FYI, a nice chapter I read that describes the usefulness of WLSM & WLSMV is Finney, S. J., & DiStefano, C. (2006). Nonnormal and categorical data in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural Equation Modeling: A Second Course (pp. 269-314). Greenwich, CT: Information Age Pub.)

Assuming that I've settled on a relatively small model with 12 to 15 binary indicators representing 2 or 3 factors, with proportions no worse than .2 for the smaller category, and a small sample size of -- say -- 80 to 150, I still don't know and haven't found guidance on whether to use ULS or WLSM or WLSMV. From your note, Linda, I take it that robust WLS would be preferred. If so, why is ULS the Mplus default for all binary indicators? Any further guidance/refs would be appreciated!

Thanks,
bac
 Bengt O. Muthen posted on Thursday, November 08, 2007 - 4:42 pm
ULS is the Mplus default with EFA because there one typically has many items, for which WLSMV would be slow. Muthén, du Toit, and Spisic find that WLSMV works well for n as small as 200, but your sample is even smaller. The first things to go wrong are the chi-square and SEs, less so the parameter estimates. So since ULS does not give SEs or chi-square, it would seem that at your low sample sizes ULS would be just as good. I assume that you have an EFA model where you only work with correlations, so the equal unit weighting of ULS doesn't hurt (mixing thresholds and correlations might be worse with ULS). A simulation study in Mplus could show how well or poorly ULS performs here wrt parameter estimates.
 Bruce A. Cooper posted on Thursday, November 08, 2007 - 5:26 pm
Thanks, Bengt!

If I could test your patience a bit further...

1. In choosing the "best fitting" solution, then, as a pragmatic step, would I go astray by eyeballing the loading estimates from ULS vs WLSMV to see if they are similar? If so, the WLSMV output could also give me the RMSEA to help with model choice.

2. If not, or in any case, I see that smaller values of the RMSR are better, but I can't find cutoffs/rules of thumb in my references, as for RMSEA. How to decide which model is better with only the RMSR from an ULS analysis? Just choose the one with the smallest RMSR?

3. Even if I had 150-200 or more cases with binary data as previously noted, I'd likely still get lots of these:

THE INPUT SAMPLE CORRELATION MATRIX IS NOT POSITIVE DEFINITE.
THE ESTIMATES GIVEN BELOW ARE STILL VALID.

Doesn't this indicate that the tetrachorics are problematic (say, > |1.0|), and that the solution isn't "admissible"?

4. Any guidance about how robust the estimates are likely to be from ULS (or from WLSMV) with many dichotomies closer to .2/.8 than to .5/.5? (This is common with incidence/prevalence symptom data, like these.)

Again, references are welcome! And thanks for your help!

- bac
 Bengt O. Muthen posted on Friday, November 09, 2007 - 3:56 pm
1. Checking loading similarity is good - they are generally close.

2. Go by interpretability of the loading matrix - with too many factors you'll find a factor with hardly any items loading on it. RMSR < .05 might be a useful goal, but it depends on many factors.

3. That message is ok. It doesn't usually mean corr > 1, but that a set of corrs create non-pos-definiteness. The npd may not be "significant", that is if your fitted model is pos def (no neg variances, no factor corrs >=1) and close to the sample matrix.

4. I haven't studied ULS in such settings - Monte Carlo simulations in Mplus (which is easy) would tell.

Don't know references on these topics.
 Bruce A. Cooper posted on Thursday, December 13, 2007 - 11:40 am
Thank you very much for your reply, Bengt. It was very helpful! Sorry to take so long to say so.

Of course, now I have another question re EFA in V5. You have changed two defaults in EFA of interest for the analyses I'm running. Now, the default estimator is WLS rather than ULS, and the default rotation is MAXIMIN instead of PROMAX. From the new User Guide, I found your unpub paper #75 about robust inference with the kind of data I have, and that was very helpful -- especially the part about WLS not doing well with N=200. (My N=152.) So, ULS it is for me!

The other new default puzzles me, because I haven't found any positive support for MAXIMIN (and, it is not recommended), but there are many positive comments about PROMAX. I know that I can (and I have) specified PROMAX for my analyses, but why change the default to an oblique method that has been shown to produce factors that have too high correlations, from one that performs well?

Thanks,
bac
 Bengt O. Muthen posted on Thursday, December 13, 2007 - 2:26 pm
The default rotation in Mplus Version 5 is direct Quartimin, and it has gotten good press as far as I know. In contrast, Promax is a bit outdated and also does not produce SEs. See the references to say Michael Browne's and Jennrich's work in the Version 5 User's Guide, chapter 4. Quartimin is recommended as default by Jennrich. Quartimin is also close to CF-Varimax in oblique form (see the 2001 MBR article by Browne) which is the method Browne recommends - unless the number of variables is very small these 2 methods are almost identical. So the new features should be an improvement. If you have any contrary articles about quartimin, please let me know.
 Alison Riddle posted on Thursday, January 31, 2008 - 6:21 am
Hi,

Do you have a reference to explain the appropriateness of using the WLSMV estimator for an EFA with dichotomous data? Thanks.

Alison
 Linda K. Muthen posted on Thursday, January 31, 2008 - 10:46 am
WLSMV is discussed in:

Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Accepted for publication in Psychometrika.

which is available on Bengt's UCLA website.
 Alison Riddle posted on Friday, February 01, 2008 - 9:07 am
Excellent. Thank you.
 aleksandar posted on Thursday, February 28, 2008 - 2:27 pm
Hi,

I would like to know how I can calculate the correlation between binary variables.

For example:

var1 var2 var3
1 1 1
0 0 1
1 1 0
0 1 0

Thank you.
 Maggie Chun posted on Thursday, February 28, 2008 - 3:55 pm
For a 2-group CFA with categorical variables, I am getting the error report "Based on Group 1: Group 2 contains inconsistent categorical value for CBEHGNW: 4". Group 1 does in fact have 12 people who picked 4 (on a 0 to 4 Likert scale), while group 2's people never picked this most extreme category. I need to use WLSMV because of the severe floor effect in the data.

How can I deal with this problem without dichotomizing the data?

Thank you very much!!
 Linda K. Muthen posted on Thursday, February 28, 2008 - 4:41 pm
Aleksandar: There is no explicit formula for calculating a tetrachoric correlation between two binary variables. It is an iterative procedure. See, for example, the following paper:

Muthén, B. (1978). Contributions to factor analysis of dichotomous variables. Psychometrika, 43, 551-560.
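If you want the sample tetrachorics themselves, you can have Mplus estimate and print them; a sketch using your three variables:

VARIABLE: NAMES ARE var1-var3;
          CATEGORICAL ARE var1-var3;
ANALYSIS: TYPE = BASIC;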
 Linda K. Muthen posted on Thursday, February 28, 2008 - 4:43 pm
Maggie: If you do not wish to collapse categories so all groups have the same values, you can use the * setting of the CATEGORICAL option and maximum likelihood estimation.
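A sketch of that setting, showing only the relevant lines (the variable name is taken from your error message, and MLR is one of the maximum likelihood choices):

VARIABLE: CATEGORICAL = cbehgnw (*);
ANALYSIS: ESTIMATOR = MLR;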
 Maggie Chun posted on Friday, February 29, 2008 - 6:25 am
Linda,

Thank you very much for your prompt reply!!

Have a nice day! I learned a lot from your webpage!

Maggie
 Maggie Chun posted on Friday, February 29, 2008 - 4:05 pm
Sorry for asking again, Linda,

I tried to find a reference supporting the idea that WLSMV is the best way to deal with extremely skewed data, like floor effects.

Is it OK to use maximum likelihood estimation to deal with skewed data?

Thank you very much!

Maggie
 Linda K. Muthen posted on Friday, February 29, 2008 - 4:17 pm
If you specify the variables are categorical, you can use either maximum likelihood or weighted least squares. Both will handle the data appropriately. You may run into a problem with maximum likelihood if your model has too many dimensions of integration. If that is the case, then you will need to collapse categories.
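If you do run into many dimensions, Monte Carlo integration is one way to reduce the computational burden before collapsing categories; a sketch (the number of integration points is illustrative):

ANALYSIS: ESTIMATOR = ML;
          INTEGRATION = MONTECARLO (500);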
 aleksandar posted on Sunday, March 02, 2008 - 1:12 pm
Hi,

me again.

I would like to know what the difference is among categorical, binary, and dichotomous variables.

Thank you very much.
 Maggie Chun posted on Monday, March 03, 2008 - 6:25 am
Linda,

Thank you very much! I will try the method you recommended later.

Have a nice week!

Maggie
 Linda K. Muthen posted on Monday, March 03, 2008 - 7:30 am
Aleksandar: Following are my opinions on these terms. Others may disagree. Binary and dichotomous are the same. They refer to a variable with two categories. Categorical covers binary, ordered categorical variables with more than two categories (ordinal), and unordered categorical variables with more than two categories (nominal).
 Academic Research & Statistical Con posted on Sunday, May 25, 2008 - 7:32 pm
I have a very large database of binary items (medical symptoms) collected from a number of different studies. I would like to run factor analyses on the whole dataset but have quite a problem with missing data (i.e. not all items were used in each study). I have read your discussion board and noticed a posting from 2000 which stated: "Mplus does not handle missing data for categorical outcomes. Mplus uses a probit regression of the item on the factor, thereby allowing for the non-linear relationship". I was wondering if the newer version of Mplus (version 5) might now be able to deal with such missing data?
 Linda K. Muthen posted on Monday, May 26, 2008 - 8:06 am
Mplus has several options for the estimation of models with missing data. Mplus provides maximum likelihood estimation under MCAR (missing completely at random) and MAR (missing at random; Little & Rubin, 2002) for continuous, censored, binary, ordered categorical (ordinal), unordered categorical (nominal), counts, or combinations of these variable types. MAR means that missingness can be a function of observed covariates and observed outcomes. For censored and categorical outcomes using weighted least squares estimation, missingness is allowed to be a function of the observed covariates but not the observed outcomes. When there are no covariates in the model, this is analogous to pairwise present analysis. Non-ignorable missing data modeling is possible using maximum likelihood estimation where categorical outcomes are indicators of missingness and where missingness can be predicted by continuous and categorical latent variables (Muthén, Jo, & Brown, 2003).

Multiple data sets generated using multiple imputation (Schafer, 1997) can be analyzed using a special feature of Mplus. Parameter estimates are averaged over the set of analyses, and standard errors are computed using the average of the standard errors over the set of analyses and the between analysis parameter estimate variation.
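For the multiple imputation feature, the relevant input lines are simply (the file name is hypothetical; the file lists the names of the imputed data sets):

DATA: FILE IS implist.dat;
      TYPE = IMPUTATION;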
 aleksandar posted on Sunday, August 24, 2008 - 3:25 pm
Dear,

I would like to ask.
Before I run factor analysis (principal components) I calculated the Kaiser-Meyer-Olkin Measure of Sampling Adequacy and Bartlett's Test of Sphericity. My confusion is whether I can calculate these when I have binary data. Are they relevant measures for binary data?

Thank you very much.
 Bengt O. Muthen posted on Sunday, August 24, 2008 - 5:54 pm
I think the original test is for continuous variables, but testing whether variables are uncorrelated could be applied in the binary case as well. But I don't see why you would want to test uncorrelatedness - if you are interested in factor analysis, one would assume there is a measurement instrument with correlated items. Also, a 1-factor model would show if that is the case by getting insignificant loadings. I don't know the sampling adequacy measure.

Also, if you want to do factor analysis you don't want to use principal component analysis.
 aleksandar posted on Monday, August 25, 2008 - 2:39 pm
Dear,

I would like to extract valid factors from my binary data, but I don't know whether I should use factor analysis or principal components. I don't see the difference between these two solutions. I am confused.

Can I call principal components "factors"?

Best Regards
 Bengt O. Muthen posted on Monday, August 25, 2008 - 6:15 pm
Factors are extracted via factor analysis, not principal components. See e.g.

Fabrigar, L.R., Wegener, D.T., MacCallum, R.C. & Strahan, E.J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods , 4, 272-299.
 aleksandar posted on Wednesday, August 27, 2008 - 2:07 pm
But what should I call the principal components that are extracted from a principal components analysis?

Just principal components?

Thanks.
 Bengt O. Muthen posted on Wednesday, August 27, 2008 - 3:07 pm
Yes, PCA gives principal components.
 aleksandar posted on Friday, August 29, 2008 - 1:50 am
Me again,

I have 10 variables which are recorded as binary data.

I would like to extract groups which are represented in these binary data. But I don't know whether I should do factor analysis or principal components.

I am more than grateful.
 Linda K. Muthen posted on Friday, August 29, 2008 - 7:18 am
I would use factor analysis.
 Kihan Kim posted on Saturday, October 11, 2008 - 2:34 pm
Hi, I purchased Mplus 5.1 after confirming that it would produce factor scores in exploratory factor analysis. However, I just found out that the following command does not work.

...

Analysis: Type = EFA 10 10;

Savedata: file is output.txt;
save is fscores;


I also found that the Mplus User's Guide (for 5.1), pp. 605-606, shows that factor scores are not available for Type = EFA... Then how do I get factor scores while performing EFA?
 Bengt O. Muthen posted on Saturday, October 11, 2008 - 2:45 pm
You can use "ESEM" to get EFA factor scores, e.g.:

Model:
f1-f10 by y1-y20 (*1);

For more information, see the version 5 user's guide addendum and also the ESEM paper under technical appendices:

http://www.statmodel.com/download/EFACFA84.pdf
 Kihan Kim posted on Saturday, October 11, 2008 - 3:13 pm
Hi..

This is the syntax that I used for EFA. How would I transform this syntax into the ESEM context and get factor scores?

Is it simply:

Model: F1-F10 by "SPECIFY VARIABLES HERE" (*1);

Savedata: filename is XXX;
Save is fscores;


Following is my original EFA syntax:

Title: EFA 13
Data: File is MplusData2.txt;
Variable: Names are v1-v108;

Usevariables are v5 v10 v15-v17 v20 v26 v28 v37
v51 v61 v63 v64 v67
v73 v75 v77 v78 v80 v82 v84 v86 v87 v89 v93
v96 v105 v107;

Categorical are v5 v10 v15-v17 v20 v26 v28 v37
v51 v61 v63 v64 v67
v73 v75 v77 v78 v80 v82 v84 v86 v87 v89 v93
v96 v107;

Analysis: Type = EFA 10 10;
 Bengt O. Muthen posted on Saturday, October 11, 2008 - 3:35 pm
You just list all variables:

Model: f1-f10 by v5-v107 (*1);

- simple, isn't it?
 Kihan Kim posted on Saturday, October 11, 2008 - 6:12 pm
Hi, thank you. By using "Model: f1-f10 by v5-v107 (*1)" I was able to get the same factor loadings as I did from "Analysis: Type = EFA 10 10;", so I think I'm getting the right results.

But I'm still having a problem obtaining the factor scores. I used the following command:

Savedata: filename is output.txt;
Save is fscores;

And an output file named "output.txt" was created, as commanded, but the entries of the file didn't look like factor scores. Is this the right command to get factor scores, or should I use some other command? Thank you in advance.
 Bengt O. Muthen posted on Saturday, October 11, 2008 - 6:25 pm
Please send input, output, data and license number to support@statmodel.com.
 Nuroudene O. Aweda posted on Saturday, December 27, 2008 - 11:06 am
I am a research student currently conducting a study that involves the use of factor analysis on categorical variables. I need a literature review on factor analysis of categorical variables. Can anybody help? Your prompt response will be greatly appreciated. Thanks.
 Linda K. Muthen posted on Sunday, December 28, 2008 - 4:48 pm
See the References and Papers sections of our website.
 Alison Riddle posted on Friday, February 20, 2009 - 9:20 am
Hi,

I ran an EFA with 17 dichotomous variables with TYPE:IMPUTATION and TYPE:COMPLEX. It ran successfully but I could use some assistance clarifying some of the results.

1) The chi-square test of model fit did not provide a p-value. Why is that? What is the best way to assess model fit? Should I just use RMSEA?

2) How can I assess reliability given that FSDETERMINACY doesn't work for TYPE:IMPUTATION?

3) How concerned should I be that the output notes "errors in replication" given that the analysis did terminate successfully?

Thanks for your assistance.
 J.D. Haltigan posted on Monday, April 26, 2010 - 11:34 am
Hi all,
Good to be back on the board asking questions. My prior attempt at modeling variance components in a G-theory framework ended successfully after the gracious input of Linda.

After reading this thread I am still not certain: when proceeding with an FA on dichotomous data, does Mplus need a covariance matrix or the raw data itself? (I believe I read above that tetrachoric correlations are used.)

I am currently using Mplus 4.2 to conduct an exploratory FA on binary data coded as presence/absence (6 variables).

Best
JD
 Linda K. Muthen posted on Monday, April 26, 2010 - 11:46 am
You need raw data. The program computes the necessary sample statistics for model estimation.
 J.D. Haltigan posted on Monday, April 26, 2010 - 6:28 pm
Thanks Linda. In the case where I am purely doing an exploratory FA, will the program itself detect the number of factors represented in the data or is it necessary to specify how many I am looking for a priori?
 Linda K. Muthen posted on Monday, April 26, 2010 - 7:01 pm
See the TYPE option of the ANALYSIS command in the user's guide.
 J.D. Haltigan posted on Monday, April 26, 2010 - 8:01 pm
The Estimator reference was perfect. Again, thank you.

J
 J.D. Haltigan posted on Tuesday, April 27, 2010 - 10:14 am
Hi all,

I was able to run the EFA successfully. What I am having trouble with is understanding the output. Essentially I specified up to 9 potential latent variables to be extracted (as there are 9 indicators, all binary). How does one go about determining which factor solution best explains the data? Obviously the theoretical backdrop plays some role in this; I am just unsure how to interpret the SUMMARY OF CATEGORICAL DATA PROPORTIONS.

Are those essentially factor loadings?
 Linda K. Muthen posted on Tuesday, April 27, 2010 - 11:10 am
No, they are not the factor loadings. The SUMMARY OF CATEGORICAL DATA PROPORTIONS gives for each variable the proportion of observations with 0 and the proportion of observations with 1.
 J.D. Haltigan posted on Tuesday, April 27, 2010 - 7:29 pm
Apologies for my last post-did not intend to violate board policy.

Is it the case that in an EFA framework you do not get a chi-square test of model fit since you are not specifying a given number of factors? As this is my first time running an EFA on binary data I am a bit lost as to the interpretation of the latent factor structure.
 Linda K. Muthen posted on Wednesday, April 28, 2010 - 5:08 am
Chi-square and related fit statistics are given with EFA. EFA is discussed in the Topic 1 course video and handout.
 J.D. Haltigan posted on Wednesday, April 28, 2010 - 8:58 am
Thanks. I have been reading through this detailed thread and think I answered my question: with the default of ULS for binary indicators, you get the RMSR rather than chi-square fit indices. I could change it to WLSMV, but I do not think this makes sense with my data (1191 cases, 7 binary indicators).
 Emily Yeend posted on Monday, August 30, 2010 - 2:46 am
Hi,

I have a mixture of binary and ordered categorical variables and am running the appropriate program for factor analysis. What type of correlations will Mplus be using for this? Tetrachoric, polychoric, Rank-Biserial - a mixture of the three?

Many Thanks,
Emily
 Linda K. Muthen posted on Monday, August 30, 2010 - 8:50 am
With weighted least squares estimation and categorical outcomes, tetrachoric correlations are used for pairs of binary variables, polychoric correlations are used for pairs of ordered categorical variables, and polyserial correlations are used for pairs of binary and ordered categorical variables.
 Emily Yeend posted on Tuesday, August 31, 2010 - 2:14 am
Hey Linda,

Thanks. I have a collection of binary and ordered categorical variables and have obtained a correlation matrix from Mplus as part of my factor analysis output. I've spot-checked a few correlations using R, a package I'm more familiar with, to check I'm doing it right, and I notice that the Mplus correlations match those produced using the tetra/polychoric method in R. When I use the polyserial method, my correlations differ from Mplus's. I notice that the R help manual mentions that polyserial correlations are for use between quantitative and ordinal variables. Do you think this is just a discrepancy between how the two software packages do things?

Here's a link to the pdf so you can see what I mean if that helps.
www.cran.r-project.org/web/packages/polycor/polycor.pdf

Many Thanks,
Emily
 Linda K. Muthen posted on Tuesday, August 31, 2010 - 9:03 am
I mistyped. The correlation between a binary and an ordered categorical variable is a polychoric.
 Mariska Bot posted on Thursday, September 23, 2010 - 7:37 am
Hello,

We are using Mplus for an exploratory factor analysis with 31 dichotomous items. I have some questions about the output.

1. We found high tetrachoric correlations (above 0.9) between some items. In regular FA, high multicollinearity can be a problem. Is this also the case for tetrachoric correlations?
2. Following my previous question: if high multicollinearity is a problem, some researchers advise using PCA. Is it possible to run a PCA with dichotomous variables in Mplus? What are the input instructions I need?
3. The output of the EFA gives us a warning: "WARNING: THE BIVARIATE TABLE OF V3 AND V2 HAS AN EMPTY CELL." Are the results in the remainder of the output still valid?

Many thanks!
 Linda K. Muthen posted on Thursday, September 23, 2010 - 2:34 pm
1. I don't think a correlation of .9 among factor indicators is too high. I believe that multicollinearity refers to high correlations among covariates.

2. Mplus does not do PCA.

3. An empty cell implies a correlation of plus or minus one. Both items should not be used as they do not contribute any unique information.
 Mariska Bot posted on Wednesday, September 29, 2010 - 7:21 am
Many thanks for your quick reply.
I am not sure if I understand the reply to question 3 correctly.
I have dichotomous data (0 or 1) and no missing values. When I make a cross-tabulation of variable 2 (v2) and variable 3 (v3), I have the following number of persons per situation.

v2=0 and v3=0: n=3768
v2=1 and v3=0: n=226
v2=0 and v3=1: n=57
v2=1 and v3=1: n=0

I assume the warning message about the empty cell refers to the last situation. As can be derived from this output, v2 and v3 are not identical. Therefore, I don't understand why the correlation between v2 and v3 will be 1 or -1. Could you explain this? Thank you!
 Bengt O. Muthen posted on Wednesday, September 29, 2010 - 8:38 am
It is a function of the assumption of underlying bivariate normal variables for which the correlation is estimated. Having a zero cell is best fit by an extreme correlation value. You can think of this shortcoming as there being no information on how many endorse both items (v2=1, v3=1) in the population.

It may not hurt the overall factor analysis much at all given that you have 31 items, that is, 465 pairs, for which it sounds like you only experience this once.
 Tracy Johnson posted on Wednesday, November 10, 2010 - 4:51 am
To follow-up on the previous question about the empty cells, I am working on an EFA with 41 binary variables, over 200,000 records and no missing data. The challenge with the dataset is that it is sparse – each of the 41 items is endorsed in less than 10% of records (i.e., has a value of 1). I have gotten the warning: “Warning: The bivariate table of VX and VY has an empty cell” for 42 of the variable pairs. These 42 warnings reference 18 of the variables.

BTW - I don’t think these data are necessarily appropriate for a factor analysis, given these issues that I was aware of, but reviewers continually suggest that an EFA may be interesting; thus we thought it would be worth a try.

Any feedback/advice would be appreciated.
 Linda K. Muthen posted on Wednesday, November 10, 2010 - 10:39 am
I would run it with and without the 18 variables to see if it makes a big difference.
 Joel Williams posted on Tuesday, February 01, 2011 - 8:54 am
We have the following data & assumed constructs: 8 food preference items with a binary response ("healthy" = 1 or "unhealthy" = 0); 36 psychosocial statements (each self-efficacy statement has a three-level response set: "Not sure I can" = 1, "A little sure I can" = 2, "Sure I can" = 3; each outcome expectation statement has a three-level response set: "Disagree" = 1, "Not sure" = 2, "Agree" = 3); and 14 knowledge questions related to nutrition or food safety, each with four multiple-choice responses: one correct answer = 1 and three distractors = 0.

I am not sure if we can do EFA on all 58 items together given the three different formats and types of items. So I wonder if we should run separate factor analyses, one for each of the three types of questions/items (food preference, psychosocial, and knowledge), or can you confirm that it is appropriate for us to run the analysis on all items simultaneously?

I have conducted factor analysis (using SAS and SPSS) on psychosocial items but I have never done this on knowledge items. Is it appropriate to do EFA on knowledge items? Are there other analyses available through Mplus that you recommend for those items?

Based on your responses to my questions, are there any specific videos you recommend we view under the training tab of the Mplus homepage, or any specific papers you would recommend we read?
 Linda K. Muthen posted on Tuesday, February 01, 2011 - 2:10 pm
You could analyze them together. However, it sounds like the items were developed separately and your question is about the unidimensionality of each set of items. In this case, it might be quicker to analyze each set separately to see if unidimensionality holds.
 Joel Williams posted on Friday, April 08, 2011 - 9:56 am
Can you help us with an error message? We are trying to run an EFA and are getting this error message: *** ERROR in VARIABLE command Unknown variable in CATEGORICAL option: Q1. We get this for all variables Q1-Q58.

This is our syntax...

TITLE: Practice Run;

DATA: FILE IS "C:\Data\Dissertation_MPlus.dat";
FORMAT IS FREE;
TYPE IS INDIVIDUAL;
NGROUPS=1;

VARIABLE: NAMES ARE Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 Q12 Q13 Q14 Q15 Q16
Q17 Q18 Q19 Q20 Q21 Q22 Q23 Q24 Q25 Q26 Q27 Q28 Q29 Q30 Q31 Q32
Q33 Q34 Q35 Q36 Q37 Q38 Q39 Q40 Q41 Q42 Q43 Q44 Q45 Q46 Q47 Q48
Q49 Q50 Q51 Q52 Q53 Q54 Q55 Q56 Q57 Q58;
CATEGORICAL ARE Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 Q12 Q13 Q14 Q15 Q16
Q17 Q18 Q19 Q20 Q21 Q22 Q23 Q24 Q25 Q26 Q27 Q28 Q29 Q30 Q31 Q32
Q33 Q34 Q35 Q36 Q37 Q38 Q39 Q40 Q41 Q42 Q43 Q44 Q45 Q46 Q47 Q48
Q49 Q50 Q51 Q52 Q53 Q54 Q55 Q56 Q57 Q58;
MISSING ARE all (-9);
USEVARIABLES = Q9-Q19;

ANALYSIS: TYPE = EFA 1 2;
ESTIMATOR = WLSMV;
 Linda K. Muthen posted on Friday, April 08, 2011 - 11:26 am
The problem is that only variables on the USEVARIABLES list can be on the CATEGORICAL list.

You can save yourself a lot of typing by saying:

NAMES ARE q1-q58;
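
With those two changes, a sketch of the corrected VARIABLE and ANALYSIS commands for this run would be:

VARIABLE: NAMES ARE q1-q58;
USEVARIABLES = q9-q19;
CATEGORICAL = q9-q19; ! only variables that are also on USEVARIABLES
MISSING = ALL (-9);

ANALYSIS: TYPE = EFA 1 2;
ESTIMATOR = WLSMV;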
 Joel Williams posted on Friday, April 08, 2011 - 1:30 pm
Linda - We knew it was a simple fix... We're brand new to the MPlus environment so thank you for helping out (and being patient with) us as MPlus syntax novices. By the way, the turnaround time on issues posted to the Discussion Board is fantastic.
 Melanie Wall posted on Tuesday, November 01, 2011 - 12:58 pm
We are performing EFA with dichotomous items using the NESARC, which includes complex sampling weights. We are using maximum likelihood so we can obtain BIC. A colleague recently said they heard that "maximum likelihood cannot be used for complex survey data in Mplus." I could not pin the colleague down on details, but they thought they heard this at an Mplus short course. Can you please verify whether there is any reason we should not trust the ML results from Mplus when using complex sampling weights (also with dichotomous outcomes).
 Linda K. Muthen posted on Tuesday, November 01, 2011 - 1:47 pm
I think the person was confused. You can't use the WEIGHTS option with the ML estimator. But it can be used with other maximum likelihood estimators like MLR, MLM, and MLMV and for ML when the BOOTSTRAP option is used. Weights are not available for ML because with weights the standard errors would not be ML. They would be pseudo ML.

My concern with EFA and categorical outcomes is that each factor is one dimension of integration so extracting more than 3 or 4 factors will become computationally heavy.
 Melanie Wall posted on Tuesday, November 01, 2011 - 2:07 pm
Linda, thank you for the prompt reply,

With our dichotomous outcomes and the EFA, we are using weights, clustering, stratification, and even a subpopulation. Do you have a recommendation for which ML method (MLR, MLM, MLMV, or ML with bootstrap) we should use?

Note, we want to use maximum likelihood (rather than one of the WLS methods) so we can get BIC values for model comparison.
 Linda K. Muthen posted on Tuesday, November 01, 2011 - 2:37 pm
Our default choice is MLR.
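
For example, a sketch of the relevant commands for an EFA of dichotomous items with weights, clustering, stratification, and a subpopulation under MLR (all variable names are hypothetical):

VARIABLE: NAMES ARE u1-u10 w psu strat grp;
USEVARIABLES = u1-u10;
CATEGORICAL = u1-u10;
WEIGHT = w;
CLUSTER = psu;
STRATIFICATION = strat;
SUBPOPULATION = grp EQ 1;

ANALYSIS: TYPE = COMPLEX EFA 1 3;
ESTIMATOR = MLR; ! ML-family estimation with categorical items uses
! numerical integration - one dimension per factor - so keep the
! number of factors modest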
 Stata posted on Tuesday, March 13, 2012 - 1:56 pm
Is it possible to obtain AIC, BIC, and aBIC with categorical data EFA?

Thanks.
 Linda K. Muthen posted on Tuesday, March 13, 2012 - 2:53 pm
Yes, if you use maximum likelihood estimation. This requires numerical integration so you would not want too many factors.
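
For example, a sketch along these lines (variable names hypothetical) would report the loglikelihood, AIC, BIC, and sample-size adjusted BIC for the 1- through 3-factor solutions:

VARIABLE: NAMES ARE u1-u12;
CATEGORICAL = u1-u12;

ANALYSIS: TYPE = EFA 1 3;
ESTIMATOR = ML; ! each factor adds one dimension of numerical integration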
 Stata posted on Tuesday, March 13, 2012 - 5:58 pm
Hi Linda,

Thank you. This is very helpful.
 Kelly Lundstrom posted on Saturday, April 14, 2012 - 9:00 pm
I am trying to understand how the WLSMV method works for categorical data. I have 26 binary indicators, a sample size of 559, and have run EFA on the data with WLSMV. Is there a source/paper you can suggest to help me understand how WLSMV works? i.e. Given my binary data matrix (dim = 559x26), how can I get the W matrix? What can I do to get the loadings? etc.
 Linda K. Muthen posted on Sunday, April 15, 2012 - 5:03 pm
See the following paper which is available on the website:

Muthén, B., du Toit, S.H.C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes.

We don't print the weight matrix. Factor loadings and other parameter estimates are printed in the results section.
 andrea lloyd posted on Friday, April 27, 2012 - 11:54 am
I am running an EFA on 31 binary items. I get a string of error messages that I do not understand. For example: "Warning: the bivariate table of v10 and v8 has an empty cell." Is this referring to the correlation matrix? If so, that cell is not empty.
 Linda K. Muthen posted on Friday, April 27, 2012 - 12:15 pm
It is referring to the observed variable crosstab between the two items v10 and v8. An empty cell in a two by two table implies a correlation of one. This means both items cannot be used in the analysis.
 Jan Zirk posted on Saturday, April 28, 2012 - 7:09 pm
Dear Linda or Bengt,

Do you know a reference which would imply that performing WLSMV CFA for ordered variables with a few categories (5 categories) for a small sample (n=80) and 6 indicators is more trustworthy than performing WLSMV CFA in the same sample for binary data (also 6 indicators)?
 Bengt O. Muthen posted on Saturday, April 28, 2012 - 7:19 pm
Short answer: No. Longer answer:

With polytomous items you do have more information, but you also have more parameters. If you talk about quality of factor loading estimates for instance, my guess would be that the former outweighs the latter. You could do a simple Monte Carlo simulation study to see; see UG chapter 12. And, perhaps ML would do better than WLSMV at that low sample size. Or, Bayes.
 Jan Zirk posted on Wednesday, May 02, 2012 - 8:02 am
Thanks very much Bengt!
 Bruno Figueiredo Damásio posted on Friday, January 18, 2013 - 6:30 pm
Dear all,

I am using Mplus to conduct an exploratory factor analysis with binary indicators. Although I have a theoretical model which specifies the number of factors to retain, I am not sure if this model is the best one for my sample. So, my question is: how can I test for the best exploratory model using Mplus? And which factor retention criteria are available for binary indicators?

I have read many papers suggesting parallel analysis, the Hull method, MAP, etc., but as far as I know these methods relate only to ordinal indicators.


Thanks in advance
 Bengt O. Muthen posted on Saturday, January 19, 2013 - 6:30 am
The Mplus WLSMV estimator gives you several fit statistics to guide the search for number of factors, such as chi-square, CFI etc.
 Tom Booth posted on Sunday, February 03, 2013 - 7:31 am
Linda/Bengt,

I have had a request from reviewers to conduct an item-level factor analysis on data with a 3-point categorical (Likert-esque) response format. I have a total of 158 items, which are hypothesized to measure 15 factors, in a sample of 10,000+.

I have previously tried to run models estimated with WLSMV on these data, but my computers are just not capable of running those models.

I have looked into the use of Bayesian estimation, for which the Mplus documentation lists a decrease in computational intensity/time as a possible advantage. Does this sound to you like a situation where this would be a sensible next step? What I really need is to be able to see a complete set of EFA (ESEM) loadings to consider item complexity.

Thanks

Tom
 Linda K. Muthen posted on Sunday, February 03, 2013 - 9:02 am
I would suggest using the ULS estimator. You get only parameter estimates which makes the computations less heavy. This should give you the opportunity to see the pattern of factor loadings as a first step.
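
For example, a sketch (item names hypothetical) bracketing the 15 hypothesized factors:

VARIABLE: NAMES ARE x1-x158;
CATEGORICAL = x1-x158;

ANALYSIS: TYPE = EFA 14 16;
ESTIMATOR = ULS; ! point estimates only, no standard errors,
! which keeps the computations light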
 Tom Booth posted on Sunday, February 03, 2013 - 11:22 am
Thanks Linda. It has certainly estimated quickly and done the job with respect to providing initial item loading estimates. Do you have any guidance/suggested reading on how reliable these estimates are going to be for a 3-point response format?

From these initial estimates I need to devise a full strategy for analysis.
 Bengt O. Muthen posted on Sunday, February 03, 2013 - 11:34 am
The estimates should be quite good and close to what you would have obtained with WLSMV if that could have been done.
 Hossein Karami posted on Saturday, December 07, 2013 - 3:58 pm
Hi,
I'm doing an EFA using Mplus on a data set with 70 binary items and 3,000 participants. The EFA is done with up to six dimensions. This will be followed by unidimensional and multidimensional IRT models. I wonder if there is any recent research on the appropriateness of EFA for binary data. Do you recommend such analyses?
Many thanks in advance.

Hossein
 Bengt O. Muthen posted on Sunday, December 08, 2013 - 10:29 am
Yes, I recommend EFA for binary data. A recent paper is on our website:

Muthén & Asparouhov (2013). Item response modeling in Mplus: A multi-dimensional, multi-level, and multi-timepoint example.

A paper from 35 years earlier is

Muthén, B. (1978). Contributions to factor analysis of dichotomous variables. Psychometrika, 43, 551-560.
 Hossein Karami posted on Monday, December 09, 2013 - 4:37 am
Thank you Bengt. Got it.
 Nara Jang posted on Tuesday, April 08, 2014 - 11:39 am
Dear Dr. Muthen,

I conducted an EFA using a random half-sample of my data set.

Would you tell me if it is correct that I need to read the "GEOMIN ROTATED LOADINGS (* significant at 5% level)" table?

Thank you so much for your time and expert advice in advance!
 Linda K. Muthen posted on Tuesday, April 08, 2014 - 11:57 am
This is where you will find the factor loadings.
 Nara Jang posted on Tuesday, April 08, 2014 - 12:54 pm
Thank you so much!!
 Nara Jang posted on Wednesday, April 16, 2014 - 2:09 pm
Dear Dr. Muthen,

I would like to conduct an EFA, and the variables are binary, ordinal, and continuous. Would you tell me if I can use both SPSS and Mplus?

Thank you so much!
 Bengt O. Muthen posted on Wednesday, April 16, 2014 - 5:42 pm
Mplus can do this; I don't know what SPSS does (you may want to ask on SEMNET).
 Hakan Atılgan posted on Friday, February 19, 2016 - 3:48 am
Dear Dr. Muthen
I am planning research on EFA with dichotomous items. I'm using Mplus for the data analysis. I have two questions below. Thank you in advance for your help and for your response.

1. When I define my categorical data as continuous, Mplus uses the ML estimator. When I define my categorical data as categorical, Mplus uses the WLSMV estimator. That is normal and as expected. My question is: if the data are defined as categorical and Mplus is run with both the ML and the WLSMV estimator, do the fit indices (RMSEA, SRMR, CFI, TLI) change?

2. I know that RMSEA and CFI assume a multivariate normal distribution. However, when I use Mplus to conduct an EFA with categorical indicators (dichotomous items), the Mplus output reports both RMSEA and CFI. How do I interpret this? Has something escaped my eye?
 Bengt O. Muthen posted on Friday, February 19, 2016 - 6:13 am
1. The ML estimator is available when the variables are categorical but the usual fit indices are not available because no longer are the means, variances, and covariances sufficient statistics to which the model is fitted. You can use TECH10 bivariate fit tests.


2. WLSMV tests the fit to the sample correlations among the latent response variables which are assumed normal. It is a valuable test but it is not testing against the observed data (as TECH10 is).
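
For example, with ML-family estimation the bivariate fit checks can be requested in the OUTPUT command (a sketch; variable names hypothetical):

VARIABLE: NAMES ARE u1-u8;
CATEGORICAL = u1-u8;

ANALYSIS: TYPE = EFA 1 2;
ESTIMATOR = MLR;

OUTPUT: TECH10; ! univariate and bivariate fit against the observed data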
 Avril Kaplan posted on Thursday, February 02, 2017 - 8:06 am
Just a quick point of clarification when using MLR vs WLSMV for binary multi-level factor analysis:

1) When all items are binary, WLSMV runs the CFA using a tetrachoric correlation matrix that splits the within-group correlation matrix (which explores individual deviations from the respective group means) from the between-group correlation matrix (which explores group deviations from the grand mean). Is this correct?

2) When all items are binary and I use MLR, the output does not produce a within and between correlation matrix. Why is this the case? How is MLR estimating a multi-level CFA?
 Bengt O. Muthen posted on Thursday, February 02, 2017 - 5:30 pm
1) Yes.

2) ML uses raw data while WLSMV uses only second-order information (pairwise info). ML maximizes the likelihood whereas WLSMV minimizes the difference between observed and estimated correlations.
 Robyn Borgman posted on Tuesday, January 30, 2018 - 11:37 am
I am currently trying to run an EFA on some tetrachoric correlations. However, I am receiving the error pasted below (variable PAPS_20 and many other variables have a tetra. corr. of 1 with other variables). I am not sure how to address this issue. We are trying to develop a measure and are hoping to use this EFA as another means of item trimming/subscale development (all items are dichotomous, hence the tetra. corr.). Does the tetrachoric correlation of 1 mean that the two items sharing that tetra. correlation of 1 are too similar? Can we use this as a reason for scale trimming? Should we trim some items and then try to run the EFA on the remaining non +/-1 correlations? Please advise.
*** FATAL ERROR

THE SAMPLE COVARIANCE MATRIX COULD NOT BE INVERTED. THIS CAN OCCUR IF A VARIABLE HAS NO VARIATION, OR IF TWO VARIABLES ARE PERFECTLY CORRELATED, OR IF THE NUMBER OF OBSERVATIONS IS NOT GREATER THAN THE NUMBER OF VARIABLES.
CHECK YOUR DATA. THIS PROBLEM IS DUE TO:
VARIABLE : PAPS_20


Here's my syntax, excluding variable names because there are many.
INPUT INSTRUCTIONS

DATA:
FILE IS VVAWStetra2mplusJan.dat;
TYPE=CORRELATION;
NObservations= 350;

VARIABLE:
NAMES ARE
PAPS_08, etc.

ANALYSIS:
TYPE = EFA 1 4;
ESTIMATOR=ML;
 Bengt O. Muthen posted on Tuesday, January 30, 2018 - 5:38 pm
It is better to start from raw data than from a tetrachoric correlation matrix. The ML estimator requires a positive definite matrix; even if no correlations are 1, you may still not have a positive definite matrix. The WLSMV estimator, which starts from raw data, avoids this problem. Starting from a correlation matrix, you could instead try the ULS estimator. Deleting offending items may still not give positive definiteness - and positive definiteness is not a requirement for good results.
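
To make the raw-data route concrete, a sketch (file and variable names hypothetical):

DATA: FILE IS rawdata.dat; ! one row per person, 0/1 item responses

VARIABLE: NAMES ARE paps_1-paps_25;
CATEGORICAL = paps_1-paps_25;

ANALYSIS: TYPE = EFA 1 4;
ESTIMATOR = WLSMV;

Alternatively, keeping the correlation-matrix input shown above, only the estimator line would change, e.g. ESTIMATOR = ULS;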
 Stephen Leach posted on Wednesday, February 28, 2018 - 8:39 am
Dr. Muthen,

I am comparing a four-factor EFA model with a five-factor orthogonal EFA model using the WLSMV estimator on binary data. It's my understanding that Mplus runs DIFFTEST in the background in such cases. Can I simply compare the chi-square values given for these two models, since the four-factor model is nested within the bifactor model?

Thanks,

Steve
 Stephen Leach posted on Wednesday, February 28, 2018 - 8:46 am
I meant to say that the five-factor model is a bifactor model with one general and four specific factors, using the bi-geomin (orthogonal) rotation.
 Bengt O. Muthen posted on Wednesday, February 28, 2018 - 3:51 pm
These models are not nested when there are more than 3 correlated factors. See our FAQ

Bi-factor compared to correlated factors model
 Stephen Leach posted on Wednesday, February 28, 2018 - 7:38 pm
Thank you.
 Emanuela Botta posted on Wednesday, March 21, 2018 - 6:33 am
Hi, I'm a PhD student at Uniroma1. I'm using Mplus v.4.1 to do an EFA with categorical (0,1) variables and the WLSMV estimator. In my version of Mplus the output gives only the RMSR index. Is it the same as the SRMR? Is the cutoff value of .08 correct?
Thanks
 Bengt O. Muthen posted on Thursday, March 22, 2018 - 12:17 pm
Answered.
 Ting Dai posted on Wednesday, April 29, 2020 - 11:13 am
Hi, Drs. Muthen.
Based on Mplus example 12.5 I was able to do EFA with Monte Carlo simulation. I have three questions:

1. The ex12.5 output does not provide CFI. Is there a way to get Mplus to also generate CFI results?

2. If I would like to do EFA with Monte Carlo, given k variables and their bivariate correlations, to see what the likely factor structure is, should I repeat the ex12.5 syntax and specify various numbers of factors with k observed variables? (E.g., given 10 observed variables and correlations of .50 across all, do I specify 1 factor, 2 factors, and 3 factors, run these 3 simulations, and then do comparisons?)

3. If #2 is a correct approach, what result(s) should I use to compare the models? A chi-square difference test?

Thanks in advance!
 Tihomir Asparouhov posted on Wednesday, April 29, 2020 - 12:45 pm
1. Update to Mplus 8.4
2. Sounds reasonable
3. See page 30,
https://www.statmodel.com/download/EFACFA810.pdf
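
For reference, a hedged sketch of such a Monte Carlo run (all names and values are illustrative, not taken from ex12.5): binary data are generated from a 1-factor population model and analyzed with a 1-factor EFA specified as an ESEM block; repeating the run with f1-f2 or f1-f3 in the MODEL command gives the competing solutions to compare:

MONTECARLO: NAMES = u1-u10;
GENERATE = u1-u10 (1); ! one threshold = binary items
CATEGORICAL = u1-u10;
NOBSERVATIONS = 500;
NREPS = 100;

MODEL POPULATION: f BY u1-u10*.7;
f@1;
[u1$1-u10$1*0];

MODEL: f1 BY u1-u10*.7 (*1); ! 1-factor EFA via ESEM
[u1$1-u10$1*0];

ANALYSIS: ESTIMATOR = WLSMV;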