I am running a fairly simple CFA (2 factors, 10 measures), and I get an error message which pops up in a window and says "Mplus has unexpectedly stopped running. The output may contain only partial output. Please try again. If you continue to have problems with this input file please email it to Mplus support". However, the output seems to contain everything I expect it to. Should I be worried by this message?
You are most likely reading your data incorrectly. Be sure that you do not have blanks in your data as a missing value flag. If you do and you are reading your data free format, then your data are not being read correctly. If you continue to have problems, send your output and data to email@example.com.
I ran a CFA (principal axis factoring with oblique [oblimin] rotation) using SPSS and was faced with the following message: 'Attempted to extract 6 factors. In iteration 25, the communality of a variable exceeded 1.0. Extraction was terminated.' The strange thing is that when I re-ran the EFA I had no problems with factor loadings etc. Sample size is n = 272; the number of (ordinal) variables is 6. Any ideas why this occurred? Thanks a lot in advance.
It sounds like you are using two different estimators, and that is why your results differ. Also, you don't mention whether all of your analyses use SPSS. In addition, you cannot extract six factors from only six variables.
HWard posted on Thursday, December 15, 2005 - 1:55 pm
In trying to create a 3-factor latent measure of 'diet', I am getting an error message which indicates that only one factor can be used to define diet. Is this because these variables should not be combined into a factor analytic model?
TITLE: measurement model first attempt
DATA: FILE IS "C:\Documents and Settings\Owner\My Documents\fruit.csv";
VARIABLE: NAMES ARE VEG FRUIT GRAINS SEXNUM;
  USEVARIABLES ARE VEG FRUIT GRAINS;
  MISSING ARE ALL (-999);
ANALYSIS: TYPE IS EFA 2 3;
  ESTIMATOR = ML;
  ITERATIONS = 1000;
MODEL: DIET by fruit veg grains;
*** WARNING
Too many factors were requested for EFA. The maximum number of factors is set to 1.
1 WARNING(S) FOUND IN THE INPUT INSTRUCTIONS
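The warning follows from degrees-of-freedom counting rather than anything about the diet variables themselves. A quick Python sketch using the standard EFA degrees-of-freedom formula shows why three indicators support at most one factor:

```python
def efa_df(p, m):
    """Degrees of freedom for an m-factor EFA of p variables
    (Lawley-Maxwell formula): negative df means the model has
    more free parameters than there are sample moments."""
    return ((p - m) ** 2 - (p + m)) // 2

# With the 3 usevariables above (VEG, FRUIT, GRAINS):
for m in (1, 2, 3):
    print(f"{m} factor(s): df = {efa_df(3, m)}")
# Only m = 1 gives df >= 0, so the request is capped at 1 factor.
```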
I'm doing an exploratory factor analysis with SPSS, using principal axis factoring (for a construct measured with 3 items). I got the following message: "In iteration 25 the communality of a variable exceeded 1.0. Extraction was terminated." What does it mean? How can I get the factor?
bmuthen posted on Tuesday, December 20, 2005 - 9:30 am
You should contact the SPSS customer support, but it sounds like you have a "Heywood case", which means that you have a negative residual variance, which in turn suggests either that your sample is quite small or that the factor model is not suitable for these data.
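A Heywood case can be illustrated numerically: on a standardized item, a communality above 1 forces the residual variance negative. The loading value below is hypothetical, just to show the arithmetic:

```python
# Hypothetical standardized loading slightly above 1:
loading = 1.05
communality = loading ** 2            # variance explained by the factor
residual_variance = 1.0 - communality

print(round(communality, 4))          # 1.1025 (exceeds the item's total variance of 1)
print(round(residual_variance, 4))    # -0.1025 (negative: the Heywood case)
```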
It sounds like you are not reading your data correctly. You either have blanks in your data which is not allowed with free format or the number of variable names does not match the number of variables in the data set. If you cannot figure this out, send your input, data, output, and license number to firstname.lastname@example.org.
The program isn't letting me run a confirmatory factor analysis; instead it terminates with a fatal error saying that the degrees of freedom in my model are negative. Where would this be specified in my data set, and how can I fix it?
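The degrees of freedom are not stored in the data set; they are the number of sample variances and covariances minus the number of freely estimated parameters, and they go negative when the model asks for more parameters than the data provide moments. A rough Python sketch of that count (the example parameter tally is hypothetical):

```python
def cfa_df(p, n_free_parameters):
    """Model degrees of freedom for a covariance-structure CFA of p
    observed variables: p(p+1)/2 sample variances/covariances minus
    the number of freely estimated parameters."""
    return p * (p + 1) // 2 - n_free_parameters

# Hypothetical tally: a one-factor model of p = 2 indicators with
# 1 free loading, 1 factor variance, and 2 residual variances asks
# for 4 parameters but has only 3 sample moments:
print(cfa_df(2, 4))   # -1 -> negative df, model not identified
```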
Please pardon my ignorance, here. Won't there always be some zero cells if my CFA includes items that are rarely endorsed?
For instance, suppose I want to do a CFA on delinquency, positing a separate factor for aggression and property crimes. Included in my measure of aggression are the items "threatened with a knife" and "kidnapped someone." It is entirely possible that with two such extreme items no one in my sample will endorse both of them. Does that mean I have to throw one or both items out of my factor analysis?
Dear Drs. Muthen, I usually use raw items for all CFAs, but in the case below I tried to use summary data and received an error that I've never seen before, nor can I figure out what's wrong.
TITLE: pclR H4 via cooke BJP covar HMP Sample
DATA: FILE IS cooke-BJP-covar2.txt; TYPE IS CORRELATION; NOBSERVATIONS = 827;
Factors with correlations greater than one are not statistically distinguishable. You will need to change your model.
Lois Downey posted on Wednesday, November 28, 2007 - 10:26 am
For a confirmatory factor analysis with dichotomous indicators, should my goal be to eliminate ALL occurrences of empty cells in the bivariate tables, or is it sufficient just to eliminate MOST occurrences?
You should have no empty cells. An empty cell implies a correlation of one.
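One way to find the problem pairs in your own data is to scan every pair of binary indicators for an empty cell in the 2x2 cross-tabulation. This is an illustrative Python check, not anything Mplus does for you:

```python
import numpy as np

def zero_cell_pairs(data):
    """Flag pairs of 0/1 indicators whose 2x2 cross-tabulation has an
    empty cell; the tetrachoric correlation for such a pair is +/-1.
    `data` is an (n_cases, n_items) array of 0/1 values."""
    n_cases, n_items = data.shape
    flagged = []
    for i in range(n_items):
        for j in range(i + 1, n_items):
            table = np.zeros((2, 2), dtype=int)
            for a, b in zip(data[:, i], data[:, j]):
                table[a, b] += 1
            if (table == 0).any():
                flagged.append((i, j))
    return flagged

# Two rare items that no respondent endorses together -> empty (1,1) cell:
responses = np.array([[1, 0],
                      [0, 1],
                      [0, 0],
                      [0, 0]])
print(zero_cell_pairs(responses))   # [(0, 1)]
```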
Erika Wolf posted on Saturday, July 05, 2008 - 11:45 am
I've been running a series of twin analyses and had problems with multiple empty cells in the tetrachoric correlation matrix. I managed to reduce that error message down to just 1 error by eliminating psychiatric diagnoses that were very low base rate in the epidemiological sample I'm working with. However, the 1 empty cell that I'm left with is problematic because, conceptually, I can't eliminate either variable (they are both too important) and I can't combine them, because, again, conceptually, the disorders are quite different from one another (a pure anxiety disorder vs. anti-social behavior). The message I'm getting indicates that the empty cell is between Twin A on the anxiety disorder and Twin B on the antisocial disorder. How do you suggest I proceed? Is there any work around?
It might be worth thinking about this, because it seems strange that two conceptually different disorders would be so closely tied as to give a zero cell in their crosstab, unless the sample size is too small relative to the rare outcomes, so that the zero cell is likely as a random event. You can ignore it and see if the model estimates come out reasonably; with only one such problematic pair, the distortion may not be big. Or you could try switching to ML, which does not avoid the problem but may (or may not) suffer less from it. ML is, however, computationally heavy with many dimensions.
Erika Wolf posted on Monday, July 07, 2008 - 9:24 am
Thanks for your response. The sample size is large (over 3,000 pairs of twins), but the base rates of the disorders are low (< 5%) because it is an epidemiological sample. So I assumed the low base rates contributed to the problem. The model yields good results that are quite interpretable, so I'd like to be able to ignore the warning, but I wasn't sure whether that is really OK to do in this case.
I also have been running twin analyses and am getting messages about zero cells that I think I can ignore. The situation is that I have adapted Prescott's code to create latent variables and estimate A,C,E for this latent variable. One latent var is created for twin 1 and one for twin 2, and each latent variable is based on that twin's dichotomous item indicators. Zero cells emerge when, for example, dichotomous item 1 for twin 1 is correlated with dichotomous item 9 for twin 2. However, I don't want these correlations. I want to correlate twin 1's continuous latent variable with twin 2's continuous latent variable to estimate A,C,E. So, my question is... are the zero cells ignorable and how can I adapt my program to get around this problem? I used the IRT program example 7.29 and I didn't get the error messages about the zero cell problem. Why?
Zero cells refer to the observed data, the bivariate tables for each pair of observed data. A zero cell implies a correlation of one. This should not be ignored. It happens when the sample size is too small.
The message you can ignore is the one about a correlation of one since the model imposes that.
I am running a CFA and I have received this warning:
WARNING: THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR AN OBSERVED VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO OBSERVED VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO OBSERVED VARIABLES. CHECK THE RESULTS SECTION FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE GDIF1_6.
Indeed, my GDIF1_6 variable has a correlation of more than 1, and a negative residual. It's one item in a two item scale that is very important in my model; the rest of the model fits relatively well.
Are there any solutions for dealing with these problem variables, other than taking them out? It has been suggested to me that I could attempt to classify the variable as a categorical var as opposed to continuous - what do you think about this?
It sounds like there is a misfit related to this factor. Perhaps the two indicators relate to other variables in the model differently from each other in a way that does not fit with their one-factor model. If GDIF1_6 correlates greater than one with another variable, it or the other variable should be removed from the analysis. The two items are not statistically distinguishable. Having a negative residual variance is one more strike against it. I would use the other indicator as an observed variable in the analysis. A factor with only two indicators is not identified without borrowing from other parts of the model. This makes it not very believable. If the variable is categorical, you can treat it as such but I don't think that is important.
Hi, Using the same number of variables, I am fitting models with different numbers of factors. My eight- and four-factor models terminate normally, but my one-factor solution says, "no convergence. number of iterations exceeded." I don't expect this model to fit better than the eight- or four-factor models, but can I just report "no convergence" in a journal article or do I have to report statistics for a model that converges? If so, how can I reach convergence (e.g., increase the number of iterations)?
All variables are continuous
Estimator: ML
Information matrix: OBSERVED
Maximum number of iterations: 1000
Convergence criterion: 0.500D-04
Maximum number of steepest descent iterations: 20
I received the error message below. In this case, parameter 65 is the variable's entry in the theta matrix.
Since I saw no unusual correlations, variances, residual variances, or values in the THETA matrix for the item in question, I removed the item to see if the problem was elsewhere.
Sure enough, the problem moved to another item. When I removed that item (without replacing the first item), the problem moved again to a different item.
If I keep going like this, I'll have a null model. Does anyone have any recommendations regarding this situation?
THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.167D-15. PROBLEM INVOLVING PARAMETER 65.
Kerry Lee posted on Monday, September 20, 2010 - 3:00 am
Dear Dr Muthen,
I am new to Mplus and am having some teething problems. I have a large data file with 400-odd variables. After spending the day recoding error codes and renaming variables, I am getting the following error message when I try to read the file:
There is no limit to the number of missing value flags.
We tried an example like yours and had no problem. Please send the full output and your license number to email@example.com.
Jen posted on Wednesday, September 22, 2010 - 9:02 am
I am working on a latent growth model with 13 time points (N=400+). One of the variables I hope to model is binary, indicating whether a participant has ever done a behavior. Therefore, once a participant becomes a "1", there is never a change back to "0", resulting in empty cells and many error messages. Eventually I hope to build a two-part LGM with the binary variable as well as a continuous variable (with the two growth processes correlated). Is it possible to model the binary data, or is this type of data inappropriate for LGM?
The list must have the same stem, for example, b1-b2 instead of b1-2.
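Mplus expands a hyphenated list only when both endpoints share the same alphabetic stem followed by a number. A rough Python sketch of that expansion rule (a simplified illustration, not the actual Mplus parser):

```python
import re

def expand_list(spec):
    """Expand an Mplus-style hyphenated variable list such as 'b1-b5'
    into ['b1', ..., 'b5']. Both endpoints must share the same
    alphabetic stem, so 'b1-2' is rejected."""
    m = re.fullmatch(r"([A-Za-z_]\w*?)(\d+)-([A-Za-z_]\w*?)(\d+)", spec)
    if m is None or m.group(1) != m.group(3):
        raise ValueError(f"endpoints must share the same stem: {spec!r}")
    start, stop = int(m.group(2)), int(m.group(4))
    return [f"{m.group(1)}{i}" for i in range(start, stop + 1)]

print(expand_list("b1-b3"))   # ['b1', 'b2', 'b3']
# expand_list("b1-2") raises ValueError: the second endpoint lacks the stem.
```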
Kerry Lee posted on Tuesday, October 04, 2011 - 8:01 pm
Dear Drs. Muthen,
I am running a modified multitrait-multitask CFA and am testing whether the data are better described by a unidimensional or 2/3 factor model.
Different age groups are involved and I am running them separately. For some groups, I am getting the following message.
THE MODEL ESTIMATION TERMINATED NORMALLY
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE FLCNRTQ.
Inspection of TECH4 shows that two of the latent variables are very strongly correlated (r = .806), but nothing was equal to or greater than one. Furthermore, the variable FLCNRTQ is not a latent variable (but for some reason it was included in the TECH4 output). Its estimated correlations with other variables are also within bounds.
I am puzzled by the cause of the error message and the inclusion of an observed variable in TECH 4. Would you have some suggestions?
An observed variable is included among the latent variables if, for instance, another variable predicts it: a factor then gets put behind it, with the observed variable taken as a perfect indicator of that factor. So this is harmless. Note that correlations don't need to be 1 for non-positive definiteness; the overall pattern of elements can create the problem.
If you like, you can send your output and license number to Support.
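The point that no single correlation needs to reach 1 for non-positive definiteness can be seen in a small numerical example (the matrix below is illustrative, not from this analysis):

```python
import numpy as np

# Every off-diagonal correlation here is 0.9 in absolute value --
# none reaches 1 -- yet the matrix is not positive definite:
R = np.array([[ 1.0,  0.9,  0.9],
              [ 0.9,  1.0, -0.9],
              [ 0.9, -0.9,  1.0]])

eigenvalues = np.linalg.eigvalsh(R)
print(eigenvalues)   # approx [-0.8, 1.9, 1.9]: one negative eigenvalue
```

The three pairwise patterns are individually fine, but no set of three variables can jointly satisfy them, which is exactly the kind of "overall pattern" problem described above.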
One variable in my 2-group cross-lagged (longitudinal) model is dichotomous in one group, but continuous in the other group.
In theory, the variable ranges from 0 to 10, and one group endorses a wide variety of values on this scale (0 to 8), but the other group endorses only 0's and 1's.
While this is an interesting finding in itself, it yields the following error message for my cross-lagged model:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.407D-16. PROBLEM INVOLVING PARAMETER 29.
THIS IS MOST LIKELY DUE TO VARIABLE ANX30 BEING DICHOTOMOUS BUT DECLARED AS CONTINUOUS.
Is it possible to declare the variable as dichotomous in just one group? Will estimation problems arise from leaving it declared as continuous in both groups?
Or, is this an essential difference in the variables used, such that I should run the models in two entirely separate programs, rather than a two-group model in one program?
As long as you are certain that the message comes from the one group having only 0's and 1's, I would ignore it. In this case, it is caused by the fact that the mean and variance of a binary variable are not orthogonal.
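The non-orthogonality is easy to see numerically: for a 0/1 variable with mean p, the variance is p(1 - p), a deterministic function of the mean, so the two cannot act as independent parameters. A minimal sketch:

```python
def binary_variance(p):
    """Variance of a 0/1 variable with mean p: fully determined by p."""
    return p * (1 - p)

for p in (0.1, 0.3, 0.5):
    print(p, binary_variance(p))
# The variance adds no information beyond the mean, so treating the two
# as separate free parameters makes the derivative matrix rank-deficient.
```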
anonymous posted on Monday, September 23, 2013 - 7:51 am
I'm running a CFA with 15 categorical variables and I get the warning message "THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE." When I tried also including ALGORITHM=INTEGRATION and INTEGRATION=MONTECARLO (5000), the model successfully runs but then I no longer get the fit indices (e.g. CFI, RMSEA, etc.). Is there a way to both address the error message and get the fit indices to run?
Chi-square and related fit statistics are not available with ML and categorical outcomes. These are available for models where means, variances, and covariances are sufficient statistics for model estimation.
Dear Prof. Muthén, I ran a CFA and saw this message in the output: WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE TC1. TC1 is one of my indicators. I rechecked the TECH4 output and StdYX. The correlation between TC1 and Factor4 is 1.454 (I guessed there is a dependency between our indicators), so I removed this indicator from the CFA model, and the output was as follows (RMSEA = 0.09, CFI/TLI = 0.94/0.95, SRMR = 0.04), with no warning message. My question is: Is there another way to solve this problem, without removing the indicator(s)?
I am running a CFA and want to eliminate all items with small loadings on my factors.
Now I get the error message:
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE INT_AKT.
Int_akt is one of my latent variables. All of my correlations are smaller than one, and I have no negative residual variances. I know I have to change my model when I get this message. My question is: can I still look at which items have small loadings and change my model by eliminating them? Or are the loadings not reliable when I get this error message?
I am conducting a CFA, and I'm running into the following error message: “THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.167D-18. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 97, CETH_BLACK”
I only get this error message when I include our ethnicity variables (which are 4 dummy variables) in the WITH statement. When I run the identical analyses but without the ethnicity variables in the WITH statement, I do not get this error message.
I ran the TECH1 and TECH4 outputs, and parameter 97 is the PSI for ceth_black, but the covariance for this variable is .101, so I'm not sure what the problem is. Also, all of the correlations are below 1, and there are no negative residual variances.
We get the same error message when we run the same analyses using ethnicity coded as a single binary variable (when it is included in the WITH statements).
Is there something we can do to address this problem? Thanks in advance!
I assume ethnicity is an endogenous variable. The mean and variance of a binary variable are not orthogonal. This is what triggers the message. You can ignore this message if you brought ethnicity into the model to avoid losing cases because of missing data. Note that if you have more than one endogenous variable, you must either bring them all into the model or none of them into the model.
Thanks so much for your quick response to my question above!
In our case the ethnicity variables are exogenous. We are examining an SEM model with two latent endogenous variables and several measured predictor variables.
Ethnicity is one of the predictor variables, and we are examining it in two different ways in separate model tests: (1) as one binary variable (white vs. non-white) or (2) as three dummy variables (representing four ethnic groups).
When we include the ethnicity variable(s) in the WITH statements, we get the error message that I referenced above (THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX...)
When you include these variables in the WITH statements, they are brought into the model and distributional assumptions are made about them. Comment out the WITH statements. If the error message goes away, it is caused by what I describe above. You can then put them back and ignore the message.