It depends why the model won't converge. First of all, check Tech5 to see when the estimation stopped. If it just ran out of iterations and there are no negative variances in your results, then increase the number of iterations. It would speed things up if you used the preliminary estimates as starting values. If the iterations stopped before reaching the default number of iterations, the starting values are not appropriate for the data and new starting values should be tried. You should first check to make sure that your observed variables are not measured on very different scales. If they are on very different scales, you can rescale them using DEFINE to divide them by a constant. This may help convergence because it changes the starting values. If this does not work and you need to try different starting values, start with the variance parameters first as they are most often the problem. It is unlikely that starting values are needed for factor loadings or regression coefficients.
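As a sketch of the two suggestions above (DEFINE rescaling and starting values for variance parameters; variable names are hypothetical, not from any poster's data):

```
DEFINE:
  y1 = y1/10;     ! rescale a variable measured on a much larger scale
MODEL:
  f BY y1-y5;
  f*0.5;          ! starting value for the factor variance
  y1-y5*0.8;      ! starting values for the residual variances
```

The `*value` syntax supplies a starting value without fixing the parameter, so the estimator still iterates from there.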
Anonymous posted on Monday, March 11, 2002 - 9:34 am
How does one go about choosing starting values for variance parameters?
I am analyzing different CFA models with Mplus. I have complex sample data (students nested within classes) with high intraclass correlations, so I used the TYPE = TWOLEVEL option in Mplus. The models work relatively well if they are very small (1 or 2 factors). However, when I try more complex models I almost always receive an error message telling me that the estimated between covariance matrix is not positive definite. Could you please give me some hints about the most likely cause of this problem and what I could do about it? Thank you very much in advance!
Thank you for your reply. But even when I specify an unrestricted model for the between level (all observed variables simply correlated with each other, no latent variables), I get the same error message.
The unrestricted model you refer to is the most complex model in that it has many random effects.
Regarding the failure of your factor model, it may be that in the factor model, you need to fix some between-level residual variances that are very small to zero. You can send the output and data for the factor model to email@example.com if you want a more definitive answer.
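As a sketch of the suggestion above, fixing a small between-level residual variance at zero looks like this (factor and variable names hypothetical):

```
%BETWEEN%
  fb BY y1-y5;
  y3@0;   ! between-level residual variance of y3 fixed at zero
```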
Anonymous posted on Thursday, July 07, 2005 - 3:08 pm
Hello- although I have experience with one-level SEMs, I am new to MPLUS and two-level SEMs. I have read through the new manual (v 3.12) and all the workshop handouts, as well as this discussion board. Despite all of this, I am having a lot of problems getting my model to converge.
I have complete data from 75 couples, and all the variables are evenly distributed. All are on the same scale except one, and that is defined to be on a similar scale in comparison to the measures in the rest of the model. It is a modified health behavior change model looking at factors that predict condom use. All items are continuous. There are 2 latent variables (motivation, a mediator, and behavioral skills, which is exogenous) and 2 other measured variables (hivknow which is exogenous and cndpvp, which is the outcome variable and is condom use frequency). There are several measured variables for each latent variable as well.
I want to show that the couple level model predicts condom use well, if not better than the individual level model that is typically used in research. There are several couple level variables (e.g., intimacy) that I want to try out, once I get the model to run.
In trying to follow the 4 steps outlined in the Muthen (1994) article, the first overall model identified and converged.
When the between-model was run, however, it wouldn't converge. There were no negative error variances, so I first increased the iterations. These increases didn't help, so I set the starting values to the preliminary values obtained in the first run. It still didn't converge. Then, I set the starting values to being 1/2 the sample variance for each variable. This also didn't get the model to converge.
The intraclass correlations for the variables are all high, so I definitely want to model this at the between-dyads level (step 2).
I then tried to respecify the model at the dyadic level, making all the items load on a single latent variable, and having one of the items be a within-only item. I didn't use any starting values here because I wanted to see if changing the model would work. This still didn't converge, and there is now a negative error variance with one of the latent indicator variables. I tried fixing the starting values for this variable using the same strategies I described above, and this didn't help. I am now at a loss about what to do. Are there any other strategies that you might suggest?
I can email you the output/input and data if you would like. I have been using "free" input from a txt file and not any matrices. I tried to get the program to give me matrices using the save data functions, but no data appears in the files.
Sorry this is so long! I figured the more details you had the easier it would be to diagnose the problem. Thanks-
bmuthen posted on Thursday, July 07, 2005 - 3:32 pm
It sounds like we have to know more details about your 2-level modeling efforts to diagnose this. Please send your input, output, data, and license number to firstname.lastname@example.org.
Marc posted on Wednesday, October 26, 2005 - 1:46 am
I would like to use the TYPE=TWOLEVEL option in order to create a pooled within-group correlation matrix. This works fine with my total sample of n=525 observations within k=38 clusters. However, I would like to divide the sample in order to conduct EFA and CFA on different data sets. The resulting data sets still have k=38 clusters but only n=262 observations. With these data sets, the estimation of the pooled within-group correlation matrix doesn't converge due to a nonpositive covariance matrix. I tried to use the variance estimates from the complete data set as starting values for the smaller data sets, but there is still no convergence.
With 21 variables, you are trying to estimate 231 parameters. That is probably the problem. If you send your input, data, output, and license number to email@example.com, I can take a look at this.
Naomi Dyer posted on Tuesday, August 22, 2006 - 10:22 am
I am having the same issue when trying to model 13 latent variables with 3-4 observed variables for each latent variable. I have set many of the error variances to .02. Before I continue setting error variances or other parameters, I would like to know whether this is likely to help given how many indicators (about 51) and latent variables I have. And I need to allow for covariances between the latent factors. In sum, should I break the model up into sets of 4, 4, and 5 latent variables in order for it to converge? Thanks
This is a little hard to diagnose without seeing the output. When you say that you are setting error variances to 0.02, it sounds like you are talking about between-level error variances, which are often small and can be set at zero. This certainly helps computations when outcomes are categorical. If the outcomes are not continuous but, say, categorical, having many factors makes estimation (which is by ML) computationally intractable. The best approach here is to send input, output, data, and license number to firstname.lastname@example.org.
I am attempting to estimate a fairly simple mediational model from an intervention that was run in groups. There are 2 exogenous variables, 5 mediators, and 2 outcome variables. I am using TYPE=COMPLEX and the MLR estimator to account for clustering due to group membership. We have estimated such models before in previous versions of Mplus with no problems. But with version 5.1, the model will not converge and I get the following message regarding nonconvergence:
THIS IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF CLUSTERS.
I have tried variations on the model, but continue to get this message. Could you advise regarding this error message?
I am trying to run an SEM model with many variables (140+). These form a sensible 27-factor solution. These 27 factors are supposed to form 7 additional constructs, and those constructs will then be regressed on a categorical variable. I have 750+ respondents. When I try to fit the whole model, it doesn't converge, even after relaxing the convergence criterion and adding iterations. So, following the User's Guide, Chapter 13, page 382, I ran separate models and found a solution for each of the constructs. I do get answers for those. Now how can I specify these as starting values in the bigger model? Even if I cannot have all 27 factors, since some of them will have no real impact on the dependent variable, at least 12 of them do play a role.
I don't think starting values is the answer. I think it is more likely that you have some variables with large variances. If this is the case, recode the variables by using DEFINE to divide them by a constant such that their variances are between one and ten. Another problem may be that the first factor indicator which is fixed at one to set the metric may not actually have a factor loading close to one. If you free the first factor loadings and fix the factor variances to one, you can see if this is the case.
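The check described in the reply above can be written as follows (factor and variable names hypothetical):

```
MODEL:
  f BY y1* y2-y5;   ! the * frees the first loading (default is fixed at one)
  f@1;              ! fix the factor variance at one to set the metric
```

If the freely estimated first loading turns out to be far from one, that indicator was a poor choice for setting the metric in the original parameterization.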
yin fu posted on Thursday, September 29, 2011 - 2:07 am
Dear Drs Muthen,
I have a simple model with two independent variables and one dependent variable on level one. To simplify things I created indices from the items and treated the latent variables as manifest ones. Now I would like to include a fixed error variance, e.g., %within% x_with by ind_x; email@example.com, %between% x_bet by ind_x; ind_x@0;
This works fine for the dependent variable, but when I implement it for the independent variable, the iterations stop immediately, without any error message.
What did I do wrong?
In the Mplus code from the Marsh et al. (2009) paper, I read the ANALYSIS command GHFIML=OFF; used when implementing latent variables with TYPE=TWOLEVEL RANDOM. What does it mean? I've tried it, but then my model is not identified.
Hello, I ran a two-level model with fixed effects with no problems. I am now trying to run the same model with random effects, but I can't get the model to converge, or at least I believe that is the problem. The output file says "input reading terminated normally," which I believe means I have no syntax errors, but then no results are shown. I requested TECH1, TECH3, TECH5, and TECH8 output. Does this mean the model won't converge, or is there another problem? Thank you in advance.
I'm new to MPlus and two-level modelling and am trying to specify a two-level mediation model, and it keeps failing to converge.
These variables have all been standardized in SPSS. The model attempted to test whether there is a ZAGENCY-mediated relationship between ZFUSION and ZLEADING. The GROUP variable delineates experimental groups.
I keep receiving the error message that:
THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
TITLE: two-level CFA with continuous factor indicators
DATA: FILE IS ROMdata-missing.dat;
VARIABLE: NAMES ARE g1 clus y1-y25;
  USEVARIABLES ARE clus y1-y25;
  MISSING = *;
  CLUSTER = clus;
ANALYSIS: TYPE = TWOLEVEL RANDOM;
  ALGORITHM = EM;
MODEL:
  %WITHIN%
  fw1 BY y1-y10;
  fw2 BY y11-y25;
  %BETWEEN%
  fb1 BY y1-y10;
  fb2 BY y11-y25;
But the model does not converge; the output shows:
THE ESTIMATED BETWEEN COVARIANCE MATRIX IS NOT POSITIVE DEFINITE AS IT SHOULD BE. COMPUTATION COULD NOT BE COMPLETED. PROBLEM INVOLVING VARIABLE Y11.
THE CORRELATION BETWEEN Y11 AND Y3 IS 1.000
THE CORRELATION BETWEEN Y12 AND Y3 IS 1.003
THE CORRELATION BETWEEN Y24 AND Y3 IS 1.006
THE RESIDUAL CORRELATION BETWEEN FB2 AND FB1 IS 1.006
THE PROBLEM MAY BE RESOLVED BY SETTING ALGORITHM=EM AND MCONVERGENCE TO A LARGE VALUE.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
THE H1 MODEL ESTIMATION DID NOT CONVERGE. SAMPLE STATISTICS COULD NOT BE COMPUTED. INCREASE THE NUMBER OF H1ITERATIONS.
Can you please tell me how to resolve the problem?
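For reference, the settings that the error message points to would look like this in the ANALYSIS command (the MCONVERGENCE value shown is illustrative only, not a recommendation):

```
ANALYSIS:
  TYPE = TWOLEVEL RANDOM;
  ALGORITHM = EM;
  MCONVERGENCE = 0.01;   ! a relaxed (larger) value, as the message suggests
```

Relaxing MCONVERGENCE loosens the derivative-based convergence criterion for the EM algorithm; note that correlations of 1.0 on the between level usually also signal that the between-level model itself needs to be simplified.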
I am running a multilevel CFA (49 variables and 6 factors). When I run the input file, it looks like the analysis is in progress, but it never shows an output file. Is this a sign of non-convergence? If so, what could be the reason? Thank you
Model:
  %Within%
  S | AggrJust on Tid;
  %Between%
  AggrJust;
  S;
  AggrJust with S@0;
  ![AggrJust];
  !(aa);
  ![S] (bb);
!Model constraint:
!  New (MeanI MeanS);
!  MeanS = exp(bb);
!  MeanI = exp(aa);
When I label the intercept and slope means in order to use MODEL CONSTRAINT, I run into some problems. Naming the slope mean is fine, and exp(bb) under MODEL CONSTRAINT works fine. However, when I use the label for the mean aggression level, [AggrJust], the model runs into convergence problems with error messages and the model estimates change. The number of free parameters is still the same.
The problem persists without the MODEL CONSTRAINT commands, with only the [AggrJust]; command.
The following model wouldn't converge. The model is based on the #7 LMS model from Preacher, Zhang & Zyphur (2016). I've made sure it's not a data problem and even changed the number of iterations, but it still wouldn't converge. Can you help me with this?
title: nurse test
data: file is "c:\b.csv";
variable: names = idnum day x y z;
  missing = all (999);
  USEVARIABLES ARE idnum x y z;
  CLUSTER IS idnum;
ANALYSIS: TYPE IS TWOLEVEL RANDOM;
  ESTIMATOR IS MLR;
  ALGORITHM IS INTEGRATION;
  INTEGRATION IS 5;
MODEL:
  %WITHIN%
  xw BY x@1; xw*.7; x@.01;
  zw BY z@1; zw*.7; z@.01;
  xzw | xw XWITH zw;
  xw WITH zw*.1;
  y ON xw*.1 zw*.3; y*.7;
  ywx BY; ywx ON xzw@1; ywx@0;
  s | y ON ywx;
  %BETWEEN%
  xb BY x@1; xb*.7; x@.01;
  zb BY z@1; zb*.7; z@.01;
  y ON xb*.2 zb*.2;
  xb WITH zb*.1; y*.7;
  [x@0 z@0 y*.1 xb*0 zb*0 s*.2];
  s*.2;
  s WITH y*0 xb*0 zb*0;
I have a question regarding computation times for a multilevel CFA model. It seems that the model is converging, but it is taking a very long time. I'm currently in the 24th hour of running the model, and still only just starting the bivariate estimation part.
It is a 3 factor model where all the variables are categorical. No covariates at either level. There are about 70,000 level 1 units (individuals) and about 1,380 level 2 units (groups).
Is this computation time typical, or is there a more efficient approach I could use?
Here is the syntax I ran:
DATA: FILE IS mei_cfa_fullv3.csv;
VARIABLE: NAMES ARE u1-u24 clus;
  CATEGORICAL = u1-u24;
  CLUSTER = clus;
  MISSING = ALL (900);
ANALYSIS: TYPE = TWOLEVEL;
  ESTIMATOR = WLSMV;
  INTEGRATION = MONTECARLO(500);
MODEL:
  %WITHIN%
  fw1 BY u1-u4;
  fw2 BY u5-u16;
  fw3 BY u17-u24;
  %BETWEEN%
  fb1 BY u1-u4;
  fb2 BY u5-u16;
  fb3 BY u17-u24;
OUTPUT: STAND;
One reason it takes a long time is because of the 70,000 individuals taken together with the necessary numerical integration. You could start off with a random subsample to give you better starting values (saving estimates using SVALUES).
You should ask for TECH5 and TECH8 output so you get screen printing that tells you how the iterations progress. Also, why did you choose MonteCarlo integration?
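The subsample-then-starting-values idea from the reply above can be sketched in two steps (the subsample file name is hypothetical):

```
! Step 1: run the same model on a random subsample, e.g.
!   DATA: FILE IS mei_cfa_subsample.csv;
! and request the estimates formatted as starting values:
OUTPUT: SVALUES;

! Step 2: copy the MODEL command printed in the SVALUES section
! of that output into the full-sample input, so the full run
! starts from the subsample estimates instead of the defaults.
```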
Hi, I conducted a multilevel analysis with a continuous outcome. First I ran a CFA and then used the score from the CFA as one of the independent variables. My model converged, but the estimate of the effect on the dependent outcome is 0.0000. However, when I added the main outcome at the within level, the coefficient was reported successfully. I am not sure whether this depends on the variation of the outcome or on some other problem. Best regards, Jintana
Please send relevant outputs to Support along with your license number.
S REN posted on Saturday, February 24, 2018 - 10:11 am
Hi. Although I get the following messages, I still obtain parameter estimates for the hypothesised relationships between my study variables. In this case, should I still worry about these messages? Thank you.
BTW, I did fix the starting value of parameter 35 to a smaller one, but that did not eliminate the warnings.
WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH. AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE. THE CONDITION NUMBER IS -0.389D-01. THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES OR BY USING THE MLF ESTIMATOR.
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.564D-18. PROBLEM INVOLVING PARAMETER 35.
THE NONIDENTIFICATION IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF CLUSTERS. REDUCE THE NUMBER OF PARAMETERS.
We want to conduct a 2-level analysis with random intercepts and random slopes in Mplus 8 (i.e., individuals nested in countries). All variables are continuous and latent and we are interested in 3-way interactions between 2 within-level variables and 1 between-level variable (i.e., cross-level latent variables interactions). Unfortunately, we received the following error message:
FATAL ERROR THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. THE ANALYSIS REQUIRES 9 DIMENSIONS OF INTEGRATION RESULTING IN A TOTAL OF 0.38443E+11 INTEGRATION POINTS. THIS MAY BE THE CAUSE OF THE MEMORY SHORTAGE. YOU CAN TRY TO REDUCE THE NUMBER OF DIMENSIONS OF INTEGRATION OR THE NUMBER OF INTEGRATION POINTS OR USE INTEGRATION=MONTECARLO WITH FEWER NUMBER OF INTEGRATION POINTS SUCH AS 500 OR 5000.
We unsuccessfully tried all proposed solutions except for reducing the number of integration dimensions (i.e., delete random slopes) because we are interested in cross-level interactions and the model should be identical to our original model with observed variables.
Are there any other things we can do to get the model with multilevel latent moderated structural equations running? Would increasing main storage from 64GB to 128GB be a solution? We very much appreciate your help.