

Input statements for a model with multiple interactions are below. When run, I get the error message:

ERROR in Model command
The model specified with the following set of MODEL statements is not supported for TYPE = RANDOM:
F6XFSUP | F6 XWITH FSUP
F6 ON F1XFSUP
NPARPRO ON F6XFSUP
APARPRO ON F6XFSUP

Can you tell me more about why the statements aren't supported and whether there is an alternative way to estimate the model? Thank you!

Analysis:
Type = Random;
Algorithm = Integration;
Integration = Monte;

Model:
Fsup by t1emotsu t1instsu t1infosu;
F1 by q21a q21b q21c q21d;
F6 by F3 F4 F5;
F6 on F1 Fsup;
F6 on lifevent Fsup;
nparpro on F6 Fsup;
aparpro on F6 Fsup;
f1xfsup | F1 xwith Fsup;
F6 on f1xfsup;
lexfsup | lifevent xwith Fsup;
F6 on lexfsup;
f6xfsup | F6 xwith Fsup;
nparpro on f6xfsup;
aparpro on f6xfsup;


Please send the full output and data if possible to support@statmodel.com. 


Thank you for the assistance with my previous question. I obtained estimates for the first two interactions, but noticed that the condition number is 0.384E-07. According to the Mplus manual, this suggests that the model is not identified. Does this mean that the estimates of the interactions are not reliable? 


I'm afraid I need you to send your output to support@statmodel.com. I need to see the entire output to answer a question like this. 


On the topic of latent variable interactions and numerical integration: the Mplus manual states that the default is 15 integration points per dimension. I have 2 dimensions but 225 integration points (as default). Is that a special default for latent interaction models? Also, the Mplus manual states (p. 327) that large negative values in the ABS Change column indicate that I should increase the number of integration points. What is considered "large" here? I have a few that are -700, but most vary between -200 and +200. If I should increase the number of integration points, by how much would you recommend? Thank you! Scott 
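As a side note on the arithmetic: the total number of integration points is the per-dimension count raised to the number of dimensions, so 2 dimensions at the default of 15 points each gives 15^2 = 225, and 4 dimensions would give 15^4 = 50,625. It is not a special default for interaction models. A sketch of the relevant setting (the per-dimension value shown is just the default, not a recommendation):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  INTEGRATION = STANDARD (15);  ! points per dimension; total = 15^d for d dimensions
```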


To add to the previous post: I am now noticing that the program is running through 233+ integration points although the MS-DOS window (through TECH8) states that the total number of integration points is set at 225. Is this normal? When should I expect the program to finish running? Scott 


Sorry, never mind the previous posts from me (except for perhaps the ABS-related question); I was confusing the number of iterations for the EM algorithm with the number of integration points. Now I have received this (see below) in the output. Any advice? Parameter 51 refers to the covariance between the two exogenous variables that are specified to interact in my model. I do have a small negative residual variance for one variable.

MAXIMUM LOGLIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS -7042.899
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A CHANGE IN THE LOGLIKELIHOOD DURING THE LAST E STEP. AN INSUFFICENT NUMBER OF E STEP ITERATIONS MAY HAVE BEEN USED. INCREASE THE NUMBER OF MITERATIONS OR INCREASE THE MCONVERGENCE VALUE. ESTIMATES CANNOT BE TRUSTED. SLOW CONVERGENCE DUE TO PARAMETER 51. THE LOGLIKELIHOOD DERIVATIVE FOR THIS PARAMETER IS 0.35729705D+03 

bmuthen posted on Sunday, June 27, 2004 - 4:12 pm



Please send your input, output and data to support@statmodel.com 


Hi, I would like to know the technical side of how Mplus combines multi-item constructs when latent interaction variables are created. For instance, there are some methods proposed by Kenny and Judd (1984), Ping (1995), etc. Can you refer me to a few papers on this? 

bmuthen posted on Friday, March 25, 2005 - 9:00 am



See the Klein-Moosbrugger Psychometrika article on the Mplus web site. 


Thanks Dr. Muthen. 

Son K. Lam posted on Sunday, August 17, 2008 - 8:09 am



I'd like to know the norms for setting MITERATIONS. I realize the default is 500, but can it be set lower? Can we use the REL CHANGE provided in Mplus to determine the acceptable number of iterations? Thanks. 


Mplus first considers whether the absolute change falls below a small value, and when that is fulfilled, Mplus checks whether the derivatives of the parameters are close enough to zero. Using only relative change may not be a sufficiently stringent criterion for convergence. Setting MITERATIONS lower will result in Mplus complaining about nonconvergence, that is, the two criteria above have not been fulfilled. The settings for these two criteria can, however, be changed to less stringent values; see the UG. 
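As a sketch, the relevant ANALYSIS options might look like the following; the values shown are illustrative, not recommendations:

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  MITERATIONS = 1000;    ! cap on EM iterations (default 500)
  MCONVERGENCE = 0.001;  ! EM convergence criterion; larger = less stringent
```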


I am working on some SEM models with interactions using Mplus. I am new to testing interaction effects with Mplus. I find that the loglikelihood values are not necessarily comparable between a specification with Algorithm = Integration and a straightforward ML estimation. So in order to perform the likelihood ratio test, I am using the Algorithm = Integration specification for both the non-interaction and interaction models. Is that the right thing to do? Next, I find some minor differences in standard error computation when I specify Algorithm = Integration versus plain ML estimation. Could you shed light on why this happens? Any input will be highly appreciated. 


I think using TYPE=RANDOM for both gives you comparable loglikelihoods. The small differences in standard errors are most likely due to the fact that the convergence criteria differ between integration and no integration. 


Yes, thank you, I think that solves it. Is there a specific difference between the implementations of TYPE=RANDOM and other specifications that explains why there are differences in results? This is for my own understanding. Is there any paper you may have written that I can read to understand the computational methodology? I appreciate your help. 


The different convergence criteria are documented in the user's guide and can be changed. There is no paper describing this. 

Hans Leto posted on Monday, April 02, 2012 - 11:06 am



Hello. I just want to know if Mplus 5.21 can perform triple interactions. If so, would the command be as follows? f1xf2xf3 | f1 XWITH f2 XWITH f3; f4 ON f1xf2xf3; Thank you very much for your attention. 


If XWITH is available in Version 5.21, a three-way interaction is specified as: f1xf2 | f1 XWITH f2; f1f2f3 | f1xf2 XWITH f3; 

Hans Leto posted on Tuesday, April 03, 2012 - 1:00 pm



Thank you for your response. I need to test the effect of the three-way interaction, but I get the error "An interaction variable defined using XWITH must be used at least once on the right-hand side of an ON statement. No valid reference of: f1Xf2". I used the following commands: f1xf2 | f1 XWITH f2; f1f2f3 | f1xf2 XWITH f3; f4 ON f1f2f3; I think it is because f1xf2 is not used on the right-hand side of an ON statement, but I am only interested in the three-way interaction (f4 ON f1f2f3). Thank you in advance. 


You need to include the two-way interaction on the right-hand side of ON in addition to the three-way interaction. I think this is what you would want to do. 

Hans Leto posted on Wednesday, April 04, 2012 - 3:14 am



If I include the two-way interaction on the right-hand side of ON in addition to the three-way interaction, the result shows me the effect of the two-way plus the three-way on the factor on the left-hand side of the ON. I am only interested in the three-way effect. 


You should include the main effects, the two-way interactions, and the three-way interaction. 
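Putting the advice above together, a sketch of what the full specification might look like (factor and indicator names are hypothetical):

```
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f3 BY y7-y9;
  f4 BY y10-y12;
  f1xf2  | f1 XWITH f2;      ! two-way interactions
  f1xf3  | f1 XWITH f3;
  f2xf3  | f2 XWITH f3;
  f1f2f3 | f1xf2 XWITH f3;   ! three-way interaction
  f4 ON f1 f2 f3             ! main effects
        f1xf2 f1xf3 f2xf3    ! all two-way interactions
        f1f2f3;              ! the three-way interaction of interest
```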


I am trying to test interactions between a continuous latent variable and a categorical observed variable in a probit regression using the WLSMV estimator. I understand I can't use TYPE=RANDOM with this estimator. Is there another way of testing interactions with WLSMV? 


You can't test an interaction between an observed and a latent variable using WLSMV. This requires the XWITH option and TYPE=RANDOM. 
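A sketch of what the ML-based alternative might look like (variable names are hypothetical; LINK = PROBIT keeps the probit scale while estimating with maximum likelihood and numerical integration instead of WLSMV):

```
VARIABLE:
  CATEGORICAL = u;       ! u is the categorical dependent variable

ANALYSIS:
  ESTIMATOR = ML;        ! WLSMV does not support XWITH
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  LINK = PROBIT;         ! retain a probit link under ML

MODEL:
  f BY y1-y5;            ! f: continuous latent variable
  fx | f XWITH x;        ! interaction of latent f with observed x
  u ON f x fx;           ! main effects plus the interaction
```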


Hi, I am running a model with two latent variable interactions, and it seems to take forever to run. I tried reducing the number of integration points to 10. It has been two hours, and it is still running. Any tips? Is it conceptually OK to run the model with one interaction at a time (two different interactions, in two separate runs) and report the results? 


Just to add: It says dimensions of integration = 4 and number of integration points = 50625. 


I left it to run overnight, and it is still running. Any advice? 


Even though not optimal, I think it is a reasonable approximation to check for the significance of one interaction at a time before settling on the final model. Many interactions are not significant. If after this process you still have a model that doesn't converge, please send your input, output, and data to Support. 


Hi Bengt, I checked the significance of one interaction at a time, and the two interactions I am trying to fit in the model are both significant when assessed separately. I think it is very computationally heavy when both interactions are assessed together. The model seems to run forever. I left it to run overnight, so it would have been 12-15 hours. It still struggled to converge. I will send the input, output, and data to Support. 


Your model needs 4 dimensions of numerical integration as it says in the TECH8 screen printing. With the default of 15 integration points per dimension, you get over 50,000 points which gives very slow computations as the screen printing warns about. The remedy is to use integration = montecarlo(x); where x=500 gives a solution in 9 minutes on my computer and x=5000 takes 37 minutes and gives a bit more precise estimates. Note also that speed is substantially improved by using the parallel computing feature of processors = y; where my computer allowed y = 8. Your run, however, gave insignificant interaction effects when both were included. I don't know why the data gives this result. 
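For reference, a sketch of the settings described above (values as in the post; the number of processors depends on your machine):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO(5000);  ! 500 is faster, 5000 a bit more precise
  PROCESSORS = 8;                  ! parallelize across available cores
```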


Hi Bengt, many thanks for looking at my model for me. These tips have been very useful. I think the reason those two interactions aren't significant is that they are fighting for variance from the same outcome variable. I have changed the model slightly; in particular, they are now related to different outcome variables. I used: integration = montecarlo(5000); processors = 4; Two hours later when I checked, it had somehow hung at iteration 245 and would not progress. I then tried integration = montecarlo(500), and it did run, with an error message saying that increasing MITERATIONS and MCONVERGENCE might help the model run fully. I tried miterations = 1000, and it gave the same message. I am wondering whether I should try mconvergence = 0.01 together with montecarlo(5000)? Thanks. 


I have tried the following: montecarlo(1000); mconvergence = 0.01; It gave the following message:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A CHANGE IN THE LOGLIKELIHOOD DURING THE LAST E STEP. AN INSUFFICENT NUMBER OF E STEP ITERATIONS MAY HAVE BEEN USED. INCREASE THE NUMBER OF MITERATIONS OR INCREASE THE MCONVERGENCE VALUE. ESTIMATES CANNOT BE TRUSTED. SLOW CONVERGENCE DUE TO PARAMETER 65. THE LOGLIKELIHOOD DERIVATIVE FOR THIS PARAMETER IS 0.51554927D+00.

Can you suggest what I should do? 


You should read in the UG about numerical integration. Page 473 says "If the TECH8 output shows large negative values in the column labeled ABS CHANGE, increase the number of integration points to improve the precision of the numerical integration and resolve convergence problems." Your TECH8 output shows negative ABS changes which should not happen because that implies a decrease instead of an increase in the loglikelihood. It happens a lot in your run, which means that you won't get convergence as the large derivative in your error message indicates. So take the UG advice and increase to integration = montecarlo(x); where x should be chosen large enough (larger than the 500 you have there) so that you don't get negative ABS changes. It may need x=5000 and you just have to wait for it given that you don't have a really powerful computer for this type of challenging analysis. 


Hi Bengt, many thanks. I left it to run overnight, and it did converge this time with the following settings: integration = montecarlo(5000); miterations = 1500; processors = 4; However, it did say in the output:

WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH. AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE. THE CONDITION NUMBER IS 0.681D+00. THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES OR BY INCREASING THE NUMBER OF INTEGRATION POINTS OR BY USING THE MLF ESTIMATOR.

Does this mean that the results cannot be trusted? Are you suggesting that 4 processors aren't enough for this type of run? 


If you obtain standard errors, the results can be trusted. The more processors you have the faster the analysis. 
