Multiple Interactions
Mplus Discussion > Structural Equation Modeling
 Sandra Lyons posted on Tuesday, May 18, 2004 - 8:18 pm
Input statements for a model with multiple interactions are below. When run, I get the error message:

ERROR in Model command
The model specified with the following set of MODEL statements is not
supported for TYPE = RANDOM:
F6XFSUP | F6 XWITH FSUP
F6 ON F1XFSUP
NPARPRO ON F6XFSUP
APARPRO ON F6XFSUP

Can you tell me more about why the statements aren't supported and whether there is an alternative way to estimate the model?

Thank you!

Analysis:
Type = Random;
Algorithm = Integration;
Integration = Monte;
Model:
Fsup by t1emotsu t1instsu t1infosu;
F1 by q21a q21b q21c q21d;
F6 by F3 F4 F5;
F6 on F1 Fsup;
F6 on lifevent Fsup;
nparpro on F6 Fsup;
aparpro on F6 Fsup;
f1xfsup | F1 xwith Fsup;
F6 on f1xfsup;
lexfsup | lifevent xwith Fsup;
F6 on lexfsup;
f6Xfsup | F6 xwith Fsup;
nparpro on f6Xfsup;
aparpro on f6xfsup;
 Linda K. Muthen posted on Tuesday, May 18, 2004 - 8:33 pm
Please send the full output and data if possible to support@statmodel.com.
 Sandra Lyons posted on Wednesday, June 09, 2004 - 5:26 pm
Thank you for the assistance with my previous question. I obtained estimates for the first two interactions, but notice that the condition number is .384E-07. According to the Mplus manual, this suggests that the model is not identified. Does this mean that the estimates of the interactions are not reliable?
 Linda K. Muthen posted on Wednesday, June 09, 2004 - 9:36 pm
I'm afraid I need you to send your output to support@statmodel.com. I need to see the entire output to answer a question like this.
 Scott Weaver posted on Saturday, June 26, 2004 - 7:14 pm
On the topic of latent variable interactions and numerical integration: the Mplus manual states that the default is 15 integration points per dimension. I have 2 dimensions but 225 integration points (as the default). Is that a special default for latent interaction models?
Also, the Mplus manual states (p. 327) that large negative values in the ABS Change column indicate that I should increase the number of integration points. What is considered "large" here? I have a few that are -700, but most vary between -200 and +200. If I should increase the number of integration points, by how much would you recommend?
Thank you!
Scott
 Scott Weaver posted on Saturday, June 26, 2004 - 7:35 pm
To add to the previous post: I am now noticing that the program is running through 233+ integration points, although the MS-DOS window (through TECH8) states that the total number of integration points is set at 225. Is this normal? When should I expect the program to finish running?
Scott
 Scott Weaver posted on Sunday, June 27, 2004 - 12:23 am
Sorry - never mind my previous posts (except perhaps for the ABS-related question) --- I was confusing the number of iterations for the EM algorithm with the number of integration points.

Now I have received this (see below) in the output. Any advice? Parameter 51 refers to the covariance between the two exogenous variables that are specified to interact in my model. I do have a small negative residual variance for one variable.

MAXIMUM LOG-LIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS -7042.899

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A CHANGE IN THE
LOGLIKELIHOOD DURING THE LAST E STEP.

AN INSUFFICENT NUMBER OF E STEP ITERATIONS MAY HAVE BEEN USED. INCREASE
THE NUMBER OF MITERATIONS OR INCREASE THE MCONVERGENCE VALUE. ESTIMATES
CANNOT BE TRUSTED.
SLOW CONVERGENCE DUE TO PARAMETER 51.
THE LOGLIKELIHOOD DERIVATIVE FOR THIS PARAMETER IS -0.35729705D+03
 bmuthen posted on Sunday, June 27, 2004 - 10:12 pm
Please send your input, output and data to support@statmodel.com
 Girish Mallapragada posted on Friday, March 25, 2005 - 2:54 pm
Hi,

I would like to know the technical side of how Mplus combines multi-item constructs when latent interaction variables are created. For instance, there are some methods proposed by Kenny and Judd (1984), Ping (1995), etc.

Can you refer me to a few papers on this?
 bmuthen posted on Friday, March 25, 2005 - 3:00 pm
See the Klein-Moosbrugger Psychometrika article on the Mplus website.
 Girish Mallapragada posted on Sunday, March 27, 2005 - 12:45 pm
Thanks Dr. Muthen.
 Son K. Lam posted on Sunday, August 17, 2008 - 2:09 pm
I'd like to know the norms for setting MITERATIONS. I realize the default is 500, but can it be set lower? Can we use the REL CHANGE provided in Mplus to determine the acceptable number of iterations? Thanks.
 Bengt O. Muthen posted on Sunday, August 17, 2008 - 5:38 pm
Mplus first considers whether the absolute change falls below a small value, and when that is fulfilled, Mplus checks whether the derivatives of the parameters are close enough to zero. Using only relative change may not be a sufficiently stringent criterion for convergence. Setting MITERATIONS lower will result in Mplus complaining about non-convergence, that is, that the two criteria above have not been fulfilled. The settings for these two criteria can, however, be changed to less stringent values - see the UG.
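[Ed. note: these convergence settings live in the ANALYSIS command. A minimal sketch, with illustrative values only, not recommendations:

Analysis:
Type = Random;
Algorithm = Integration;
miterations = 1000;    ! maximum number of EM iterations
mconvergence = 0.0001; ! illustrative, less stringent than the default

See the user's guide for the default values of these options.]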
 Narasimhan Sowmyanarayanan posted on Monday, July 19, 2010 - 1:48 pm
I am working on some SEM models with interactions using Mplus. I am new to testing interaction effects with Mplus. I find that the loglikelihood values are not necessarily comparable between a specification with Algorithm = Integration and a straightforward ML estimation. So in order to perform the likelihood ratio test, I am using the Algorithm = Integration specification for both the non-interaction and interaction models. Is that the right thing to do?

Next, I find some minor differences in standard error computation when I specify Algorithm = Integration versus ML estimation. Could you shed light on why this happens?

Any input will be highly appreciated.
 Linda K. Muthen posted on Monday, July 19, 2010 - 2:19 pm
I think using TYPE=RANDOM for both gives you comparable loglikelihoods.

The small differences in standard errors are most likely due to the fact that the convergence criteria differ between integration and no integration.
 Narasimhan Sowmyanarayanan posted on Monday, July 19, 2010 - 4:41 pm
Yes, thank you - I think that solves it. Is there a specific difference between the implementations of TYPE=RANDOM and other specifications that explains why the results differ? This is for my own understanding. Is there a paper you have written that I can read to understand the computational methodology? I appreciate your help.
 Linda K. Muthen posted on Tuesday, July 20, 2010 - 2:23 pm
The different convergence criteria are documented in the user's guide and can be changed. There is no paper describing this.
 Hans Leto posted on Monday, April 02, 2012 - 5:06 pm
Hello. I just want to know whether Mplus 5.21 can estimate three-way interactions. If so, would the command be as follows?

f1xf2xf3 | f1 XWITH F2 XWITH f3;
f4 on f1xf2xf3;

Thank you very much for your attention.
 Linda K. Muthen posted on Monday, April 02, 2012 - 6:23 pm
If XWITH is available in Version 5.21, a three-way interaction is specified as:

f1xf2 | f1 XWITH F2;
f1f2f3 | f1xf2 XWITH f3;
 Hans Leto posted on Tuesday, April 03, 2012 - 7:00 pm
Thank you for your response.

I need to test the effect of the three-way interaction but it gives me an error "An interaction variable defined using XWITH must be used at least once on the right-hand side of an ON statement. No valid reference of: f1Xf2"

I used the following command:

f1xf2 | f1 XWITH F2;
f1f2f3 | f1xf2 XWITH f3;
f4 ON f1f2f3;

I think this is because f1xf2 is not used in an ON statement, but I am interested only in the three-way interaction (f4 ON f1f2f3).

Thank you in advance.
 Linda K. Muthen posted on Tuesday, April 03, 2012 - 10:10 pm
You need to include the two-way interaction on the right-hand side of ON in addition to the three-way interaction. I think this is what you would want to do.
 Hans Leto posted on Wednesday, April 04, 2012 - 9:14 am
If I include the two-way interaction on the right-hand side of ON in addition to the three-way interaction, the result shows me the effect of the two-way plus the three-way interaction on the factor on the left-hand side of the ON.

I am only interested in the three-way effect.
 Linda K. Muthen posted on Wednesday, April 04, 2012 - 7:26 pm
You should include the main effects, both two-way interactions, and the three-way interaction.
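[Ed. note: following this advice, a minimal MODEL sketch might look like the following. It assumes f1-f4 are defined earlier by BY statements; the extra two-way interactions f1xf3 and f2xf3 are added here for completeness.

f1xf2 | f1 XWITH f2;
f1xf3 | f1 XWITH f3;
f2xf3 | f2 XWITH f3;
f1f2f3 | f1xf2 XWITH f3;
f4 ON f1 f2 f3 f1xf2 f1xf3 f2xf3 f1f2f3;

The coefficient of f1f2f3 in the last statement is the three-way effect of interest.]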
 Emily Midouhas posted on Thursday, December 06, 2012 - 4:18 pm
I am trying to test interactions between a continuous latent variable and a categorical observed variable in a probit regression using the WLSMV estimator. I understand I can't use TYPE=RANDOM with this estimator. Is there another way of testing interactions with WLSMV?
 Linda K. Muthen posted on Thursday, December 06, 2012 - 7:36 pm
You can't test an interaction between an observed and latent variable using WLSMV. This requires the XWITH option and TYPE=RANDOM.
 Sabrina Thornton posted on Friday, November 22, 2013 - 6:03 pm
Hi,

I am running a model with two latent variable interactions, and it seems to be taking forever to run. I tried reducing the number of integration points to 10. It has been two hours, and it is still running. Any tips?

Is it conceptually OK to run the model with one interaction at a time (the two different interactions in two separate runs) and report the results?
 Sabrina Thornton posted on Friday, November 22, 2013 - 7:08 pm
Just to add:

It says dimensions of integration = 4 and number of integration points = 50625.
 Sabrina Thornton posted on Saturday, November 23, 2013 - 7:35 am
I left it to run overnight, and it is still running. Any advice?
 Bengt O. Muthen posted on Saturday, November 23, 2013 - 12:02 pm
Even though it is not optimal, I think it is a reasonable approximation to check the significance of one interaction at a time before settling on the final model. Many interactions are not significant.

If after this process you still have a model that doesn't converge, please send your input, output, and data to Support.
 Sabrina Thornton posted on Saturday, November 23, 2013 - 12:50 pm
Hi Bengt,

I checked the significance of one interaction at a time, and the two interactions I am trying to fit are both significant when assessed separately. I think the model is computationally heavy when both interactions are assessed together. It seems to run forever: I left it overnight, so it would have been 12-15 hours, and it still struggled to converge.

I will send the input, output, and data to Support.
 Bengt O. Muthen posted on Sunday, November 24, 2013 - 11:49 pm
Your model needs 4 dimensions of numerical integration, as the TECH8 screen printing says. With the default of 15 integration points per dimension, you get over 50,000 points (15^4 = 50,625), which gives very slow computations, as the screen printing warns. The remedy is to use

integration = montecarlo(x);

where x=500 gives a solution in 9 minutes on my computer and x=5000 takes 37 minutes and gives a bit more precise estimates. Note also that speed is substantially improved by using the parallel computing feature of

processors = y;

where my computer allowed y = 8.

Your run, however, gave insignificant interaction effects when both were included. I don't know why the data gives this result.
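[Ed. note: putting these suggestions together, the ANALYSIS command might read as follows; the point count x=5000 and processor count y=8 are the machine-dependent examples from the post above, not general recommendations.

Analysis:
Type = Random;
Algorithm = Integration;
Integration = montecarlo(5000);
Processors = 8;]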
 Sabrina Thornton posted on Monday, November 25, 2013 - 6:52 pm
Hi Bengt,

Many thanks for looking at my model. These tips have been very useful. I think the reason the two interactions aren't significant is that they are fighting for variance in the same outcome variable. I have changed the model slightly; in particular, the interactions are now related to different outcome variables. I used:

integration = montecarlo (5000);
processors = 4;

Two hours later when I checked, it had somehow hung at iteration 245 and would not progress. I then tried integration = montecarlo(500), and it did run, but with an error message saying that increasing MITERATIONS or the MCONVERGENCE value might help the model run fully. I tried miterations = 1000, and it gave the same message. I am wondering whether I should try mconvergence = 0.01 while using montecarlo(5000)?

Thanks.
 Sabrina Thornton posted on Monday, November 25, 2013 - 8:12 pm
I have tried the following:

integration = montecarlo(1000);
mconvergence = 0.01;

It gave the following message:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A CHANGE IN THE
LOGLIKELIHOOD DURING THE LAST E STEP.

AN INSUFFICENT NUMBER OF E STEP ITERATIONS MAY HAVE BEEN USED. INCREASE
THE NUMBER OF MITERATIONS OR INCREASE THE MCONVERGENCE VALUE. ESTIMATES
CANNOT BE TRUSTED.
SLOW CONVERGENCE DUE TO PARAMETER 65.
THE LOGLIKELIHOOD DERIVATIVE FOR THIS PARAMETER IS 0.51554927D+00.

Can you suggest what I should do?
 Bengt O. Muthen posted on Tuesday, November 26, 2013 - 4:39 pm
You should read in the UG about numerical integration. Page 473 says

"If the TECH8 output shows large negative values in the column labeled ABS CHANGE, increase the number of integration points to improve the precision of the numerical integration and resolve convergence problems."

Your TECH8 output shows negative ABS changes, which should not happen because it implies a decrease instead of an increase in the loglikelihood. This happens a lot in your run, which means that you won't get convergence, as the large derivative in your error message indicates. So take the UG advice and increase to

integration = montecarlo(x);

where x should be chosen large enough (larger than the 500 you have now) so that you don't get negative ABS changes. It may need x=5000, and you will just have to wait for it, given that you don't have a really powerful computer for this type of challenging analysis.
 Sabrina Thornton posted on Tuesday, November 26, 2013 - 7:54 pm
Hi Bengt,

Many thanks. I left it to run overnight, and it did converge this time with the following settings:

integration = montecarlo (5000);
miterations = 1500;
processors = 4;

However, it did say in the output that:

WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE
OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH.
AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE.
THE CONDITION NUMBER IS -0.681D+00.
THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE
MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES
OR BY INCREASING THE NUMBER OF INTEGRATION POINTS OR BY USING THE MLF ESTIMATOR.

Does this mean that the results cannot be trusted?

Are you suggesting that 4 processors aren't enough for this type of run?
 Linda K. Muthen posted on Tuesday, November 26, 2013 - 8:11 pm
If you obtain standard errors, the results can be trusted.

The more processors you have, the faster the analysis.