Anonymous posted on Wednesday, October 27, 2004 - 10:46 am
I have a latent continuous construct with indicators I treat as categorical (f1 by ...). I would like to examine "interactions" of variables like "education" with a variable "region" (binary: 0 and 1; does it matter for the effect whether "region" is coded 0-1 or 1-2?). In the syntax I have: ... Usevariables: ... xed (the regressor); ... Define: xed=x1(region)*x3(education); ... Model: f1 by y1-y4; f1 on x1 x3 xed;
I'm wondering whether this is everything needed to check the interaction?
Another question: if I check the main effect of x1 on f1 without the other variables, by modeling f1 on x1; I get the expected positive effect. If I estimate the model with the interaction, the main effect f1 on x1; suddenly becomes negative. Can you imagine why this might be the case?
Last question: how can I manage it if I have a second latent construct with categorical indicators (f2 by ...) and want to check the interaction between f2 and x1 in predicting f1? Is there something special to do in handling f2 in the syntax?
I think it is most common and easier to interpret when both variables are coded 0/1. The issues here are the same as coding issues in regular regression.
If you have a significant interaction, the main effect needs to be interpreted with the interaction. The main effect alone should not be interpreted when there is a significant interaction.
You would use the XWITH command for an interaction between an observed and latent variable.
Anonymous posted on Friday, October 29, 2004 - 3:40 am
Maybe you can tell me something about the usage of this XWITH command? So I do not use the DEFINE command as I do for other interactions and say Define: xint=x1*f2;? How do I prepare (transform) my latent variable (f2 by y1 y2 y3;) for an interaction with x1, so that it is possible to see whether there is an interaction effect on the other latent variable? Like: f1 on x1 f2 xint;
If I code x1 as 0-1, the effects are the same as with the 1-2 coding. Is this to be expected, with the 0-1 coding used just for easier interpretation, or is something wrong?
Regarding XWITH, I suggest reading what is in the Mplus User's Guide. There is a table that shows various types of interactions. DEFINE is not used for interactions unless both variables are observed.
Regarding coding, I suggest going to a regression text to read about the various types of coding available. Coding issues are the same as in regular regression.
Anonymous posted on Monday, November 01, 2004 - 8:37 pm
I was playing around with my new Mplus Version 3.1 software and successfully ran a structural model with an interaction term (two continuous latent variables). However, I noted that the output did not include general fit statistics such as RMSEA, SRMR, CFI, etc. - only the comparative fit statistics (BIC, etc.).
Am I doing something wrong here? I had my Output command set at STANDARDIZED and SAMPSTAT, as I had in my previous model without the interaction.
bmuthen posted on Monday, November 01, 2004 - 8:55 pm
The lack of overall model fit statistics with latent variable interactions in Mplus is due to the fact that they haven't been invented yet. For example, it is not clear what the "unrestricted model" should be in this case. In regular SEM models you have an unrestricted covariance matrix as H1, but that works only because regular models concern covariance matrix fitting, which is not the case with interactions: they give rise to non-normal outcomes for which sample covariance matrices are not sufficient statistics. There is work at the research frontier on developing fit statistics, but it is not here yet. In the meantime we have to do what statisticians mostly do, namely compare adjacent nested models using loglikelihood difference chi-square tests.
Anonymous posted on Tuesday, November 02, 2004 - 10:42 am
I did what Linda K. Muthen wrote at Oct. 28, 9:03: I used the XWITH command to check the interaction between x1 (region) and the latent construct in my one-group model. In the ANALYSIS command only Type is Random seems to be possible, so Standardized and Residual are not available. Is there any way to get these values? In the Tests of Model Fit there is a Loglikelihood H0 Value and some Information Criteria. How do I interpret these values and see whether the model fits or not?
bmuthen posted on Tuesday, November 02, 2004 - 12:05 pm
See my answer above (Monday, Nov 01, 2004 - 8:55 pm) - these matters are not yet resolved. Also read the Klein-Moosbrugger (2000) Psychometrika article and the Marsh et al. (2004) Psychological Methods article.
Anonymous posted on Tuesday, November 02, 2004 - 2:14 pm
Thank you for the articles. I noticed your answer from Nov 01 - 8:55 pm but ask myself how to "compare adjacent nested models using loglikelihood difference chi-square tests".
-2 times the loglikelihood difference (restricted minus unrestricted model) is distributed as chi-square.
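As a numeric sketch (in Python, with invented loglikelihood values purely for illustration), the test compares the H0 loglikelihoods of the model without and with the interaction:

```python
import math

# Hypothetical H0 loglikelihood values from two nested Mplus runs
logL_without = -2310.45  # model without the interaction (restricted)
logL_with = -2305.12     # model with the interaction added

# -2 times the loglikelihood difference is chi-square distributed,
# with df = difference in number of free parameters (1 here)
lr_stat = -2 * (logL_without - logL_with)

# For df = 1 the chi-square survival function has the closed form
# P(X > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(lr_stat / 2))

print(round(lr_stat, 2))  # 10.66
print(p_value < 0.05)     # True
```

With more than one added parameter the df = 1 closed form no longer applies; compare lr_stat to the chi-square critical value for the appropriate df instead.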
Anonymous posted on Friday, November 05, 2004 - 2:05 am
I have a question about the example from Oct. 27 - 10:46 am. If I want to check the interaction between education and region and want to use, for example, 'age' as a control variable, should I define its effect also in interaction with region, or only model its 'main effect' on f1? So: DEFINE: xed=x1*x3; MODEL: f1 by y1-y4; f1 on x1 x3 x4(age) xed;
or: DEFINE: xage=x1*x4; xed=x1*x3; MODEL: f1 by y1-y4; f1 on x1 x3 x4 xed xage;
to control for 'age' in examining the interaction of education and region?
LMuthen posted on Friday, November 05, 2004 - 7:47 am
I would run the model with the age interaction and include it if it is significant. There is really no rule that I know of. In ANOVA, all interactions are automatically examined.
Anonymous posted on Sunday, November 07, 2004 - 10:00 am
I tried to compute the effect on f1 (y1-y4) of the interaction of the latent construct f2 (y5-y7) with x1. Without control variables it was OK, as in my message from Nov 2. The computation takes much longer than for other models. If I want to use control variables, my model is: MODEL: f1 by y1 y2 y3 y5; f2 by y9 y10 y11; f2 on x1 x4 x15; f1 on x1 x4 x15 f2; x1xf2 | x1 XWITH f2; f1 on x1xf2; First it is shown that INPUT IS TERMINATED NORMALLY, but after the categories of the variables y1-y7 an ERROR appears:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD.
CONVERGENCE CRITERION FOR THE LATENT VARIABLE MIXTURE MODEL IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 1 IS -0.86936334D-01.
1. Can you imagine what is wrong here? 2. Is it possible to examine the model in another way, not as a mixture model?
Anonymous posted on Monday, November 08, 2004 - 4:24 am
If I examine an interaction involving a latent variable and I treat the indicators of the latent variable as categorical, with the XWITH command and Type is Random the estimator is MLR. Does Mplus nevertheless compute the model with the indicators treated as categorical? I tried to change the estimator to WLSMV, but it does not work.
Anonymous posted on Monday, November 08, 2004 - 9:42 am
I have a question about the example of Nov 05 - 2:05 am: how should I treat a mediator f2 with y5-y7 (treated as categorical) in predicting f1?
Should f2 also be regressed on the interactions xed and xage, or only on the variables x3 and x4? So, MODEL: f1 by y1-y4; f2 by y5-y7; f2 on x1 x3 x4 xage xed; x1xf2 | x1 XWITH f2; f1 on x1 x3 x4 xage xed f2 x1xf2;
Or without xage and xed on f2?
bmuthen posted on Sunday, November 14, 2004 - 11:38 am
Answer to Nov 08 - 04:24am.
Yes, the indicators will be treated as categorical here. Type = random is only available with ML estimators, not WLS estimators.
bmuthen posted on Sunday, November 14, 2004 - 11:41 am
Answer to Nov 08 - 09:42.
This is a substantive choice in the modeling that you have to decide, not a choice ruled by statistics. Try it both ways.
Anonymous posted on Monday, November 15, 2004 - 5:07 am
Thank you very much for your help. Do you have any suggestions to solve the problem described on Nov 07 - 10:00?
bmuthen posted on Monday, November 15, 2004 - 7:08 am
The problem described in the question of Nov 07-10:00 indicates that the estimate of parameter number 1 is hard to determine in the sense that a solution has not been found within the default number of iterations (derivatives are zero at the solution). You should check which parameter this is by looking at Tech1. You should also see what the parameter's value is when the iterations stopped - this might indicate that the parameter is moving towards an extreme value. The outcome you describe is often an indication that the model should be respecified with respect to this parameter.
Anonymous posted on Tuesday, November 16, 2004 - 9:12 am
The parameter was the factor loading of y2. I tried to fix it to one and free the loading of y1: ANALYSIS: TYPE IS RANDOM; MODEL: f1 by y1* y2@1 y3 y5; f2 by y9 y10 y11; f2 on x1 x4 x15; f1 on x1 x4 x15 f2; x1xf2 | x1 XWITH f2; f1 on x1xf2; OUTPUT: TECH1;
The errors that appeared were:
THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL,STARTING VALUES AND/OR THE NUMBER OF INTEGRATION POINTS.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
What I tried after that was to give starting values to the loadings, using the loadings from the construct validation. But the problem described on Nov 07 - 10:00 appears again, with different parameters depending on which loading is fixed. What I do not really understand is why it computes an interaction effect x1 XWITH f2 if there are no other variables (like x4 or x15...) in the model...
Anonymous posted on Wednesday, November 17, 2004 - 10:38 am
Thank you for the offer to send you my input etc. But with my last try I have new hope (maybe at some point I will come back to your offer). My last try was to include the interactions of x4 (xed=x1*x4) and x15 (xage=x1*x15) in the model, even though I expected problems, because without these interactions the errors from Nov 7 appeared. So my syntax was: (...) DEFINE: xsbr=x1*x4; xal=x1*x14; ANALYSIS: TYPE IS RANDOM; MODEL: f1 by y1 y2 y3 y5; f2 by y9 y10 y11; x1xf2 | x1 XWITH f2; f1 on x1 x4 x15 f2 xed xage x1xf2; f2 on x1 x4 x14 xed xal;
OUTPUT: TECH1 TECH8
With this syntax an output appeared. But because I do a one-group analysis with interactions involving x1, it would be better for interpretation to get the standardized coefficients. With the inclusion of x1xf2, however, only Type is Random (and no standardized output) is possible. How can I use the unstandardized effects for an interpretation?
Thank you once again for your help, and in general for this discussion area with much helpful information.
bmuthen posted on Wednesday, November 17, 2004 - 3:21 pm
I think the unstandardized coefficients are easily interpretable in line with regular linear regression: "for a one-unit change in x, the coefficient tells us how much y changes". This is an interpretation that is convenient when expressing the interaction results in terms of "moderator effects" - see our Day 5 handout and the example of interactions in math growth analysis.
You can also standardize the coefficients yourself. It is, for example, convenient to standardize with respect to the exogenous variables, which as in regular regression means that we multiply a coefficient by the standard deviation of the exogenous variable in question. In this case, a unit change in the exogenous variable is a 1 SD change, which is perhaps more readily interpretable.
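A minimal sketch of that hand standardization (the slope and variance below are hypothetical, not taken from any model in this thread):

```python
import math

b_unstd = 0.40  # hypothetical unstandardized slope of f1 on an exogenous x
var_x = 2.25    # hypothetical sample variance of x

# Multiply the slope by the SD of x: the new slope is the change
# in f1 for a 1 SD change in x
b_std_x = b_unstd * math.sqrt(var_x)

print(round(b_std_x, 3))  # 0.6
```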
Anonymous posted on Monday, November 22, 2004 - 6:20 pm
I tried to estimate an interaction model: f1 by y*1@1; (y*1 is a single categorical variable and I am fixing its value for identification)
I used TYPE = RANDOM since the program seems to require that statement. y*1 is set to categorical in the DATA section.
I tried to define int = f1*w1 in the DEFINE section, but since f1 is a latent (calculated) variable it would not work. The error message I received was:
"An interaction variable defined using XWITH must be used at least once on the right-hand side of an ON statement that is not part of a | statement. No valid reference of: INT." I thought I had done this on my last line. Am I doing something wrong in setting up this model?
I think it is the same thing as I posted. You named the interaction f1xw1 and he had named it int.
Anonymous posted on Tuesday, November 23, 2004 - 4:13 pm
Thank you for the suggestions. I don't see how putting int in the second equation twice (y2 on f1 w1 int w2 w3 w4 int) would help. Even if I get rid of the first-stage equation (f1 on x1 x2 x3 x4) and just run the model with the single-indicator latent variable interacted with the observed continuous variable, I still receive the same error message. I must be setting up the problem incorrectly, but I just don't see where.
You have int on your USEV list. This list is for observed variables in the analysis. When you form an interaction between a latent and observed variable, it is not an observed variable. DEFINE can only be used to create interactions between observed variables.
Anonymous posted on Friday, November 26, 2004 - 8:00 am
What do you think about including the interaction variables in the data? What I did was: I have the variables as in the example from Nov 7, f1 (y1-y4) and f2 (y5-y7), and the variable x1 (region), and I built interaction variables (x1 with the indicators of f2) which are already in the data: y8=x1*y5, y9=x1*y6 and y10=x1*y7. With the other interactions from Nov 17 (xed=x1*x4 and xage=x1*x14) I have the model: MODEL: f1 by y1-y4; f2 by y5-y7; f3 by y8-y10; f2 on x1 x4 x14 xed xage; f1 on x1 x4 x14 xed xage f2 f3;
OUTPUT: TECH1 TECH8
The computed values are not that different, and the same effects are significant as when I did:
x1xf2 | x1 XWITH f2; (instead of having f3 now).
The differences are mainly: 1. it was possible to take other variables into the model with the f3 approach, and 2. the elapsed computation time: with x1xf2, nearly 7 minutes; with f3, nearly 2 hours.
Is this nevertheless an alternative for computing interactions between an observed and a latent variable?
This is an ad hoc approach. The problem with it is that there will be estimation errors as in any stepwise procedure, but you will not know how well it works in any given case without comparing it to the one step approach.
Anonymous posted on Friday, November 26, 2004 - 10:12 am
Thank you for your response. My conclusion is that it does not make sense to compute a model with such an approach when it does not work with the one-step approach (which seems to be the XWITH one). And if you have to compute it every time with the one-step approach as well, the ad hoc one seems pointless.
Anonymous posted on Thursday, December 02, 2004 - 3:42 am
Looking at my output, I have a negative effect of x2 (coded 1-6) on f1, a positive effect of x1 (coded 0-1) on f1, and a negative interaction effect x1*x2 on f1. 1. Can I say that the negative effect of x2 on f1 is the effect when x1=0? 2. Can the interaction effect be interpreted such that the negative effect of x2 on f1 becomes stronger if x1 changes to 1, namely by adding (interaction effect * x1)?
bmuthen posted on Thursday, December 02, 2004 - 8:17 am
A useful way to interpret an interaction is by a "moderator" approach. In your case you would write your equation
f1 = b1*x1 + b2*x2 + b3*x1*x2
or equivalently
f1 = b1*x1 + (b2 + b3*x1)*x2
where you have b2 < 0 and b3 < 0. The second form indicates that x1 moderates the influence of x2 on f1.
This shows that if x1=0, b2 is the effect of x2 on f1, as you say. So yes to your question 1, and yes also to question 2.
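A small numeric illustration of this moderator reading (the coefficients are invented; any values with b2 < 0 and b3 < 0 behave the same way):

```python
b1, b2, b3 = 0.50, -0.30, -0.20  # hypothetical estimates

def effect_of_x2(x1):
    """Slope of x2 on f1 at a given x1: b2 + b3*x1."""
    return b2 + b3 * x1

# At x1 = 0 the slope of x2 is just b2; at x1 = 1 it is b2 + b3,
# so the negative effect becomes stronger
print(effect_of_x2(0))  # -0.3
print(effect_of_x2(1))  # -0.5
```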
Anonymous posted on Friday, December 10, 2004 - 1:37 am
On Oct 28, Linda K. Muthen wrote that a main effect alone should not be interpreted if there is a significant interaction...
What can be done if we say the "main" effect is the effect at a specific point, let's say (taking the example from Dec 02) the effect when x1=0?
Is it not possible to say, if this effect is significant but there is no significant interaction (as x1 changes to 1), that the effect is independent of x1 and can be interpreted as such?
Some people would take it out to make the model more parsimonious. Others would not. I probably would not.
Anonymous posted on Sunday, March 20, 2005 - 9:15 pm
Dear Prof. Muthen
I have been using Mplus 3.1 for fitting some SEM models with latent interactions. As you posted on the Mplus discussion list before, there is no goodness-of-fit measure for evaluating the interaction model itself, due to the lack of an "unrestricted model". This is the approach without creating indicators for the latent interaction term. However, in the literature there are other, more tedious approaches for fitting latent interaction models which require creating indicators for the latent interaction term and many parameter constraints (including the Kenny and Judd approach and the recent one by Marsh et al., 2004, Psychological Methods, 9(3), 275-300). If we use these alternative approaches to fit interaction models, we can obtain chi-square, RMSEA, CFI, and other goodness-of-fit measures. Are these GOF measures valid? Did I miss some important link here? I am confused about which approach we should take...
I would appreciate it if you could help me resolve this issue. Thanks a lot.
bmuthen posted on Monday, March 21, 2005 - 7:51 am
Good question. I am not sure we fully know the value of those fit indices when using these less efficient methods for lv interaction modeling. And I think we could develop fit indices for the ML lv interaction modeling used in Mplus. But until then, I think the best way to proceed is to do what is usually done in statistics - work with a series of nested models, looking at their loglikelihood differences (2 * logL diff is chi-square distributed). For instance, you can look at the regular fit indices for the model without interactions. Then you can look at the logL difference with the model where you add the interaction.
Anonymous posted on Monday, March 21, 2005 - 2:22 pm
Hi Prof. Muthen
Thank you for your comments. I have one follow-up question. I can see we can perform the loglikelihood difference test between the interaction model and the model without interaction using the Mplus approach (no indicators created).
For the other, more tedious approaches, we need to create product indicators for the latent interaction term, so the dimension of the variance-covariance matrix increases (compared to the model without interaction, where we don't include the product indicators). In this case, can I compare the models by a likelihood ratio test, AIC, or BIC? (The variance-covariance matrices differ between the interaction and "main effects" models - are they nested models?)
Thanks again for your help.
bmuthen posted on Monday, March 21, 2005 - 5:41 pm
The likelihood ratio test or AIC/BIC are not applicable for the "tedious approaches" since they do not give ML solutions.
Anonymous posted on Wednesday, June 08, 2005 - 12:17 pm
Hello. I am following your advice and am comparing two nested models using the log likelihood difference test. One question...I need the degrees of freedom for both models to calculate whether the chi-square is significant. However, I do not see the df's printed anywhere in the output. Can I just use the difference in the number of free parameters given in the output?
Anonymous posted on Thursday, July 07, 2005 - 9:07 pm
The path diagram for an SEM with an interaction between two continuous latent variables (example 5.13 in the user's guide) is different from examples of continuous latent variable interactions presented in other texts. Typically, the interaction is presented as an additional latent variable, whereas in your example it is represented only by an additional pathway stemming from the two main effects. This implies that a residual variance is not estimated for the interaction term. Similarly, the covariance of the interaction term with its constituent main effects is not estimated.
If I am estimating an SEM that includes an interaction between a latent exogenous variable and a latent endogenous variable, how should I label the interaction in the path diagram? I am using LISREL notation. It seems to me that I cannot label the interaction as "beta" or "gamma" since the interaction involves both endogenous and exogenous variables. I have thought of using something like iota_a,b.c, where "a" is a subscript denoting the endogenous variable influenced by the interaction and "b" and "c" are subscripts denoting the interacting variables. Does this sound reasonable, or do you have any other suggestions?
Because it involves an endogenous variable, I would think of it as endogenous, therefore beta, although what you call it doesn't really matter as long as you make it clear what your notation means.
Anonymous posted on Wednesday, July 20, 2005 - 2:16 pm
Thanks for your previous response (July 08, 2005 - 8:36 am). That was very helpful.
One further question regarding Klein and Moosbrugger's LMS approach for handling interactions involving latent variables. As I mentioned in my previous post, the interaction example in the Mplus User's Guide suggests that a residual variance is not estimated for the interaction term and that the covariance of the interaction term with each of its constituent main effects is similarly not estimated. This is consistent with discussions I have read on the LMS method. Is there a particular reason why the residual variance and covariances are not estimated? I may be wrong, but I thought that the Joreskog and Yang approach for interactions involved the estimation of these parameters.
I am not interested in the residual variance or covariances; however, I want to make sure that I am performing the estimation correctly. Klein and Moosbrugger used the Kenny-Judd model as an application, which involved an interaction between two latent exogenous variables. In my case, however, the interaction is between a latent exogenous variable and a latent endogenous variable. I do not believe that this should make any difference, though I am uncertain.
bmuthen posted on Wednesday, July 20, 2005 - 2:46 pm
That's right, with ML the latent variable interaction term does not introduce a mean, variance or covariance with other variables - only a regression slope. This is because the interaction is not a new, distinct variable, but merely the product of two existing latent variables. With other non-ML approaches, such as the Joreskog-Yang approach, the interaction is instead seen as a new variable, but that is due to the JY approach drawing on information from observed product variables; in contrast, the ML approach does not change the original set of observed variables.
Does Mplus include some sort of graphing capability that would allow one to visually depict a latent variable interaction?
I suppose one could use factor scores and depict the interaction in typical ways, but this doesn't seem like a great method given that it won't show the same relations as the latent variables do. It's also a problem when using FIML estimation with missing values, because many subjects would not have observed data from which to form factor scores.
Thanks for your input.
bmuthen posted on Saturday, October 08, 2005 - 1:57 pm
No, Mplus doesn't have a graphing capability for interactions. I agree that using factor scores would not be the best way to go. I would use the estimates from the interaction model to compute the effects of the key independent variable on the dependent variable when the moderator (the variable that the independent variable is interacting with) is at its mean and below and above the mean (say 1 SD) - this can also be graphed, outside Mplus.
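That post-processing can be done in a few lines outside Mplus; in this sketch the slopes and the moderator variance are hypothetical stand-ins for estimates from an interaction model output:

```python
import math

b_x = 0.45    # hypothetical slope of the key independent variable
b_int = 0.15  # hypothetical interaction (XWITH) slope
var_m = 0.81  # hypothetical variance of the moderator

sd_m = math.sqrt(var_m)

# Effect of x on the outcome at the moderator's mean (0 for a
# latent moderator) and at 1 SD below and above the mean
for m in (-sd_m, 0.0, sd_m):
    slope = b_x + b_int * m
    print(round(m, 2), round(slope, 3))
```

The three resulting slopes can then be plotted in any graphing tool.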
Anonymous posted on Thursday, December 01, 2005 - 9:25 am
Dear Mplus folks:
What would you make of the following situation?
A model with two latent predictors and their interaction (entered simultaneously) has an interaction term that is tiny and way short of conventional significance.
However, when the above model with the interaction term is compared to a model where the path from the interaction to the DV is constrained to zero, the comparison of the two models (via the 2* logL diff method; 1df) yields an easily significant difference.
Reflecting the small interaction term, the plot using
f1 = b1*x1 + (b2 + b3*x1)*x2
modeled at high, med, and low values of x1 and x2 does not suggest interaction.
It doesn't appear to me that there's an interaction, but the model is happier with the interaction term in. I'm not sure what this means.
I was interested in modeling interactions between four latent variables, using complex sample data and about 15 total indicators. [I'm still learning exactly how to do this.]
About how long should I expect MPlus to finish running the analyses? I've had a few tries of 2-3 hours without terminating and have had to stop the program myself.
How long should I expect this to take?
bmuthen posted on Thursday, December 22, 2005 - 8:55 am
If you have 4 latent variables interacting, you can end up with 6 pairwise interaction variables (I assume you don't mean a 4-way interaction). This is a very demanding task computationally. I would start by doing only one pair at a time and seeing whether the interaction has significant effects. You should request Tech8, which gives you screen output showing how many dimensions of integration you have. With 3 or more dimensions, I would use integration = montecarlo. The Tech8 screen output will tell you how the computations are proceeding.
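The count of six comes from the number of unordered pairs of four factors, e.g.:

```python
from itertools import combinations

factors = ["f1", "f2", "f3", "f4"]
pairs = list(combinations(factors, 2))  # all pairwise interactions

print(len(pairs))  # 6
print(pairs[0])    # ('f1', 'f2')
```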
David Bard posted on Sunday, October 01, 2006 - 8:12 pm
I'm running a Monte Carlo study for a GxE interaction twin model with ACE variance components and manifest environmental (X) variables (uncorrelated with one another and with all variance components). I've tried numerous approaches, 2 shown below. I get 1 of 2 error messages. Most often: covariance matrix not positive definite; occasionally: reciprocal interaction. All models are nonlinear regression equations, but the correlated A and C terms wreak havoc. I could simulate the covariance matrix of the phenotype alone, but the estimates for the full regression model are easier to guess than this covariance matrix (how does one compute the correlation between the main-effect A and the GxE interaction term?). Can you help?
David Bard posted on Sunday, October 01, 2006 - 8:14 pm
Syntax for above: 1 approach might be: MODEL POPULATION: A1 by y1*.32 (a);A2 by y2*.32 (a);C1 by y1*.63 (c);C2 by y2*.63 (c);E1 by y1*.71 (e);E2 by y2*.71 (e); A1-E2@1; [A1-E2@0];A1 with A2@1;C1 with C2@1;A1 with C1-E2@0;A2 with C1-E2@0;C1 with E1-E2@0;C2 with E1-E2@0;E1 with E2@0; X1-X2*1 (vx);[X1-X2*0] (mx);X1 with A1-E2@0;X2 with A1-E2@0;X1 with X2*0 (cx1); A1X | A1 xwith X1; A2X | A2 xwith X2; Y1 on X1*.5 (r1) !Main effect of X1; A1X*.05 (ax); !Interaction effect Y2 on X2*.5 (r1) A2X*.05 (ax); Y1-Y2@0.0001;[Y1-Y2*0] (my); Model Population-g2: A1 with A2@.5; 2nd approach (no xwith), changing the regression: Y1 on X1@0;s1 | Y1 on X1;s1 on A1*.05; s1@0;[s1*.5];Y2 on X2@0;s2 | Y2 on X2; s2 on A2*.05; s2@0;[s2*.5];
The message about a non-positive-definite covariance matrix will occur because the A and C factors are correlated 1, but that is harmless and should not hinder the analysis as long as the covariance matrix for the observed variables is positive definite, as it should be.
Dear Professor Muthén, I'm starting to use Mplus and I ran a model with interactions of latent variables. To interpret the effects of the interactions I consulted your handouts on this topic, in the "Advanced growth models" explanation, and I found a distinction between the unstandardized and standardized solutions, presenting very similar (or the same, approximated?) loading coefficients in the example formulas. My question is very basic: for the standardized solution, should I standardize the observed indicators of the latent variables? Or should I consider the latent variables as standardized in computing the formula? Thank you very much.
Say that you have y on f, where f is the latent variable (factor). To standardize the slope with respect to f, you simply multiply the slope by the estimated standard deviation (square root of the variance) of f. The new slope then says how many units y changes for a 1 SD change in f.
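For instance (with a hypothetical slope and factor variance):

```python
import math

b_yf = 0.35   # hypothetical unstandardized slope of y on f
var_f = 1.44  # hypothetical estimated variance of f

# Multiply the slope by f's SD: the result is the number of units
# y changes for a 1 SD change in f
b_yf_std = b_yf * math.sqrt(var_f)

print(round(b_yf_std, 3))  # 0.42
```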
Dear all, I would like to test an interaction effect
int | F1 xwith F2
where F1 is a latent variable with continuous indicators and F2 is a TWO-dimensional latent variable (with formative dimensions, each dimension with 3 reflective continuous indicators).
Does it make sense to test an interaction effect with a two-dimensional latent variable? I read in a German publication that unidimensionality is necessary when estimating an interaction effect with the Latent Moderated Structural equations (LMS) method (which, as far as I know, is the default in Mplus).
I think you are saying that F2 is really 2 factors that share some items. If F2 is really 2 factors, each factor should be used to create a separate interaction effect. I can't think of a problem with using 2 factors that share some items.
I am working on a SEM model with latent interaction. I would like to get an effect size for the effect of the interaction term. Can you suggest a formula I can use to compute it?
Another question. I would like to partial out the effect of an exogenous latent variable from the first order (latent) factors before the interaction term is computed. Can I do this with Mplus? My understanding is that the interaction term is computed as a first step, but I might be wrong...
It is not clear how the interaction would be standardized. I have not seen this presented.
Timothy posted on Thursday, April 30, 2009 - 10:50 am
Hi, I have a question about interaction. I found a significant interaction. Then, the next step is to understand the nature of the significant interaction term. How can I examine the simple main effects in Mplus?
I am doing a conditional LGM with multiple indicators. I have three 0/1-coded independent variables. I'm interested in the three-way interaction of these dummies, and I actually do get a significant interaction term. But I'm not sure about the robustness of the results, because the "1/1/1 situation" has just a few cases (n=13). Do you have any suggestions for dealing with this problem? Does it make sense at all to do the analysis with the interaction term? Further, I would guess that a subsequent multiple-group analysis (boys vs. girls) will be even more problematic.
Thanks for your answer. Do you have any suggestions, or do you know of a paper that deals with the problem of small n in connection with dummy interactions? Is there a rule of thumb? For example, for an interaction of two 0/1 dummies, how many cases should be in the 1/1 situation?
I conducted a latent interaction analysis with continuous factor indicators and compared the -2LL value of the model without the latent interaction term with the -2LL value of the model with the interaction, according to recommendations.
Got the paper back with one reviewer claiming one should not compare these two models using the loglikelihood because they are not nested.
I saw in an earlier response a few years back to a similar question that you, Linda, answered that you thought these two models are nested and can therefore be compared. Is that still your (or Bengt's) opinion?
The reviewer instead suggested comparing the model with the interaction term to a model where the latent interaction term is set to 0. Would this, according to you, be more correct?
Thanks for the reply. I tested what the reviewer suggested, and it turns out, as you wrote, that the -2LL values for the baseline model and for the model where the latent interaction is set to 0 are almost identical (they differ by between .003 and .004). So it seems the two approaches yield the same result.
Anonymous posted on Friday, December 11, 2009 - 8:40 am
I am testing continuous latent variable interactions in Mplus and was looking at this post to see why I am not getting fit indices; it clearly explains why. My model converges normally and the estimates look fine. I have a couple of questions:
I am trying to get the likelihood ratio test for the interactions in Mplus, and I have a quick question about how I am operationalizing it. Is it correct that I should use
Analysis: Type = random; Algorithm = integration;
even for the linear model with no interaction terms when I compare likelihoods? I see that with TYPE=RANDOM the log likelihood is different than without it. I was encouraged to see that my results are not very different. Could you tell me why this happens in Mplus?
Also, does the TYPE=RANDOM option give robust standard error estimates? I found that in each case this option gave slightly higher standard errors than not using it.
You should use TYPE=RANDOM for both the model with and without the interaction. You don't need ALGORITHM = INTEGRATION for the model without the interaction.
The difference you see with and without TYPE=RANDOM is due to the fact that for continuous outcomes and TYPE=GENERAL the model is not estimated conditioned on x whereas it is with TYPE=RANDOM. You should do all models using the same analysis TYPE.
MLR is available with TYPE=RANDOM. MLR is robust to non-normality.
Simulation studies (e.g., Coenders et al. 2008) show that with non-normal data LMS overestimates the interaction effects, whereas there is nearly no effect on the significance of the interaction effects. On the other hand, the approach suggested by Coenders et al. underestimates significance, but there is no bias in the interaction effect.
Now I have this problem with my data. LMS leads to a significant interaction (gamma = .311). The Coenders et al. approach shows a non-significant gamma (= .241).
Would you suggest reporting both approaches in a paper?
One more question: when I am calculating the confidence bands for the moderation effect, is it possible to use the gammas from the Coenders approach together with the variances/covariances of the coefficients from the LMS approach?
From what I read above, if I want to compare two models, one without a moderator and one adding the moderator, I need to use the log-likelihood difference chi-square test? If I take -2 times the log-likelihood difference between the two models, how do I determine whether one is better than the other? Thank you for any help.
I think you are asking about including an XWITH interaction. If that interaction term influences only one DV, then the test of improvement of the model is the z test for that slope. If the interaction term influences several DVs you could use the 2 times logL diff which is then chi-2 distributed.
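In symbols, the two tests Bengt describes are the following (a standard z test and likelihood-ratio setup; note that with the MLR estimator the likelihood-ratio difference additionally requires the scaling-correction adjustment described in the Mplus documentation):

```latex
% z test for a single interaction slope b3 with standard error SE(b3):
\[
  z = \frac{\hat{b}_3}{SE(\hat{b}_3)}
\]
% Likelihood-ratio test when the interaction term enters several equations;
% q = number of interaction slopes fixed to zero in the restricted model H0:
\[
  \chi^2_{q} = 2\left(\log L_{H_1} - \log L_{H_0}\right)
\]
```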
Dear Dr. Muthen, I am in fact including an XWITH interaction in my model, and it influences only one DV. Therefore, it appears the z test is what I want to look at. However, I am unclear how to calculate it from my output, and also, do I want to find significance or not? Do I calculate the z test from something in my output? Thank you for any help! Sarah
The z test is the ratio of the slope parameter estimate to its standard error which is found in the third column of the results. It is followed by a p-value or you can compare it to a critical value of 1.96 at the 5 percent level.
So it is just the slope parameter divided by its S.E. for the interaction term then? Basically, if the new term is significant, does that tell me that my moderator is a good fit for the model? Obviously you have mentioned before that there are no fit indices for moderator models... Thank you so much for your prompt reply!!
Dear professors, I am trying to run a structural model with interaction terms over five imputed data files. Setting integration to STANDARD(7) and using starting values does not speed up the process (the estimation screen freezes at some point), probably because I have too many integration points. When I tried Monte Carlo integration, I got some results but feel unsure whether they are of value. So my two questions are: 1. Is there another, more efficient way of specifying such an interaction model? 2. How do I estimate the statistical significance of the Monte Carlo results?
Thanks for the suggestion, Bengt. Only one interaction is significant when I run the model the standard way, introducing two interactions at a time. I do not know whether it would be convincing to report that significance while not controlling for the remaining interactions. What do you think?
By the way, sorry to reiterate, but we don't know the significance of the Monte Carlo estimates, is that right?
As a general note: I know about the product-of-indicators technique and tried using it in Lisrel following Jaccard and Wan (1995), and one reason I am learning Mplus is to study latent factor interactions. I am studying interaction effects between subconstructs of market orientation and environmental factors, i.e., turbulence and competitive intensity, and my theory suggests studying all the moderations.
I would be convinced doing it one latent variable interaction at a time, but I don't know about reviewers.
With 4 interactions you could try INTEGRATION = 7 instead of Monte Carlo integration - this gives fewer than 2500 integration points, which isn't much with your small sample. This should not be slow on at least a 2-processor computer. Be sure to say PROCESSORS = 2;
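An ANALYSIS block along the lines of this suggestion might look as follows (a sketch only; the factor and outcome names f1, f2, int1, y1-y8, and y are placeholders, not from the thread):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  INTEGRATION = 7;        ! 7 points per dimension instead of Monte Carlo
  PROCESSORS = 2;         ! use both processors
MODEL:
  f1 BY y1-y4;
  f2 BY y5-y8;
  int1 | f1 XWITH f2;     ! latent variable interaction
  y ON f1 f2 int1;
```

With 4 latent interactions, 7 points per dimension gives 7^4 = 2401 total integration points, which is where the "fewer than 2500" figure comes from.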
With Monte Carlo integration you can generally trust the SEs - I think that's what you are asking.
I am running a simple SEM with two latent variables and their interaction, with the three latent variables predicting one observed outcome variable.
I am using ANALYSIS: TYPE=RANDOM since that is what is required for latent interaction estimation, but I cannot get the fit indices (e.g., CFI and RMSEA) when I use TYPE=RANDOM. Is there a way around this?
I know journal reviewers will want to see fit indices, and I hate to have to drop the latent interaction term just to get them.
I think there is a paper that does this by Herb Marsh in Psych Methods. You'll have to search for the reference. I don't have it.
Jan Zirk posted on Wednesday, May 02, 2012 - 5:12 pm
Dear Linda or Bengt,
I have a model with an interaction term between a binary x and a continuous y (entered via DEFINE). Thus the model looks like this (they enter it in the broader context of other variables): x --> z, y --> z, xy --> z
This model generally does not fit the data very well. Would you enter additional covariance parameters between the regressors and the interaction (like this: x --> xy, y --> xy; or x <--> xy, y <--> xy)? When I enter them the goodness of fit gets better, though I am not sure whether the DEFINE command alone, without these additional links, is enough?
Let me try to interpret what you are modeling. It sounds like your model is of the following type (although more complex):
Usev = x m y;
Define: xm = x*m;
Model: m on x; y on m x xm;
The issue is the interaction xm, which is an interaction between an exogenous variable x and an endogenous variable m (you should not call m exogenous even though it is a predictor of y: when it is influenced by another variable, here x, it is endogenous).
If this is what you are asking, then the answer is that the inclusion of the interaction term does hurt chi-square but you cannot and should not rely on the chi-square test of model fit for such models and you should not try to add parameters to make it fit. The reason you cannot rely on chi-square here is that with an x*m interaction the variance for y conditional on x is no longer constant as is assumed in the chi-square testing, but varies with the x values.
Jan Zirk posted on Saturday, May 05, 2012 - 9:14 am
Dear Bengt, thank you so much for this exhaustive reply. It fully answers my question. Sorry for the lack of precision in my description of the model; next time I will just paste part of the code.
Jason Major posted on Thursday, August 02, 2012 - 11:16 am
Hello, I am running an LMS model with one linear and one quadratic effect, and I'm having a problem in calculating the R-squared of the effects compared to the Mplus output.
I've read several papers that state that R-squared of the quadratic effect is equal to the coefficient of the quadratic effect squared multiplied by 2 times the variance of the IV, all divided by the variance of the DV. (Marsh, Wen and Hau, 2004 gave an expanded form of this formula; Harring, Weiss & Hsu, 2012 recently gave this form).
However, when I compare the R-squared from this formula to the residual variance in Mplus (which I assume can be used to check the explained variance), the two don't match up. The formula overestimates the size of the quadratic effect compared to the explained variance based on the residual variance.
This seems to be the case whether I estimate the quadratic effect alone, or both the linear and quadratic effects in the same model.
One issue that I thought might be causing this mismatch is that my DV is not latent but manifest. But I'm not sure whether this would affect the formula for the R-squared of the quadratic effect.
I would appreciate any help/advice you can give me.
See the FAQ on the website called Latent Variable Interactions to see how to compute R-square for a regression with an interaction.
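For the general interaction regression the FAQ refers to, the R-square computation can be written out as follows (a sketch, assuming the factors eta1 and eta2 are zero-mean bivariate normal with variances v1, v2 and covariance c12; the model is y = a + b1*eta1 + b2*eta2 + b3*eta1*eta2 + zeta):

```latex
\[
  V(y) = b_1^2 v_1 + b_2^2 v_2 + 2 b_1 b_2 c_{12}
       + b_3^2 \left( v_1 v_2 + c_{12}^2 \right) + V(\zeta),
\qquad
  R^2 = 1 - \frac{V(\zeta)}{V(y)}.
\]
```

The quadratic case in the question above is the special case eta2 = eta1, where the interaction term contributes b3^2 * 2 v1^2, which recovers the "2 times the variance" form of the formula cited from Marsh, Wen and Hau.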
Jason Major posted on Friday, August 03, 2012 - 3:41 pm
I have taken a look at the FAQ and Mooijaart & Satorra (2009), which it references.
The problem is that the formulas provided there are for calculating R-square for the interaction effect in a model with two linear effects and one interaction, whereas my model only has one linear effect and one quadratic effect.
The formulas also reference a disturbance term for the dependent variable, whereas my DV is a single manifest variable.
I thought that I may be able to infer the R-square of the quadratic effect by subtracting the variance due to the linear effect from the explained variance.
If the DV is standardized by setting its variance to 1, then I think the R-square of the linear effect is simply the square of the standardized regression coefficient? The problem with this, though, is that the intercept isn't zero in the model, so I'm not sure whether this simple formula applies.
I've also tried comparing the R-square of the full model to one with the linear effect only, but the size of the linear effect increases slightly with the inclusion of the quadratic effect, so its size isn't consistent, which may indicate something wrong with the model assumptions?
I think the general formulas in the FAQ can also be used for your case since they pertain to products of variables. And it would seem that your model also would include a residual for the DV - the fact that your DV is observed doesn't matter.
I have a question about interactions. Let's say that I have defined a new variable INT that is the "interaction", where INT=RR*EZ. In my regression, should I include each element? Namely, should the syntax look like this:
Y on INT RR EZ;
Again, using the same elements but changing the operation to INT=RR/EZ, should the syntax look like the above?
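A sketch of a complete input for the product-term setup in this question (RR, EZ, INT come from the question; Y and the NAMES list are placeholders; note that a variable created in DEFINE must be listed last on USEVARIABLES):

```
VARIABLE:
  NAMES = y rr ez;
  USEVARIABLES = y rr ez int;   ! DEFINEd variable listed last
DEFINE:
  int = rr*ez;                  ! observed-variable product term
MODEL:
  y ON int rr ez;               ! product plus both main effects
```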
Hello. I would like to present the standardized estimates for an SEM with latent variable interactions. I have looked at this thread and the FAQ on latent variable interactions which includes a description of standardization, but I'm confused about whether it is possible to calculate a standardized interaction beta given some of your answers earlier in the thread. It would seem that once the variance V(n1 ×n2) for the interaction was computed according to the formula in the FAQ, that could be used to compute the standardized interaction beta as b3*SD(n1 ×n2)/SD(n3). Would this be appropriate?
If so, then it would seem slightly different than the standardization described in the FAQ, which computes a standardized b3 in the moderator function (b1 + b3*n2) n1 by multiplying b3 by SD(n1) * SD(n2) and dividing by SD(n3); this form of standardization does not include the covariance term for the IVs, which is included in the estimate of V(n1 ×n2).
One more thing: I think I may be misunderstanding the FAQ, but I also am very confused about the computations in the standardization section of the FAQ: it computes standardized betas for the b1 and b3 coefficients in the term (b1 + b3*n2), which it describes as being obtained by dividing both by SD(n3) = sqrt(3.17), multiplying b1 by SD(n1) = sqrt(2), and multiplying b3 by SD(n1)* SD(n2) = sqrt(2). However, with the given values of b1=.5 and b3=.4, I don't see how the standardized estimates of .199 and .159 for b1 and b3 come out; shouldn't they be .5*sqrt(2)/sqrt(3.17)= .40 and .4*sqrt(2)/sqrt(3.17)=.32?
First, a new version of the FAQ posting will be up (probably tomorrow) and corrects the standardized numbers which were from an earlier version that had different population values.
Second, the Wen, Marsh, Hau (2010) SEM article gives an explanation for why the square root of the variance of the product of eta1 and eta2 should not be used in the standardization of beta3 and why instead the product of the square root of each variance for eta1 and eta2 should be used as in my note.
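In symbols, the standardization just described for the interaction coefficient uses the product of the separate standard deviations, not the standard deviation of the product:

```latex
\[
  \beta_3^{\mathrm{std}}
  = \beta_3 \,\frac{\sqrt{V(\eta_1)}\,\sqrt{V(\eta_2)}}{\sqrt{V(\eta_3)}},
\qquad \text{not} \qquad
  \beta_3 \,\frac{\sqrt{V(\eta_1 \eta_2)}}{\sqrt{V(\eta_3)}}.
\]
```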
Hello! Has something changed from Version 6 to 7 in connection with the XWITH command? I am using syntax and data (nonlinear effect) which run without problems in V6, but now I get the following error messages:
THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL, STARTING VALUES AND/OR THE NUMBER OF INTEGRATION POINTS.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
Hello, one more question regarding the curvilinear effect: the latent variable mstk2 is non-normally distributed, thus in addition to the LMS approach I use Marsh's unconstrained and partially constrained approach (GAPI).
Now I am somewhat confused about the measurement error constraints.
According to Marsh et al. (2006, in the Hancock/Mueller book; see also Kelava et al. (2011), Advanced Nonlinear Latent Variable Modeling..., Struct. Equ. Mod., 18, p. 465):
In the supplementary material for the following paper, Marsh, H. W., Nagengast, B., & Morin, A. J. S. (2012), Measurement invariance of big-five factors over the life span: ESEM tests of gender, age, plasticity, maturity, and La Dolce Vita effects, Developmental Psychology, they created the syntax GROUPING = a3s2 (11=A1S1, 12=A1S2, 21=A2S1, 22=A2S2, 31=A3S1, 32=A3S2); which defines 6 groups (3 age x 2 gender).
Can you advise me on how the groupings were created, either with Mplus or SPSS, with any syntax for it? This may be the wrong platform for this question.
You can do this using RANDOM MIXTURE with KNOWNCLASS.
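A sketch of the KNOWNCLASS setup this answer refers to, using the a3s2 variable and its six codes from the question (the factor and outcome names f1, f2, f1f2, y1-y8, and y are placeholders; an XWITH interaction requires numerical integration):

```
VARIABLE:
  CLASSES = cg (6);
  KNOWNCLASS = cg (a3s2 = 11 12 21 22 31 32);
ANALYSIS:
  TYPE = MIXTURE RANDOM;
  ALGORITHM = INTEGRATION;
MODEL:
  %OVERALL%
  f1 BY y1-y4;
  f2 BY y5-y8;
  f1f2 | f1 XWITH f2;    ! latent interaction
  y ON f1 f2 f1f2;
```

Class-specific MODEL sections (%cg#1% etc.) can then relax parameters across the six known groups as needed.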
lamjas posted on Saturday, March 09, 2013 - 4:29 pm
Thanks for your reply.
When I use RANDOM MIXTURE, I need to identify a latent class, which is not my intention. I use XWITH to obtain an interaction of two continuous latent variables. It turns out the only way to do this is a MIMIC model. Is that correct?
I have another question about plotting the interaction. I have read the FAQ paper "LV interaction". What is the syntax to obtain the graph shown in the figure?
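One way to plot a moderation effect in Mplus is the LOOP and PLOT options of MODEL CONSTRAINT; a sketch (the factor names f1-f3 and the choice of plotting the simple slope of f1 at f2 = -1 and +1 are illustrative assumptions, not taken from the FAQ):

```
MODEL:
  f3 ON f1 (b1);
  f3 ON f2 (b2);
  int | f1 XWITH f2;
  f3 ON int (b3);
MODEL CONSTRAINT:
  PLOT(lowf2 highf2);
  LOOP(x, -3, 3, 0.1);
  lowf2  = (b1 + b3*(-1))*x;   ! effect of f1 on f3 at f2 = -1
  highf2 = (b1 + b3*1)*x;      ! effect of f1 on f3 at f2 = +1
PLOT:
  TYPE = PLOT2;
```

The resulting curves, with confidence bands, are then viewable through the Mplus plot menu.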
Dear Professors I have a question concerning a latent interaction model I have been running (N=600):
SEM without latent interaction fits the data well: estimator = MLR;
MA BY m1-m6;
WM BY w1-w5;
TA BY t1-t3;
MA ON WM TA;
CFI = .98; TLI = .979; RMSEA = .03
When I add the interaction WMxTA I get the warning message that the latent variable covariance matrix is not positive definite. I see that the residual variance of MA is negative (but non-significant). But even after I constrain the residual variance to zero I still get the warning. The interaction is significant, so I would really like to include it in my model. But I'm a little cautious about interpreting the results when I get this message. Should I try to standardize the regression paths (as in the FAQ) to see if there is a correlation greater than one (because TECH4 is not available)? Or do you have other suggestions?
PS. I also did a multiple imputation (20 datasets), re-ran the model, and did not get any warning messages even though the residual variance of MA was still negative (p = .79).
You need to ask for TECH9 with multiple imputation. This is where the warning messages will be found.
It is likely that your model is misspecified. See the R-square without the interaction. It may already be high and adding the interaction may explain too much variance. You may need residual covariances among some of your factor indicators. The only way they can relate in your model is through the factors. See modification indices.
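Residual covariances among indicators are added with WITH statements; a sketch using the measurement model from the question above (which pairs to free should come from the modification indices, inspected on the model without the interaction, since modification indices are typically not available under numerical integration):

```
MODEL:
  MA BY m1-m6;
  WM BY w1-w5;
  TA BY t1-t3;
  MA ON WM TA;
  m1 WITH m2;    ! illustrative residual covariance; pick pairs from MODINDICES
OUTPUT:
  MODINDICES;
```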
Dear Prof. Muthen, I am running an interaction between a latent factor and a manifest variable (using XWITH). I have two questions regarding this analysis: 1. When estimating with full information maximum likelihood, I get the following warning: *** WARNING. Data set contains cases with missing on variables used to define interactions. These cases were not included in the analysis. Number of such cases: 101 --> Is it not possible to use FIML with such an interaction? 2. Does the XWITH statement use the method of Klein? To support this with a reference, can I refer to the following paper? Klein and Muthen (2007). Quasi-maximum likelihood estimation of structural equation models with multiple interaction and quadratic effects. Multivariate Behavioral Research, 42, 647-673.