Random Slopes in SEM and Plots
 Anonymous posted on Sunday, May 01, 2005 - 6:52 am
Is it possible to graph the random slopes in an SEM? For example, my model is similar to example 5.13 in the user's guide. I was wondering if PLOT3 will do this or if there is something special I have to do?

Thank you in advance
 bmuthen posted on Sunday, May 01, 2005 - 4:45 pm
You can request factor score estimates and use PLOT3 to show their distribution univariately and bivariately with another variable.
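For instance, a minimal sketch of the relevant commands (the file name and setup are placeholders, not your actual model):

PLOT:      TYPE = PLOT3;        ! makes univariate and bivariate plots available, including estimated factor scores
SAVEDATA:  FILE = fscores.dat;  ! placeholder file name
           SAVE = FSCORES;      ! saves the estimated factor scores (including the random slopes) to the file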
 Anonymous posted on Tuesday, May 03, 2005 - 4:21 am
Thank you so much!
 Paul R. Hernandez posted on Thursday, May 12, 2011 - 8:24 am
Question:
Is there any way to avoid listwise deletion when using random slopes analysis?

My situation:
I have 2 correlated outcomes of interest, DV1 and DV2 (about 25% of DV2 values are missing).
I have 2 uncorrelated predictors, IV1 and IV2 (about 25% of IV2 values are missing).

My model is:
s1 | DV1 on IV1 ;
s2 | DV2 on IV2 ;

The problem:
In any instance where either DV2 or IV2 is missing, the case gets deleted.

Thanks for your help!
 Linda K. Muthen posted on Thursday, May 12, 2011 - 9:19 am
I have no idea what version of Mplus you are using, but in the current version, TYPE=MISSING is the default, so cases with missing on DV2 will not be eliminated. However, cases with missing on one or more independent variables will be deleted because missing data theory does not apply to independent variables.
 Paul R. Hernandez posted on Thursday, May 12, 2011 - 10:01 am
Thanks!
 Mireille H. posted on Tuesday, December 20, 2011 - 4:28 am
Dear Mr./Mrs. Muthén

The Mplus output (version 6) indicated that a few of my 3-way interaction terms were significant, and I was wondering how I can test which slopes of each term significantly differ from each other. I tried to use Mr. Preacher's online calculator for this, but it gave me implausible results. I hope you can help me with this.

Kind regards
 Linda K. Muthen posted on Tuesday, December 20, 2011 - 1:31 pm
You can use the Wald test or difference testing. See MODEL TEST for the Wald test. This assumes the coefficients being compared are on the same scale.
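As an illustration of the Wald-test part only (a generic sketch with placeholder names, not your 3-way interaction model), two slope coefficients can be labeled in the MODEL command and compared in MODEL TEST:

MODEL:      y ON x1 (b1)
                 x2 (b2);
MODEL TEST: 0 = b1 - b2;   ! Wald test of the hypothesis that the two slopes are equal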
 Wesley Anderson posted on Monday, April 27, 2015 - 1:22 am
Basic question. I'm having a difficult time understanding how Mplus estimates equations where the slope is a dependent variable and other variables are the independent variables. I see that the slope is treated as a latent variable, but beyond that I am lost. I appreciate any help.
Best,
 Bengt O. Muthen posted on Monday, April 27, 2015 - 11:02 am
I think you are talking about UG ex 3.9, which says on page 30 that the random slope handles heteroscedasticity in the y residual variance as a function of predictors. The degree of variance of s and its covariance with the y residual corresponds to the degree of heteroscedasticity. We give references on that page too.
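For reference, a sketch along the lines of that example (variable names and exact statements assumed, so check the UG):

VARIABLE:  NAMES = y x1 x2;
ANALYSIS:  TYPE = RANDOM;      ! required when a | statement defines a random slope
MODEL:     s | y ON x1;        ! s is the random slope of y on x1, treated as a latent variable
           y s ON x2;          ! y and the slope s are both regressed on x2
           s WITH y;           ! covariance between s and the y residual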
 Wesley Anderson posted on Monday, April 27, 2015 - 11:56 am
Yes, I am wondering if there is a simple way of explaining this. Forgive me that I do not yet understand the details of the citations.

I do not understand how S "handles heteroscedasticity in the Y residual variance as a function of predictors."

Is it the case, using the example you cite, that Mplus regresses Y on X1 for values of X2 to get a vector of S values, and then regresses S on X2?

Is it the case that Mplus regresses Y on X1, and then you take the unexplained residual and regress that on X2 to see how X2 changes the residuals around the average slope? Can you explain this connection between slope and residuals to me? Because now it seems like S is not the dependent variable but rather the Y residuals after regressing Y on X1, no?

I also do not understand what you mean by the variance of S and its covariance with the Y residual. Where do the data on S come from? Now it sounds as if we're regressing residuals on S or vice versa.

I'd appreciate some explanation. I am fairly competent at statistics, but I am a bit bemused by this seemingly simple example. Thank you!
 Bengt O. Muthen posted on Monday, April 27, 2015 - 3:24 pm
A fuller account is given in the FAQ on our website:

Random coefficient regression
 Wesley Anderson posted on Monday, April 27, 2015 - 4:55 pm
Yes, so let's use the PDF file that it links to as our example.

Let's also assume that all the variables, including the beta1 below, are standardized. Where do the data on beta1 come from? Or, since it is unmeasured, what are we doing?

So we have:
E(Y|X,Z) = beta1*X + beta2*Z
E(beta1|Z) = beta3*Z
so,
E(Y|X,Z) = beta3*Z*X + beta2*Z

Is beta1 definitionally equal to beta3*Z (so that beta1 is simply generated by taking values of Z and multiplying them by the beta3 estimated from regressing Y on Z*X), or is there some data such that beta3*Z is merely the conditional expectation of beta1? In the former case we have a purely mathematical relationship. In the latter case we potentially have a variable Z with a causal influence on the strength of causation between two other variables. The latter case seems to be what is of interest, whereas the mathematical relationship is not so interesting.
 Bengt O. Muthen posted on Monday, April 27, 2015 - 6:27 pm
Z has an influence on the strength of the relationship between two other variables. It acts as a moderator, but the moderation model here is more general.

beta_{1i} is a latent variable and, as usual with latent variable modeling such as SEM, it has implications for the relationships among the observed variables. Its values need not be estimated. Information on its implications comes from the observed heteroscedasticity.
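In symbols (my notation here, not necessarily the FAQ's), the single-predictor version of the model is

y_i = beta0 + s_i*x_i + e_i, with s_i = gamma0 + zeta_i,

so that

y_i = beta0 + gamma0*x_i + zeta_i*x_i + e_i.

The zeta_i*x_i term is what ties the latent slope to the observed variables: the spread around the average regression line depends on x.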
 Wesley Anderson posted on Monday, April 27, 2015 - 7:16 pm
Ok, this is starting to make sense. So let's say the residuals from Y regressed on X and Z are biased such that E(resid) not= 0. We regress the residuals of Y on Z. Say we find that there is a statistically significant relationship between residuals of Y and Z.

How can we use this information to estimate the random slope of the X-->Y edge as a function of Z?

I appreciate your time. This is very informative and useful. I'm getting there.
 Wesley Anderson posted on Tuesday, April 28, 2015 - 10:07 am
Let me try to ask in a different way.

So, I take it that positing a random slope is justified because doing so makes sense of observed relations in the joint frequency distribution over measured variables. So, let's think about this analogously with a more obvious (to me) reason for positing unmeasured things.

Say we have a perfect instrument for X, in the usual sense. Call it Z. We want to know if there is a direct causal relationship between X and Y. Call the strength of this relationship beta1. This is the causal parameter we wish to estimate consistently. So
beta1 = (COV(Y,Z))/(COV(X,Z))
because Z is a perfect instrument for X.

Now let's say we test for unconditional dependence of X and Y by regressing Y on X without instrumenting on X. Call the resulting parameter beta2. Let's say we find that
beta1 not= beta2.
I think this gives us reason to posit an unmeasured common cause between X and Y.

So, analogously, what would give us reason to posit an unmeasured random slope? I see it has something to do with heteroscedasticity when regressing Y on X. But the connection is not as obvious (to me) as in the case of the unmeasured common cause.

Thanks
 Bengt O. Muthen posted on Tuesday, April 28, 2015 - 10:22 am
It is simple. You do a regular regression of y on x and you let Mplus do a scatterplot of the y residual against x. Assume you see strong evidence of heteroscedasticity, that is, the residuals are increasingly or decreasingly more variable as the x value increases. This is what we can see.

One model that accounts for that is the random coefficient model, as our FAQ shows. There is no need for a z variable in eqn (2), so this is not necessarily in the moderator category of models. Eqn (4) shows that the model captures heteroscedasticity as a quadratic function of x. The quadratic function in (4) has 2 parameters, one for the linear term (the Cov) and another for the quadratic term (the V). As was seen in the plot, the data provide information on these 2 parameters. That's how the random coefficient model works.

Feel free to share on SEMNET.
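In the notation used earlier in the thread (mine, not necessarily the FAQ's), the implied conditional variance is

V(y | x) = V(zeta)*x^2 + 2*Cov(zeta, e)*x + V(e),

so V(zeta) is the quadratic-term parameter, Cov(zeta, e) is the linear-term parameter, and both are informed by the pattern of heteroscedasticity seen in the residual plot.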
 Wesley Anderson posted on Tuesday, April 28, 2015 - 10:34 am
Ok, good. Makes sense. So the multilevel random slope is different then? In that case, it seems like it is necessarily in the moderator category. So regress y on x for each cluster. Then you have a vector of values for the relationship between y and x. Then you can regress those values on a cluster-level variable. Yes? Is that how it works in the multilevel case? Thanks.
 Bengt O. Muthen posted on Tuesday, April 28, 2015 - 11:05 am
Right. With multilevel data you can see how the regression varies across clusters using the members of each cluster, but in the random coefficient case you have only one member per cluster, so you can't see it that way.
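A sketch of the two-level version (variable and cluster names are placeholders):

VARIABLE:  NAMES = y x w clus;
           CLUSTER = clus;
           WITHIN = x;
           BETWEEN = w;
ANALYSIS:  TYPE = TWOLEVEL RANDOM;
MODEL:
  %WITHIN%
  s | y ON x;      ! random slope of y on x, estimated from the members of each cluster
  %BETWEEN%
  y s ON w;        ! the between parts of y and the slope s regressed on a cluster-level variable
  s WITH y;        ! covariance between the random slope and the random intercept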