Anonymous posted on Friday, March 11, 2005 - 10:43 am
Dear Professor Muthen,
I have some questions about path analysis. (1) I first tried your Example 3.11. The original code is: model: y1 y2 on x1 x2 x3; y3 on y1 y2 x2; Question (1): are x1, x2, and x3 assumed to be correlated in the first regression by default?
(2) Then I tried: model: y1 y2 on x1 x2; y3 on y1 y2 x2 x3; x1 with x2; Question (2): In the output, I also got x1 with x3 and x2 with x3, but those are not what I specified in the model. What is going on here? I ran into a similar situation with other data: when I specify one variable with another in the model, I get a lot of other "with" statements in the output.
Question (3): How can I specify that x1 is correlated with x2 but independent of x3?
(3) Finally, I tried: model: y1 y2 on x1 x2; y3 on y1 y2 x2 x3; x1 with x2; output: modindices; In the output I got some "with" modifications. Question (4): How does this option work? Does it add one path at a time? How can I specify a suggested "with" modification in the model?
1. As in regular regression, the model is estimated conditioned on the x's.
2. When you mention x1 WITH x2, these variables are no longer treated as exogenous. The model is then no longer estimated conditioned on x1 and x2; instead, they are part of the model. You should not mention x variables in the MODEL command except on the right-hand side of ON.
3. You will obtain modification indices for all parameters that are fixed or constrained to be equal to other parameters. See the SEM literature for how to use modification indices. See the Mplus User's Guide for a description of them.
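As an illustration of point 3, modification indices for Example 3.11 can be requested as follows (a minimal sketch; the MODEL statement is the one from the example):

    MODEL: y1 y2 ON x1 x2 x3;
           y3 ON y1 y2 x2;
    OUTPUT: MODINDICES;

If the output then suggests, say, y3 ON x1 with a large modification index, the convention is to add one such parameter at a time to the MODEL command and re-estimate, rather than adding several suggested modifications at once.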
Anonymous posted on Tuesday, March 15, 2005 - 10:09 am
Thanks for your quick answers. Another question is can I use Mplus to fit the path analysis model with "feedback loop", i.e. X and Y are reciprocally causing each other? Thanks.
Reetu Kumra posted on Thursday, April 13, 2006 - 1:16 pm
Hi, I have three imputed datasets from NORM that I am currently working with. I ran the exact same model for all three datasets. It seems as though Mplus added a few 'with' statements that weren't specified by me. These statements aren't the same in the three outputs I am looking at. Why exactly does this happen?
I would not be able to tell you that without more information. If the inputs are identical and only the data set name changes, I would be surprised to see different defaults in effect. If you want me to look at this, send the input, data sets, outputs, and license number to email@example.com.
Sorry if I've posted this message under the wrong topic. I wasn't sure where to post it.
I really like being able to run multiple regression models in Mplus with FIML since it avoids listwise deletion. 1. Is there a way to get a plot of the residuals (estimated minus actual value of the dependent observed variable) versus the predicted (estimated) values? This is very useful for checking whether the model should be linear or quadratic. 2. Is there a way to see the Variance Inflation Factor (VIF) values to check for problems with multicollinearity?
Another reason I was interested in the plot of individual residuals is that it reveals whether heteroscedasticity is a problem. I have one book on multiple regression that says that when the homoscedasticity assumption is violated, "conventionally computed confidence intervals and conventional t-tests of OLS estimators can no longer be justified." I don't know whether this warning applies when the multiple regression coefficients are estimated in Mplus using FIML. 1. Should I be concerned about the potential for heteroscedasticity when using FIML with the ML estimator? 2. Should I still be concerned if I use FIML with ESTIMATOR = MLR so that robust standard errors are generated? 3. If the negative consequences of heteroscedasticity are as likely/severe using FIML as in conventional OLS multiple regression, how would you recommend I check for heteroscedasticity using Mplus? Your guidance is greatly appreciated!
I have conducted a path analysis with two independent and four dependent variables (using means and sum scores). Since I have hypotheses about the direction of the influence from the independent on the dependent variables it would be appropriate to report the one-tailed p-value. However, Mplus only computes the two-tailed p-values. Is there a possibility to obtain the one-tailed p-value using a specific output-command? Or is it sufficient to divide the two-tailed p-value by two?
Is it appropriate to restrict some of the intercorrelations between the dependent variables using the WITH statement for substantive reasons? (One dependent variable is measured via video analysis, and therefore no correlations are expected with the other three dependent variables.)
Hello, I want to accompany my longitudinal path model with a correlation matrix. The Mplus output gives correlation coefficients among the variables used in the models, but how do I get significance levels for these correlations? In SPSS, the correlations are different because the program uses listwise deletion (which I don't want). Thank you, Kristine
It would be complicated to do this in Mplus given that for path models the covariance matrix, not the correlation matrix, is analyzed. You would have to use WITH statements to define all covariances and then use MODEL CONSTRAINT to turn them into correlations.
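For two variables, the approach described above might look like this (a minimal sketch; y1, y2 and the labels cov12, v1, v2 are placeholders):

    MODEL:
      y1 WITH y2 (cov12);
      y1 (v1);
      y2 (v2);
    MODEL CONSTRAINT:
      NEW(r12);
      r12 = cov12 / SQRT(v1*v2);

The NEW parameter r12 is then reported with a standard error, from which a significance test of the correlation can be obtained.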
Qilong Yuan posted on Monday, April 19, 2010 - 12:34 pm
Hi, My model specification is this: y2 ON y1; y3 ON y2; x2 ON x1; x3 ON x2; y2 ON x1; y3 ON x2; x2 ON y1; x3 ON y2; y1 WITH x1; But in addition to all of these paths, I also get an estimate of “x3 WITH y3”. When I remove “y1 WITH x1” the correlation between x3 and y3 is still estimated.
I am surprised to get a correlation I did not specify. This is a correlation between the disturbances of x3 and y3, correct? Is it essential to the model, or would it be reasonable to fix it to zero (and gain a degree of freedom)?
I have some questions regarding defaults for correlations among predictor variables in a path analysis. Using example 3.11, the model statement is (y1 y2 ON x1 x2 x3; y3 ON y1 y2 x2;). According to the figure, the 3 correlations between x1, x2 and x3 are also estimated, which leads me to believe that the correlations among the predictors are estimated by default. These correlations, however, are not reported in the output and are not reflected in the number of free parameters.
If I change the model statement to also include the correlations (x1 WITH x2; x1 WITH x3; x2 WITH x3;), 9 additional parameters are estimated (the 3 correlations, 3 means and 3 variances for x1, x2 and x3). All parameter estimates, standard errors, intercepts and residual variances that overlap in the two output files are identical. In addition, the chi-square tests, CFI, TLI, RMSEA and log likelihood are also identical. The AIC, BIC and Adjusted BIC change, however, due to the increase in the number of free parameters.
Hence, is the first model statement estimating all of these parameters behind the scenes but not reporting them or including them in the number of free parameters? Or are these theoretically different models? That is, does adding the correlations for x1, x2, and x3 result in correlating the residuals rather than correlating the observed variables x1, x2, and x3? Any help or clarification is much appreciated. Thank you.
The arrows in the diagram show that these covariances are not fixed at zero during model estimation. A regression model is estimated conditioned on the observed exogenous variables. Their means, variances, and covariances are not model parameters. When you include them in the model, you treat them as dependent variables and make distributional assumptions about them. In the case of all continuous variables and no missing data, the two approaches have the same results. When you move away from this situation, you will see differences in the results.
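The distinction above can be seen by contrasting the two specifications (using the variable names from Example 3.11):

    ! Default: model estimated conditioned on the observed x's;
    ! their means, variances, and covariances are not model parameters.
    MODEL: y1 y2 ON x1 x2 x3;
           y3 ON y1 y2 x2;

    ! Mentioning the x's brings them into the model and makes
    ! distributional assumptions about them:
    MODEL: y1 y2 ON x1 x2 x3;
           y3 ON y1 y2 x2;
           x1 WITH x2 x3;
           x2 WITH x3;

With all continuous variables and no missing data, the two give the same estimates; the second simply adds the x means, variances, and covariances as free parameters.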
Thank you, Linda. This makes perfect sense now. I didn't realize the arrows denoted the default for model parameter estimation. I was under the assumption that the arrows among the predictors had to denote correlated residuals, since, as you stated, the correlations, variances, and means of the predictors are not estimated as model parameters. Thank you for the clarification.
Dear Drs. Muthén, I am running a path analysis with ordinal observed variables (both exogenous and endogenous). I know that in MODEL RESULTS the WITH statement gives the residual covariance, but I have three questions: 1. What kind of estimate is calculated between pairs of endogenous variables in the RESIDUAL OUTPUT, i.e., model-estimated covariances, correlations, or residual correlations? 2. Are the standard errors (S.E.) for such estimated covariances/correlations/residual correlations the same as those for the corresponding covariance/residual covariance in MODEL RESULTS? 3. If the previous answer is no, how can I obtain such errors? Thank you.
Dear Dr. Muthén. Thank you for your quick answer. Regarding my message of August 20, 2012 - 10:10 am; some additional questions: 4. The endogenous variables are regressed on other variables (exogenous and endogenous, some common to both variables); thus, is the model-estimated correlation a part (semipartial) correlation? 5. Under which situation, will a residual correlation be estimated? Thank you.
Dear Dr. Muthén, I am working with Mplus v. 4.2. I read the manual but could not figure out how to express model correlations between dependent categorical variables with MODEL CONSTRAINT. Please, how can I do that? Thank you.
See an SEM book like Principles and Practice of Structural Equation Modeling by Rex Kline to find the formulas you need. Then label the parameters you need to express those formulas in the MODEL command and use the labels in MODEL CONSTRAINT to express those formulas.
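For instance, for two continuous outcomes regressed on a single x, the model-implied correlation could be expressed along these lines (a sketch only; the labels b1, b2, vx, e1, e2, c12 are placeholders, and categorical outcomes would need the probit-scale versions of these quantities from the SEM formulas; note that giving x a variance parameter deliberately brings it into the model):

    MODEL:
      y1 ON x (b1);
      y2 ON x (b2);
      x (vx);
      y1 (e1);
      y2 (e2);
      y1 WITH y2 (c12);
    MODEL CONSTRAINT:
      NEW(r12);
      r12 = (b1*b2*vx + c12) /
            SQRT((b1*b1*vx + e1)*(b2*b2*vx + e2));

Here the numerator is the model-implied covariance of y1 and y2 and the denominator the product of their model-implied standard deviations.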
I am running two different path models, and for several variables I am getting 999.00 values under the standardized residuals and the modification indices sections. I can see from other posts that this is likely due to a zero denominator but that it does not reflect poorly on the model. Is this correct for both? Also, the modification index statements are all ON statements. I realize this means "regressed on," but in the context of MIs, what is this telling me about these variables?
In addition, as these are path models and I'm using single indicators (subscale scores), is it advisable to fix the error variances at some predetermined values for these indicators?
This Mplus default is chosen because such residual covariances among the DVs are most often needed. If it were not the default, some users might overlook them. It is easy to avoid the default by fixing a residual covariance to zero, e.g., y1 WITH y2@0;
I conducted a path analysis to understand direct effect of X (binary) on Z (continuous) and indirect effect of X on Z through a mediating variable Y (binary). My question is silly - I wonder how to calculate variance in Z which was explained by X and Y, respectively? Thank you.
Thank you - the formula for mediators (continuous variables) as you described above is clear to me. After taking a look at the technical report, I am still not clear about how to calculate the variance in Z (a continuous variable) explained by X and by Y (both binary variables), respectively. Could you explicitly describe the formula to help us calculate it? Thank you.
It seems you are using the WLSMV estimator. The results will differ with WLSMV because the sample statistics for model estimation are a set of probit regression coefficients and residual correlations. These will differ when you do the analysis one equation at a time versus all at the same time.