Anonymous posted on Thursday, December 09, 2004 - 4:49 pm
Hello, I am a rather new Mplus user. I am performing an SEM where I need to free covariances between some observed variables. How do I do this with Mplus? I haven't been able to find a solution in the user's guide. Thanks
bmuthen posted on Thursday, December 09, 2004 - 5:18 pm
Look up the WITH command in the user's guide and you will find the answer.
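For example, with hypothetical observed variables y1-y3 loading on a factor f, a residual covariance between two of them can be freed like this (all names illustrative):

```
MODEL:
  f BY y1 y2 y3;
  y1 WITH y2;   ! frees the residual covariance between y1 and y2
```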
Anonymous posted on Friday, December 10, 2004 - 5:51 am
Thank you! It works perfectly and is much easier than Lisrel or EQS.
Is there a trick for constraining a parameter to be positive without specifying further restrictions? I tried to work with the MODEL CONSTRAINT option, but could not make it work.
I'm trying to use this trick, but I am encountering some problems I could not find solutions for in the discussions. I need to constrain several parameters to be positive and others to be negative. Does this mean I have to create a dummy variable for every constraint? If I do so, Mplus returns a "negative degrees of freedom" error. In fact, because of the dummy variables, the number of latent variables is larger than the number of observed variables. Have I misunderstood the trick? Is there a way to solve this problem?
In Mplus, is there a way to constrain the variance of an endogenous latent variable to equal a specific numeric value, or to equal the variance of another endogenous latent variable (as in Ramona or Sepath)?
You use MODEL CONSTRAINT to express the whole variance as a function of the parameter estimates, for example using the NEW parameter feature, and then use that expression in constraints.
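As a sketch (all names hypothetical), the total variance of an endogenous factor f2 regressed on f1 can be fixed at 1 by labeling the relevant parameters and solving for the residual variance in MODEL CONSTRAINT:

```
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f2 ON f1 (b);
  f1 (v1);      ! variance of f1
  f2 (rv2);     ! residual variance of f2
MODEL CONSTRAINT:
  ! total variance of f2 is b**2*v1 + rv2; fix it at 1
  rv2 = 1 - b**2*v1;
```

To equate the variances of two endogenous factors instead, the same idea applies with both total variances expressed this way and set equal in a statement with 0 on the left-hand side.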
John Hipp posted on Wednesday, September 03, 2008 - 6:07 pm
Hi- I'm trying to constrain the variance of the disturbance of a single indicator to a specific value based on a reliability score. I'm trying to do this with your nonlinear constraints (as I'm going to estimate a more complicated model where I cannot just set it to a specific value). I'm using this code:
! variance of the indicator error terms
psblk06 (p1);
pshisp06 (p2);
! variance of latent variable
ptbl06 (p3);
ptlat06 (p4);
No, linear and non-linear constraints can be defined using the equal sign (=), the greater than sign (>), the less than sign (<), and all arithmetic operators and functions that are available in the DEFINE command with the exception of the absolute value function.
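A minimal sketch of a positivity constraint (variable and label names hypothetical):

```
MODEL:
  y ON x (b1);
MODEL CONSTRAINT:
  b1 > 0;
```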
When I tried to constrain the error variance to SD^2 x (1 - alpha) and the latent-to-indicator path to sqrt(1 - [SD^2 x (1 - alpha)]), I received an error message that the PSI matrix is not positive definite.
Subsequently, I tried Tihomir's trick as recommended, but I receive an error message informing me that "A parameter label or the constant 0 must appear on the left-hand side of a MODEL CONSTRAINT statement."
How should I go about resolving this issue? Thanks!
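One way to avoid the left-hand-side error is to put a parameter label (or the constant 0) on the left, as the message requires. A sketch under assumed values alpha = .80 and SD^2 = 1.25 (both values and all names are hypothetical placeholders):

```
MODEL:
  f BY y (lam);
  y (evar);     ! error variance of the single indicator
MODEL CONSTRAINT:
  evar = 1.25*(1 - 0.80);          ! SD^2 * (1 - alpha)
  lam = SQRT(1 - 1.25*(1 - 0.80)); ! sqrt(1 - [SD^2 * (1 - alpha)])
```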
I used the Model Constraint Command for significance testing. In the Savedata section, I saved the results of the analysis in a text file. In the Mplus output I can see results for the New/Additional Parameters. However when I try to find them in the text file, I don't see them nor are they listed in the Order of data at the end of the Mplus output.
Are the Model Constraint results saved with the other estimates in the text file. If so where would I find them? If not, how can I save them?
I'm still familiarizing myself with doing constraints in Mplus, so I apologize for the rudimentary question.
I'm running an LGM using four indicators modeled with latent intercept, linear, and quadratic parameters. Results suggested that the covariance between the intercept and linear slope terms was non-significant; thus, I initially attempted to constrain it to zero as below:
Running that was giving me troubles (I was getting results indicating that parameter SEs could not be estimated because the model may not be identified). I later tried running it as below, which did work as intended. My question: what is it that the syntax above is doing that is different from that shown below? Thank you greatly for any information!
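For reference, the two formulations of a zero covariance typically look like this (growth-factor names i and s are assumed):

```
! Version 1: fix the covariance directly in MODEL
MODEL:
  i WITH s@0;

! Version 2: label the covariance and constrain it
MODEL:
  i WITH s (c1);
MODEL CONSTRAINT:
  c1 = 0;
```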
I have tried both versions of fixing i WITH s and have no problem. I see that you get different outcomes. For further information, send your outputs and license number to firstname.lastname@example.org.
Hello, I am using the MODEL CONSTRAINT command to constrain the correlations between the slopes and levels of two latent variables in a latent growth model to be equal, rather than constraining the covariances to be equal. However, in the output, the correlations are not equal, but I did not get any error message either. Do you know where I went wrong? Thank you very much for your answer!
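One way to equate correlations rather than covariances is to label the variances and covariances and put the constant 0 on the left-hand side, since each MODEL CONSTRAINT statement must start with a label or 0. A sketch with two growth processes (all names hypothetical):

```
MODEL:
  i1 WITH i2 (ci);
  s1 WITH s2 (cs);
  i1 (vi1); i2 (vi2);
  s1 (vs1); s2 (vs2);
MODEL CONSTRAINT:
  ! corr(i1,i2) = corr(s1,s2)
  0 = ci/SQRT(vi1*vi2) - cs/SQRT(vs1*vs2);
```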
Mplus uses the Delta method which in the case of a linear function of random variables is the expression you give. See page 247 of Kendall and Stuart, The Advanced Theory of Statistics, Volume 1, Fourth Edition.
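In general form, for a differentiable function g of the parameter estimates, the Delta method gives

```latex
\operatorname{Var}\bigl(g(\hat{\theta})\bigr) \approx
  \nabla g(\theta)^{\top}\,\operatorname{Cov}(\hat{\theta})\,\nabla g(\theta),
```

which reduces to the exact expression a' Cov(theta-hat) a when g is the linear function g(theta) = a' theta.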
Is there any difference between alpha constraints--for example, (p1)--and numeric constraints--for example, (1)?
When would a user want to use alpha (letter) constraints as opposed to numbers? Is it just that the alpha constraints allow for more complex mathematical statements to be written in the MODEL CONSTRAINT section? Is it correct that other than this, there is no difference?
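Both forms assign a label, and repeating the same label constrains parameters to be equal. One practical difference, as I understand it: only letter labels can be referenced in MODEL CONSTRAINT, since a bare number there would be read as a constant. A sketch (all names hypothetical):

```
MODEL:
  y1 ON x (p1);
  y2 ON x (p1);   ! held equal via the letter label
  y3 ON x (1);
  y4 ON x (1);    ! held equal via the numeric label -- same equality effect
MODEL CONSTRAINT:
  p1 > 0;         ! letter labels can be used in further constraints
```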
jas229 posted on Tuesday, July 10, 2012 - 12:55 pm
Dear Prof. Muthen, We are trying to test a model with causal stress indicators to study the effect of a latent stress variable on two outcomes (negative and positive affect). To do this, we were hoping to follow Bollen and Davis's (2009) suggestion for testing underidentified models by using proportionality constraints. We need to include direct paths between the causal indicators and the outcome variables and then impose proportionality constraints on these paths, reflecting the assumption that the influence of the causal indicators is mediated through the causal latent variable. We tried imposing these constraints in MODEL CONSTRAINT by naming the paths to be constrained and then setting the proportions equal. However, we get this error:
NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NON-INVERTIBLE. CHECK YOUR STARTING VALUES.
Could you please guide us about the syntax we need to impose these constraints in order to run the model?
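One way such proportionality constraints can be written is to label the paths and introduce a NEW proportionality parameter. A sketch with two causal indicators and one outcome, under my reading of the Bollen and Davis (2009) setup (all names hypothetical):

```
MODEL:
  stress BY neg pos;       ! outcomes as effect indicators
  stress ON x1 (g1);
  stress ON x2 (g2);
  neg ON x1 (d1);          ! direct paths
  neg ON x2 (d2);
MODEL CONSTRAINT:
  NEW(k);
  d1 = k*g1;               ! direct effects proportional to the
  d2 = k*g2;               ! effects on the causal latent variable
```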
Recently I read the paper by van de Schoot et al. (2010) on testing inequality constrained hypotheses, and I am interested in the procedure implemented in Mplus for estimating models with inequality constrained parameters. I have already used it, but I would like to know more about how it works. The procedure is briefly described in van de Schoot et al., and I am wondering if you have a paper or technical report that describes it in more detail.
There are two ways of testing for inequality constraints in Mplus: (1) using bootstrapping and (2) using Bayes Factors.
There are two publications describing the procedure for (1):
- Van de Schoot, R., Hoijtink, H., & Deković, M. (2010). Testing inequality constrained hypotheses in SEM models. Structural Equation Modeling, 17, 443–463.
- Van de Schoot, R., & Strohmeier, D. (2011). Testing informative hypotheses in SEM increases power: An illustration contrasting classical hypothesis testing with a parametric bootstrap approach. International Journal of Behavioral Development, 35, 180–190.
Two papers describing the second method are in press:
- Van de Schoot, R., Hoijtink, H., Hallquist, M. N., & Boelen, P. A. (in press). Bayesian evaluation of inequality-constrained hypotheses in SEM models using Mplus. Structural Equation Modeling.
- Van de Schoot, R., Verhoeven, M., & Hoijtink, H. (in press). Bayesian evaluation of informative hypotheses in SEM using Mplus: A black bear story. European Journal of Developmental Psychology.
If you want a copy of these papers please send me an email and I will be happy to give you a copy of both papers.
Steve posted on Wednesday, June 26, 2013 - 9:37 am
I am a relatively new user to Mplus.
Basically, I need to know how to constrain the correlation between two latent variables to 1.
That is, I need to conduct a chi-square difference test using a CFA of two latent variables F1 and F2: Model 1 with the correlation between F1 and F2 freely estimated vs. Model 2 with the correlation between F1 and F2 set to 1 (a 1 df change).
Model 1 works fine:
Model:
  F1 by X1 X2 X3;
  F2 by X4 X5 X6;
For Model 2, I am using:
Model:
  F1 by X1 X2 X3;
  F2 by X4 X5 X6;
  F1 with F2@1;
However, the output for Model 2 does not provide model fit nor STDYX statistics.
Why do you need STDYX for Model 2 - aren't you just interested in the test result?
Note also that F1 WITH F2 is not a correlation but a covariance, given that the factor variances are not 1. To make it a correlation, you have to fix the metric by fixing the factor variances at 1 instead of the default of fixing the first loadings at 1.
Btw, you can get your test by running only Model 1 and using Model Test to see if the correlation = 1.
Steve posted on Wednesday, June 26, 2013 - 1:35 pm
Thank you for your quick response.
Yes, I don't need STDYX - I just noticed it was not there for comparison and to check that the correlation between F1 and F2 was in fact 1. If I understand correctly, I should use:
Model:
  F1 by X1* X2 X3;
  F2 by X4* X5 X6;
  F1@1; F2@1;
Then for Model 2 use:
Model:
  F1 by X1* X2 X3;
  F2 by X4* X5 X6;
  F1@1; F2@1;
  F1 with F2@1;
I also tried the MODEL TEST version as you suggested, but was unable to get it to work due to incorrect input. I consulted the manual, but cannot figure out how to do this.
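One way to write the MODEL TEST version is to label the correlation in the model where the factor metric is set by fixing the factor variances at 1, and then test it against 1 (a sketch, not a confirmed answer from the thread):

```
MODEL:
  F1 BY X1* X2 X3;
  F2 BY X4* X5 X6;
  F1@1; F2@1;
  F1 WITH F2 (r);
MODEL TEST:
  0 = r - 1;     ! Wald test of correlation = 1
```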
I am calculating the within and between variances for a group of variables using TWOLEVEL BASIC. For one of the variables the between variance is close to 0 so the model doesn't converge. I still want to know the within variance. Mplus suggests that I fix the variance and corresponding covariance to 0. How can I do this with only one variable?
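A sketch of fixing the between-level variance (and covariances) of one variable at 0, which requires switching from TWOLEVEL BASIC to TYPE = TWOLEVEL with an explicit model (variable names hypothetical):

```
ANALYSIS:
  TYPE = TWOLEVEL;
MODEL:
  %WITHIN%
  y1-y4;
  %BETWEEN%
  y1-y3;
  y4@0;             ! between-level variance of y4 fixed at 0
  y1-y3 WITH y4@0;  ! and its between-level covariances
```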
I am conducting a path analysis and one of my major study hypotheses is that predictor A accounts for a significant portion of the overlap in variance between variables X and Y. I found that A does predict both X and Y, but I want to show that including A in the path model reduces the covariance between X and Y. Is it possible to explicitly test this and if so, what would be the best way to go about doing this?
In regards to the question I posted above, others have suggested to me that in model 1, I look at the model estimated correlation between X and Y when A is not included as a predictor. Then, in Model 2, I could include A as a predictor and constrain X and Y's correlation to be the same as in Model 1. If this constraint does not significantly decrease model fit, then I could assume that A does account for a significant portion of the overlap in variance between X and Y. Would this test be possible (I couldn't figure out if it is even possible to constrain the correlation between two variables to a particular value)? Or do you recommend what you've suggested above?
In regards to the question I posted above, others have suggested to me that in model 1, I look at the model estimated correlation between X and Y when A is not included as a predictor. Then, in Model 2, I could include A as a predictor and constrain X and Y's correlation to be the same as in Model 1. If this constraint DOES significantly decrease model fit, then I could assume that A accounts for a significant portion of the overlap in variance between X and Y. Would this test be possible (I couldn't figure out if it is even possible to constrain the correlation between two variables to a particular value)? Or do you recommend what you've suggested above?
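It is possible to constrain the residual correlation to a particular value via MODEL CONSTRAINT. The sketch below fixes the residual correlation of X and Y at the Model 1 estimate; the value .45 is purely a placeholder for whatever Model 1 produced, and all names are hypothetical:

```
MODEL:
  x ON a;
  y ON a;
  x (vx);           ! residual variance of x
  y (vy);           ! residual variance of y
  x WITH y (cxy);
MODEL CONSTRAINT:
  0 = cxy - 0.45*SQRT(vx*vy);   ! residual correlation fixed at .45
```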
I am running an autoregressive cross-lagged model and would like to test for gender differences. I am not sure how to constrain parameters across gender groups, or how to test whether the constrained parameters differ significantly from the freely estimated ones.
If you want to test a coefficient for males versus females, you need to mention the parameter in the group-specific parts of the MODEL command. See the discussion of multiple group analysis in Chapter 14 of the user's guide.
Thank you for highlighting this, as I forgot to ask about it as well. I had a look at the chapter on multiple group analysis; from what I understand, the chi-square test is to be used to gauge significance, though I might be misinterpreting the information. I'm not sure where I can specify the tests and what code to use in the model.
In addition, I want to test to see whether the constrained model is significantly different from the model where male and female is not constrained.
When you label group-specific parameters, you can test whether they are different using the Wald test of MODEL TEST, or by chi-square difference testing of the model with the parameters held equal versus the model with the parameters free.
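A sketch of the multiple-group setup with group-specific labels and a Wald test (variable and label names hypothetical):

```
VARIABLE:
  GROUPING = gender (1 = male 2 = female);
MODEL:
  y2 ON y1;
MODEL male:
  y2 ON y1 (bm);
MODEL female:
  y2 ON y1 (bf);
MODEL TEST:
  0 = bm - bf;    ! Wald test of equal coefficients across groups
```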
Dear Mplus team, I am fitting a structural model using the BAYES estimator. Can you suggest a way to constrain the sum of two model parameters (specifically two intercepts) to be zero? Or, similarly, to assign a prior to the sum of two intercepts?