Message/Author 

Anonymous posted on Friday, October 29, 1999  11:44 am



There are two types of standardized coefficients printed in the Mplus output. How do they differ? 


The coefficients labeled Std are standardized using the variances of the continuous latent variables. The coefficients labeled StdYX are standardized using the variances of the continuous latent variables as well as the variances of the background and/or outcome variables. The Std and StdYX coefficients are the same for parameter estimates involving only latent variables, such as continuous latent variable variances, covariances, and regressions. They differ for parameter estimates involving both factors and observed variables, such as factor loadings. Only Std should be used for dummy background variables. 

Anonymous posted on Tuesday, November 30, 1999  7:57 pm



What is the interpretation of the estimates/coefficients for paths to a categorical outcome in Mplus? 


The Mplus estimates for paths from predictors to an observed categorical dependent variable are probit regression coefficients. Typically, only their signs and significance are noted. A positive sign means that the probability of the higher category of the categorical dependent variable (e.g., category 1 for a 0/1 variable) increases when the predictor value increases. A larger magnitude means that this probability increases faster. If a more detailed description of the influence is of interest, the probabilities can be plotted as a function of the predictors. See also Appendix 1 of the Mplus User's Guide. 

Anonymous posted on Thursday, December 02, 1999  6:09 am



I have a good model, with a p-value greater than .05, but some standardized coefficients (both Std and StdYX) have a value of 999.000. What does this mean? 


If you have standardized values of 999, you most likely have negative values of the variances/residual variances related to the parameters with standardized values of 999. If the negative values are not significant, you could set them to zero. If they are significant, you might want to consider rethinking your model. 

Anonymous posted on Wednesday, April 12, 2000  10:46 am



I ran an SEM with both latent variables and observed variables. One of the observed variables (which is also a dependent variable) is a categorical variable. Could anyone tell me how to interpret the SE (or StdYX) values? How do I know the significance level of the parameters? 


The values found in the column labelled SE are the standard errors of the parameter estimates. The ratio of the parameter estimate to its standard error can be used to determine the statistical significance of the parameter. The values in the column labelled StdYX are standardized parameter estimates. The parameter estimates are standardized using the variances of the continuous latent variables as well as the variances of the outcome and/or background variables. In the case where the outcome variables are categorical, the variance of the y* variable is used. 


I want to test by how much the fit of nested models differs. However, I am modeling skewed outcomes (symptom counts) and thus have to use the MLM estimator which, as I understand it, precludes the use of chi-square difference tests. Is there a test you can suggest other than merely inspecting the increase in p-values for worse-fitting models? 


I will shortly be putting some information on the website about how you can use MLM for nested models. Check back at Mplus Discussion early next week. 


Dear Professor Linda Muthen, I am still eagerly awaiting your suggestions as to how I could use MLM for nested models. 


We have the formulas ready and are doing some final checks before we post them on the website. 

Anonymous posted on Tuesday, August 15, 2000  8:58 am



We know the ratio of the parameter estimate to its standard error can be used to determine the significance of the parameter. This ratio is a t statistic and can be compared with +/- 1.96. But what is the df of this t statistic? 


This ratio is often referred to as a t statistic but can actually be viewed as a z value, for which degrees of freedom are not an issue. 
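As a sketch of what that means in practice, the two-sided p-value implied by an Est./S.E. ratio can be computed from the standard normal distribution using only the Python standard library (the input numbers below are illustrative, not from any real output):

```python
from math import erf, sqrt

def two_sided_p(est, se):
    """Two-sided p-value treating est/se as a standard normal z value."""
    z = est / se
    # Phi(z) via the error function; 2 * (1 - Phi(|z|)) is the two-sided p.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(round(two_sided_p(1.96, 1.0), 3))  # about .05 at the conventional cutoff
```

This is why +/- 1.96 is the usual comparison point: it is the z value at which the two-sided p equals .05.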

jan neeleman posted on Wednesday, November 15, 2000  12:17 am



Dear Professor Linda Muthen, I refer back to your message of 26-5-2000 about the formulae for testing nested models with MLM. Have these formulae been posted yet on the website? 


Yes, if you go to the home page of www.statmodel.com, you will see a reference to this. 

Anonymous posted on Friday, December 15, 2000  5:26 pm



Does Mplus provide "effect decompositions"? That is, in causal models using either observed or latent variables, can the total effects be decomposed into their direct and indirect effects (along with standard errors or significance tests for all three types of effects)? 


No, Mplus does not provide indirect and total effects including standard errors, just direct effects. 

Anonymous posted on Tuesday, May 08, 2001  9:55 am



Please clarify a few things for me regarding Std and StdYX values. I'm constructing a two-stage SEM where a latent (CFA) variable is used as both an outcome and a covariate. When I want to compare the effects of various x (causally prior) variables on a continuous latent variable (call it L), I use the Mplus Std values. These comparisons are valid for both continuous and categorical x's (such as ability score and gender). When I want to compare the effects of various x's and my latent variable L on some additional outcome measure (call it Y), I must use the Mplus StdYX values (in Mplus the Std values for my x variables in this portion of the model are the same as the unstandardized values). In this case, I can compare (if I wanted to) the magnitude of the CFA loadings with the effect of a continuous x on Y. However, the StdYX values are only valid for continuous x and L variables, and there is no way to compare the relative effect of dummy x variables (such as gender) with the effect of L on Y. Is this correct? 

bmuthen posted on Thursday, May 10, 2001  10:01 am



When x is a dummy variable such as gender, you are correct that you do not want to standardize its slope by its standard deviation (sd). So for L regressed on x, you use Std. For Y regressed on L and x, you use StdYX for L, but for x you need to do a simple hand calculation. In order to get the desired standardization with respect to Y but not with respect to x, you can either start with the Std value and divide by the estimated Y sd, or start with the StdYX value and destandardize with respect to x by dividing by the sample sd of x; the two ways give the same result. 
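A quick numeric check of the two routes just described; the slope and SDs below are made up for illustration, not taken from any real output:

```python
b = 0.50      # hypothetical raw slope of Y on the 0/1 dummy x
sd_y = 2.0    # hypothetical estimated SD of Y
sd_x = 0.48   # hypothetical sample SD of the dummy x

std = b                   # Std: with no latent variables in this path, Std equals b
stdyx = b * sd_x / sd_y   # StdYX as Mplus would print it

route1 = std / sd_y       # route 1: Std divided by the Y sd
route2 = stdyx / sd_x     # route 2: StdYX destandardized wrt x

print(route1, route2)     # both equal b / SD(Y) = 0.25
```

Either way, the result reads as the number of Y standard deviations of change for the move from x=0 to x=1.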

Anonymous posted on Thursday, May 10, 2001  11:38 am



Bengt, I have two follow-up questions to your response above. First, I should have mentioned that my additional outcome measure Y is an ordered categorical variable. I do not see where Mplus provides information on the SD of my outcome variable Y. (My model is also a multiple-group model, so I have allowed taus to vary across groups and fixed the scale factors for Y to 1 for all groups.) Second, regardless of whether my Y is categorical or continuous, if I follow the procedure you describe above, wouldn't I only be able to compare the effects of dummy x's on Y, but not the effects of continuous x's on Y with the dummy x's on Y, nor the effect of L on Y with the effects of dummy x's on Y? 

bmuthen posted on Friday, May 11, 2001  10:07 am



With a categorical dependent variable, the sd of y is not used but the sd of y* (the variable that has a linear relationship to the predictors). The y* variance is not printed, but can be deduced via the residual variance, which is printed if standardized output is requested. But given that the dependent variable is categorical, the second of the two alternatives that I mentioned would seem easiest: destandardizing the coefficient for the dummy x. The question of being able to compare a standardized value for a continuous x with a value for a dummy x is the same as in regular regression analysis. It is possible if you keep in mind that the value for the continuous x talks about the amount of sd change in y for an sd change in x, whereas the value for the dummy x talks about the amount of sd change in y for a change from male to female. 

Steve Lewis posted on Monday, August 20, 2001  6:43 am



My larger model has two endogenous factors, a manifest categorical indicator, and one exogenous factor with five indicators. Three of the exogenous factor's indicators have standardized loadings above 1.0. How should I interpret these, or should I set the lambdas to one? 


Are your residual variances for these three indicators negative? For categorical outcomes, the residual variances are computed as remainders and can be found with rsquare at the end of the results. 

Steve Lewis posted on Saturday, August 25, 2001  7:34 pm



No, all the residual variances are positive. 


To say anything further, I would need to see your input and data. You can send them to support@statmodel.com. 

duckhye posted on Thursday, May 30, 2002  10:47 am



Xie (1989, in the reference section) says that "usual" path coefficients are different from the LISCOMP 0.1 standardized solution, because the LISCOMP 0.1 standardized solution refers to the case of unit variances of latent variables CONDITIONAL on exogenous variables. "Usual" path coefficients assume unit variances of all variables UNCONDITIONAL on exogenous variables. When you said "For Y regressed on L and x, you use StdYX for L, but for x you need to do a simple hand calculation. In order to get the desired standardization wrt Y but not wrt x, you can either start with the Std value and divide by the estimated Y sd, or start with the StdYX value and destandardize wrt x by dividing by the sample sd of x; the two ways give the same result", are these the procedures for calculating the "usual" path coefficients mentioned by Xie? 

bmuthen posted on Sunday, June 02, 2002  12:13 pm



The answer to your last question is yes. Unlike LISCOMP, Mplus does not give standardization to variances conditional on x's. 

Anonymous posted on Thursday, July 18, 2002  10:07 am



Will Mplus ever be adding the capacity for effect decompositions, including standard errors and test statistics for indirect, direct, and total effects? If not, why not? 


Yes, this is planned for Version 3. 

bmuthen posted on Tuesday, October 01, 2002  9:15 am



The topic of standardized coefficients greater than 1 is well treated by Joreskog, see www.ssicentral.com/lisrel/column2.htm 

Anonymous posted on Sunday, January 26, 2003  1:23 pm



Hi all! Please help me with this: three continuous latent variables a, b, c predict variable x (which is continuous), which predicts Y (which is a dummy variable). a also predicts Y. So: 1) Should I present Std or StdYX (StdYX for the first part and Std for the second)? And what to do with the a-to-Y path? 2) What "sort" of coefficients are calculated for a, b, c to x; x to Y; and a to Y? 3) Can I somehow compare a to b (their impact on x)? 4) Can I compare the R-square of x (or Y) in models with all three predictors (a, b, c) and with only two (a, b)? Thank you very much! 

bmuthen posted on Monday, January 27, 2003  9:45 am



1) You should use StdYX whenever you want to standardize with respect to both latent and observed variables, which seems to be the case here. 2) With a continuous dependent variable you have regular linear regression coefficients, and with a categorical dependent variable you have probit coefficients. 3) Yes, that's the idea behind standardized coefficients. 4) Yes. 

Anonymous posted on Tuesday, January 28, 2003  2:50 am



Hi! Thanks so much! For questions 3 and 4, I just wanted to ask if there is a formal test to compare these parameters. 3. Can I apply the formula (b1 - b2)/sqrt(var(b1) + var(b2)) to compare regression weights in a single model? If so, is it the same with probit coefficients? 4. Is there a formula to compare R-squares (to say that one model explains significantly more than the other)? 5. Should I interpret the R-square of a categorical dependent variable the same way as if it were a "normal" variable (the percent of explained variance)? 6. One more silly question... If I have a single variable that is a predictor of Y, then R-square is the square of its regression weight, I think. But if I have two or more predictors and one of them has a regression weight of 0.83 (the other is 0.13, ns) and R-square is only 0.54? How come? Is this normal? Thanks! 

bmuthen posted on Tuesday, January 28, 2003  9:53 am



3) In principle, you can test equality of both standardized and unstandardized coefficients if the denominator of your test takes into account the variances and covariance of what's in the numerator (drawing on "the Delta method"). These follow general regression analysis rules and so are not Mplus-specific. Same for probit. 4) See a good regression analysis book. 5) Yes, but remember that it is the R-square for y*, not y (see Mplus User's Guide, Tech App 1). 6) See a good regression analysis book. 
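For point 3, a Wald-style z test along those lines can be sketched as follows. All numbers are hypothetical; in practice the variances and covariance of the two estimates would come from the estimated parameter covariance matrix:

```python
from math import erf, sqrt

# Hypothetical unstandardized estimates and their (co)variances.
b1, b2 = 0.83, 0.55
var_b1, var_b2 = 0.010, 0.012
cov_b12 = 0.004   # covariance between the two estimates

# Wald z for H0: b1 = b2; the denominator accounts for the covariance term.
z = (b1 - b2) / sqrt(var_b1 + var_b2 - 2 * cov_b12)
p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(round(z, 2), round(p, 3))
```

The covariance term is what the simple formula in the question above leaves out; omitting it is only safe when the two estimates are (approximately) uncorrelated.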

Anonymous posted on Tuesday, January 28, 2003  11:18 am



thank you very much! 

Anonymous posted on Tuesday, August 12, 2003  1:34 pm



This is a question about the model covariances. I want to report the residual correlations between my DVs (call them X and Y). Under MODEL RESULTS, X WITH Y, I interpret Est./S.E. to be the residual covariance between these variables. I interpret StdYX to be the residual correlation. Is this PRECISELY correct? If not, what should I report, and what do I call it? 


The value in the column labelled estimate is the residual covariance. The value in the column labelled est/se is the ratio of the parameter estimate to the standard error of the parameter estimate. This is a zvalue. StdYX is the residual covariance standardized using the variances of both y and x. It is not a correlation. It is a standardized covariance. 

Carlos posted on Friday, April 30, 2004  9:01 am



I am running an SEM model with independent factors that are highly collinear (they are conceptually different, though). Some of the standardized structural coefficients that I obtain are above 1. From Joreskog (June 22, 1999) I am assuming that this may happen, and I can report those results. Is that right? More troublesome, some of the structural coefficients are high and negative, even when factor correlations show a strong, positive association among these factors. I.e., the correlation of F1 with F3 is .84. The correlation of F2 with F3 is .54. The correlation between F1 and F2 is .87 (I know this is high, but it is the result of scale usage rather than their being measures of the same concept). When I look at the regression of F3 on F1, the standardized structural coefficient is 1.36! And the coefficient for F3 on F2 is -.84! I know the quality of my data is not very good, but there is nothing I can do to change it. 1) Should I report those results, or is there any fix I may try first? 2) Is the negative coefficient an end result of multicollinearity? I've seen this before in the context of simple linear models, but never found a good explanation (and there is no reason for the sign of the coefficient to be that way). 3) Also, I've seen this happen more often when using the WLSMV estimation method instead of ML. Any particular reason for this? Thanks! 

bmuthen posted on Friday, April 30, 2004  9:46 am



It does sound like you suffer from multicollinearity and that you need to address that before reporting your results, either by dropping one of the factors or by some other approach. 

Carlos posted on Friday, April 30, 2004  12:33 pm



I know, but from reading Joreskog's article my understanding was that the standardized coefficients, even if above 1, were OK. He mentions that this happens in the context of multicollinearity. Do you agree with that? I understand how multicollinearity may affect your standard errors or R-square, but I don't understand the impact on the sign and size of the coefficient. We are dealing with 'importance' measures, so our measures tend to be collinear even when they represent different things. I am not doing this for an academic journal, so I just need a sense of direction. I was told that other methods, such as neural networks, could handle this, but I would rather use a confirmatory method. Thanks again. Carlos 

bmuthen posted on Friday, April 30, 2004  7:08 pm



Any input from other readers? 

Teagle posted on Saturday, May 01, 2004  7:01 am



I, too, work with importance ratings in a marketing research context, and I also see negative signs here and there where I expect to see positive signs theoretically. The problem is the multicollinearity among the factors. (One way to determine this is to force all the covariances among the factors to zero. You will likely see the sign become positive. Of course, this is a highly unrealistic model.) BTW, I also see this in customer satisfaction models a lot. Another approach is to introduce second-order factors. If the first-order factors are as correlated as you say, then it is likely a higher-order construct is driving the responses to your survey items (or, as Bengt said, there is really only a single factor where you wish to see two or more factors). Of course, the problem with second-order factors is their interpretation to the reader/client. 

Carlos posted on Saturday, May 01, 2004  9:16 am



Thank you for your comments. I've already tried fixing the covariances to 0, but it did not work. I may check that again, though. I do use second-order factors once in a while, but in this case I have a model that worked pretty well among more sophisticated audiences and the general public in some countries, but failed in one region. I know this is driven more by scale issues (sometimes, to deal with importance measures, we use a variant of conjoint analysis to get more discrimination among importance measures, but not in this case) rather than cultural differences, but it is hard to prove with this data (I did not design the study). In any case, thanks again for your and Bengt's comments. 

TEagle posted on Sunday, May 02, 2004  5:13 am



Perhaps adding a factor across all items measured on the same scale, in addition to the original factors, will capture the scaling effect. William Dillon wrote an article in JMR about using such factors in brand equity research, where scaling issues like yours are a big deal. The factor attempts to parse out the scale effect, leaving your original factors to capture the unique measure of each item. Just a thought... 

Carlos posted on Sunday, May 02, 2004  10:26 pm



That seems like a good idea. I will try that. Thanks! 

Anonymous posted on Thursday, August 26, 2004  1:09 pm



A reviewer has asked me to provide the C.I.s for the StdYX that I am squaring to report a genetic correlation for twins, similar to the Prescott paper. Is there a quick way for me to calculate these values? Thanks in advance. Tom 


You would need to use the Delta method to do this. 

Anonymous posted on Tuesday, September 21, 2004  7:17 am



Could you point me to an example of using the Delta method to develop the C.I.s from the StdYX? Thanks for any direction you can offer. 


See the Bollen book. 

Anonymous posted on Tuesday, November 16, 2004  7:44 am



In reference to the message regarding the standardized values of 999.000: I have a cross-lagged panel design which gives "Std" values that are realistic; however, my StdYX values are 999.000. Is this simply an artifact of testing against the Poisson distribution? 


The 999 indicates that the StdYX value could not be computed. Perhaps you have negative residual variances. I would need to see the output to comment further. You can send it to support@statmodel.com. 

Anonymous posted on Monday, February 14, 2005  1:30 pm



When conducting a regular SEM (not a growth model), is it preferable to conduct the model using the variables in their raw metric or as transformed zscores (mean=0, sd=1)? 


I would not recommend standardizing the variables. I would use them in their raw metric. 

Anonymous posted on Wednesday, April 20, 2005  11:25 am



Hi. I used Mplus to run some logistic regressions to use in mediation analyses. When calculating mediation with logistic regression coefficients, I want to use the standardized regression coefficients. I noticed that SAS and Mplus do not standardize these coefficients in the same way. Is Mplus more accurate? If so, why? 

bmuthen posted on Wednesday, April 20, 2005  11:45 am



Mplus uses the logistic density variance of pi-squared/3 as the residual variance in its standardization. This is in line with conceptualizing the binary outcome as having an underlying continuous response variable with a logistic density for the residual (the total response variable variance is the explained part plus the residual part). I don't know what SAS does. 
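A sketch of that standardization for a single-predictor logit setting, with hypothetical numbers: the y* variance is the explained part plus the logistic residual variance pi-squared/3.

```python
from math import pi, sqrt

b = 1.2        # hypothetical raw logit slope
var_x = 0.8    # hypothetical variance of the predictor

var_ystar = b**2 * var_x + pi**2 / 3       # explained part + logistic residual
stdyx = b * sqrt(var_x) / sqrt(var_ystar)  # standardized wrt x and y*
r2 = b**2 * var_x / var_ystar              # R-square for y*, not the observed y
print(round(stdyx, 3), round(r2, 3))       # 0.509 0.259
```

With a probit link the same arithmetic would use a residual variance of 1 instead of pi-squared/3.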

Anonymous posted on Wednesday, June 08, 2005  8:11 am



Hello, I am confused concerning the use of Std and StdYX. I am estimating a model with both categorical and continuous dependent and independent variables. I estimate a path model, so I have no latent variables. When I want to use the standardized solution, should I use Std or StdYX? Suppose x1 = gender (categorical), x2 = achievement (continuous), x3 = hours of math (categorical), x4 = attitude (continuous). Is it correct when I say: 1) x2 ON x4: I use Std. 2) x3 ON x4: I use Std. 3) x2 ON x1 x4 x3: I use StdYX for x4, and for x1 and x3 I divide StdYX by SD(x1) or SD(x3) (or divide Std by SD(x2)), because I don't want to standardize the slope of the dummy variable gender by its SD (see previously on the discussion list)? However, I use theta parameterization, and no variances or residual variances for the categorical dependent variables are estimated. How can I make this calculation? 4) x3 ON x1 x2 x4: I use StdYX for x4, but what to do with x1 and x2? Since no variances can be estimated for categorical variables under theta, I cannot compute the standardized path coefficients. Is that correct? Can you suggest a solution? Thank you! 

bmuthen posted on Wednesday, June 08, 2005  6:19 pm



Three facts help you answer your own questions: 1. These decisions are not influenced by the dependent variable scale, nor by Delta vs. Theta parameterization, since with a categorical dependent variable the standardization is done with respect to the SD of y*, the underlying continuous latent response variable, so you can act as if the dependent variable is continuous. 2. You don't want to standardize with respect to a binary independent variable, because you are not interested in the effect of a 1 SD change in such a variable but in the change from 0 to 1. 3. Mplus does not print out what you want according to 2. If you use StdYX, you have to unstandardize with respect to the binary independent variable (x), that is, divide the StdYX value by the x SD. 

Anonymous posted on Thursday, June 09, 2005  2:24 am



I am still confused. Could you please clarify what you mean? As I understand it, for every binary (or categorical) independent variable I have to divide StdYX by the x SD. But what do you mean by point 2? Do I not have to standardize at all, and just use the raw coefficient for binary independent variables? I do not understand how I can figure out the x SD, since in my output I only get the residual variances of my continuous variables. What do I have to do to get the variances of the binary variables? Thank you 

bmuthen posted on Thursday, June 09, 2005  6:22 pm



Let me first restate how a standardized coefficient is computed from the raw coefficient b: StdYX(b) = b*SD(X)/SD(Y). Regarding your first question, no, you DO want to standardize, but not wrt (with respect to) x. In other words, you standardize wrt y (so divide by SD(Y)) but not wrt x (so don't multiply by SD(X)). Standardizing only wrt y gives a coefficient that tells you how many y SD units of change you get for a 1-unit change in x (a change from x=0 to x=1, say). Regarding your second question, you get SD(X) by doing a type=basic run with x included in the Usev list. 


Hello, I have problems interpreting Std and StdYX. I am estimating a path model, so I don't have any latent variables. I have continuous as well as binary dependent variables. In my results, Std is always equal to b. The manual says that the coefficients in Std are standardized using the variances of the continuous latent variables. However, I don't have latent variables. Or does Mplus assume that I estimate latent variables with variances of 1, which would explain why b and Std are always exactly the same? I understand (from the previous discussion) that for binary independent variables you have to divide StdYX by SD(x). Does this also apply when the dependent variable is binary? I don't understand what is meant in the manual by 'for StdYX the coefficients are standardized using the variances of the continuous latent variables and the variances of the background and/or outcome variables'. As I understand it, StdYX = b*SD(x)/SD(y), so in this formula what are the continuous latent variables, and what are the variances of the background and/or outcome variables? As you can see, I am very confused! Could you please clarify this? Thank you! 


In addition to my previous question: I can perfectly calculate StdYX when the variables involved are continuous (with the formula b*SD(x)/SD(y)). But when one or both variables are binary, the calculations with the formula no longer correspond with the Mplus output. So what is happening? How does Mplus calculate these standardized coefficients when binary, or more generally categorical, variables are involved? Thanks 

BMuthen posted on Tuesday, June 14, 2005  9:18 am



If there are no latent variables, then STD is the same as the raw coefficient. The answer to your second paragraph in your first message is no. With binary dependent variables, the standardization uses the estimated variance of y*. I am not sure if we print that anywhere that you could do it by hand. 


Does anybody have any idea how to print the estimated variance of y*? I need this to correct StdYX so I can calculate the correct standardized coefficient for binary independent variables on binary dependent variables. Thank you!! 


I don't think this can be done. Come back after July 1 and I will research it. You don't really need to know the y* variance because the standardization choice is only dependent on the independent variable being binary or not. 

Anonymous posted on Wednesday, June 22, 2005  2:26 pm



I have an observed, binary independent variable and an observed, continuous dependent variable. If I use regression to examine the relationship, the standardized beta is the same as the StdYX. So I am unclear why we correct the StdYX value (by dividing by SD(x)) here, but do not correct it in standard regression? Thanks. 

BMuthen posted on Thursday, June 23, 2005  3:48 am



With binary observed independent variables, the StdYX needs to be adjusted to reflect that you are interested in the change in the dependent variable when the independent variable goes from 0 to 1, rather than when it increases one standard deviation, which is not of interest with a binary independent variable. This is the same as in ordinary regression. 

Anonymous posted on Monday, June 27, 2005  8:30 am



Hello! I've got yet another question on the standardization of path coefficients involving latent variables. This refers to paths leading from a latent variable (f) to a range of continuous and categorical observed variables (y1-y5). I had assumed that the coefficients given in the "Std" column were obtained by multiplying the "Estimates" column by the variance of f (f being the predictor). In the model I have run, however, this assumption doesn't square with the variance of f provided by the output. Rather, the multiplier I would need to turn my "Estimates" into "Std" values is equal to the standardized loading of one of the observed variables (x) that I used to measure the factor (the one whose unstandardized loading is set to 1). [Approximate example: Y1 ON F: EST=0.3, STD=0.2; F BY X: EST=1.0, STD=0.67; Variance(F): EST=0.5, STD=1.0] I'd be grateful if you could help me make sense of this. Thanks. 


When f is the predictor, the STD column is the estimate multiplied by the standard deviation of f. You can find the variance of f in TECH4. 

Anonymous posted on Friday, September 02, 2005  12:08 pm



If I have an SEM model where the outcome is a latent variable and the predictors are observed exogenous variables, can the StdYX beta coefficients be interpreted the same as in a regular OLS regression? In other words, if I square the StdYX beta coefficients, can I interpret them in the same way as in the OLS model? 


Factors in SEM are continuous variables, therefore the coefficients are linear regression coefficients. So yes. Regarding your second question, the answer would be yes I guess, but I'm not sure what you are getting at by squaring them. 


I have three continuous latent variables (X, Y, Z) and am interested in finding the partial correlation between X and Y while controlling for Z. For this I used the following model syntax: MODEL: z BY var1-var6; y BY var7-var11; x BY var12-var16; x y ON z; As I have understood it, the correlation between (the error terms of) X and Y in this model is r = cov(x,y) / (sd(x)*sd(y)), and this correlation would be the partial correlation between X and Y, controlling for Z. Unfortunately, the correlation displayed in the Mplus output (StdYX) is much lower than the partial correlation I should obtain when using the formula above (I get "the right value" when using another SEM program). Is it wrong to interpret the StdYX for "X WITH Y" in the way I do? Thank you! 

BMuthen posted on Saturday, November 12, 2005  5:49 pm



The StdYX value for the residual covariance between the residuals for x and y is not the correlation between these residuals but the standardized residual covariance. The correlation between the residuals is obtained as the residual covariance divided by the product of the standard deviations of the two residuals. 
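A minimal sketch of that hand calculation, with hypothetical residual variances and a hypothetical residual covariance:

```python
from math import sqrt

res_cov = 0.30     # hypothetical residual covariance (the WITH estimate)
res_var_x = 0.90   # hypothetical residual variance of x
res_var_y = 0.40   # hypothetical residual variance of y

# Residual correlation: residual covariance over the product of residual SDs.
res_corr = res_cov / (sqrt(res_var_x) * sqrt(res_var_y))
print(round(res_corr, 3))  # 0.5
```

The StdYX value, by contrast, divides the same covariance by the total SDs of the two variables, which is why it comes out lower than the residual correlation.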

rpaxton posted on Saturday, December 10, 2005  11:19 pm



Greetings once again. Thanks for the amount of support that you give for Mplus; you guys have been really helpful. On another note, I have been going through some articles in the physical activity field based on SEM approaches. When examining an SEM with both latent and manifest variables, which is the best symbol to use, betas or gammas? I have noticed that some researchers use these symbols interchangeably. Could you provide a little insight on their assumptions? Before I read those articles, I assumed that all paths were standardized betas. 


The choice of beta or gamma to refer to unstandardized regression coefficients is arbitrary. Beta usually refers to regressions among latent variables. And gamma is usually used for regressions where the covariates are observed. 


Is it possible to constrain standardized coefficients? I have successfully constrained unstandardized coefficients in a model in order to test whether constraining coefficients to be equal results in decreased fit. However, it occurs to me that it is really the difference between *standardized* coefficients that would matter, and I can't seem to work out how to constrain them. For example, if I want to say that A definitely relates more strongly to B than C does to B, the standardized coefficients from A to B and from C to B are the difference of concern. Is that correct? Thanks, Tom 

bmuthen posted on Friday, February 10, 2006  7:01 am



This is possible in Mplus Verson 4, which will be released at the end of the month. 

anonymous posted on Saturday, February 18, 2006  11:35 pm



I went through the discussions on StdYX and looked up the Mplus manual, and have not been able to figure out how to interpret the intercept estimates in the StdYX column. I ran a simple regression analysis using Ex3.1.dat. I regressed y1 on x1 and got the following results:

Est. S.E. Est./S.E. Std StdYX
Y1 ON X1: 0.986 0.050 19.891 0.986 0.665
Intercepts Y1: 0.484 0.052 9.327 0.484 0.312

What does 0.312, the estimate of the intercept of Y1 in the StdYX column, denote? At first I thought the intercept would be equal to 0, as expected from a standardized regression equation (the Y1 ON X1 value for StdYX, 0.665, is a correlation estimate). I checked, and it's not the mean of Y1 either (that's .4848). How would one interpret the estimate? Also, how would one interpret the intercepts of the indicators in the StdYX column of a CFA? My understanding is that they are from the NU vector, correct? At first I thought they would be zeros too. Thank you for your help. 


The value 0.312 is 0.484 divided by the model estimated standard deviation of y1. Standardizations are done such that the variance of y1 is one but not such that the mean of y1 is zero. 
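As a quick sanity check on this arithmetic (the two numbers come from the output quoted above; the implied standard deviation is just their ratio):

```python
est = 0.484     # raw intercept estimate for Y1
stdyx = 0.312   # StdYX intercept: est divided by the model-estimated SD of y1

# back out the model-estimated SD of y1 implied by the two reported values
sd_y1 = est / stdyx
print(round(sd_y1, 3))  # about 1.551
```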


Hello, I am working on confidence intervals at the moment and I would be interested in CIs for the standardized coefficients. Is it possible to obtain these directly from Mplus and if yes: how? Thank you!


For regression coefficients, you can use MODEL INDIRECT and CINTERVAL together and I think you will get confidence intervals for the standardized coefficients. 


Hello Linda, I did that but unfortunately I only got standardized coefficients for the indirect effects, not for factor loadings and direct effects. 


I need to see more details about your setup. Please send your input, data, output, and license number to support@statmodel.com. 


Hello Bengt, thank you for this offer, but I just solved the problem with the help of my statistics professors: you can easily calculate the standardized confidence intervals by dividing the upper and lower bounds by the same factor that you get when you divide the unstandardized coefficient by the standardized coefficient.
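A minimal sketch of that rescaling (the function name and numbers are illustrative, and this shortcut ignores any sampling variability in the standard deviations themselves):

```python
def standardized_ci(lower, upper, unstd_est, std_est):
    """Rescale an unstandardized CI into the standardized metric by the
    factor that maps the unstandardized estimate to the standardized one."""
    factor = unstd_est / std_est
    return lower / factor, upper / factor

# e.g. an unstandardized estimate of 1.0 with CI (0.8, 1.2) and a
# standardized estimate of 0.5 gives a standardized CI of (0.4, 0.6)
```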

sivani sah posted on Sunday, April 02, 2006  2:09 pm



Dear Dr. Muthen, We got a standardized coefficient of 1.26 in SEM. Is this estimate wrong, or can a Std coefficient be larger than 1? If it can be, what are the reasons for getting coefficients larger than 1? If a standardized coefficient greater than 1 can be reported, is there any literature we can refer to? I will appreciate your help in this regard. Thank you, Sivani


Standardized coefficients can be greater than one in some cases. See the following link where this is discussed by Karl Joreskog: www.ssicentral.com/lisrel/column2.htm 

sivani sah posted on Monday, April 03, 2006  6:15 am



Thanks. However, this site is not working. 


Try www.ssicentral.com and go to Karl's corner. Perhaps they have changed the link. 


Hello Drs. Muthen, I am working on a research project where our team is trying to assess the nature of the association between a set of multiple-choice items and open-ended items within a particular subject area, like math or reading. We created parcels of the MC and OE scores, specified one latent variable, and specified the error variances for the two observed variables. We also fixed the MC path to 1. This was done to satisfy the t-rule. Thus, one path between the OE and the latent variable and the variance of the latent were estimated, leaving one degree of freedom. One question of interest is whether the standardized paths of the OE and the MC to the latent variable are significantly different. I was tempted to use the CINTERVAL output to determine if the estimates overlap, but was hesitant to draw any conclusions based on this. I realize that this is an improper use of CIs, and I didn't know how this would transfer to the standardized estimates. Is there a way to test the differences of the latent correlations using the output at hand? Fisher's Z can be used with Pearson correlations, but I was reasonably sure this did not apply here. Partial syntax and output is below:

TYPE IS FULLCOV;
MODEL: read BY RMC ROE;
RMC@2.78439;
ROE@3.96553;

Chi-Square Value .010, DF 1, P-Value .9190

READ BY   StdYX
RMC       0.849
ROE       0.728

Thank you in advance for your consideration. Jon


To test equality of standardized coefficients, see the Version 4 User's Guide, Example 5.20.

Monti Vitti posted on Thursday, August 10, 2006  11:05 am



Dear Linda & Bengt, I have a question regarding a cross-lagged effects model with latent variables. Joreskog (1999) and Finkel (1995) could not help. The standardized coefficient for one stability estimate is way above 1 (the residual variance is also negative, although not significant). This is from prior X to posterior X. I've checked for multicollinearity with SPSS (between the items that compose the factors, and there isn't any). Q1) Could this anomaly be an effect of the autoregressive element, and if yes, is this acceptable? Q2) Could it be a byproduct of my running the model on a homogeneous subsample? I don't get the problem in the full sample. Q3) Setting the error variance to 0 produces normal estimates, yet the model falls apart. The best solution I have found is to delete the correlation between prior X and prior Y. But what does this mean? Many thanks


It sounds like the model is misspecified; there might be restrictions imposed that are not suitable, for instance that the time 3 outcome is influenced by the time 2 outcome but not the time 1 outcome. The subsample may be systematically different from the full sample in this regard.

Laney Sims posted on Monday, October 09, 2006  8:05 am



I am confused about how the "estimates" are computed. I read that these are the unstandardized regression coefficients, in the sense that they are not altered by multiplying by SD(X)/SD(Y), or by 1/SD(Y). However, in general linear regression, "standardized" means that the coefficients are constrained to be between -1 and 1 (computed by subtracting the mean from each observation and then dividing by the sample SD). Are your unstandardized betas still standardized in this respect, or are they the unaltered regression coefficients for the raw data? Thank you, Laney Sims


No, the unstandardized betas are not standardized. They are unaltered regression coefficients.

Laney Sims posted on Monday, October 09, 2006  8:39 am



Thank you for the clarification. I have one more quick question: if the unstandardized coefficient is significant (based on the t-value), is it safe to assume the associated standardized coefficients are also significant? Thanks


The test is not for the standardized coefficients and should not be used for them. It is for the unstandardized coefficients. 

Ramin Azad posted on Sunday, October 22, 2006  6:53 am



Hello what is the difference between regression and SEM? Thank you 


SEM usually refers to a set of regressions among latent variables. 


Hello Drs. Muthen, I am estimating several path models (no latent variables) where:

TYPE IS COMPLEX MISSING H1;
ESTIMATOR IS MLR;

I am interested in calculating regression coefficients that are standardized in various ways. Three scenarios:

1) I have a binary IV & continuous DV. I want to express the standard deviation (SD) change in the DV that occurs when my IV changes from a 0 to a 1. How do I derive this standardized coefficient from the provided StdYX value?

2) I have an ordinal IV (e.g., with 4 potential values) & continuous DV. Is it customary to report the SD change in the DV that occurs with a one-unit change in my IV, or the change that occurs in the DV with an SD change in my IV (even though my IV is ordinal)? If the latter is the correct approach, do I present the StdYX value as is?

3) Same scenarios as the above two, except that now my DV is a count variable. I guess the main difference in the question would be: does one usually standardize the beta using the variance of the count variable (or would it make sense to present the unstandardized beta instead, so that the interpretation would be the unit change in the count variable that occurs with a one-unit change in the IV)? Finally, would the answer to this question change if the count DV was a scale of three summed count variables?

Thanks so much for your help, Jim


1. Multiply the StdYX coefficient by the standard deviation of x. 2. An ordinal covariate is treated as though it is continuous in regression. The regression coefficient is the change in y for a one-unit change in x. You would need to create a set of dummy variables if this is not what you want. 3. Count variables don't have variances so standardized coefficients are not available.


Hi Linda, Thank you for your replies. Follow-up questions on points 1 & 2: 1. How do I obtain the standard deviation of x in Mplus (given that I have complex survey data that require weight, strata, and cluster variables; TYPE IS COMPLEX MISSING H1)? 2. For the models including ordinal IVs, assume that I would like to present the change in Y (in standard deviations) for a one-unit change in X. Would I do the same as in point 1 (multiply the StdYX by the SD of X) to obtain this coefficient? My understanding is that the StdYX coefficient is the change (in standard deviations) in Y for a 1 SD change in X, and the unstandardized coefficient is the unit change in Y for a one-unit change in X. Neither of these coefficients is appropriate if I would like to present the change in Y (in standard deviations) for a one-unit change in X. Is my understanding not correct? Thanks, Jim


Hi Linda, Sorry to throw out another post before you had a chance to answer my first one, but I need further clarification on a point you made on November 11. My question was: 1) I have a binary IV & continuous DV. I want to express the standard deviation (SD) change in the DV that occurs when my IV changes from a 0 to a 1. How do I derive this standardized coefficient from the provided StdYX value? Your answer was: 1. Multiply the StdYX coefficient by the standard deviation of x. Bengt said on June 8, 2005: If you use StdYX, you have to unstandardize with respect to the binary independent variable (x), that is, divide the StdYX value by the x SD. My X variable (predictor) is binary and my Y variable (outcome) is continuous. Wouldn't I need to divide StdYX by the standard deviation of x to derive the value that reflects change in Y (in standard deviations) for a one-unit change in X (e.g., male to female, for the binary variable "sex")? Jim


I apologize. My mind thought divide and my fingers typed multiply. To clarify, the StdYX standardization of a regression coefficient is: StdYX = Beta * sd(x) / sd(y), so you must divide by sd(x) to take this out of the metric of standard deviation units of x. Then it becomes Beta / sd(y). This represents the change in standard deviation units of y for a one-unit change in x. You get the standard deviation of x by running TYPE = COMPLEX MISSING BASIC; with no MODEL command.
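Written out as code, the two steps look like this (a sketch only; sd_x and sd_y would come from the BASIC/SAMPSTAT output):

```python
def stdyx(beta, sd_x, sd_y):
    # StdYX standardization of a regression coefficient
    return beta * sd_x / sd_y

def per_unit_of_x(stdyx_coef, sd_x):
    # dividing by sd(x) undoes the x part of the standardization, leaving
    # beta / sd(y): the change in SD units of y for a one-unit change in x
    # (e.g. a binary 0/1 covariate going from 0 to 1)
    return stdyx_coef / sd_x
```

For example, with beta = 2, sd(x) = 0.5 and sd(y) = 4, StdYX is 0.25, and dividing by sd(x) recovers beta/sd(y) = 0.5.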


Is Mplus now capable of computing effect decompositions, including standard errors and test statistics for total effects? I have Version 3.13. Thank you in advance.


Yes. See the MODEL INDIRECT command. It is in Version 3 so it has been around for quite some time.

Daniel Shen posted on Friday, March 23, 2007  10:11 am



If I fix all factor variances (rather than indicator 1 for each factor) to 1 to scale the measurements, how do I correctly interpret these outputs: 1) Factor loadings (BY syntax): column 1 (used to be the unstandardized estimate) and columns 4 and 5 (used to be standardized estimates); 2) Factor covariances (WITH syntax): column 1 (used to be the unstandardized estimate) and columns 4 and 5 (used to be standardized estimates); 3) Structural coefficients (ON syntax): column 1 (used to be the unstandardized estimate) and columns 4 and 5 (used to be standardized estimates). Thanks, Daniel


If you set the metric of the factors by fixing the factor variance to one and allowing all factor loadings to be free, then the factor covariance becomes a correlation. The raw coefficient in column one will be the same as the Std coefficients which standardize by factor variances which are one. StdYX will be the raw coefficient standardized using the observed variable variances. 


Thanks, Linda. One follow-up question: What if the raw coefficient (column 1) does not equal the Std coefficient (column 4)?


Then you should send your input, data, output, and license number to support@statmodel.com so I can see exactly what you are doing. If all factor variances are one, then they should be equal, so there must be something else going on that you don't see.


Hi Linda and Bengt, I would like to obtain the standardized residual covariance matrix for a CFA with ordinal data using WLSMV. Mplus 4.2 gives me the residual correlation matrix when I specify [output: residual]. Is there a way to also obtain the standardized covariance matrix or, otherwise, to calculate this by hand? For clarification, here is an example syntax of the type of model I'm referring to:

ANALYSIS: ESTIMATOR = WLSMV;
VARIABLE: NAMES ARE y1-y7;
USEVARIABLES ARE y1-y7;
CATEGORICAL ARE y1-y7;
MODEL: f BY y1 y2-y7;
OUTPUT: STANDARDIZED RESIDUAL;

Thanks! Rick.


I'm having a problem understanding what you mean by a standardized covariance matrix in reference to categorical outcomes. Can you help me?


Hi Linda, Thanks for asking. Joreskog described a method of obtaining residuals that are standardized with respect to their asymptotic standard errors. This was described in Joreskog (2002) for ordinal data: http://www.philscience.com/hangul/statistics/ssi/lisrel/techdocs/ordinal.pdf Standardized residuals are handy for identifying residuals that are larger than what one would expect from sampling error. Can these standardized residuals be obtained in Mplus?


The current version of Mplus does not give standardized residuals. This will be added in a future version. Modification indices which are available are typically better at pinpointing the source of misfit. 

Alex posted on Monday, June 25, 2007  8:22 am



Greetings, I'm doing a CFA with categorical indicators, another CFA with continuous indicators, and a final SEM testing relations between both forms of latent variables. Like many others in this discussion, I remain confused as to the use of Std and StdYX coefficients. My personal questions are: Std or StdYX in the following? (1) CFA with categorical indicators: factor loadings and factor correlations. (2) CFA with continuous indicators: factor loadings and factor correlations (here I believe we use StdYX all over). (3) Latent X on latent Y (where latent X relies on categorical indicators and latent Y on continuous indicators). (4) Latent X1 on latent X2. (5) Latent Y1 on latent Y2. (6) Latent Y on latent X. (7) The effect of a categorical covariate on latent X or Y (here, we use Std I think, or the estimate directly). (8) The effect of a continuous covariate on latent X or Y (here, we use Std I think, or the estimate directly). However, as I realize that you have answered related questions many times in the past, I would suggest (that would certainly help me a lot) that you (if you have time) develop a summary table indicating which one to use in which case, to put either on the website or/and in the next version of the manual. Thank you very much.


You can find the definition of Std and StdYX in the user's guide in Chapter 17. In 99 percent of the cases, StdYX is most appropriate. With binary covariates, StdYX should be adjusted so that the standardization uses only the standard deviation of y, not the standard deviation of x. Std would be used if for some reason, you want to see standardization using only the standard deviations of the latent variables. 

Alex posted on Monday, June 25, 2007  9:27 am



Thank you very much Linda, This is what I was led to believe according to the previous discussions under this post. However, I believe your current statement represents a very nice and simple summary of the discussions. It helps a lot. 


Dear Drs. Muthen, Can I get standard errors of standardized estimates using Mplus? If so, how? My advisor is a big fan of RAMONA because he believes only RAMONA can provide standard errors of standardized estimates. But I prefer Mplus to RAMONA, so I should be able to convince my advisor. Thank you!


You can obtain standard errors for standardized estimates by using the MODEL CONSTRAINT command as shown in Example 5.20. Version 5 of Mplus will have standard errors for standardized parameter estimates. 


Thank you so much! I am always amazed by your quick and kind answers.


Hello Drs. Muthen, Can you shed some light on this? I ran an SEM model with 4 latent variables predicting a manifest variable. 3 of the 4 IVs are nonsignificant, but the StdYX estimate for one of the nonsignificant variables is larger than that of the significant variables. How could this come about? Thank you for any insight you might have.


The size of a coefficient is not necessarily related to significance for raw or standardized coefficients. The size of the raw coefficient is related to the scale on which it is measured. Raw coefficients are standardized using standard deviations that may vary in size. 

Laura Pierce posted on Wednesday, August 29, 2007  10:46 am



Thank you! 

Monti Vitti posted on Monday, September 03, 2007  7:38 am



Dear Drs. Muthen, I am running a cross-lagged effects model using surface indicators (2 waves). When reporting coefficients of interest (cross-effects) I use unstandardized estimates, for reasons of comparison across subsamples. I use the typical template of "a 1 point change in X predicts an X.X point change on the Y scale". Yet, I have been asked to provide a more "intuitive metric". What do you think this means, and how do I do it? Many thanks, Mon.


You could use standardized coefficients which talk about standard deviation units. 

Anna Siser posted on Friday, February 01, 2008  11:05 am



Hi, In my nonrecursive model the beta/standardized coefficient is greater than one. The variable in question is unobserved and is indicated by two observed variables. I have included a description of the path diagram.

Measurement Model:
Belief1 <- Factual Beliefs (1)
Belief2 <- Factual Beliefs
Belief3 <- Factual Beliefs
Belief4 <- Factual Beliefs
Belief5 <- Factual Beliefs
Belief(n) <- errorbelief(n) (1)
Cuts <- Policy Preferences (1)
Limits <- Policy Preferences
Cuts <- errorcut (1)
Limits <- errorlimit (1)
Govt1 <- AntiGovt (1)
Govt2 <- AntiGovt
Govt(n) <- errorgovt(n) (1)

Structural Model:
Policy Preferences <- Factual Beliefs
Factual Beliefs <- Policy Preferences (1)
Factual Beliefs <- know
Policy Preferences <- AntiGovt
Factual Beliefs <- ideology
Policy Preferences <- ideology
Factual Beliefs <- errorfactbelief
Policy Preferences <- errorpolpref
errorfactbelief <-> errorpolpref

Variables: Belief(n), Cuts, and Limits: there are five categories. Govt(n): there are four categories. Know: the variable is dichotomous. Ideology: a three-category variable. Other information: the variables have a (roughly) normal distribution. The size of the data set is n = 248.


Standardized regression coefficients can be greater than one. There is a nice discussion of this by Karl Joreskog on the Lisrel website. 

Anna Siser posted on Saturday, February 02, 2008  12:46 am



Thanks a lot! I'll check the Lisrel website immediately. Anna


I have noticed that in version 5 the test statistic associated w/ a given standardized effect is *different* than the test statistic associated w/ the same unstandardized effect (this was not true in earlier versions of Mplus). I did not think that this was possible (e.g., in simple OLS case, we don't get separate tests for b vs. B associated w/ same covariate). 1. Can you explain the difference in how these test statistics are computed? 2. Can you explain why this feature was added in v5 (what Q does it answer that was previously unanswerable)? 3. If I am reporting standardized effects in text, would you recommend that I report test statistic assoc w/ unstandardized or standardized effect? Thank you. mikew@unc.edu 


Test statistics were not given for standardized coefficients in earlier versions of Mplus. The test statistic was for the raw coefficient. Statistical significance of raw versus standardized coefficients is not necessarily the same. For a regression coefficient, a raw regression coefficient is associated with a one-unit change in x while a standardized regression coefficient is associated with a one standard deviation change in x. You should report the test statistic associated with the coefficient you are reporting.


Typically the two types of test statistics give the same significant or insignificant result. If you observe large differences in p values, we'd be interested in seeing them  please send to support@statmodel.com. 


Linda and Bengt, Just an observation/question. Running an SEM model and requesting confidence intervals on standardized indirect effects seems to provide only unstandardized indirect effect confidence intervals in the standardized output. I did a search of the posts and didn't see anyone experiencing this; I wonder if I missed something? Here is my runstream:

data: file is modeldataless1.txt;
variable: names are SurveyID city PACIYM PACIM SSRWE pbrwe PSITOT FMCOH FMSUPRT SSRWCY FMCOHY;
usevariables are PACIYM PACIM SSRWE pbrwe PSITOT FMCOH FMSUPRT SSRWCY FMCOHY;
missing are .;
model:
ssrwe on fmsuprt pacim;
pbrwe on psitot;
ssrwcy on fmcohy;
pacim on fmcoh psitot;
paciym on psitot fmcohy;
fmcoh with psitot fmsuprt fmcohy;
psitot with fmsuprt fmcohy;
fmsuprt with fmcohy;
model indirect:
ssrwe ind pacim fmcoh;
ssrwe ind pacim psitot;
output: standardized cinterval;

Thanks, Roger


I can't answer this without seeing the full output. Please send it and your license number to support@statmodel.com. The STANDARDIZED and CINTERVAL options should apply to regular and MODEL INDIRECT results.


Hi Linda and Bengt, A colleague is using Mplus to regress an ordered categorical variable with three levels onto two continuous latent variables which are themselves measured by multiple ordered categorical variables with three levels each. My colleague is using ML estimation with the LOGIT link to obtain odds ratios of the regressions of the endogenous observed variable onto the two latent variables. Mplus produces the odds ratios and 95% confidence intervals for the odds ratios for this analysis. Our understanding is that the odds ratios represent the change in the ratio of the odds of the outcome per unit change in the latent variable. We are wondering if it would be possible to obtain odds ratios that represent change per standard deviation of the explanatory latent variables? If so, how would one do it in Mplus? With many thanks, Tor Neilands 


You can use the STD standardized values. These are standardized using only the latent variable standard deviations. 


Thank you, Linda. So, I would take the STD value and raise e to its value for each point estimate, lower confidence limit value, and upper confidence limit value for which I was interested in obtaining the OR and 95% CIs for the OR in the standardized metric? Tor


Correct. 
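That computation can be sketched as follows (a hedged illustration; the function name is made up here, and the values you would plug in are the STD-metric point estimate and its confidence limits on the logit scale):

```python
import math

def odds_ratio_with_ci(std_est, ci_lower, ci_upper):
    """Exponentiate a logit-scale STD coefficient and its confidence
    limits to get an odds ratio per 1-SD change in the latent predictor."""
    return math.exp(std_est), math.exp(ci_lower), math.exp(ci_upper)
```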


Hello! I entered 2 continuous and centered covariates and 2 dummy-coded covariates in a multinomial logistic regression to predict class membership. Mplus 5.1 gives me Std as the default and StdY/StdYX when requesting standardized solutions. Which solution should I use when I want comparable coefficients? Should I also use standardized results for the dummy variables? My guess is that I have to use Std for all covariates!?


You should use STDYX for continuous covariates and STDY for dummy coded covariates. However, these would not be comparable because STDYX is for a one standard deviation change and STDY is for a change from male to female for example. 


OK, many thanks! I have some last questions. 1. Is it a problem that I have entered centered continuous covariates? 2. But I can make comparisons within both groups of continuous and dummy variables!? 3. I compute the exp(b)'s on the basis of your recommended StdYX and StdY solutions for both groups of covariates!? (I have to compute them, because Mplus does not offer them when using imputed data sets.) Many thanks again and have a nice day! Karl


1. No. 2. Yes. 3. I would not exponentiate standardized coefficients just the raw coefficients. I could imagine exponentiating coefficients that are standardized using the standard deviation of x. 


3. Is there any possibility to get raw coefficients and their SEs in a multinomial regression in Mplus 5.1? I've heard that in Mplus 4.21 one could divide the raw coefficients of the centered variables by the SE (which is similar to the SD in Mplus) to get standardized coefficients!?


Sorry, regarding the last point I meant "multiplied" by the SE... If this is a way to get comparable coefficients in Mplus, should one also do this for the dummy variables?


Multinomial regression gives raw coefficients and their SEs. If you want coefficients standardized with respect to covariates, you multiply by their SDs (from SAMPSTAT). You won't get SEs for those standardized coefficients.
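A sketch of that hand standardization (names are illustrative; the covariate SDs would be read off the SAMPSTAT output):

```python
def x_standardized(beta_raw, sd_x):
    # coefficient per one-SD change in the covariate; puts covariates on
    # a comparable scale for ranking, but no SE is available for this
    # hand-computed quantity
    return beta_raw * sd_x
```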


Sorry to be tenacious, but this is not the same as standardizing the covariates before entering the regression and using their coefficients!? However, I checked the literature and your advice seems OK with respect to ranking the coefficients in a meaningful way (ref.: http://www2.chass.ncsu.edu/garson/pa765/logistic.htm). Final question: is it OK to exponentiate these "standardized" coefficients and to compare the exp(b)'s? Thank you!


Because I get no SEs for my hand-calculated "standardized" coefficients: would you expect large differences concerning significance compared to the raw coefficients?


No, they are usually very close. 

David Lin posted on Saturday, June 14, 2008  5:38 pm



I ran a CFA with BOOTSTRAP=1000 and CINTERVAL. The "CONFIDENCE INTERVALS OF MODEL RESULTS" showed the estimates; my question is how I can get the CIs in the form of Std or StdYX. Thanks in advance. David.


If you ask for CINTERVAL, STD, and STDYX in the OUTPUT command, you will obtain confidence intervals for the standardized coefficients if they are available for your analysis. 


When I use 'MODEL INDIRECT', 'STDYX' and 'CINTERVAL' in a path analysis I get some unexpected results. Whilst 'MODEL RESULTS' differ from 'STANDARDIZED MODEL RESULTS: STDYX Standardization' and 'TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS' differ from 'STANDARDIZED TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS: STDYX Standardization', as would be expected, 'CONFIDENCE INTERVALS OF TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS' are IDENTICAL to 'CONFIDENCE INTERVALS OF STANDARDIZED TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS STDYX Standardization'. How can this be? Many thanks. 


Please send the output that shows this and your license number to support@statmodel.com. 

Derek Kosty posted on Tuesday, August 26, 2008  3:28 pm



In this CFA with categorical outcomes I set the residual variance of a continuous latent variable to equal zero in order to make the PSI matrix positive definite:

MODEL:
mood BY LMDD4 LDYS4 LDPD4;
anxiety BY LGOA4 LPTS4 LSPE4 LSOC4 LPAN4 LOBC4;
intern BY mood anxiety;
anxiety@0;

As a result, the standardized factor loading from "intern" to "anxiety" is equal to one (and has no standard error). I tried working through the formula bStdYX = b*SD(x)/SD(y) as an effort to gain understanding of the issue, but I cannot seem to find SD(x) or SD(y) in my output. I need to know why this is for the write-up of these analyses. Thanks in advance!


In this application, SD(x) is the SD for intern (so the square root of the variance that is printed) and SD(y) is the SD for anxiety (see TECH4 for the corresponding variance). Having only 2 indicators (first-order factors) for a second-order factor gives a weak model, only identified due to the zero residual variance of anxiety. There should be at least 3 and preferably many more first-order factors.

Derek Kosty posted on Tuesday, August 26, 2008  5:07 pm



Thanks Bengt, These are proposed models within this field of research. The aim of the study is to evaluate these models. Therefore, we cannot satisfy the preferred model that has at least three first-order factors because that's not the way it is being looked at. Do you see this as being extremely problematic? What suggestions do you have, if any?


I can't say that it is wrong, but this is not in my view how second-order factor modeling should be done. The model is just barely identified, with no possibility to test it or to adjust it for the many types of misspecification it may contain. My only suggestion would be to not draw on a second-order factor in this case.

Derek Kosty posted on Wednesday, August 27, 2008  10:03 am



What exactly do you mean by "not draw on"? Also, how do I determine if a negative residual variance is significant or not? If it is negative, the SE and p-value cannot be computed...


"Draw on" is here a euphemism for "don't draw strong conclusions based on", or less diplomatically  don't use. The problem of using a secondorder factor here is clear from your question about the negative residual variance. Your model MODEL: mood by LMDD4 LDYS4 LDPD4; anxiety by LGOA4 LPTS4 LSPE4 LSOC4 LPAN4 LOBC4; intern by mood anxiety; is not identified (I would think Mplus flags it as such), so the estimates (such as negative residual variances) therefore should not be interpreted. There is no way of knowing what the residual variances are. The model becomes identified by for instance fixing one residual variance at zero, but if that is not true, then the resulting estimates are distorted. 

Derek Kosty posted on Wednesday, August 27, 2008  11:21 am



Many thanks. I feel much more comfortable now. I will propose to the rest of my team getting rid of that second-order factor. In hindsight, I don't see a real need for it anyway. We will just talk about the model as being an internalizing model without actually including the second-order factor. Does this seem reasonable?


Yes, good decision. 

Derek Kosty posted on Friday, September 19, 2008  3:13 pm



Hello. When conducting a CFA with dichotomous (0,1) observed variables:

f1 BY LADH4 LODD4;
f2 BY LCON4 LAPD4 LALC4 LPOT4 LDRG4;

I get the following warning: "WARNING: THE RESIDUAL COVARIANCE MATRIX (THETA) IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR AN OBSERVED VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO OBSERVED VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO OBSERVED VARIABLES. CHECK THE RESULTS SECTION FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE LAPD4." And the standardized factor loading of f2 BY LAPD4 is greater than 1 (1.013). I believe it is due to the nature of the variables LAPD4 and LCON4: if LAPD4=1 then LCON4=1, but LCON4=1 does not imply that LAPD4=1. Not sure exactly what the issue is. I can calculate the correlation between the two variables and it is only .61. Any thoughts on this? Thank you.


The message is not about a factor loading. I would need to see the output to answer your question. Please send it and your license number to support@statmodel.com. 

Jungeun Lee posted on Tuesday, September 30, 2008  12:01 pm



Hi, I am not super sure that I have a clear idea about when to go with Std and when to go with StdYX. I ran a CFA and its follow-up SEM. In the CFA, I have 4 latent variables and their corresponding observed variables (continuous). To see if a specific indicator is more strongly related to the corresponding latent variable than others, I am thinking of using StdYX. In the follow-up SEM, two continuous observed variables were added. In this model, 1) a latent variable predicted another latent variable; 2) a combination of a latent variable and one of the added continuous variables predicted another latent variable; 3) a latent variable predicted one of the added continuous variables. I am thinking, for a part like 1), I would go with Std. For parts like 2) and 3), I will go with StdYX. Do these decisions look reasonable to you? Thanks!!


See pages 577-579 of the Mplus User's Guide for a discussion of the different standardized coefficients and when they should be used.


I am slightly confused about the definition of StdY in the manual. I have a model which has continuous latent variables, continuous manifest variables, and binary variables. I requested both StdYX and StdY. I get something like this:

From -> To              StdYX    StdY
Latent -> Latent        .456     same
Binary -> Manifest      .123     different
Binary -> Latent        .234     different
Manifest -> Manifest    .345     same

According to the Mplus manual, StdY uses the variance of CONTINUOUS LATENT variables for standardization. This explains why the StdYX and StdY coefficients are the same for Latent -> Latent, and different for Binary -> Manifest and Binary -> Latent, but why are the Manifest -> Manifest coefficients the same? They are not continuous LATENTS, so I was expecting they would be standardized differently in StdY and StdYX. Any clarification would be appreciated. Thanks, Garry


If in manifest/manifest both observed variables are dependent variables, then they would be the same. 


Thanks Linda, they are indeed both dependent variables, but I still don't understand the Mplus rules for standardizing paths involving manifest variables under StdY. With manifest variables under StdY, how does Mplus 'know' when to standardize a path using the variances of the x and y variables and when to standardize using the variance of the y variable only? The User's Guide is rather terse on the subject. Thanks, Garry 


I think your question is how dependent and independent variables are defined in Mplus. An independent variable is a variable that appears only on the right-hand side of an ON statement. All other variables are dependent variables. 

Garry Gelade posted on Thursday, October 16, 2008  10:46 am



Thanks Linda 


I'd like to know why in my model I seem to get slightly different p-values for my unstandardized vs. STDYX results. In STDYX, two relations between latent variables become significant that were only marginally significant looking at the unstandardized results. My model contains continuous and binary independent variables, latent endogenous variables, and continuous dependent variables. 


Raw and standardized coefficients have different standard errors. The ratios and p-values can be slightly different. 


Is there a rule to determine which type of standardization I "should" be using? Clearly, bias would lead me to prefer the standardized over unstandardized coefficients because they produced more significant results in my model. My original rationale for looking at STDYX was that the variances of all variables in the model, including background/outcome variables, are used to standardize. 


There is no rule. I would use raw coefficients unless I had a reason to use standardized. In both cases, you should be conservative regarding the p-values given the large number of tests being done. Some kind of Bonferroni-type correction should be made. 

QianLi Xue posted on Wednesday, November 19, 2008  6:17 am



How come STDYX and STDY in a path model with only observed variables give the same estimates? 


Please send the full output and your license number to support@statmodel.com. 

Eulalia Puig posted on Thursday, January 22, 2009  10:48 am



Dear Linda or Bengt, I've read some of the posts, but I'm still unclear as to which standardization to use. My current model uses only residuals (both as independent and dependent variables), so they are continuous, non-latent variables, right? STD gives me the exact same output as the unstandardized coefficients. The other standardizations (STDYX and STDY) give me not only different output, but also different p-values. First, which standardization should I use? Second, why do I get different p-values? Thank you so much in advance. I really appreciate your work and effort on this site. Eulalia 

Eulalia Puig posted on Thursday, January 22, 2009  10:50 am



OK, I just read the answer to my second question.... 


If all variables are continuous, I would use STDYX. 


I'm confused about how STDYX standardization relates to regression on z scores. I've run a multiple regression in Mplus with 2 independent vars. When I convert the dependent and independent vars to z scores and rerun, I don't quite get the parameter estimates shown under STDYX Standardization in the original raw data regression (STDYX coefficients = 0.494 and .211, coefficients from analyzing z scores = .488 and .209). According to Pedhazur, the standardized coefficients should match those calculated from z-score data (Multiple Regression in Behavioral Research, 2nd ed., 1982, p. 53). Could you comment on this? Could the difference arise because the STDYX standardization doesn't use the standard deviation of Y computed from the sample? The 5.0 Mplus User's Guide (p. 577) reports that for the STDYX standardization "SD(x) is the sample standard deviation of x and SD(y) is the model estimated standard deviation of y". Could you please explain what is meant by the "model estimated standard deviation of y"? This appears to be something different than just the standard deviation of y computed from the sample. Thanks for your help! I'm trying to decide whether to publish the betas I get from the regression on z scores or those reported under STDYX Standardization from the raw data analysis. 


It is likely that you are standardizing using standard deviations computed using n-1. In maximum likelihood estimation, the model estimated values are based on n. I would use STDYX in any publication. I would not recommend working with z scores. I would work with the original data. 


Thanks Linda for the prompt reply. I'm sorry for being dense, but is the only difference between the "model estimated" standard deviation and the sample standard deviation that the former is calculated using n and the latter with n-1? Could you please explain the rationale behind why you prefer obtaining standardized coefficients from the STDYX values vs. running the analysis on z scores (beyond the obvious advantage that the STDYX approach doesn't require standardizing the raw data prior to analysis)? In multiple regression, are the standardized coefficients reported under STDYX somehow superior to those calculated by analyzing z scores? I appreciate your further insight into the benefits/weaknesses of the different approaches. 


Q1. For just-identified (saturated) models, yes. For over-identified models, use the model-estimated SDs. Q2. Pre-standardizing variables is risky in many settings; there is a large literature on this, see for example the underpinnings of invariance of structural coefficients in SEM. Why then pre-standardize when you can get what you want out of the standardized solution? 


Is there a way to get the incident rate ratios for analyses using DVs with Poisson distributions? (and ORs for logistic outcomes?) Thanks, Susan 


You exponentiate the slope in the regression of a count variable on a covariate. You can do this in MODEL CONSTRAINT using the NEW option to define a new parameter. Then you will obtain a standard error for the incident rate ratio. 


Thanks, Linda. I understand this in principle, but am not sure how the syntax would work? Here are the most pertinent parts of the model (abbreviated syntax):

COUNT ARE QF5_1 (p) QF5_2 (p) QF5_3 (p) QF5_4 (p) QF5_5 (p);
ANALYSIS:
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO;
MODEL:
  mertcq_5 ON mertcq_4;
  QF5_5 ON QF5_4 mertcq_4;
  mertcq_4 ON mertcq_3;
  QF5_4 ON mertcq_3 QF5_3;
  mertcq_3 ON mertcq_2;
  QF5_3 ON mertcq_2 QF5_2;
  mertcq_2 ON mertcq_1 GNSonce GNSevery GSonce GSevery;
  QF5_2 ON QF5_1 mertcq_1 GNSonce GNSevery GSonce GSevery;
MODEL CONSTRAINT:
  NEW(irr1);
  irr1 = exp(mertcq_1);

(obviously, I have no idea how to put the model constraint in right...) Thanks, Susan 


How to use MODEL CONSTRAINT is described on pages 555-558 of the Mplus User's Guide. If you can't get this working from these instructions, please send your input, data, output, and license number to support@statmodel.com. 
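For illustration, a minimal sketch of the general setup (variable names are hypothetical; the key point is that the label in parentheses must be attached to the slope you want to exponentiate, not used as a variable name in MODEL CONSTRAINT):

```
MODEL:
  u ON x (b1);        ! u is a count outcome, b1 labels the raw slope
MODEL CONSTRAINT:
  NEW(irr1);
  irr1 = EXP(b1);     ! incident rate ratio, with a delta-method standard error
```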


My colleagues and I ran a seemingly unrelated logistic regression (SULR) in a multiple group framework. The sampling design is complex with weighting, stratification, and clustering variables. We have several observed binary dependent measures and a set of observed predictors. There are no latent variables in the model. Since a given participant provided responses to each variable, errors are correlated and we would like to jointly estimate each regression equation (hence the SULR). The grouping variable has four levels, and we would like to see, for example, if the pattern (logistic coefficients) and thresholds, for the same set of predictors predicting each binary DV, are invariant across these four groups. For example:

Group A:
  y1 on x1 x2 x3 ...
  y2 on x1 x2 x3 ...
Group B:
  y1 on x1 x2 x3 ...
  y2 on x1 x2 x3 ...

And so on. When we request standardized coefficients in a model that constrained b coefficients to equality across groups, the reported unstandardized coefficients, as expected, do not vary over groups. However, the STDYX estimates, which are what we want to use, DO vary over groups. For example, the unstandardized x1 coefficient above has the same value in groups A, B, C, and D, but the corresponding STDYX value varies over those groups. Is there some sort of adjustment that needs to be made to the STDYX values? 


The raw coefficients are standardized using group-specific model estimated standard deviations. This is why they differ across groups. You should do the equality testing on the raw coefficients. 

pan yi posted on Wednesday, March 18, 2009  1:18 pm



Dear Mplus experts, I have a question about obtaining standardized regression coefficients for my SEM model. I have two latent variable interaction terms in the model and I specify the type of analysis to be random. I want to get standardized regression coefficients of latent endogenous variables on latent exogenous variables. Once I specify "type=random", the STANDARDIZED output is not available and the TECH4 output becomes unavailable too. I have been looking for a way to get the means and variance-covariance matrix of my latent variables for some time but with no success. Could you please inform me how I can achieve my goal? Thank you very much! Regards, Pan Yi 


We are not aware of work on standardized coefficients when there are latent variable interactions. You might want to email Andreas Klein at Univ of Western Ontario. 


Dear Linda and Bengt, Bollen (1989, p. 125) notes that standard errors of standardized coefficients often are not correct. However, does Mplus provide the calculation of correct standard errors if the variables are random? 


Mplus does take into account that the variables are random. For example, the standard deviation of a covariate x is treated as a random quantity. See the following Technical Appendix on our website for further information: Standardized Coefficients and Their Standard Errors 

Greg posted on Thursday, April 23, 2009  6:34 am



Hello, I'm running a path analysis X->M->Y, with 3 different outcome variables (1 latent, 2 continuous observed DVs). I'm using the BOOTSTRAP=500 option, MODEL INDIRECT, as well as the CINTERVAL (BCBOOTSTRAP) and STDYX output options to get C.I.s of standardized coefficients. What I've got in the output was: MODEL RESULTS, STDYX estimates, non-standardized and standardized indirect and direct effects, as well as C.I.s for MODEL RESULTS and for the non-standardized and standardized indirect and direct effects, but none for the STDYX coefficients. Am I forgetting to specify anything to get the C.I.s of the STDYX coefficients? Alternatively, should I only report C.I.s for the direct and indirect effects and explain that the X->M and M->Y coefficients are significant? Thanks for your help! 


Please send the full output and your license number to support@statmodel.com. 

jks posted on Wednesday, November 04, 2009  8:02 pm



Hello, I ran the following model to test measurement invariance (configural, metric, scalar, complete) across two groups: f1 BY y1-y4; f2 BY y5-y8; I was trying to write standardized estimates to an external file using the SAVEDATA command in Mplus 5.2. But Mplus didn't write the standardized estimates which were constrained to be equal across groups. Is there any way to write the standardized estimates to an external file? OR, in the formula b*SD(x)/SD(y), how can I get SD(x) and SD(y) for my model (specifically, how can I get standardized factor loadings, intercepts, and error variances from unstandardized estimates)? All variables are continuous in the model. 


We don't save the standardized parameters that are constrained to be equal. Although we don't save them, they are given in the results. You can obtain the variance of x and y using SAMPSTAT or TYPE=BASIC. The square root of the variance is the standard deviation. 

jks posted on Thursday, November 05, 2009  6:13 pm



Thanks for your quick answer. My model is: f1 BY y1-y4; f2 BY y5-y8; Are the following true: (1) X (in b*SD(x)/SD(y)) corresponds to (y1, y2, ..., y8) in my model. (2) Y (in b*SD(x)/SD(y)) corresponds to f1, f2 in my model. 


I am not sure why you want to calculate the standardized coefficients when they are given in the output. They are not saved but they are in the standardized results. The formulas you show standardize a regression coefficient with respect to both x and y. 


Hi, I am running a path analysis with y1 ON x; y2 ON x; and have two direct significant coefficients in my output for both paths. I would like to compare the difference of the coefficients (.37 vs. .48) in this model. Is there a way to do this? Thanks for your advice, Sofie 


You can do a chi-square difference test of two models: one where the coefficients are free and a second where the coefficients are constrained to be equal. Or you can use MODEL TEST. 
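A sketch of the MODEL TEST approach (the labels in parentheses are attached to the two slopes, and a Wald test of their difference is requested):

```
MODEL:
  y1 ON x (b1);
  y2 ON x (b2);
MODEL TEST:
  0 = b1 - b2;    ! Wald test that the two raw coefficients are equal
```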


Hi Linda, thanks for your quick reply. I also had this idea and did so, but wasn't sure if this was the right way. 

leah lipsky posted on Tuesday, January 19, 2010  6:47 am



Hello, I'm wondering why sometimes there is no S.E. or p-value for standardized model results output in a path analysis? Thanks! 


With categorical outcomes and covariates using weighted least squares estimation, we don't give standard errors for standardized coefficients. This is not because they cannot be computed; it simply was not implemented in Mplus. 

ela m. posted on Thursday, March 11, 2010  10:18 am



Hello Dr. Muthen, I'm a student working for the first time with Mplus and path analysis. We are interested in seeing the effects of some factors (categorical or continuous) on suicidality in children. For this we did logistic regression, and as some factors have missing data, it was suggested to use path analysis too. I used the following options: ESTIMATOR=MLR; ALGORITHM=INTEGRATION; INTEGRATION=MONTECARLO (500); but when I wanted to use MODEL INDIRECT I got an error: "MODEL INDIRECT is not available for analysis with ALGORITHM=INTEGRATION." I'm a bit confused about what option I should use, or should I calculate the indirect effects from the output I got from the model specification? Also, in the output, in the model results there is no STDYX, which I read is the path coefficient. How should I get it? A general question regarding path analysis: my understanding was that we propose a large model, with a lot of path connections, but we select the best one. How should I do this in Mplus? Do I have to try all the possible submodels and compare the fit measures? Please let me know. Thank you very much for your time. 


You can use MODEL CONSTRAINT to create the indirect effect if the mediator is not a categorical variable. Indirect effects for categorical mediators can be estimated only with the weighted least squares estimator and probit regression in Mplus. If we don't give standardized estimates, there must be a reason. I would need to see the full output and license number at support@statmodel.com to see what it is. Your path model should be based on theory. 


I am running a simple path model with count variables as the outcome variables. With continuous predictor and outcome variables, I would interpret STDYX as the change in y in y standard deviation units for a standard deviation change in x. How do I interpret the STDYX coefficient with a y variable that is a count variable rather than a continuous one? Because it is a count variable, it does not make sense to think of STDYX as the change in y in y standard deviation units, correct? Thank you in advance for your help! 


With count outcomes, I would either use the raw coefficients or use STDX which you can compute as the raw coefficient times the standard deviation of x. It would be interpreted as the log rate change for a one standard deviation change in x. 
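A sketch of computing StdX this way within Mplus (a single hypothetical covariate x; its variance is labeled so its standard deviation is available in MODEL CONSTRAINT):

```
MODEL:
  u ON x (b);          ! u is a count outcome, b labels the raw slope
  x (vx);              ! label the variance of x
MODEL CONSTRAINT:
  NEW(stdx);
  stdx = b*SQRT(vx);   ! log rate change for a one SD change in x
```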

Maja Cambry posted on Wednesday, March 31, 2010  10:19 am



I ran an SEM with categorical and continuous indicators (WLSMV estimation). Standard errors and p-values for the standardized estimates (including R-square) were not provided in the output. My questions are... 1) Can I get S.E.s of standardized estimates with this type of model? If so, how? 2) If not, can I assume that the standardized estimates are significant if the unstandardized estimate is significant? Or, is there a way to calculate S.E.s for standardized coefficients? 3) Is there a way to determine if the R-square of the endogenous latent variable(s) is significant? 


1. With WLSMV, standard errors of standardized coefficients are not given when the model has covariates. 2. No. 3. We don't give this. I would report the raw results as far as significance goes and show the standardized without significance. 

jing xu posted on Thursday, May 13, 2010  3:52 am



I tested a model which included three independent variables (IDVs) and three dependent variables (DVs). But all the Std. structural coefficients were either much greater than 1, or negative and significant. I found the zero-order correlation coefficients between the three IDVs are high (all around .85). Do you think this is due to multicollinearity, or a Heywood case? I tried a second-order model with a higher-order IDV which has the three IDVs as its dimensions. The results showed that this IDV significantly and positively affected the three DVs. The path coefficients were satisfactory, between 0 and 1. 


It would seem multicollinearity could be the issue. 

jing xu posted on Thursday, May 13, 2010  8:35 pm



Thanks, Linda. I also tested the 3 IDVs on only one DV (e.g., DV1), but found the results were okay. But when I added one more DV, the results were still unsatisfactory. I couldn't understand why, when I included all the "correlated" IDVs but only one DV, things turned out much better. So it seemed the number of DVs also plays a role in the multicollinearity problem. 

jing xu posted on Thursday, May 13, 2010  9:10 pm



And I can also tell you that I tested pairs of IDVs on the 3 DVs. One pair of IDVs has correlation .87, but the structural model is quite acceptable. Another pair of IDVs has correlation .84 (lower than .87), but the structural model is very unacceptable. So it seems that higher IDV correlations do not always result in multicollinearity. In fact, only when the problems happen can we explain them by multicollinearity. 

Rachel Perl posted on Thursday, July 08, 2010  10:51 am



I ran the same regression in SPSS and in Mplus. The unstandardized coefficients are exactly the same for all variables but the standard errors are not. The differences are small such as .147 vs. .149 or .127 vs. .129. Mplus estimates for standard errors are consistently larger. I was wondering why this is the case. In both regressions I used listwise deletion and the number of cases is identical. 


Mplus uses maximum likelihood where n is used. It sounds like SPSS uses n-1 such that the standard errors are smaller. One sees the differences when sample sizes are small. 

Simon O. F. posted on Wednesday, August 04, 2010  11:34 am



Hi, I want to test the difference between two standardized coefficients but I didn't find documentation on how to do it with Mplus. Could you please tell me if it is possible and how I should proceed? Thanks a lot in advance, 


You would need to define the standardized coefficients as NEW parameters in MODEL CONSTRAINT and test the difference using MODEL TEST. 

Simon O. F. posted on Friday, August 06, 2010  6:03 am



Thank you for your answer Linda. I would however need further assistance. To be more precise, I am trying to test the equality of 2 path coefficients, say X->Y and Z->Y. I know I obtain the standardized path from X to Y by taking beta_xy*sqrt(var(x)/var(y)). I also know how to create parameters for beta_xy and var(x), but not for var(y), since:

MODEL:
  Y ON X (beta_xy);
  X (var_x);
  Y (var_y);

will only get me var_y as the residual variance of Y and not the total variance of Y. Is there any simple way to create a parameter for the total variance of a dependent variable in Mplus? Many thanks once again for your help. 


You would need to define the variance of y in MODEL CONSTRAINT and then use it in the standardization. 
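A sketch of this, assuming two predictors X and Z (their covariance must also be labeled so the total variance of Y can be assembled from the model parameters):

```
MODEL:
  Y ON X (bxy)
       Z (bzy);
  X (vx);
  Z (vz);
  X WITH Z (cxz);
  Y (vyres);                ! residual variance of Y
MODEL CONSTRAINT:
  NEW(vy sxy szy);
  vy  = bxy**2*vx + bzy**2*vz + 2*bxy*bzy*cxz + vyres;   ! total variance of Y
  sxy = bxy*SQRT(vx/vy);    ! standardized X->Y
  szy = bzy*SQRT(vz/vy);    ! standardized Z->Y
MODEL TEST:
  0 = sxy - szy;            ! Wald test of equal standardized paths
```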

Emily Yeend posted on Friday, August 20, 2010  6:24 am



Hi I am looking at modeling indirect effects and have a question about the standardized indirect effect values and pvalues. Firstly, is it useful to report the standardized indirect effect values? Secondly, (if it is) when I am looking at a variable influencing a continuous variable via another continuous variable I think that I would use StdYX (Am I right?). However, if I am looking at a variable influencing another continuous variable via a binary variable what would I use? Many Thanks, Emily 


Hello, I have some questions regarding my data, which are based on an RCT. I used regression to see whether treatment condition predicted outcomes after controlling for covariates. I didn't use ANCOVA in SPSS, because I wanted to base my results on all available information regardless of whether participants completed treatment. 1. What exactly is the difference between FIML and EM in terms of estimating missing data? Which one is it that makes use of all available data and creates an estimated covariance matrix for the entire sample? In my regression analyses, the output was identical regardless of whether I specified ALGORITHM = EM or not. 2. One of the reviewers of our paper said that he/she questioned the use of the EM procedure in handling our missing data, which are close to 40% and nonrandom for some measures. What do you recommend as the best way of dealing with this issue? Can you direct me toward any published studies that have dealt with the issue of a high percentage of nonrandom missing data in a good way? 3. Finally, another comment we got was that the reviewer wanted F-values, but we only provided beta weights because we ran the regression in Mplus. Is there any way of converting a beta weight to an F-statistic? Or do we simply answer that our analysis did not provide F-values? When I run the analyses in SPSS using ANCOVA, I get the same significant result, but it is based on a smaller N because of listwise deletion in SPSS. Thanks, Kristine 


Mplus uses full-information maximum likelihood with the EM algorithm. See the Little and Rubin book for details. I would agree with the reviewer that 40% missing may be excessive. We give z-values for testing significance. 


Emily: I think it could be useful to present standardized indirect effects. Comparing them would not be. Use StdYX for both. Indirect effects with a categorical mediator should be done in Mplus only for weighted least squares analysis and probit regression where the continuous latent response variable is used for standardization. 

Emily Yeend posted on Tuesday, August 24, 2010  2:53 am



Thanks. I've been reading the user's guide and I'm still a little unclear on when I would use the different standardizations. Am I right in thinking that in general I would show standardized coefficients when I would like to compare influences of variables? Typically I would use StdYX; however, where I have a binary covariate (or mediator) I would show StdY for that specific relationship. (If this is the case, are these standardizations still comparable?) Which is the appropriate way to display the residual variances? Or would I always just show these in unstandardized form? Similarly, am I right in thinking that in the unstandardized output, WITH statements show the covariance of residuals, whilst in the StdYX output they show the correlation of the residuals? When would I use Std? Many Thanks, Emily 


It sounds like your understanding is correct. Use Std if you want to standardize only with respect to latent variables. 

Emily Yeend posted on Saturday, September 04, 2010  10:23 am



Hi, I think I'm still a little uncertain about this. Sorry to bother you again, but can I just check... Say I have these variables:

Y  observed continuous variable
X1 observed continuous variable
X2 observed binary variable
Z  latent continuous variable

And a model:

Y  <- X1 + X2 + Z
X1 <- Z
X2 <- Z

And I want to be able to compare the regression weights. I would show StdYX for every path except the Y <- X2 path? For this I would show Std? And I would compare these values? OR would I have to compare all paths using Std values, since to compare, everything should be standardized in the same way and Std is the only standardization that makes sense for the Y <- X2 path? Many Thanks, Emily. Can you suggest a good reference I could look at? 

Emily Yeend posted on Saturday, September 04, 2010  10:27 am



Sorry, where above I've written Std I mean StdY 

Emily Yeend posted on Saturday, September 04, 2010  10:32 am



I notice that when I run a slightly altered version of the above model: Y <- X1 + X2; X1 <- Z; I do not obtain the StdY values, yet I do have a binary covariate in my model. Why would this be? Many Thanks, Emily 


You should always use StdYX for continuous covariates and StdY for binary covariates. If you compare the standardized coefficients, you should keep in mind one is the change associated with a one standard deviation change in x and the other is the change associated with a shift from one value of x to the other. Please send the output and your license number so I can see why you don't get StdY. 


What is the option to request the variance of a latent variable in addition to its residual variance? 


For latent variables, use the TECH4 option. For observed variables, use the RESIDUAL option. Both are in the OUTPUT command. 


I tested a latent variable multiple mediation model using standardized variables and my "c' " path is greater than one. Although the model fit is very good and there were no warnings, is this a sign of a problem? If not, how does one interpret path weights greater than one in this context and in general? 


This can happen if m and x are highly correlated. See the Joreskog note at the following link: http://www.ssicentral.com/lisrel/techdocs/HowLargeCanaStandardizedCoefficientbe.pdf 


Hi, I am estimating a model where I have several covariates predicting two latent factors. These two latent factors are the same construct, measured at two time points. Therefore, I constrain the coefficients of the covariates (predicting the factors) to be equal. In my output, the raw coefficients are, in fact, constrained to be equal. However, the standardized (STDYX) coefficients are not. Can you explain why this is the case? Thank you. 


Different standard deviations are used to compute the standardized coefficients resulting in different standardized values of raw coefficients that are equal. 


Thank you for your quick reply, but I am still unclear on this. Let me be more specific. In the model, the raw coefficients predicting f1 and f2 are constrained to be equal. For example, in the output, the raw coefficient for the effect of education on health at time 1 is .5, and the raw coefficient for the effect of education on health at time 2 is .5. For the standardized coefficients, the effect on health at time 1 is .2 and the effect at time 2 is .05. I do not understand why the standardized coefficients for the covariate education are not also constrained to be the same. 


The coefficients at each time point are not standardized using the same standard deviations. Time 1 uses time 1 standard deviations. Time 2 uses time 2 standard deviations. Only if the standard deviations at each time point are the same will the standardized coefficients at each time point be the same. 


Okay, I understand now. Thank you. 


I have run an SEM using the WLSMV estimator with a binary observed DV. I understand that the unstandardized estimates are probit regression coefficients. I have latent IVs, continuous observed IVs, and dummy IVs. My understanding from prior posts is that if I want to examine the relative impact of the IVs, I would use STDYX for continuous and latent IVs and STDY for the dummy IVs. Will you please confirm that this understanding is correct and answer 2 additional questions? 1) Is it reasonable to show the standardized coefficients in a graphic of the SEM (using STDYX and STDY as described above) even though this is a probit model? 2) The STDY does not appear in my output. Why? Thank you 


Your understanding is correct. 1. Yes, but the interpretation of standardized coefficients for probit is not as straightforward as for linear regression. 2. You need to divide StdYX by the standard deviation of x. We don't give this with WLSMV. 


Dr. Muthen – I’m a new Mplus user running version 6.1. I have a two-part question: 1. I’m running an SEM model with a mixture of binary, ordinal, and continuous variables, thus my default estimator is WLSMV. If I request the MODINDICES option, I see various BY, ON/BY, ON, and WITH statements. Where should I be looking if the modification is suggesting a removal of a certain pathway (i.e., a nonsignificant Wald test for a certain path coefficient)? 2. Should I ever expect to see StdYX estimates exceeding 1 under the standardized model results? Thanks! 


1. Modification indices suggest adding not removing paths. See the user's guide for further information. Significance of paths in the model are found in the results section. See the user's guide under the OUTPUT command for a description of the output layout. 2. This can happen. See http://www.ssicentral.com/lisrel/techdocs/HowLargeCanaStandardizedCoefficientbe.pdf 


Dr. Muthen – Thank you for the quick reply. While I do see the estimate, S.E., and p-values for each total and total indirect pathway under the non-standardized section, those test statistics are not given for the standardized total and total indirect effects. Should I be using the BOOTSTRAP option rather than the default delta method if I want to obtain the standardized values? Given I have a mixture of variable types, should I only be concerned with the standardized output? Thanks! 


With WLSMV, we don't give standardized standard errors and p-values in all cases. I would interpret the raw coefficients. 


Dr. Muthen – In a classic SEM model, the output in the MODEL results section always set the first factor loading for each latent variable to 1 with zero standard error and 999 for the test. How can I have more control as to which factor loading gets fixed at 1 to fix the indeterminacy problem? Is it simply a matter of rearranging which variables follows the BY statement or using the @1 for variable of interest? 


See the BY option in the user's guide where it is shown how to free the first factor loading and fix the factor variance to one or another factor loading to one. 


Dr. Muthen – I understand from previous postings that with categorical outcomes and covariates using weighted least squares estimation, Mplus does not give standard errors and p-values for standardized model results, and that this automatic calculation has not been implemented in Mplus. However, do you have any suggestion or example of the coding needed to obtain them directly in Mplus without using other resources? Thanks! 


You could use MODEL CONSTRAINT to specify standardized parameters and obtain standard errors and p-values in this way. 

Jeff Jones posted on Tuesday, December 28, 2010  3:07 pm



Hello, I am trying to do a standardized regression with random predictors using the MODEL CONSTRAINT command, but I am having trouble. I have read through the forum and the technical appendix on standardized coefficients, and I am still lost. I have two questions: 1) I am trying to work out a simple two-predictor problem with the delta method, and do not understand how to set up the problem to match the technical appendix specifications. I am not sure where the identity matrix comes into play. 2) I am not sure how to use the MODEL CONSTRAINT command for a random-predictors regression problem. Any advice on how to set it up (or a simple example) would be very much appreciated. Thanks, Jeff 


What do you mean by random predictors? Do you mean mediators or are you referring to a model with random slopes? 

Jeff Jones posted on Wednesday, December 29, 2010  1:04 pm



By random predictors I mean that we are not assuming that the values in X (the predictors) are fixed constants. So, both the dependent variable and the independent variables are random variables. 


Your Dec 28 post talks about the delta method in (1), so you must be asking about standard errors, perhaps standard errors for the standardized solution? For (2), see UG ex 5.20. This is an example where you compute the standardized solution in MODEL CONSTRAINT, and that also gives you the SEs for those standardized coefficients using the delta method. 

Utkun Ozdil posted on Thursday, May 05, 2011  10:53 pm



Hi, I want to report the coefficients of my model, such as y ON gender. For the gender covariate I chose to report STDY results. But I am unclear whether these coefficients are represented as a beta or a gamma. Thanks... 


The beta and gamma matrices both contain regression coefficients. It is not necessary to distinguish which matrix is used. 


Dear Mplus experts, I would like to calculate standard errors (and confidence intervals) for Y-standardized regression coefficients. The regressed variable is a latent factor F indirectly observed via categorical indicators U1-U15 (1-PL probit IRT). The estimator is WLSMV. I was considering two approaches:
(A) Post-calculation, e.g. within R:
- read B_Est and B_SE from the .out file
- read factor scores F from the .sav file
- compute the sample standard deviation S_F = sd(F)
- B_Est_s = B_Est / S_F
- B_SE_s = B_SE / S_F
(B) From within Mplus:
F by U1-U15@1; ! 1-PL IRT: loadings forced to 1
F on X (Beta);
F (Vf);
MODEL CONSTRAINT:
NEW(B_Est_s);
B_Est_s = Beta / SQRT(Vf);
1) Using (B), am I correct in understanding that Vf is F's _residual_ variance, i.e. Vf = Var[eps], where F = Intercept + Beta * X + eps?
2) Shouldn't I use F's variance instead of its residual variance to compute the standardized Beta_s and its standard error?
3) Should I prefer (A) or (B)?
I would appreciate very much any advice. Thanks in advance for reading my enquiry. 


Vf is a residual variance. The formula for standardizing a regression coefficient requires that the coefficient be multiplied by the standard deviation of x and divided by the standard deviation of y. To obtain the variance of x, use TYPE=BASIC. You will need to compute the variance of f using the formula: var(f) = Beta**2 * var(x) + res. var(f) 
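As a numeric sketch of this formula (in Python rather than Mplus; the values for beta, var_x, and res_var_f below are hypothetical, not from any real output):

```python
import math

# Hypothetical unstandardized estimates (illustration only)
beta = 0.8        # slope of F on x
var_x = 2.25      # variance of x, e.g. from TYPE=BASIC
res_var_f = 1.0   # residual variance of F

# Model-implied variance of F: var(F) = beta^2 * var(x) + residual variance
var_f = beta**2 * var_x + res_var_f

# Standardized slope: multiply by SD(x), divide by SD(F)
beta_std = beta * math.sqrt(var_x) / math.sqrt(var_f)
```

The same arithmetic is what a NEW parameter in MODEL CONSTRAINT would carry out, with the delta method then supplying the standard error.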


Dear Dr Muthen, Thank you very much for your fast answer, explanation and suggestion. I had actually simplified the problem somewhat to make it clearer. In fact, I have several covariates X1, X2, ..., XJ:
F on X1 (Beta1)
     X2 (Beta2)
     ...
     XJ (BetaJ);
1. Does your formula generalize to: Var(F) = Beta1^2 * Var(X1) + ... + BetaJ^2 * Var(XJ) + Res. var(F)?
2. Some of the covariates Xj are ordinal, and one of them is nominal, so that each of those covariates is dummy coded as (C-1) binary variables (where C is the number of categories of the ordinal/nominal covariate). In these cases, how do I compute Var(Xj)?
3. How about approach (A), i.e. post-calculating Var(F) by reading factor scores from the savedata file? Should it yield the same results?
4. A rather unrelated question: Mplus does not compute factor scores for one of my models, with the message: FACTOR SCORES CAN NOT BE COMPUTED FOR THIS MODEL DUE TO A REGRESSION ON A DEPENDENT VARIABLE. I checked that Mplus indeed did not produce any save file after estimating this model. If this behaviour is correct, could you please tell me why factor scores cannot be produced by Mplus in this setting? 


1. With more than one x, you need to include the covariances in the formula also.
2. Covariates are treated as continuous in regression, so you should use the variances from TYPE=BASIC.
3. Factor scores are not the same as the factors in the model. How close they are can be seen by looking at the factor determinacy. I would not use factor scores.
4. You should use Version 6.11, and if you still get the message, send it along with your license number to support@statmodel.com. 

Paresh posted on Wednesday, June 01, 2011  5:00 pm



Dear Dr. Muthen, To check for multicollinearity, I regressed the DV on a second-order factor of all IVs. Unlike my hypothesized model, this model has poor fit indices (CFI=0.539, TLI=0.464, RMSEA=0.095). Does the poor fit mean that I do not have a multicollinearity problem, or do I need to check the significance of the beta estimate in the second-order model? Thank you for your help. 


You want to first check that the second-order factor model fits without including the DV. Also, the poor fit doesn't have to do with multicollinearity (which I assume is due to highly correlated first-order factors). 


Hello all, How would one go about constraining a standardized structural parameter to a value in a typical SEM (e.g., constraining a beta to .2)? I think I am missing something in the MODEL CONSTRAINT literature. Thank you, Hugh 


You would need to express the standardized coefficient in MODEL CONSTRAINT and hold it equal to .2. See Example 5.20. 


Hi Linda and thanks. Would I compute the variance of the Y (downstream) LV by using: var(Y_LV) = (unstandardized Beta**2 * var(x)) + unstandardized res. var(Y_LV), or could I just get this from TECH4? Hugh 


If you want to obtain the standardized parameter only, you can use the value from TECH4. If you want to obtain a significance test, you must compute the variance in MODEL CONSTRAINT. 


Hi Linda, I am computing variances for my downstream variables, and when I check them against TECH4, most seem to be within rounding error. However, one of the computed values = 3.085 while TECH4 = 3.072. While I realize that this shouldn't change substantive conclusions much, I am curious whether this is still within rounding error or whether there are some instances when the computation and TECH4 would not match up (aside from user error, which may be the case here). Thanks again, Hugh. BTW - Six demographic variables predicting the outcome (brd):
v_brd = b_agebrd**2*v_age + b_genbrd**2*v_gend + b_whbrd**2*v_white
      + b_hisbrd**2*v_hisp + b_asbrd**2*v_asoth + b_grbrd**2*v_gr
      + r_brd; ! residual variance of brd 


You are forgetting about the covariances. Following is an example for three covariates; you should be able to generalize from it.
MODEL:
Y ON x1 (p1)
     x2 (p2)
     x3 (p3);
Y (p4);
MODEL CONSTRAINT:
NEW (vary);
vary = p1**2*vx1 + p2**2*vx2 + p3**2*vx3
     + 2*p1*p2*covx1x2 + 2*p2*p3*covx2x3 + 2*p1*p3*covx1x3
     + p4;
The v and cov terms come from TYPE=BASIC for the covariates. 


Thanks again. You guys are a great resource. Hugh 


Dear all, how do I get the p-value for the StdYX coefficients (it's not given)? Is there a formula using the CI? I appreciate your help; thank you very much in advance! 


It sounds like you are getting only standardized parameter estimates with no standard errors. In this case, you cannot get a p-value without computing the standard error yourself, which could prove difficult to do. 


Thanks for your answer! That's right, my demo version doesn't give me an SE for the standardized parameters. But the version at the university does (Estimates, S.E., Est./S.E., Std, StdYX). How can I use it to get the p-value? Thanks! 


The Demo and regular versions are identical except for a limit on the number of variables. If you get standard errors in one, you will also get them in the other. Perhaps you are using an old demo; it sounds like the university also has an old version. We give p-values now. The ratio of the parameter estimate to its standard error is a z-test in large samples. You can use a z-table to find the p-value. 
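A sketch of that z-table lookup in code (Python; the estimate and standard error below are hypothetical), using the identity 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2)):

```python
import math

def z_to_p(est, se):
    """Two-sided p-value for the large-sample z-test est/se."""
    z = est / se
    return math.erfc(abs(z) / math.sqrt(2.0))

# Hypothetical standardized estimate and its standard error
p = z_to_p(0.40, 0.15)  # z is about 2.67
```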


Thanks. But I still do not understand how to get the p-value if I only have the StdYX estimate. StdYX divided by what? Sorry! 


You can't unless you know how to compute the standard error. 


I have a model where probit and linear regression coefficients are estimated. My model looks like this:
CATEGORICAL ARE c_wahl c_iss vn167Ar vn167Br vn167Cr;
C_PID by c_spid6@1;
c_spid6@0;
C_K by vn167Ar vn167Br vn167Cr;
c_wahl on C_PID; ! let's assume this is path a
C_K on C_PID; ! path b
c_iss on C_PID; ! path c
c_wahl on c_iss; ! path d
c_wahl on C_K; ! path e
C_K on c_iss; ! path f
MODEL INDIRECT:
c_wahl ind C_PID;
c_wahl ind c_iss;
1) Am I correct to assume that a linear regression is estimated only for paths b and f, and a probit regression for the other paths? 2) Assuming that c_iss is not categorical but a continuous variable, am I right that a linear regression is then estimated for paths b, c, and f? 3) In terms of the strength of the standardized estimates, can I conclude based on the values that one predictor is stronger or weaker than another, or am I not allowed to do this? 4) With the MODEL INDIRECT command, I got direct and indirect effects. Some of the indirect paths multiply an estimate from a linear regression with one from a probit regression; is this a problem when comparing standardized total effects? Thanks for your help. 


1) You are right only if WLSMV is used, or ML with LINK=PROBIT. 2) Yes. 3) You can do that. 4) Not a problem with WLSMV. 


Thank you so much for your quick response. 

Ewan Carr posted on Tuesday, October 25, 2011  3:01 am



I'm estimating a mixture regression model, where some coefficients are allowed to vary across classes, and others are fixed (full model: http://cl.ly/3K0g1y0H2h3s2a1X1I09) The raw regression coefficients from this model look correct: they vary by class where they're supposed to, but are otherwise fixed across classes. However, the STDY coefficients appear to *all* vary across classes (even when the model states that they should be fixed). I'm guessing this has something to do with how STDY is calculated, but would be very grateful if someone could explain this to me. Thanks, Ewan 


Standardization is done using not the overall standard deviations but the standard deviations for each class. 

Ewan Carr posted on Tuesday, October 25, 2011  6:32 am



Ah OK, that makes sense. Many thanks. 


Dear Drs Muthen, I am running a multiple mediation path analysis with dichotomous independent variables, continuous latent variable mediators with ordinal indicators, and a continuous latent dependent variable with ordinal indicators (plus other control variables). I am using WLSMV as my estimator, theta parameterization, and calculating bc-boot confidence intervals. My reviewers want to know how to interpret the magnitude of the coefficients for the continuous latent dependent variables. 1) How do I determine the range and standard deviation of my continuous latent variables? Since my independent variables are dichotomous, I wanted to standardize my coefficients by the dependent variable (STDY), but this option is not available for weighted least squares estimation. Also, I read in the manual that standardization uses the delta method and only allows for symmetric bootstrap confidence intervals. 2) Can I standardize by Y (by doing STDYX and unstandardizing by X) and still report the significance levels based on the non-standardized bc-boot confidence intervals? 3) How would I standardize by Y for the indirect effects? Is there an output that shows the standard deviation of the indirect effect using bc-boot? Thank you. 


(1) The means and SDs of your latent variables are obtained in TECH4, and because they are assumed normally distributed this gives you a notion of their range. (2) You want to look at STD, that is, standardizing only with respect to the latent variables. This is because your direct and indirect effects pertain to the latent variables. Mplus does not give you bootstrapped SEs for standardized coefficients. Yes, I would say that you can report significance/CIs based on the raw (unstandardized) coefficients and add the standardized coefficients without significance/CIs. An alternative is to use ESTIMATOR=BAYES, where you achieve the same as using the bootstrap, namely allowing non-normal distributions of the estimates. With Bayes you also get this for standardized coefficients. 


Thank you. I tried using Estimator=Bayes, but this estimator does not allow sampling weights, and MODEL INDIRECT is not available. The primary table in my paper is the one reporting the indirect effects. When I try STANDARDIZED(STD) and use the MODEL INDIRECT command, all of the total and total indirect effects = 0, but the specific indirect and direct effects have values. This doesn't make sense, because the sum of the specific indirect effects should equal the total indirect effect, and the sum of the total indirect and direct effects should equal the total effect. Is this because one of my mediators is not a latent variable? I forgot to mention that in addition to three continuous latent variable mediators, I have one observed continuous mediator (logged adjusted household income). I think this puts me back to trying to calculate STDY from STDYX, but I don't know how to do this for the indirect effects reported by the MODEL INDIRECT command. Do I just divide by the standard deviation of the independent dichotomous variable, or do I need to divide by the standard deviation of the indirect effect? If the latter, how do I get Mplus to output it? Thanks again! 


Regarding your total and total indirect effects being zero, please send the output and data to Support. Regarding Bayes, you don't need MODEL INDIRECT but can create these effects in MODEL CONSTRAINT. But you are right that Bayes does not yet handle weights. Regarding standardizing an indirect effect, consider a model with x, m, and y, with the indirect effect from x to y via m obtained as the product of two slopes. When x is binary, you don't want to standardize with respect to x. The indirect effect product is standardized with respect to y by dividing the product by the SD of y. 
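A numeric sketch of that rule (Python, with made-up slope and variance values; x is taken to be binary, so there is no multiplication by SD(x)):

```python
import math

# Hypothetical unstandardized slopes for x -> m -> y, with x binary
a = 0.6          # slope of m on x
b_slope = 0.5    # slope of y on m
var_y = 4.0      # model-implied variance of y

# Unstandardized indirect effect is the product of the two slopes
indirect = a * b_slope

# Standardize with respect to y only: divide by SD(y), do not touch x
indirect_std_y = indirect / math.sqrt(var_y)
```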


Regarding the zero total effects, use STANDARDIZED instead of STANDARDIZED (STD). It looks like there is a printing error when you request specific standardized effects. 


Hi! Maybe a dumb question here. I have a model with two factors, an outcome, and some covariates (simplified here):
a by u1* u2 u3 u4 u5;
a@1;
b by v1* v2 v3 v4 v5;
b@1;
a on x1-x3;
b on a (pab)
     x1-x3;
y on a (pay)
     b (pby)
     x1-x3;
y (vy);
I am trying to calculate the standardized total effect from a to y ('aytots' in the code below) using MODEL CONSTRAINT:
new (aytot aytots);
aytot = pay + pab*pby;
aytots = aytot/sqrt(aytot**2 + pbc**2 + vy);
The result of the above formula differs from the standardized effect output provided by Mplus, and I know it does not account for the variances and covariances of x1-x3. However, if I want to interpret my model conditional on the covariates, wouldn't the above formula be appropriate, i.e., yielding a standardized effect from a to y for fixed x1-x3? Thanks so much. 


Never mind! I knew it was a silly question. 


So let me try a different question. Is there a way to include x1-x3 in my example above in "the model" so that I can label and refer to their variances and covariances in the MODEL CONSTRAINT section? 


Yes, you simply say x1-x3; in your MODEL statement. 

Philip Jones posted on Wednesday, November 23, 2011  5:28 am



Thanks. I had tried that but was not getting convergence using the WLSMV estimator (convergence was no problem when I didn't explicitly include "x1-x3;"). So I thought I was doing something wrong. Do you know why that might be? 


This approach does not work with WLSMV. If you use ML, you will get the standard errors for the standardized coefficients. 

Arin Connell posted on Thursday, December 08, 2011  11:09 am



I am trying to test the equality of two regression coefficients, using MODEL CONSTRAINT commands to calculate the relevant variances and standardized regression coefficients. The calculated variances appear fine (they match the TECH4 output). I am concerned that the calculated standardized regression coefficients do not match those produced in the STDYX output (calculated Beta1 = .201, Beta2 = .49, while the STDYX estimates are Beta1 = .634, Beta2 = .595), and I am therefore not confident in the model test. Any insights would be appreciated.
Model:
negaff by NA BFI_N;
posaff by PA BFI_E;
PTSD by PSSI PSSSR;
dep by HRSD BDI;
physcon by PC_1 PC_2;
negaff (varNA);
posaff (varPA);
negaff with posaff (covNAPA);
physcon on PTSD;
dep on negaff (p1)
       posaff (p2);
PTSD on negaff (p3);
dep with PTSD;
PSSSR with BDI;
PSSI with HRSD;
physcon with dep@0;
dep (p4);
ptsd (p5);
Model constraint:
new (vardep varptsd Beta1 Beta2);
vardep = p1**2*varNA + p2**2*varPA + 2*p1*p2*covNAPA + p4;
varptsd = p3**2*varNA + p5;
Beta1 = p3 * SQRT(varptsd)/SQRT(varNA);
Beta2 = p1 * SQRT(vardep)/SQRT(varNA);
Model test:
Beta1 = Beta2; 


To get the STDYX Beta1 and Beta2, you want to divide by the SD of the DV and multiply by the SD of the IV, not the other way around. 


Ahh! Great. I assumed I was doing something dumb here, but it wasn't jumping out at me. Thanks! 

Xiaolu Zhou posted on Saturday, January 07, 2012  10:20 am



Hi Linda, I ran an SEM with Mplus: the two independent latent variables are a and b; the dependent latent variables are c, d, e, f, and g. Because there are too many variables, I used parcels for the observed variables. The result showed that d was not significantly related to a and b, while e was significantly related to a and b. However, the regression I did with SPSS before showed that d was significantly related to a and e was significantly related to b. What accounts for these different results between SEM and regression? Thanks a lot! 


This can happen if the latent variable model does not fit well. 

Xiaolu Zhou posted on Saturday, January 07, 2012  2:45 pm



Hi Bengt, Thanks a lot for your instant reply. Actually the model fit of my SEM is good: CFI is 0.952, RMSEA is 0.043. What can I do in this situation? Many thanks! 


Maybe I don't understand what you are comparing. I assume that when you talk about SPSS regression, this is when you are using parcels, that is, sums of the items measuring the dependent latent variables instead of the latent variables themselves (and the same for the exogenous latents). And you are comparing that to the full latent variable model with multiple indicators for the DV latents. If that understanding is correct, the different results would seem to be due to some indicators having direct effects from some of the exogenous factors, something that could be found out by requesting MODINDICES(ALL). 

Xiaolu Zhou posted on Sunday, January 08, 2012  10:57 am



Thank you very much, Bengt! Your understanding is correct. I found that one indicator of a DV latent had a direct effect from one indicator of an IV latent. I ran the model again and got a better result: one path now matches the regression result, but another path still differs from it. It seems that none of the remaining modification indices is reasonable anymore; what should I do? 


If the latent variable model fits well in terms of chi-square, I would trust its results over the parcel version. The discrepancies may be due to several causes, including low reliability of the parcels. 

Amber Watts posted on Monday, January 09, 2012  10:17 am



With regard to your earlier statement that significance tests for the standardized coefficients should not be used for the unstandardized coefficients: what would explain why one would be significant if the other is not? For example, I am using a latent factor (Mets) to predict an observed continuous variable (memory). The unstandardized coefficient suggests a nonsignificant relationship, while the StdYX coefficient suggests a significant relationship. Thanks 


The raw and standardized coefficients have different sampling distributions. This is why their significances may differ. 

Xiaolu Zhou posted on Monday, January 09, 2012  7:13 pm



Many thanks to Bengt! Yes, I agree with you. I also found the reliability issue of the parcels. Thanks again! 


I have one independent variable (an observed variable, x), one dependent variable (a latent variable, y, made from categorical variables), and one variable that is both dependent and independent (a latent variable, z, made from categorical items).
Usevariables are x y z a b c d e f;
Categorical are a b c d e f;
Model:
y by a b c;
z by d e f;
z on x;
y on x;
y on z;
So I was wondering: 1) since all my outcome variables are latent, does this mean that the results are not probit coefficients but linear regression coefficients? 2) since I have one observed covariate, I should be reporting the StdYX results, am I correct? Thanks a lot! 


Regressions where the factors are dependent variables are linear regressions. You should use StdYX if the covariate is continuous and StdY if the covariate is binary. 


Thanks a lot for your quick response! 


I’d like to use Mplus to generate some Monte Carlo data to use with PLS. In Mplus I can only specify unstandardized estimates for path coefficients etc., while PLS calculates only standardized values. Is it possible to calculate unstandardized estimates corresponding to standardized estimates? 


If you generate data where variables have variances of one, the data are standardized. 

QianLi Xue posted on Thursday, February 23, 2012  7:21 pm



Hello, I understand that the correlation between the residuals of two dependent variables can be obtained as the residual covariance divided by the product of the standard deviations of the two residuals. How do I do significance testing of this calculated correlation in Mplus? Where can I find the variance-covariance matrix of the residual variance and covariance estimates? Thanks in advance for your help! 


In most cases, Mplus gives the standard errors of the standardized coefficients when you ask for STANDARDIZED in the OUTPUT command. TECH3 contains the estimated covariances and correlations of the parameter estimates. 

Jiyeon So posted on Saturday, March 03, 2012  11:57 pm



Hi Prof. Muthen, I want to see if the standardized path coefficients are statistically different from each other. I heard that you can do this by using an equality constraint (e.g., 0 = pathA - pathB). I ran the model with these equality constraints and do not know what to look for in the output in order to see if the path coefficients are statistically different. Please help me! Jiyeon 


In MODEL CONSTRAINT, you should create a new parameter, for example diff, and say diff = pathA - pathB; You will then obtain a z-test for diff. 

Jiyeon So posted on Sunday, March 04, 2012  5:24 pm



Hi Prof. Muthen, Thank you so much for your prompt response. I just did what you suggested (DIFF1 = ID - PSI) and got an error as follows: *** ERROR in MODEL CONSTRAINT command A parameter label or the constant 0 must appear on the left-hand side of a MODEL CONSTRAINT statement. Problem with the following: DIFF1 = ID - PSI What should I do? I defined the paths as ID and PSI. 

Jiyeon So posted on Sunday, March 04, 2012  5:40 pm



More specifically, I have:
Model:
SD ON IDEN (ID);
SD ON TRANS_9 (TR);
SD ON PSI (PSI);
SD ON PR_STD2 (PR);
MODEL CONSTRAINT:
NEW(diff1 diff2 diff3);
diff1 = ID - PSI;
diff2 = PSI - PR;
diff3 = PR - TR;
I added NEW(diff1 diff2 diff3); and this time the software ran, but in the output this was the only thing I got regarding these three new parameters:
New/Additional Parameters
DIFF1 0.229 0.140 1.629 0.103 (p-value)
DIFF2 0.228 0.082 2.778 0.005 (p-value)
DIFF3 0.039 0.086 0.453 0.651 (p-value)
How should I interpret this output? I wasn't able to find any z-test for diff. (Is this the z-test you were referring to?) Thank you very much! 


The third column is the z-test. 

Jiyeon So posted on Sunday, March 04, 2012  9:45 pm



Thank you again! So the second pair of path coefficients is the only statistically different pair? That is counterintuitive, since ID = .41, PSI = .28, PR = .26, TR = .11. So in terms of the difference in the values of the standardized coefficients, diff1 (= .13) was much larger than diff2 (= .02). Is it possible that diff1 is not a statistically significant difference when diff2 is? 


Significance is determined by the ratio of the parameter estimate to its standard error. If diff1 has a large standard error, this is possible. 

Jiyeon So posted on Monday, March 05, 2012  6:23 pm



Thank you very much!!! 

H. R. posted on Friday, March 09, 2012  4:45 am



I have a manifest path model containing ordinal exogenous variables, ordinal outcomes, and ordinal mediators. Is it correct to use the STDY output to obtain the standardized path coefficients, or should I use STDYX? Which output should be used in the case of several predictors where some are continuous and others are ordinal? Thanks a lot for your help. 


If you are treating the ordinal variables as continuous, you should look at StdYX. If you have created a set of dummy variables from the ordinal variable, you should use StdY. 

H. R. posted on Friday, March 09, 2012  2:47 pm



Thank you, Linda for your quick response. 

gibbon lab posted on Sunday, June 03, 2012  7:19 pm



If I have only continuous variables in my model, should I expect to get the same coefficients for the unstandardized and standardized results if I standardize all continuous variables before the analysis? Thanks. 


Yes, if the model is scale free. 

gibbon lab posted on Monday, June 04, 2012  7:42 pm



Hi Professor Muthen, Does "scale free" mean that there are no scale parameters involved in the model? Is that the default if all the variables are continuous? Thanks. 


No, scale free has nothing to do with scale parameters. For a model that is scale free, the same standardized coefficients are obtained whether the unstandardized raw data or standardized data are analyzed. Scale-free models have no constraints across variables, for example, equality constraints. See the Bollen SEM book for further information. 
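A small numeric illustration of this property for a single-predictor regression (a Python simulation, not Mplus; the population values are made up):

```python
import math
import random

random.seed(0)

# Simulated raw data for a one-predictor regression (illustration only)
x = [random.gauss(10.0, 3.0) for _ in range(5000)]
y = [2.0 + 0.7 * xi + random.gauss(0.0, 2.0) for xi in x]

def mean(v):
    return sum(v) / len(v)

def sd(v):
    m = mean(v)
    return math.sqrt(sum((vi - m) ** 2 for vi in v) / len(v))

def slope(a, b):
    # OLS slope of b on a: cov(a, b) / var(a)
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / sd(a) ** 2

# Standardized slope obtained two ways: from the raw-data slope, and by
# refitting after standardizing both variables
b_std_from_raw = slope(x, y) * sd(x) / sd(y)
zx = [(xi - mean(x)) / sd(x) for xi in x]
zy = [(yi - mean(y)) / sd(y) for yi in y]
b_from_standardized = slope(zx, zy)

# For this scale-free model the two agree (up to rounding)
assert abs(b_std_from_raw - b_from_standardized) < 1e-9
```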


Dear Drs. Muthen & Muthen, I have a model with two independent observed variables: XC (continuous) and XB (binary). I also have a continuous dependent variable YC and two binary dependent variables Y1 and Y2. I have the following model:
Y1 ON XC XB;
YC ON XC;
Y2 ON Y1 YC XC XB;
I know Mplus creates two latent continuous variables Y1* and Y2*. And I also know that it is not logical to use the StdYX estimate for an observed independent binary variable. But how can I compare the strength of the effects of Y1*, YC, XC, and XB on Y2*? I want to know which is the strongest predictor of Y2*. Can I use the StdYX estimates for comparative purposes in this case? Thank you. 


There is no way to standardize these coefficients to make them comparable. 


I'd like to test some hypotheses about standardized coefficients (StdYX) after fitting a latent variable path model. Is there a way to do this using the existing Mplus parameter labeling conventions but applied to StdYX coefficients? Otherwise, the standardized coefficients I'm interested in are complicated functions of other more basic model parameters. I know I can export the parameters and the covariance matrix of the parameters and use the delta method but I'm checking to see if there is a short cut. 


We can't think of any short cut. 


In the tech report, "Standardized Coefficients in Mplus", June 13, 2007, on page 2 it mentions, "We can obtain standard errors for the expression in (69) by the delta method if we have the joint asymptotic variance W for theta, Var(Y), and Var(eta)." Any chance that Mplus can output W? 


Not currently. 


Am I correct in my understanding that the 'Estimate' value in MODEL RESULTS for X ON Y is the equivalent of a beta coefficient in other data analytic programs? 


Yes, this is a regression coefficient. 


When running an SEM and looking at the model result estimates (unstandardized), I notice that the first item has a loading of 1 BUT a number of the other items have loadings greater than 1. Is this an issue? And if so, what can I do about it? 


With the unstandardized results, the scale of the factor loading is going to be determined by the scale of the factor indicators. 


Be sure you have no negative residual variances for the factor indicators. 


Thanks Linda, the output shows me the residual variances for the latent variables and they are all positive. How can I get it to display the residual variances for the factor indicators? What should I do if one or more are negative? Thanks, Paula 


Please send the output and your license number to support@statmodel.com. 

Herb Marsh posted on Thursday, October 04, 2012  1:29 pm



For my multigroup SEM, I would like to have a solution in which the estimates are standardized in relation to a common within-group metric (available in LISREL but apparently not Mplus). Here is what I did:
1. I standardized all indicators (Mn=0, SD=1) in relation to the total sample (i.e., a common metric).
2. I ran a 'total group' analysis, disregarding the multiple groups.
3. I included a set of 25 dummy variables representing the 26 countries; I treated each of these as MIMIC variables that predicted the indicators in my model.
4. From this analysis, I took the factor loading for the first indicator of each factor.
5. I then ran my multiple group analysis based on the standardized indicators. However, instead of fixing the first factor loading of each factor to 1.0, I fixed it to the factor loadings I got from the total group analysis (i.e., step 4 above).
My logic is that this is equivalent to standardization in relation to a common within-group metric. This is relevant in that there are substantial group differences on some of the variables, so that the average within-group standard deviations would be quite different from the total-group standard deviations. For the multiple group analysis, the model is fit separately to each group, so the within-group common metric is appropriate. Is this appropriate, and is there an easier way to do it? 


I don't follow your proposal, but it sounds like you want to standardize with respect to pooled-within group variances. If so, that covariance matrix can be obtained in a separate run. Maybe you want to try and see if this gives the same answer as your proposal. 

Dave Graham posted on Thursday, November 08, 2012  1:42 pm



Dear Professors Muthen, in my model I am regressing an observed variable and two latent variables on two dichotomous independent variables (this is not the full model, but the part where the problem occurs). I get higher significance levels for the path from the first dichotomous variable to the observed outcome compared to the paths from the second dichotomous variable to the latent constructs. However, the standardized coefficients are higher for the paths to the latent variables, no matter what type of standardization I use (STDYX, STDY, STD). I can see that the Est./S.E. values are bigger for the coefficient estimated for the observed outcome, which should therefore be more significant. However, the result does not intuitively make sense to me. Shouldn't the significance level be reflected in the coefficients after standardization? I am not sure if I have misspecified my model. I tried different latent variables (but the same observed outcome) and the problem/phenomenon stays the same. It would really be great if you could give me a hint on whether this is an explainable and sensible result or whether I might have a wrong model. Thank you very much! Dave 


Please send your output and license number to support@statmodel.com. Isolate one case to be looked at. 

Lois Downey posted on Friday, March 01, 2013  12:15 pm



I need 95% confidence intervals for standardized coefficients in a complex regression model with a latent-variable outcome (ordered categorical indicators) and several manifest predictors. (My motivation for seeking standardized estimates was my belief that this would keep the estimates independent of which indicator was used to scale the latent variable.) However, with the default WLSMV estimation, 95% CIs seem to be given only for the unstandardized coefficients. By contrast, if I specify robust maximum likelihood estimation with a probit link, the 95% CIs are given both for the unstandardized coefficients and for coefficients standardized any of the three ways. Several questions: 1) Is there any reason not to use the MLR/probit solution in lieu of WLSMV? 2) How do I test for effect modification in an MLR/probit model in Mplus? (Stata appears to have a utility for evaluating interaction effects in logistic and probit models, but so far I've been unable to locate this facility in Mplus.) 3) Is there some reason for the omission of CIs for standardized coefficients with WLSMV? 4) I assume that the use of WLSMV as the default for models of this type is based on the designers' belief that it is preferable for some reason. Can you explain the reason for this preference (in lay terms)? THANKS VERY MUCH! 


1. No. 2. Create the interaction terms using the DEFINE command. 3. We don't compute standard errors for standardized coefficients with WLSMV in conditional models, so confidence intervals cannot be created. This will change in one of the next versions of Mplus. 4. We use WLSMV as the default because it does not require numerical integration with categorical outcomes and because residual correlations are more easily included in the model. WLSMV is not preferable to maximum likelihood. 

Lois Downey posted on Saturday, March 02, 2013  7:16 am



Thank you for the answers. With regard to response #2, I was not specific enough in the question I asked. I understand how to compute the interaction term. However, there is an article in The Stata Journal (http://www.statajournal.com/article.html?article=st0063) that seems to suggest that a different method is needed for correctly estimating the interaction EFFECT and its SIGNIFICANCE when the model is based on either probit or logistic regression. Is there a way to obtain these corrected values in Mplus? 


No. It seems reasonable, but I don't have a good view of how widely used this procedure is. It is not in the Long & Freese Stata book (second edition), for instance. 
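For what it's worth, the procedure in that article (Ai & Norton's correction) amounts to computing the interaction effect as the cross partial derivative of the predicted probability rather than reading off the coefficient of the product term. A minimal sketch of that calculation for a probit model, with made-up coefficient values (not from any actual output):

```python
import math

def ncdf(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def npdf(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def probit_interaction_effect(b1, b2, b12, x1, x2):
    """Cross partial derivative d^2 Phi(u) / dx1 dx2 for
    u = b1*x1 + b2*x2 + b12*x1*x2 (Ai & Norton style)."""
    u = b1 * x1 + b2 * x2 + b12 * x1 * x2
    # d/dx1 Phi(u) = (b1 + b12*x2) * phi(u); differentiating again
    # w.r.t. x2 and using phi'(u) = -u * phi(u) gives:
    return b12 * npdf(u) + (b1 + b12 * x2) * (b2 + b12 * x1) * (-u) * npdf(u)

# Cross-check against a numerical cross difference (hypothetical values)
b1, b2, b12 = 0.5, -0.3, 0.2
x1, x2, h = 1.0, 0.5, 1e-4

def p(v1, v2):
    return ncdf(b1 * v1 + b2 * v2 + b12 * v1 * v2)

numeric = (p(x1 + h, x2 + h) - p(x1 + h, x2 - h)
           - p(x1 - h, x2 + h) + p(x1 - h, x2 - h)) / (4 * h * h)
analytic = probit_interaction_effect(b1, b2, b12, x1, x2)
print(round(analytic, 5), round(numeric, 5))
```

The point of the correction is that this quantity varies with x1 and x2 and can even change sign, so it generally differs from the raw product-term coefficient b12.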


I have a few questions about the regression coefficients reported in the output. I am running a simple mediation in which the IV and mediator are continuous, with four DVs which are categorical. One of the DVs is dichotomous; the other DVs are ordered categorical. 1. Do the regression coefficients reported for each of the relationships vary in type, so that the IV to M path is a linear regression and the paths to the DVs are probit coefficients? 2. How does one interpret coefficients considering they express different things, e.g. linear coefficients interpreted as unit changes in both variables and probits as changes in a z-score? Does the output make things 'easier' in that the regressions reported are scaled to allow for similar interpretation? 3. This is not a coefficient question per se. When using categorical variables, do ordered categorical variables need to be coded as dummy variables, or can they be used in their raw ordered form in one variable, e.g. 1, 2, 3, 4, 5? 4. Related to 3: when doing simple regression, a coefficient of .88 for predicting a dichotomous DV from a continuous IV was not significant. However, a variable with multiple categories predicted from the same IV had a smaller coefficient but was significant. How does this arise? Many thanks in advance for your replies. Volker. 


If your estimator is WLSMV or Bayes you are working with probit. For ML you have a choice in using link=probit or logit. What's used is printed in the Summary output. For interpretations with a binary DV, see Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus. which you find on our web site under Papers, Mediational Modeling. Re q 3, ordinal variables can be kept in raw form. Re q 4, one of many possible reasons could be that you have more information/more power when the variable is polytomous. 


Thank you Bengt for all the fabulous support. ML is not available as I am bootstrapping the analysis (v6.11). But if I have a mix of variable types under WLSMV (which is the default, I think, when categorical variables are used), are all coefficients probit, even those for continuous variables? I have seen the paper but found it rather technical. However, does it apply equally to ordinal variables? Re q4: is there any way to fix this, for example by dichotomizing all dependent variables? 


For the continuous DVs in a model with mixed scale types, WLSMV uses a regular linear regression model. With ordinal DVs one can perhaps argue that the continuous latent response variable is the key DV and thereby fall back on regular formulas for continuous DVs. I would not necessarily dichotomize ordinal variables; I don't see your discrepant results as a problem. 


Thank you. Much appreciated. 


Hi there, I am checking my use of the Mplus output with regard to standardized coefficients. I have a simple path analysis with only observed variables: y on c x1 x2 x3; c on x1 x2. The estimator is WLSMV as c is ordered categorical (1 2 3). x1 and y are continuous; x2 and x3 are binary (0/1, as in gender) but, being exogenous, not designated as such. I will report STDYX for y on x1 and calculate STDY to report for y on c x2 x3. My confusion arises in the paths c (ordinal) on x1 (continuous) and c on x2 (binary). I think these are probit regressions, but I'm unsure which form of standardized coefficient is best to use? Thanks for any advice. 


The type of coefficient to use depends on the scale of the covariate. Covariates in regression can be treated as binary or continuous. For binary covariates, use StdY. For continuous covariates, use StdYX. 

lamjas posted on Tuesday, April 02, 2013  7:53 pm



Hi there, I have a path model (no latent variables) with two binary and two continuous observed variables. The model is like this: u1 on u2 c1 c2; u2 c1 on c2; where the u's are binary variables and the c's are continuous variables. The estimator is WLSMV by default. I have questions about whether I should use unstandardized or standardized coefficients when I report the results. (1) Should I report unstandardized coefficients, with odds ratios, for the paths involving a binary DV (u1 on u2; u1 on c1; u1 on c2; u2 on c2)? These paths look like logistic regressions to me. (2) For the path involving two continuous variables (c1 on c2), I believe I should use StdYX coefficients, is that right? (3) For indirect effects, should I use the StdYX coefficients provided by the MODEL INDIRECT command? Thank you for your advice. 


With WLSMV, u2 as a covariate is the continuous latent response variable underlying u2. So in all cases your covariates are continuous, and you should use StdYX in all cases. (In general, for observed binary covariates, you would use StdY.) 

lamjas posted on Thursday, April 04, 2013  6:15 pm



Hi Linda, I have a follow-up question to confirm the reporting of indirect effects. For the indirect effect c2 -> c1 -> u1, I believe I should use the StdYX provided by the MODEL INDIRECT command, as both direct effects use StdYX. How about c2 -> u2 -> u1? For the direct effects, c2 -> u2 is StdYX while u2 -> u1 is StdY, so should I multiply the StdYX coefficient by the StdY coefficient? Thanks again. 


You look at the exogenous variable. Both u1 and u2 are continuous latent response variables with WLSMV when they are used as covariates. So you would use StdYX. 

Hemant Kher posted on Saturday, April 06, 2013  10:51 am



Dear Professors, I ran a growth model with a predictor for the latent intercept / slope, and some distal outcomes predicted by the latent intercept / slope. When I see the raw data model results, all of the paths from the latent intercept / slope to the distal outcomes are nonsignificant. Yet, when I see the same paths within the STDYX portion, I see that some of the paths are (in some cases very highly) significant. I am confused as to why there are two seemingly contradictory results. 


Please send your output, input, data and license number to Support@statmodel.com. 

Cecily Na posted on Sunday, April 07, 2013  8:48 am



Dear professors, I am running a two-level model without latent factors, but with a dichotomous outcome. The unstandardized coefficient of the between-level path is not significant, but the standardized path is highly significant (beta = 0.999, p < .001). What is the reason? I used STDYX. Thank you! 


Please send the output and your license number to support@statmodel.com. 


Dear Dr. Muthen I wonder how to interpret the coefficients labeled "StdYX" and "StdY" correctly? Do we have the cutoffs that can be used to interpret the values? Can we use the 0.2, 0.5, and 0.7 (Cohen's effect size) for interpretation? Thank you so much! 


The interpretation is shown under the STANDARDIZED option in the user's guide. These can be used as effect size for a binary covariate because they represent a mean change but not for a continuous covariate. 


Hello, I am performing path analysis with various sorts of variables. When requesting standardized output, I get the following message: "STANDARDIZED COEFFICIENTS ARE NOT AVAILABLE FOR MODELS WITH CENSORED, CATEGORICAL, NOMINAL, COUNT, OR CONTINUOUS-TIME SURVIVAL MEDIATING OR PREDICTOR VARIABLES." (1) Is there a way to calculate the standardized coefficients myself for binary, censored and continuous covariates, or would it be sufficient to report the unstandardized regression coefficients? (2) In a paper I reported the unstandardized regression coefficients of the path model in tables. However, the reviewers want me to visualize the path model, and it seems unusual to me to report the unstandardized regression coefficients next to the arrows in the path diagram. Would it be OK to mention the unstandardized regression coefficients next to the arrows? (3) When I tried to model a mere correlation between two variables in the path model, I got the following message: "Covariances for categorical, censored, count or nominal variables with other observed variables are not defined." Might there be another way to take into account the relationship between those two variables? Thank you for your advice. 


Please send your output and license number to support@statmodel.com so we can see the full context. 


Dear Mplus Team, with one categorical variable y and one factor f (measured by other variables u1-u10) and the probit regression y on f: what is the meaning of the STDYX standardized solution when I'm using the MLR estimator with a probit link? With WLSMV this can be seen as a polyserial correlation (regression of the standardized underlying latent y* variable on the standardized continuous factor f). But in the MLR probit model, how can I interpret the STDYX solution with regard to y ON f? (It is a multiple-group model and I'm interested in the polyserial correlations between y and f, but I must use the MLR estimator because of empty cells.) Thank you. 


With a categorical y, the ML(R) probit STDYX for y ON f pertains to the linear regression coefficient of y* regressed on f. This is the same as with WLSMV, except that ML probit uses the Theta parameterization while WLSMV uses Delta by default. 
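For a concrete sense of this, the STDYX value can be computed from raw Theta-parameterization estimates: under Theta the residual variance of y* is fixed at 1, so Var(y*) = b^2 * psi + 1. A sketch with hypothetical values for the slope b and factor variance psi (not from any real output):

```python
import math

# Hypothetical unstandardized probit slope and factor variance
b = 0.8      # y* ON f (Theta parameterization: residual variance of y* fixed at 1)
psi = 1.5    # Var(f)

var_ystar = b**2 * psi + 1.0                      # model-implied Var(y*)
stdyx = b * math.sqrt(psi) / math.sqrt(var_ystar) # = corr(y*, f), the polyserial correlation
print(round(stdyx, 4))
```

Because the standardization divides by the implied SD of y*, the result is automatically bounded like a correlation in this simple one-predictor case.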

Tom Booth posted on Tuesday, June 25, 2013  2:56 am



Bengt/Linda, I have a model in which a latent variable (which has count variables as indicators) is regressed on a continuous variable. I appreciate that the factor loadings of this model are best reported as raw coefficients, but which standardization is most meaningful for the regression path involving the latent construct and the continuous variable? Thanks, Tom 


In a conditional model, the choice of a standardization depends on the covariate. For a binary covariate, use StdY. For a continuous covariate, use StdYX. 

Tom Booth posted on Wednesday, June 26, 2013  1:55 am



Thanks Linda. So given here the covariate (IV) in the model is a latent variable, StdYX is correct. As always, thanks for the swift response. 


StdY and StdYX are the same when the covariate is latent because an X is not involved. 


I have run a structural equation model with both continuous latent and continuous observed variables. My problem is that when I interpret the standardized coefficients, I find that some small coefficients are highly significant (for instance, .17, p < .001) while some larger ones are not (p > .05). If the coefficients are standardized, it is my understanding that I should be able to directly compare them, but the output seems to defy that logic. Could you help me understand what is going on? 


Are you saying that some standardized coefficients are large and not significant while smaller ones are significant? I don't understand your question. 


Hi Dr. Muthen, Yes that is exactly what I am saying. I have standardized coefficients that are small, but significant, and others that are large, but not significant. My understanding with standardized coefficients is that I should be able to compare them in terms of magnitude, and so a smaller coefficient should not be significant if a larger coefficient is not. 


Whether a coefficient is significant depends on its standard error, not its size. 


Does EST/SE always have a z distribution, regardless of whether I'm working with categorical or continuous variables? Thank you, 


Yes, in large samples. 
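As a concrete illustration of the two answers above (significance depends on the standard error, and Est./S.E. is treated as a z statistic in large samples), here is a small sketch with made-up estimates and SEs:

```python
import math

def two_tailed_p(est, se):
    """Two-tailed p-value treating Est./S.E. as a standard normal z
    (appropriate in large samples)."""
    z = est / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# A small coefficient with a small SE can be significant while a
# larger coefficient with a large SE is not (hypothetical numbers):
print(round(two_tailed_p(0.17, 0.04), 5))  # z = 4.25
print(round(two_tailed_p(0.40, 0.30), 5))  # z = 1.33
```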


On 9 June 2011 in this thread, you gave the scenario below: MODEL: Y ON x1 (p1) x2 (p2) x3 (p3); Y (p4); MODEL CONSTRAINT: NEW (vary); vary = p1**2*vx1 + p2**2*vx2 + p3**2*vx3 + 2*p1*p2*covx1x2 + 2*p2*p3*covx2x3 + 2*p1*p3*covx1x3 + p4; and said that we can use MODEL CONSTRAINT to calculate the standardized estimate manually. The problem is: how can we access the sample covariance terms when the covariances are not estimated as parameters in the model (as above)? 


You would need to label the covariances of the covariates in the MODEL command and use the labels in MODEL CONSTRAINT, for example, x1 WITH x2 (p5); 
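For reference, the arithmetic behind that MODEL CONSTRAINT formula is just the variance of a linear combination plus the residual variance; the standardized slope then divides by the implied SD of Y. A sketch with hypothetical parameter values (the labels mirror the Mplus syntax above):

```python
import math

# Hypothetical values for Y ON x1 x2 x3 (slopes p1-p3), residual variance p4,
# and the covariate variances/covariances labeled in the MODEL command
p1, p2, p3, p4 = 0.5, 0.3, -0.2, 0.8
vx1, vx2, vx3 = 1.0, 2.0, 1.5
c12, c23, c13 = 0.3, 0.4, 0.1   # cov(x1,x2), cov(x2,x3), cov(x1,x3)

# Var(Y) = sum of squared slopes times variances, plus twice the
# slope-weighted covariances, plus the residual variance
vary = (p1**2 * vx1 + p2**2 * vx2 + p3**2 * vx3
        + 2 * p1 * p2 * c12 + 2 * p2 * p3 * c23 + 2 * p1 * p3 * c13 + p4)

# Standardized slope for x1: p1 * SD(x1) / SD(Y)
std_p1 = p1 * math.sqrt(vx1) / math.sqrt(vary)
print(round(vary, 4), round(std_p1, 4))
```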


Thank you for the response. While I was waiting for your answer, I played around with the model to observe Mplus's behaviour. It seems that Mplus still estimates covariances between independent variables even though I did not specify WITH at all. And after I specified one covariance, the degrees of freedom and goodness-of-fit indices changed; nonetheless, they became the same when I specified covariances among all independent variables. Right now I am figuring out how to extend the same formula to a more complex model: y ON x1 x2; x1 ON x3 x4 (p1-p2); x3 ON x4 x5 x6 (p3-p5); I tried to create a formula under MODEL CONSTRAINT for calculating the variance of x1, but the figure does not match the result in the estimated covariance matrix. The problem is that x4 indirectly affects x1 through x3, and this is different from having multiple covariates that do not affect one another (i.e. no residual term). I know that parts of x1's variance come from p1**2*vx3 (i.e. vx3 is the variance of x3, and this part of the calculation is correct since it matches the value in the estimated covariance matrix), p2**2*vx4, and rx1 (i.e. x1's residual term). I do not think that I can simply use covariance terms here since x4 directly affects x3. Could you clarify this point? 


Dear Mplus Team, I am running an SEM model with a dichotomous outcome. I don't think I can use the MLR estimator to get logit coefficients because then you don't get model fit indices (e.g. RMSEA), which I need. So I have to use the WLSMV estimator, which produces probit coefficients and comes with model fit indices. However, I am struggling to interpret them. I have the formula for marginal probabilities, which I have seen in the forum, but I'm unsure how to use it: P(y=1) = f(threshold + b1*x1 + b2*x2 + ...). 1) I don't know what f (the cumulative normal distribution function) is. 2) What threshold do I use: the threshold for the outcome, or the threshold for the predictor? 3) If my x1 variable is a latent factor, what value do I put in the formula? 4) And if I have three paths of indirect effects, do I have to include all their coefficients in the equation? 
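Regarding questions 1-3 above: f is the standard normal cumulative distribution function (Phi); the threshold is the one reported for the outcome, and in Mplus's probit parameterization it enters with a negative sign, P(u=1|x) = Phi(-tau + b1*x1 + ...); and for a latent factor you plug in chosen values such as the factor mean and plus/minus one SD. A sketch with hypothetical estimates (tau and b1 are made up, not from any actual output):

```python
import math

def ncdf(z):
    """The f in the formula above: the standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

tau, b1 = 0.6, 0.9   # hypothetical threshold (for the outcome) and probit slope
for x1 in (-1.0, 0.0, 1.0):   # e.g. a latent factor at -1, 0, +1 SD if Var(f) = 1
    print(x1, round(ncdf(-tau + b1 * x1), 3))
```

Plotting such probabilities over a grid of predictor values is the "detailed description of the influence" mentioned at the top of this thread.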


Siddhi: You would need to use a reduced form equation where you substitute the x3 equation for x3 in the x1 equation. Then you express x1 in terms of x4, x5, and x6. 


Linda: Thank you for the advice. I have spent the whole day learning the tracing rule from Mulaik's book (2009) and Wright's paper (1921). Below is the variance formula: y ON x1 x2; x1 ON x3 x4 (p1-p2); x3 ON x4 x5 x6 (p3-p5); x4 WITH x5 (c1); x4 WITH x6 (c2); x5 WITH x6 (c3); x4 (vx4); x5 (vx5); x6 (vx6); x1 (rx1); x3 (rx3); MODEL CONSTRAINT: NEW(vx1 vx3); vx3 = p3**2*vx4 + p4**2*vx5 + p5**2*vx6 + 2*p3*p4*c1 + 2*p3*p5*c2 + 2*p4*p5*c3 + rx3; vx1 = p1**2*vx3 + p2**2*vx4 + 2*p1*p2*p3*vx4 + 2*p1*p2*p4*c1 + 2*p1*p2*p5*c2 + rx1; I have checked against the maximum likelihood results (estimated matrix), and the calculation is correct, but I would like to cross-check with you that it is indeed correct. I used this formula in a Bayesian path analysis, but this time the calculated results differ from the estimated variance-covariance matrix. Also, the posterior predictive p-value changed just because I created new variables under MODEL CONSTRAINT. Does Mplus treat the calculation as a separate independent variable? It seems to have an impact on model fit in Bayesian analysis. 
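The vx1 formula above checks out algebraically: the three 2*p1*p2*... terms are exactly 2*p1*p2*Cov(x3, x4), since Cov(x3, x4) = p3*vx4 + p4*c1 + p5*c2 under this model. A quick numeric confirmation with made-up parameter values:

```python
# Cross-check of the poster's tracing-rule formulas with arbitrary
# (hypothetical) parameter values, using covariance algebra directly:
# x3 = p3*x4 + p4*x5 + p5*x6 + e3;  x1 = p1*x3 + p2*x4 + e1
p1, p2, p3, p4, p5 = 0.4, 0.3, 0.5, -0.2, 0.6
c1, c2, c3 = 0.2, 0.1, 0.3          # cov(x4,x5), cov(x4,x6), cov(x5,x6)
vx4, vx5, vx6 = 1.0, 1.5, 2.0
rx1, rx3 = 0.7, 0.9                  # residual variances

# Poster's formulas
vx3 = (p3**2*vx4 + p4**2*vx5 + p5**2*vx6
       + 2*p3*p4*c1 + 2*p3*p5*c2 + 2*p4*p5*c3 + rx3)
vx1 = (p1**2*vx3 + p2**2*vx4
       + 2*p1*p2*p3*vx4 + 2*p1*p2*p4*c1 + 2*p1*p2*p5*c2 + rx1)

# Direct route: Var(x1) = p1^2 Var(x3) + p2^2 Var(x4) + 2 p1 p2 Cov(x3,x4) + rx1
cov_x3x4 = p3*vx4 + p4*c1 + p5*c2
vx1_direct = p1**2*vx3 + p2**2*vx4 + 2*p1*p2*cov_x3x4 + rx1
print(round(vx1, 6), round(vx1_direct, 6))
```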


Please send the ML and Bayes outputs and your license number to support@statmodel.com. 

db40 posted on Friday, November 15, 2013  12:48 pm



Dear Mplus team, I have a fairly basic mediation model, as seen in my syntax. One of the problems I'm experiencing is a large Est./S.E. = 999.000, as per below. I have searched and read that I may have negative variances. Apologies, but what does this mean and how can I get around the problem? I have reduced the model significantly to see if this negates the issue, as below, but it hasn't. Two-Tailed Estimate S.E. Est./S.E. P-Value Effects from SINGLE to SUICTHLF Total 0.376 0.049 7.733 0.000 Total indirect 0.000 0.000 999.000 0.000 ****************************************** My syntax: VARIABLE: NAMES ARE pserial Age female male married single SWD MH suicthyr suicthlf suicatwk suicatyr Soc_2010 wt_ints1 occ_prestige high medium low N_WS WS_2 ResSex; USEVARIABLES ARE single SWD MH suicthlf; CATEGORICAL ARE suicthlf; MISSING ARE ALL (99, 999); ANALYSIS: Type = missing; Estimator = WLSMV; MODEL: suicthlf ON single SWD MH; MODEL INDIRECT: suicthlf IND single; suicthlf IND SWD; 


Please send the output and your license number to support@statmodel.com. 

Raj Kumar posted on Tuesday, December 10, 2013  2:30 pm



Hello: I'm new to the Mplus program, so forgive me if this is a trivial question. My research question is as follows: Y = ordinal, X = continuous, M = binary. I would like to get standardized estimates to compare in this mediation analysis. From what I understand, I should use STDYX for Y on X and M on X. I'm confused about what to use for the mediator's standardized estimate, as it is binary and STDYX is not appropriate. I believe it is STDY, but I'm still not sure where you get the Y* needed to derive this. Also, can you compare the value of a STDYX estimate to a STDY estimate directly? I have gone through the user manual extensively and am still finding it unclear. I would greatly appreciate it if you could walk through the steps for completing this problem. Thank you in advance. 


If you use WLSMV, a continuous latent response variable behind the observed binary M is used in the modeling, so the regular STDYX estimates are fine. 

Stephanie posted on Monday, January 27, 2014  5:19 am



As I am using WLSMV, I do not get results for the p-values and standard errors of the standardized results. But if I would like to report the significance of these standardized estimates, can I assume that if the corresponding unstandardized results are significant the standardized ones are as well? And if I can't, is there another way to get their p-values? And my second question: did I understand correctly that with WLSMV I should report the STDYX results for both continuous and binary variables? Thank you once more for your kind support! 


You cannot assume that the p-values for the unstandardized estimates are the same as for the standardized ones. These p-values will be available in the next release, which should come out next month. No: use StdYX for continuous covariates and StdY for binary covariates. 

Stephanie posted on Monday, January 27, 2014  11:36 pm



Thank you very much! 

Raj Kumar posted on Thursday, January 30, 2014  2:26 pm



Dr. Muthen, I am a bit confused regarding the consistency between your last two posts (12/10/2013 and 1/27/14). In the earlier post you said the 'regular STDYX estimates are fine', but in the more recent post you said to use 'StdY for binary, StdYX for continuous.' My original question was how do you calculate StdY (as you need a Y* to calculate this)? To recap, my research question is as follows: Y = ordinal, X = continuous, M = binary. So I was just wondering the proper way to approach this problem. Thank you in advance. 


Are you referring to the 1/27/14 post by Linda Muthen? She was talking about a binary covariate. You don't have that. My 12/10/2013 answer still stands. 

Raj Kumar posted on Thursday, January 30, 2014  3:35 pm



I apologize, yes I was referring to the post by Linda Muthen. Thanks, I will use WLSMV and the StdYX estimates. 


Hello, I am constructing an SEM with both binary and continuous variables (TYPE is general; estimator is ML; integration is Monte Carlo). Like Heidi Knipprath posted on 6/11/03, I get the error message "STANDARDIZED COEFFICIENTS ARE NOT AVAILABLE FOR MODELS WITH CENSORED, CATEGORICAL, NOMINAL, COUNT, OR CONTINUOUS-TIME SURVIVAL MEDIATING OR PREDICTOR VARIABLES". I get the same message when I request STDY or STDYX. Based on the discussion on the board, it seems that standardized coefficients should be available for binary variables. Thank you in advance for your insight. 


With categorical outcomes and maximum likelihood estimation, numerical integration is required. Standardized results have not been implemented in this case. 

Stephanie posted on Wednesday, February 05, 2014  7:07 am



I would like to refer to my post of January 27, 2014. I have just realized that I only get results for STDYX and STD but not for STDY. I used OUTPUT: STANDARDIZED; How is this possible? 


The full standardized output is not available with WLSMV for models with covariates. You will need to convert StdYX to StdY; see the STANDARDIZED option if you do not know the formulas. 
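The conversion itself is simple: since StdYX = b*SD(x)/SD(y) and StdY = b/SD(y), you have StdY = StdYX/SD(x), where for a binary covariate SD(x) = sqrt(p(1-p)) with p the sample proportion. A sketch with hypothetical numbers:

```python
import math

# Converting StdYX to StdY for a binary covariate x:
# StdYX = b * SD(x) / SD(y),  StdY = b / SD(y)  =>  StdY = StdYX / SD(x)
stdyx = 0.21          # hypothetical StdYX coefficient from the output
p = 0.40              # sample proportion of the binary covariate
sd_x = math.sqrt(p * (1 - p))
stdy = stdyx / sd_x
print(round(stdy, 4))
```

Since SD(x) < 1 for any binary variable, StdY is always larger in magnitude than StdYX here.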

Stephanie posted on Tuesday, February 11, 2014  12:53 am



Thank you for your reply. But which standardized coefficient should be used if both variables, dependent and independent, are binary? Does the StdY also make sense in that case? 


For a binary covariate use StdY. For a continuous covariate use StdYX. It is the covariate that determines the choice of standardized coefficient. 


I have a multiple group model where the grouping variable is gender. I ran an unconstrained model in which the paths were free to vary by gender. In the output, one of the parameters is significant for both males and females. Is there a way to test if the coefficient for males is significantly larger than the coefficient for females? Thank you! 


There isn't a test of one coefficient being larger than the other, just a test of whether they are the same or not. 


Hi, the standardized coefficients at the between level in a two-level MSEM were quite large (.3-.4), all with p-values < .001. The model ICC, however, was .012. This is a typical school-effects model. I grand-mean centered the between-school predictors. Because the ICC seemed negligible, I reran the model accounting for stratified sampling but without the multilevel modeling. The standardized school effects are now very small (.01-.03) and not significant. I am surprised by the drastic change in the standardized coefficients. Are you? 


Please send the two outputs and your license number to support@statmodel.com. 

Hugo posted on Tuesday, April 22, 2014  6:15 am



Hello. I am wondering why I get the same estimate in these two models (I have this problem with more models) in the STDYX standardization output. Note that all the variables are binary, and I use ESTIMATOR = WLSMV and PARAMETERIZATION = THETA. Model 1: NEUNEO BY d1 x9 x10 x11 x20 x21 x58 x59 x60; NEUNEO@1; STDYX standardization (Estimate, S.E., Est./S.E., Two-Tailed P-Value): NEUNEO BY D1 0.707 0.000 999.000 999.000; X9 0.751 0.085 8.856 0.000; (...) Model 2: NEUNEO BY x9 d1 x10 x11 x20 x21 x58 x59 x60; NEUNEO@1; STDYX standardization: NEUNEO BY X9 0.707 0.000 999.000 999.000; D1 0.033 0.123 0.273 0.785; (...) Why do I have the same value (0.707) for the first-listed indicator in both models? 


Please send the two outputs and your license number to support@statmodel.com. 

M.G. Keijer posted on Thursday, May 15, 2014  12:08 pm



Dear Linda, I was wondering how I could constrain standardized coefficients to be the same across groups. In a basic multiple-group model the constrained unstandardized coefficients are the same across groups, but the constrained standardized coefficients differ, probably because group-specific estimates differ. Is there still a way to get the same STDYX coefficients, for instance by constraining the "PSI part"? If so, how do I do this? Thank you in advance. 


You should be very careful considering standardized coefficients for group comparisons. The statistical literature has many articles warning against doing that and reviewers will most likely protest. The unstandardized coefficients are the ones likely to be invariant, not the standardized ones. This is because the different groups most likely have different variances. 


I'm running a TWOLEVEL model and would like to confirm whether the coefficient estimates in my model results are standardized. The previous answer in this thread doesn't seem to apply since I don't have "Std" as a descriptor for my coefficients. It doesn't seem like my coefficients could be standardized since I have coefficients greater than 1. Thank you. 


You will receive standardized estimates only if you ask for them in the OUTPUT command using STD, STDYX, STDY, or STANDARDIZED. 

Sarah Racz posted on Monday, June 30, 2014  1:59 pm



Dear Drs. Muthen, I am conducting a series of SEM-based latent growth curves with censored data due to a stacking up of the data at the top of the distribution. Based on reading through the Mplus discussion boards, I am using the WLSMV estimator. I have covariates in my model, but I understand that Version 7.2 will provide fully standardized output for WLSMV with covariates. I just downloaded the new version, but I am still not getting standardized standard errors and p-values in my output. Any thoughts as to why? Thank you in advance! 


Please send the output and your license number to support@statmodel.com. 


Dear Linda (or Bengt)! I have a logistic model with a latent predictor and would like to present a coefficient that stands for "the increase in the logit of outcome = 1 when the latent predictor increases by one SD". Is this what the Std-standardized coefficient stands for? 


Yes. 
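Concretely, since the Std coefficient here is a change in the log odds per 1-SD increase in the latent predictor, exponentiating it gives an odds ratio per SD. A small sketch with a made-up coefficient value:

```python
import math

# Hypothetical Std-standardized logit slope: the change in the log odds
# of outcome = 1 per 1-SD increase in the latent predictor
std_logit = 0.65
odds_ratio_per_sd = math.exp(std_logit)   # odds ratio per SD of the factor
print(round(odds_ratio_per_sd, 3))
```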

Stephanie posted on Thursday, July 10, 2014  3:18 am



Dear Drs. Muthén, when using StdY for a binary independent variable I get coefficients larger than 1. Is this possible? Thank you for a short reply. 


Yes. See the FAQ on the website Standardized Values Greater Than One. 


Dear Drs. Muthén, I am testing a cross-lagged model with three assessments over time: t0, t1, t2. One of my variables is binary, and the other is continuous. When I inspect the correlations in SPSS and Mplus, I see major divergences for the binary variable. The t0-t1 and t1-t2 correlations in SPSS are more or less comparable, but in Mplus the same values are .39 and .82, respectively (using the exact same sample). I see the same differences in the stability paths of the cross-lagged model. For the binary variable, the standardized (STDYX) stability coefficient from t0 to t1 is .33, whereas the stability coefficient from t1 to t2 is .82. This is strange because these observations are rather stable over time, with 89% and 91% of the sample remaining in the same category from t0 to t1 and t1 to t2, respectively. Can you please give me a hint about the reason for these differences in the estimated values? Thanks, 


I think the differences you are seeing are that you treat the binary variable as categorical in Mplus. Therefore, you are comparing Pearson correlations in SPSS to tetrachoric correlations in Mplus. 
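To see why the two can diverge so much, here is a sketch comparing the Pearson (phi) correlation of a 2x2 table with a rough tetrachoric approximation (Digby's formula; the actual Mplus estimate is maximum likelihood under a bivariate normal, and all counts here are hypothetical):

```python
import math

# Pearson (phi) vs. an approximate tetrachoric correlation for a 2x2 table.
# Hypothetical counts for a fairly stable binary variable measured twice:
#            t1=0   t1=1
#   t0=0      a      b
#   t0=1      c      d
a, b, c, d = 40, 10, 8, 42

phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Digby's approximation to the tetrachoric correlation; the real estimate
# is ML under a bivariate normal, so this is only a rough sketch
odds_ratio = (a * d) / (b * c)
tetra_approx = (odds_ratio ** (math.pi / 4) - 1) / (odds_ratio ** (math.pi / 4) + 1)
print(round(phi, 3), round(tetra_approx, 3))
```

The tetrachoric value is typically larger in magnitude than phi for the same table, which is consistent with the larger correlations the poster sees in Mplus.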


Dear Linda, thank you very much for your prompt response. Can you please also comment on the differences in the stability paths in Mplus? As noted above, for the binary variable the standardized (STDYX) stability coefficient from t0 to t1 is .33, whereas the stability coefficient from t1 to t2 is .82, even though these observations are rather stable over time (89% and 91% of the sample remaining in the same category). Why do we get such different stability coefficients even though the change over time in the two periods is very similar? Also, if I do not define the t1 and t2 observations as binary, what would the estimates be based on: logistic regression, or OLS? Thanks, Metin 


Q1. The different sample statistics used (Pearson vs tetrachorics) make it impossible to predict what differences you might see, so we cannot comment on the stabilities. Q2. Linear regressions using ML. 

Mandy Cao posted on Monday, December 01, 2014  8:23 am



Dear Dr. Muthen, I am struggling with one question and would appreciate your help a lot. I am using 5 IVs to predict engagement, and engagement then predicts one DV. I found 3 IVs have significant path coefficients to engagement: .35, .26, and .11, respectively (standardized). So I concluded that the one with .35 had the biggest effect on work engagement. One of my committee members said I cannot eyeball the coefficients to draw that conclusion and that I need to test it (but did not tell me how). Another committee member told me to conduct a relative importance test or dominance test. After reading two articles, I found it impossible to do these two tests because I loaded every item onto the relevant latent construct in a first-step CFA and then did the structural regression part. The examples I have read of relative importance or dominance analysis seem to have one score represent each variable. I have consulted my research professor, and he told me it is impossible to do such tests. I did read the Mplus manual but did not find relevant information. Could you please kindly provide your insights? (FYI: these 5 independent variables are correlated.) Thank you very much! 


If you are saying that you want to test that standardized coefficient x is greater than standardized coefficient y, that is a more advanced topic. You may want to look at the article on our website: Van de Schoot, R., Hoijtink, H., Hallquist, M. N., & Boelen, P. A. (2012). Bayesian evaluation of inequality-constrained hypotheses in SEM models using Mplus. Structural Equation Modeling, 19, 593-609. 


I am running a latent profile analysis model where the latent class moderates a simple regression relationship between two continuous variables. The latent classes are determined by four continuous variables (which are not the exogenous/endogenous variables). I have included the STDYX command in the output to obtain the standardized results, and I want to make sure I am interpreting the output correctly. 1. The means and parameter estimates (coefficients and intercepts) in the unstandardized output are just the means of each variable within each class, as well as the unstandardized slope coefficients, correct? 2. Are the standardized parameter estimates for the coefficient Y on X just typical standardized coefficients (betas)? 3. How are the means of the indicators within each latent class standardized in the standardized output? 4. What is the interpretation of the standardized intercept (of the endogenous variable)? In the unstandardized output, the intercept is just the mean of the endogenous variable(s) within each class, weighted by the posterior probabilities, correct? Thanks! 


1-2. Right. 3. They are divided by the SD of the indicator. 4. In the unstandardized solution, the intercept of Y for Y ON X is the intercept in each class, not the Y mean (you get the latter in TECH4). The standardized intercept is simply divided by the Y SD, that is, it is in the standardized Y scale. 


Thank you for the clarification. Another question: I want to be able to determine if the parameter estimates (slope coefficients and intercepts) are statistically different from one another across the different classes. Is there a test associated with one of the tech outputs that reports this? Or, if I want to do this by hand, is it as simple as calculating confidence intervals for each estimate based on the standard errors and seeing if the confidence intervals of any two classes overlap with one another? Thanks. 


You can do this with Wald testing using MODEL TEST; see the User's Guide. 
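If you do want the by-hand version described in the question, the usual pairwise test is a pooled-SE Wald z rather than checking whether confidence intervals overlap (overlapping 95% CIs can occur even when the difference is significant, so CI overlap is a conservative criterion). A minimal sketch with made-up estimates follows; note it treats the two estimates as independent, whereas MODEL TEST accounts for their covariance:

```python
import math

def wald_z(b1, se1, b2, se2):
    """Two-sided Wald z for H0: b1 == b2, treating the two
    estimates as independent (their covariance is ignored)."""
    return (b1 - b2) / math.sqrt(se1**2 + se2**2)

# Hypothetical class-specific slopes and standard errors:
z = wald_z(0.60, 0.10, 0.25, 0.12)
print(round(z, 3))  # 2.241
```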


Is it possible to hold standardized coefficients constant across groups when using ON statements? 


I think this can be done using Model Constraint if you express the standardized coefficients in terms of model parameter labels. 
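For intuition, the quantity such a constraint would equate across groups is b * SD(X) / SD(Y). A small sketch with hypothetical numbers shows how different raw slopes can still satisfy an equality constraint on the standardized coefficient:

```python
# In each group g, StdYX(b_g) = b_g * SD_g(X) / SD_g(Y).  Holding the
# standardized coefficient equal across groups means constraining these
# products to be equal even though the raw slopes b_g differ.
# Hypothetical numbers:
groups = {
    "g1": dict(b=0.40, sd_x=2.0, sd_y=4.0),
    "g2": dict(b=0.25, sd_x=3.2, sd_y=4.0),
}
std = {g: p["b"] * p["sd_x"] / p["sd_y"] for g, p in groups.items()}
print(std)  # both groups give a standardized slope of 0.20
```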

db40 posted on Saturday, January 10, 2015  4:10 am



Hi Dr. Muthén, is there a way in Mplus to set a Bonferroni-corrected alpha level and also request Bonferroni-corrected confidence intervals? 


No, this is not an option. 


Hi, I've been running a second-order bivariate growth curve model to look at longitudinal change (the covariance among slopes) in two domains. At the first level I have factors defined by at least three manifest variables, measured in years 1, 2, and 5. The second order is the linear growth across five years, with Lx and Sx for the x-factors and Ly and Sy for the y-factors. All variables are continuous and estimation is FIML. The covariance between Sx and Sy is statistically significant with t=3.5, p<.001, implying that those who change in variable x also do so in variable y. Accordingly, the correlation between Sx and Sy in STDYX is r=.76, but in STDYX the test of that correlation comes out as not significant (z=1.194, p=.23). I don't understand the reason for this. Also, if I run a nested model to obtain the likelihood ratio (LR) test with the correlation (covariance) constrained to 0, I get LR=14 with 1 df, which is statistically significant, as I would have expected from the t-value of the covariance test. I'm pretty sure there is a very easy solution to my confusion, but right now I don't seem to be able to find it. 


One question is what the SEs are for the estimated variances of Sx and Sy; perhaps they are large relative to the variances. Standardized quantities have different sampling variation than raw ones. The question is which has a sampling distribution closest to normal, which in turn determines the dependability of the tests. One possible arbiter is Bayesian analysis, where you can see the actual distribution of your raw and standardized parameters. 
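A rough Monte Carlo sketch of this point (all numbers below are hypothetical, chosen only for illustration): even when the covariance itself has a comfortable z-ratio, propagating imprecise variance estimates into the correlation r = cov / sqrt(vx * vy) can give the standardized estimate a wide, skewed sampling distribution:

```python
import random, statistics

random.seed(1)
# Hypothetical estimates and SEs: a slope-factor covariance that is clearly
# significant (z = 3.5), but imprecisely estimated slope-factor variances.
cov, se_cov = 0.35, 0.10
vx, se_vx = 0.30, 0.12
vy, se_vy = 0.70, 0.25

# Propagate the sampling variation into the correlation, keeping only
# draws with admissible (positive) variances.
draws = []
for _ in range(20000):
    c = random.gauss(cov, se_cov)
    x = random.gauss(vx, se_vx)
    y = random.gauss(vy, se_vy)
    if x > 0 and y > 0:
        draws.append(c / (x * y) ** 0.5)

q = statistics.quantiles(draws, n=40)   # cut points at 2.5%, 5%, ..., 97.5%
print(round(q[0], 2), round(q[-1], 2))  # a wide 95% interval for r
```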


Hello, I conducted SEM analyses using Mplus several years ago and am finally publishing. When I reported R2, reviewers asked how the R2 was calculated. They are also asking whether the R2 I reported is based on Cohen's d or on Pearson's r. Can I assume that the R2 reported by Mplus in SEM is based on Pearson's r? They say they need to know in order to evaluate the magnitude (I reported an R2 of .4), since they use a different cutoff point depending on whether it is d-based or r-based. I'm a bit rusty on how R2 is calculated or what it is based on. Can you please remind me? Thank you, Suellen Hopfer, PhD, REAL Prevention 


What is the scale of the dependent variable? 


Hello Linda, the scale of the DV is categorical (vaccination: yes/no). The SEM model tests the impact of an intervention (dummy coded for 3 versions of the intervention) through 2 latent mediators on the outcome of vaccination. Suellen 


For a categorical DV, Mplus gives the R-square for a continuous latent response variable underlying the categorical observed variable. So it is the same as a linear regression R-square for that latent response variable. This was proposed in McKelvey, R. D. & Zavoina, W. (1975). A statistical model for the analysis of ordinal level dependent variables. Journal of Mathematical Sociology, 4, 103-120. It is also used in other texts, such as Scott Long's nice book on Regression Models for Categorical... and the Snijders & Bosker multilevel book. Note also that with a binary outcome you may want to consider indirect and direct effects based on counterfactuals; this is becoming the new standard for mediation modeling. See e.g. Muthén, B. & Asparouhov, T. (2015). Causal effects in mediation modeling: An introduction with applications to latent variables. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 12-23. DOI:10.1080/10705511.2014.935843 
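A sketch of the McKelvey-Zavoina computation: the R-square is the explained variance of the latent response variable y* over its total variance, where the residual variance of y* is fixed by the link function. The variance of the linear predictor below is a made-up value, chosen only to reproduce an R-square of about .4 under a probit link:

```python
import math

def mckelvey_zavoina_r2(xb_var, link="probit"):
    """McKelvey & Zavoina (1975) R-square for a categorical outcome:
    variance of the linear predictor over total variance of the latent
    response y*.  The residual variance of y* is fixed at 1 for probit
    and pi^2/3 for logit."""
    resid = 1.0 if link == "probit" else math.pi**2 / 3
    return xb_var / (xb_var + resid)

# Hypothetical model-implied variance of the linear predictor:
print(round(mckelvey_zavoina_r2(0.667), 2))  # 0.4
```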
