Message/Author 

Anonymous posted on Thursday, April 28, 2005  12:39 pm



Hello. I am attempting to run a path analysis will all variables in my model treated as directly observed. Since I am not including a measurement model, I would like to correct for measurement error. I am aware that this can be achieved by multiplying the variance of an observed variable by 1  relability. My first question is that I only want to employ this correction with exogenous variables in the model, not endogenous variables  correct? My second question is how do I fix the variances in MPlus? I have tried using the @ function (e.g., x@.09) following the model command but this drastically worsens rather than improves model fit. Do I need to create single indicator latent variables to employ this correction? For example: xlat by x@1; x@.09; This seems to help model fit, but I am not sure it is proper procedure. Finally, I am using the define command to examine interaction terms in my model. Does it matter if I fix the variance of a variable that represents one of interaction terms? Any help you can provide will be very much appreciated. Thank you. 

bmuthen posted on Thursday, April 28, 2005  6:30 pm



As for your first question, the correction is most important for exogenous variables given that parameter estimate biases will occur otherwise. But you may want to do it also for dependents, to separate measurement error and other residual sources. You answered your second question yourself. I think this is posted somewhere on Mplus Discussion. Note that you are fixing the residual variance and that you should fix it to (1  reliability)*sample variance. For you final question, I don't know why you would want to fix the variance  unless you are referring to the second question above in which case you want to do the interaction using the factor you define. 

Anonymous posted on Thursday, April 28, 2005  6:56 pm



Thank you for your help. Your answers to my first two questions I followed, and I also found the other posting and it was very helpful. I would like to followup/clarify your response to my third question. Assume I create a latent variable for an observed variable (x) in order to fix the residual variance of that variable. Also assume I want to create a third variable that represents the interaction (xz) of this variable with another observed variable (z). Do I create the interaction term using the original observed variable (x) or the latent variable I created for x? If I have to now use the latent variable, do I need to change from using the define command to create the interaction to using the XWITH command? Thanks again for your help. 

bmuthen posted on Thursday, April 28, 2005  8:00 pm



You would use XWITH, not Define. 

Timothy posted on Monday, April 26, 2010  7:16 pm



Hi Prof. Muthen I am using the same approach stated in the above to run a path analysis. Even thought I used the two commands, the model fit is still drastically worsens, rather than improved. I then used LISREL to run the path analysis with the same approach and had good fit of the data. I am wondering if I have done something wrong in the Mplus command. Can I send you the outputs to you and see if I have any problems with the commands? 


Please send the two outputs and your license number to support@statmodel.com. 


Hi Linda, I am trying to run a path analysis with all variables in my model treated as directly observed. Since I am not including a measurement model, I would like to correct for measurement error. I am aware that this can be achieved by multiplying the variance of an observed variable by 1  relability. Could you pl. tell me where the sample variance is in the output? Thanks in advance, Pratibha 


If you ask for SAMPSTAT or use TYPE=BASIC, the variances are on the diagonal of the variance/covariance matrix. 

Bee Jay posted on Monday, March 26, 2012  4:09 pm



I am using this equation as well, to fix residual variance for single indicators  as discussed in another thread. So the sample variance is in the SAMPSTAT output. Is the "reliability" you're talking about the variance explained for the indicator? R^2? And when I have completed the equation, will I just enter it into my model, e.g. F1@__; Thanks! 


A residual variance can be used as an estimate of reliability. 


Dear Linda I am trying to control for measurement error in my model, similar to others on this posting. I would like to find out where can I find the info on the output file for: "(1  reliability)*sample variance". thanks 


The sample variance is obtained from SAMPSTAT or TYPE=BASIC. You also need to determine reliability. That is not given automatically. 


Hi, I'm also trying to run a single indicator path model and correct for measurement error. I'm using the standard approach: F1 by Y1 @ 1.0; F1 @ (1reliability)*sample variance I want to use Bayes estimation. I have four exogenous variables and ten endogenous in four 'levels' (i.e., 4 exog > 3 endog > 5 endog > 1 endog > 1 endog). The most downstream variable is actually derived from a single observed score so I have no reliability estimate for it and so can't fix its error variance. I get the following fatal error: THE VARIANCE COVARIANCE MATRIX IS NOT SUPPORTED. A VARIANCE PARAMETER IS FIXED TO A VALUE DIFFERENT FROM 1. USE ALGORITHM=GIBBS(RW) TO RESOLVE THIS PROBLEM. Trying the Gibbs algorithm (which I seem to recall is not appropriate for this sort of model anyway), the model fails to converge. I get the following message: THE CONVERGENCE CRITERION IS NOT SATISFIED. INCREASE THE MAXIMUM NUMBER OF ITERATIONS OR INCREASE THE CONVERGENCE CRITERION. Increasing the iterations does not help and I don't think I should be relaxing the convergence criterion just to get it to run. If I run it using ML I get a nonposdef psi matrix. Sorry for the long post. Any ideas gratefully received. Thank you, David. 


It sounds like you are saying that ML gives a nonposdef psi matrix. If so, I would first sort out the reason for that before turning to Bayes. 


Hi Bengt, Thanks for your reply. The nonposdef problem in ML and nonconvergence in Bayes seems to be a function of fixing errors because if I don't fix them the model runs fine using either estimator. Also, if I collapse the four exogenous variables into one, it runs if I fix the error for just the exogenous variable and for the last but one endogenous variable in the 'causal' chain. But if I then fix any other endogenous variable's error it produces a correlation between it and one other variable > 1.0 (not the same other variable each time, but always at the same 'level' in the model), hence the nonposdef psi matrix. Not allowing the disturbances to correlate solves this, of course, but then the model fit is poor (it fits very well with no errors fixed or with only the first and last but one fixed). Finally, I can see from TECH1 that the variances of the endogenous variables with errors fixed are not being estimated, but the variance for the exogenous variable with errors fixed is estimated. Is this correct? Hope all this makes sense. David. 


You say "but the variance for the exogenous variable with errors fixed is estimated. Is this correct? " Not sure what this means  perhaps you are talking about singleindicator factors. For these factors  be they endogenous or exogenous  their variances are fixed if you say: F1 @ (1reliability)*sample variance Note that for endogenous factors it is the residual variance of the factor that is fixed this way, not the full factor variance. 


OK thanks, I get that. I still haven't managed to solve the main problem  psi nonposdef when I fix any endogenous variable's error variance for those variables in the middle of the model  i.e., the 3 vars or 5 vars in the following sequence: 1 var > 3 vars > 5 vars > 1 var > 1 var. David. 


Perhaps that problem is related to my statement: Note that for endogenous factors it is the residual variance of the factor that is fixed this way, not the full factor variance. 


Ah  the light's come on! Sorry for being so dumb. Thanks, David. 

milan lee posted on Saturday, February 28, 2015  6:13 am



Hi Linda and Bengt, Previously I ran a mediation model with an observed mediator, a1. Now, I'm trying to reanalyze the model yet define a1 to be a singleindicator latent mediator (M1). I know it is needed to correct for M1's measurement error using the following approach: M1 by a1 @ 1.0; M1 @ (1reliability)*sample variance. But I don't get it why I need to calculate the residual variance for M1, since in the original output when a1 was just an observed mediator, its residual variance was already shown. Can I just use it directly? Thank you! 


Why do you think you need to calculate its residual variance? 

milan lee posted on Saturday, February 28, 2015  8:51 am



Hi Bengt, I think I need to calculate it because of Linda's previous post on March 27, 2012, "A residual variance can be used as an estimate of reliability." Otherwise, how I can estimate the reliability for the formula, (1reliability)*sample variance? Thank you very much! 


I think the previous post was for a factor indicator which is not what you are considering. You should use the sample variance of M1. 

Back to top 