Message/Author 

Anonymous posted on Tuesday, May 01, 2001  1:20 am



I've been working on growth curve models. Sometimes I'm getting 999.000 for Std and StdYX coefficients. What does this mean? 


It means these values could not be computed. You probably have negative residual variances in your model. 

Anonymous posted on Wednesday, May 02, 2001  1:13 am



Regarding the negative residual variances: (1) Why does this happen? (2) What would the remedy be? Is it okay to constrain the negative variance to be zero? 


Negative residual variances are usually caused by incorrect starting values or an incorrect model. If they are not significant, they can be set to zero. Otherwise, change the model or starting values. 
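In Mplus input syntax, a non-significant residual variance can be fixed at zero with the @ operator. A minimal sketch, assuming a linear growth model with four hypothetical outcomes y1-y4 and a negative residual variance at the last time point (variable names are illustrative only):

```
MODEL:
  i s | y1@0 y2@1 y3@2 y4@3;   ! linear growth model, four time points
  y4@0;                        ! fix the residual variance of y4 at zero
```

After adding the constraint, refit the model and check that the fit and the remaining estimates are still sensible.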

Anonymous posted on Tuesday, October 02, 2001  12:03 pm



I just need one thing clarified. If my model has a negative residual variance that is not significant, and the standardized value cannot be calculated (999), does this mean that the parameter estimates are definitely invalid and should not be used? I ask this question after having tried setting the residual variance to zero and changing starting values. 


It seems that your question has more to it than is being stated in this discussion. Why don't you send your input and data to support@statmodel.com so I can give you a more informed answer related to your exact problem? 


If the Std and StdYX values are 999, does this mean that the parameter estimates and SE values may be incorrect too? Or could it be just the Std and StdYX values that are incorrect? Peter Elliott 


You must have a negative variance for a factor for this to occur. That would point to an inadmissible solution. 

Anonymous posted on Sunday, April 06, 2003  3:46 am



I am having a similar problem. Can I also send you the data and input file? Thanks. 


Send the output and the data to support@statmodel.com. Please include your license number. 


What a wonderful forum! I feel very fortunate to have such a renowned expert available to answer my questions! I'm running a CFA with 24 binary outcomes (true/false responses) and one latent factor using WLS estimation. Am I correct in my understanding that the chi-square test of model fit probably isn't the best one to use because of problems with non-normal data, and that the chi-square df with WLS does not represent interpretable information? Also, I'm not sure how to interpret SRMR with tetrachoric correlations. Is the value of .234 reliable? If so, do you believe it represents a better indication of fit than RMSEA (.028) for this analysis? Finally, what do you think would be the best way to compare nested models? Are chi-square difference comparisons appropriate with tetrachoric correlations, or should I use CFI? Thank you very much! 


You might find the following publication helpful: Yu, C.Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Doctoral dissertation, University of California, Los Angeles. It can be downloaded from the Mplus website under Mplus Papers. This dissertation examines the behavior of the fit measures you are asking about for categorical outcomes. I believe your reference to degrees of freedom and weighted least squares estimation refers to the fact that for the WLSMV estimator, the degrees of freedom are not computed in the regular way. This does not make the chi-square untrustworthy. In fact, WLSMV is the Mplus default. I recommend that you use that, not WLS. The degrees of freedom for WLS and WLSM are computed in the regular way. I would compare nested models using chi-square difference testing. I'm not sure how two CFI values can be compared. 
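For WLSMV specifically, taking the simple difference between two chi-square values is not valid because those values are not chi-square distributed in the usual way; Mplus provides the DIFFTEST option for this purpose. A sketch of the two-step procedure, where deriv.dat is an arbitrary file name:

```
! Step 1: run the less restrictive (H1) model and save derivatives
ANALYSIS:  ESTIMATOR = WLSMV;
SAVEDATA:  DIFFTEST = deriv.dat;

! Step 2: in a separate input file, run the more restrictive (H0)
! model and request the difference test
ANALYSIS:  ESTIMATOR = WLSMV;
           DIFFTEST = deriv.dat;
```

The output of the second run reports the chi-square difference test for the two nested models.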


I am working with latent growth curve models. I have the following warnings:

THE COVARIANCE COVERAGE FALLS BELOW THE SPECIFIED LIMIT. THE MISSING DATA EM ALGORITHM WILL NOT BE INITIATED. CHECK YOUR DATA OR LOWER THE COVARIANCE COVERAGE LIMIT.

THE STANDARD ERRORS FOR THE STANDARDIZED COEFFICIENT COULD NOT BE COMPUTED DUE TO FAILURE OF THE STANDARD ERROR COMPUTATION FOR THE H1 MODEL.

How do I solve this? Regards. 


See the COVERAGE option in the user's guide. You can lower the default coverage. 


Dear Dr. Linda, I am reading the topic "COVERAGE Command" on page 564 but could not manage to get an idea of how to lower the default coverage. Would you give me further clues on how to deal with it? Regards. 


The default is COVERAGE = .10. You give a number less than .10. 
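In Mplus input syntax, the coverage limit is set in the ANALYSIS command. A minimal sketch (the value .05 is just an example of a number below the .10 default):

```
ANALYSIS:
  COVERAGE = .05;   ! allow covariance coverage as low as 5%
```

Note that low coverage means some variable pairs have very few joint observations, so it is worth checking the data as the warning suggests rather than only lowering the limit.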


Thank you very much. regards. 


Hi there, I am conducting a multiple indicator growth curve model (4 time points) and I am wondering if there is any way to estimate the intercept mean (I have attempted constraining other paths and freeing this so the model will still be identified). Also, I am not sure why (forgive me if this is a naive question), but the STANDARDIZED option won't produce standardized values for my intercept and mean variances (there are no negative residuals in the model). Any help would be greatly appreciated! 


You can fix the intercepts of the factor indicators to zero at one time point instead of holding them equal and then free the mean of the intercept growth factor. You don't gain anything by doing this. It is a reparametrization of the model which results in the same fit and the same estimates. 


Thank you so much for your response (Linda K. Muthen posted on Saturday, November 24, 2012 10:37 am)! Just to clarify, you mean that I should 1) fix the loadings of a 1st-order latent construct to 0 (see below), and 2) free [i]? EFSW_1 BY EFSW1_1@0 EFSW2_1@0 EFSW3_1@0; Thanks again! 


I was talking about intercepts and means, not loadings and residual variances. See Example 6.15. You would change

[u11$1 u12$1 u13$1] (3);

to

[u11$1@0 u12$1@0 u13$1@0];

and add

[i]; 
