Std and StdYX coefficients
Mplus Discussion > Growth Modeling of Longitudinal Data
 Anonymous posted on Monday, April 30, 2001 - 7:20 pm
I've been working on growth curve models. Sometimes I'm getting 999.000 for Std and StdYX
coefficients. What does this mean?
 Linda K. Muthen posted on Tuesday, May 01, 2001 - 7:34 am
It means these values could not be computed. You probably have negative residual variances in your model.
 Anonymous posted on Tuesday, May 01, 2001 - 7:13 pm
Regarding the negative residual variances: (1) why does this happen? (2) what would the remedy be? Is it okay to constrain the negative variance to zero?
 Linda K. Muthen posted on Wednesday, May 02, 2001 - 9:11 am
Negative residual variances are usually caused by incorrect starting values or an incorrect model. If they are not significant, they can be set to zero. Otherwise, change the model or starting values.
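As a minimal sketch of the remedy described above, a nonsignificant negative variance can be fixed at zero with the @ operator. This assumes a linear growth model with hypothetical outcomes y1-y4 and a slope growth factor s whose variance was estimated as slightly negative:

```
MODEL:
  i s | y1@0 y2@1 y3@2 y4@3;
  s@0;    ! fix the nonsignificant negative slope variance at zero
```

The same @0 syntax applies to an outcome's residual variance, e.g. y4@0;.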
 Anonymous posted on Tuesday, October 02, 2001 - 6:03 am
I just need one thing clarified. If my model has a negative residual variance that is not significant, and the standardized value cannot be calculated (999), does this mean that the parameter estimates are definitely invalid and should not be used? I ask this question after having tried setting the residual variance to zero and changing starting values.
 Linda K. Muthen posted on Tuesday, October 02, 2001 - 7:44 am
It seems that your question has more to it than is being stated in this discussion. Why don't you send your input and data to so I can give you a more informed answer related to your exact problem?
 Peter Elliott posted on Thursday, December 19, 2002 - 6:59 pm
If the Std and StdYX values are 999, does this mean that the parameter estimates and SE values may be incorrect too? Or could it just be the std and STDYX values that are incorrect?

Peter Elliott
 Linda K. Muthen posted on Friday, December 20, 2002 - 7:39 am
You must have a negative variance for a factor for this to occur. That would point to an inadmissible solution.
 Anonymous posted on Saturday, April 05, 2003 - 8:46 pm
I am having a similar problem. Can I also send you the data and input file?
 Linda K. Muthen posted on Sunday, April 06, 2003 - 9:06 am
Send the output and the data to Please include your license number.
 MichaelCheng posted on Wednesday, March 02, 2005 - 12:19 pm
What a wonderful forum! I feel very fortunate to have such a renowned expert available to answer my questions!

I'm running a CFA with 24 binary outcomes (true/false responses) and one latent factor using WLS estimation.

Am I correct in my understanding that the chi-square test of model fit probably isn't the best one to use because of problems with non-normal data and that chi-square df with WLS does not represent interpretable information?

Also, I'm not sure how to interpret SRMR with tetrachoric correlations. Is the value of .234 reliable? If so, do you believe it represents a better indication of fit than RMSEA (.028) for this analysis?

Finally, what do you think would be the best way to compare nested models? Are chi-square difference comparisons appropriate with tetrachoric correlations, or should I use CFI?

Thank you very much!
 Linda K. Muthen posted on Wednesday, March 02, 2005 - 2:40 pm
You might find the following publication helpful:

Yu, C.Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Doctoral dissertation, University of California, Los Angeles.

It can be downloaded from the Mplus website from Mplus Papers. This dissertation examines the behavior of the fit measures you are asking about for categorical outcomes.

I believe your reference to degrees of freedom and weighted least squares estimation refers to the fact that for the WLSMV estimator, the degrees of freedom are not computed in the regular way. This does not make the chi-square untrustworthy. In fact, WLSMV is the Mplus default, and I recommend that you use that, not WLS. The degrees of freedom for WLS and WLSM are computed in the regular way.

I would compare nested models using chi-square difference testing. I'm not sure how two CFI values can be compared.
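With the WLSMV estimator, the chi-square difference test cannot be carried out by simply subtracting the two chi-square values and degrees of freedom; Mplus provides the DIFFTEST option for this. A sketch in two runs (the file name deriv.dat is arbitrary):

```
! Run 1: the less restrictive (H1) model; save the derivatives
ANALYSIS:  ESTIMATOR = WLSMV;
SAVEDATA:  DIFFTEST = deriv.dat;

! Run 2: the nested, more restrictive (H0) model; read the saved file
ANALYSIS:  ESTIMATOR = WLSMV;
           DIFFTEST = deriv.dat;
```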
 Tom Fisseha posted on Monday, July 12, 2010 - 6:07 am
I am working with latent growth curve models. I have the following warnings:


How do I solve them?

 Linda K. Muthen posted on Monday, July 12, 2010 - 6:21 am
See the COVERAGE option in the user's guide. You can lower the default coverage.
 Tom Fisseha posted on Monday, July 12, 2010 - 9:23 am
Dear Dr. Linda,

I am reading the topic "COVERAGE Command" on page 564 but could not manage to get an idea of how to lower the default coverage.

Would you give me further clues to deal with it?

 Linda K. Muthen posted on Monday, July 12, 2010 - 9:41 am
The default is COVERAGE = .10. You give a number less than .10.
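A minimal sketch of lowering the default; any value below .10 is accepted:

```
ANALYSIS:
  COVERAGE = .05;   ! allow estimation with covariance coverage as low as 5%
```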
 Tom Fisseha posted on Monday, July 12, 2010 - 9:55 am
Thank you very much.

 Melissa Simard posted on Friday, November 23, 2012 - 12:52 pm
Hi there,

I am conducting a multiple indicator growth curve model (4 time points) and I am wondering if there is any way to estimate the intercept mean (I have attempted constraining other paths and freeing this so the model will still be identified). Also, I am not sure why (forgive me if this is a naive question), but the STANDARDIZED option won't produce standardized values for my intercept mean and variances (there are no negative residuals in the model).
Any help would be greatly appreciated!
 Linda K. Muthen posted on Saturday, November 24, 2012 - 10:37 am
You can fix the intercepts of the factor indicators to zero at one time point instead of holding them equal and then free the mean of the intercept growth factor. You don't gain anything by doing this. It is a reparametrization of the model which results in the same fit and the same estimates.
 Melissa Simard posted on Thursday, November 29, 2012 - 9:32 am
Thank you so much for your response (Linda K. Muthen posted on Saturday, November 24, 2012 - 10:37 am)!

Just to clarify, you mean that I should 1) fix the loadings of a 1st-order latent construct to 0 (see below), and 2) [i]?


Thanks again!
 Linda K. Muthen posted on Thursday, November 29, 2012 - 2:56 pm
I was talking about intercepts and means, not loadings and residual variances. See Example 6.15. You would change

[u11$1 u12$1 u13$1] (3);

to

[u11$1@0 u12$1@0 u13$1@0];

and add

[i];
 Claire Johnston posted on Friday, October 28, 2016 - 2:43 am
Referring to Linda's post on Sat 24 Nov 2012, she states "You can fix the intercepts of the factor indicators to zero at one time point instead of holding them equal and then free the mean of the intercept growth factor" and then gives the example pertaining to example 6.15.

However, [u11$1@0 u12$1@0 u13$1@0]; refers to the intercepts for the same item constrained to zero at three different time points? Not at one time point?

Or did I misunderstand something?

If I do this in my models, the estimates are the same, but the model fit is not?
 Linda K. Muthen posted on Friday, October 28, 2016 - 12:12 pm
For binary items, there are two choices for the parameterization of a growth model. If you fix the threshold to zero at each time point, you can estimate the intercept and slope growth factor means. If you hold the thresholds equal across time, you can estimate the slope growth factor mean only. The mean of the intercept growth factor is found in the threshold estimate.
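The two parameterizations might be sketched as follows, assuming a binary item u measured at three time points (the names u1-u3 are hypothetical):

```
! Choice 1 (default): thresholds held equal across time;
! intercept growth factor mean fixed at zero
  i s | u1@0 u2@1 u3@2;
  [u1$1 u2$1 u3$1] (1);
  [i@0 s];

! Choice 2: thresholds fixed at zero at each time point;
! both growth factor means estimated
  i s | u1@0 u2@1 u3@2;
  [u1$1@0 u2$1@0 u3$1@0];
  [i s];
```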
 Claire Johnston posted on Monday, October 31, 2016 - 12:13 am
Thank you for the response. Does the same logic hold for continuous items?
 Linda K. Muthen posted on Monday, October 31, 2016 - 6:39 am