Anonymous posted on Monday, November 19, 2001 - 8:08 am
Dear Linda & Bengt,
I am wondering about the best approach to compute confidence intervals around the estimated growth trajectories. I have tried to use the estimates produced by the CINTERVAL option. The intervals become very wide towards the end and take values outside the range of possible values. Is this the only approach available right now, and/or am I missing something? Thanks.
bmuthen posted on Wednesday, November 21, 2001 - 8:24 am
The CINTERVAL option is helpful when you want a confidence interval for a single parameter estimate. But you are interested in a confidence interval at each time point for the estimated mean growth trajectory. For a linear model, the estimated mean growth value at a given time point is the estimated intercept mean plus the estimated slope mean times the time score. The estimated sampling variance of this quantity is the sum of the variances of the two parts plus twice their covariance, where you use the (co)variance estimates of the parameter estimates from TECH3. Alternatively, you can get your confidence interval easily for a given time point by centering at this time point, so that the estimated growth curve mean is simply the estimated intercept mean; the confidence interval given for this estimate is then what you want.
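In Python, the calculation described above can be sketched as follows. All numbers here are illustrative placeholders; the real intercept and slope means come from your MODEL RESULTS, and the sampling variances and covariance come from the TECH3 output:

```python
import math

# Illustrative values only -- substitute your own estimates and the
# parameter covariance terms printed under TECH3.
mean_i, mean_s = 2.50, 0.40      # estimated intercept and slope means
var_i, var_s = 0.060, 0.010      # sampling variances of those two estimates
cov_is = -0.008                  # their sampling covariance (TECH3)

for t in range(5):               # time scores 0, 1, 2, 3, 4
    est = mean_i + mean_s * t
    # Var(a + t*b) = Var(a) + t^2 * Var(b) + 2t * Cov(a, b)
    se = math.sqrt(var_i + t ** 2 * var_s + 2 * t * cov_is)
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"t={t}: mean {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that at the centering point (time score 0) the band reduces to the interval for the intercept mean alone, which is why re-centering at each time point gives the same answer directly.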
Pierre posted on Thursday, October 07, 2010 - 7:49 am
Dear Linda & Bengt,
Partly in relation to this, I would have 3 questions.
1. If I follow Bengt's advice above for the level 2 intercept factor (in effect a mean variance), I am therefore calculating the confidence interval of a variance. The problem is that if this mean value is constrained to 0, the lower confidence limit is negative. Since variances can't be negative, can I consider this a valid result?
2. In the same model (a one-class, two-level GMM with categorical observed variables at 5 time points, 6 covariates at the within level, and 1 at the between level), I get the 'non-positive first-order derivative product matrix' error, since the number of clusters is 20 and the number of estimated parameters is 22. To reduce the number of parameters, I was thinking of constraining the thresholds in the full model to the values estimated in a 20-parameter model that does not yield the NPD error. Would you recommend this?
3. In another variant of the same model, with one and two latent classes, I have been trying to estimate the intercept mean of the growth curve(s) by setting the first thresholds to 0 and the following ones to match, as far as possible, the cumulative frequencies of the observed categorical variables. My question is: does this remain valid if I introduce independent variables?
I want to graph the predicted values of IB for each level 2 unit (as provided by the SAVEDATA command), together with the confidence interval for its mean value, which I have set to 0. I thought I could compute a confidence interval for [IB] using 0 +/- 1.96 (SQRT(IB)/SQRT(n)). Since [IB] represents the mean level 2 variance, is there not a problem with the lower bound of its confidence interval being negative? Sorry if this is a basic question.
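The negative lower bound arises because a symmetric (Wald) interval ignores the fact that a variance is bounded below by zero. A standard workaround, sketched below with hypothetical numbers (the real estimate and standard error come from your output), is to build the interval on the log scale and transform back:

```python
import math

# Hypothetical placeholder values for an estimated level-2 variance
# and its standard error, as they would appear in the output.
var_est = 0.45
se_var = 0.30

# Symmetric (Wald) interval -- the lower bound can dip below zero:
wald = (var_est - 1.96 * se_var, var_est + 1.96 * se_var)

# Delta method on the log scale: SE(log v) = SE(v) / v.
# Exponentiating the log-scale interval keeps both bounds positive.
se_log = se_var / var_est
log_ci = (var_est * math.exp(-1.96 * se_log),
          var_est * math.exp(1.96 * se_log))

print("Wald CI:", wald)        # lower bound negative here
print("Log-scale CI:", log_ci) # both bounds positive
```

This is a generic technique, not something specific to the model in the post; whether it is appropriate depends on the sampling distribution of the variance estimate.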
Re 2. Fixing the thresholds at those particular values is not part of my design, but I thought I might be allowed to do this since their estimated values did not seem to differ much between the versions of the model with fewer parameters and the one returning the NPD error. Do you mean that doing this introduces a bias in the estimation of the remaining parameters?
I was looking for clarification regarding the trajectory confidence intervals. I have estimated a five-timepoint LGM in which there are 21 separate groups (sample sizes vary from around 50 to over 300 in each group; it's a large sample!). I would like to develop confidence intervals around each trajectory. I have estimated the model as a multiple-group LGM and the fit is ok(ish) (CFI: 0.95).
Despite the fit, I should be able to calculate the 95% CIs. From the information above, I should be able to use the covariance estimates produced by TECH3.
I have used a quadratic parameterisation, but have fixed the variance of the quadratic growth factor to zero (fixing the population to the same quadratic shape).
According to the first post in this thread, my variance should combine the variances (from TECH3) of the intercept and slope means and their covariance (the variance of Q has been fixed, so that part is done).
However, when looking at the covariances, I'm getting numbers like (I var: 1.897, S var: 7.63, IS cov: 34.177). These values seem incredibly high to simply sum to produce my variance.
You can get the model-estimated variances from the RESIDUAL option. You can express the means in MODEL CONSTRAINT using the NEW option and obtain standard errors that way.
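For reference, the delta-method combination from the first post extends directly to a quadratic trajectory; the sketch below uses made-up numbers in place of the estimated (I, S, Q) means and their 3x3 covariance matrix from TECH3. Note that fixing the Q growth-factor variance to zero does not remove the sampling variance of the Q mean, so all three rows of the matrix still contribute:

```python
import math

# Illustrative placeholders -- substitute the estimated growth-factor
# means and the covariance matrix of those estimates from TECH3.
means = {"i": 2.50, "s": 0.40, "q": -0.05}
# Covariance matrix of the (I, S, Q) mean estimates, in that order:
cov = [
    [0.060, -0.008, 0.001],
    [-0.008, 0.010, -0.002],
    [0.001, -0.002, 0.0006],
]

def trajectory_ci(t, z=1.96):
    """Estimate and CI for the mean trajectory i + s*t + q*t^2 at time t."""
    g = [1.0, t, t * t]  # gradient with respect to (i, s, q)
    est = means["i"] + means["s"] * t + means["q"] * t * t
    # Quadratic form g' * cov * g gives the sampling variance.
    var = sum(g[r] * cov[r][c] * g[c] for r in range(3) for c in range(3))
    se = math.sqrt(var)
    return est, est - z * se, est + z * se

for t in range(5):
    est, lo, hi = trajectory_ci(t)
    print(f"t={t}: {est:.3f} [{lo:.3f}, {hi:.3f}]")
```

MODEL CONSTRAINT with NEW parameters, as suggested above, does this same computation inside Mplus and is usually the more convenient route.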
IYH Boon posted on Friday, March 02, 2012 - 2:28 pm
Quick question about Bengt's 2001 comment: How do you center at a given time point? I want to put confidence intervals around my estimated growth trajectories, but I'm having trouble following the advice that's given above.
Sara Payne posted on Friday, July 06, 2012 - 12:22 pm
Hi Linda and Bengt
I am using a parallel process model to evaluate the relationship between two behaviours. I know the data are not normally distributed, so I have elected to use and report bias-corrected 99% confidence intervals to minimize any bias in the parameter estimates. However, I have noticed conflicting results between my model results and the BCBOOTSTRAP output. In the model results section, several covariances are significant (p<0.001). However, the 99% bias-corrected confidence intervals contain 0.000, suggesting that the covariances are not significant. I am wondering why I get conflicting results.
I'm guessing that in the model results, the standard errors used to calculate the p-values are not corrected for non-normality, while the confidence intervals are. Is this correct? If so, I should ignore what the p-values in the model results tell me and concentrate on the confidence intervals.
The only other explanation I can think of is that my covariances are small (0.001). Perhaps the confidence interval limit is not actually 0, but gets rounded to 0.000 in the output. Would this be a possibility?
Do you mean multiple-indicator growth modeling? There the mean of the intercept growth factor is instead picked up in an intercept of the indicators and the growth curve for the factor wouldn't involve that parameter.
Jon Heron posted on Thursday, April 18, 2013 - 9:24 am
yes, that's what I mean
The width of my confidence interval around the growth curve is then zero at baseline.
Jon Heron posted on Thursday, April 18, 2013 - 10:04 am
Did I say "puzzled"? I meant "stupid". I guess I can just go grab that mean of the first indicator and use it instead
I guess if E(f_t)=0 for t at the first time point (which is the standard setup we use), then the predicted value is 0 and should have no interval around it.
Moving an indicator intercept into the intercept factor mean makes it dependent on the choice of indicator.
You can also show expected indicator growth with confidence bands, one for each indicator.
Jon Heron posted on Thursday, April 18, 2013 - 11:44 pm
I've just knocked together a lovely graph for my single-indicator growth model which combines estimated growth under the model with the mean values of the manifest sum score at each time point. Much clearer than the growth factor parameter estimates themselves, as this is a quadratic model with a constant Q.
I was hoping for a figure for the multiple-indicator model to illustrate the similarities, perhaps showing the population growth overlaid with the first-order factor means.
We have a growth mixture model with three latent trajectories. One of them has growth factor means of 49.063 (I), 9.496 (S), and -3.432 (Q). Using these point estimates, the peak of the trajectory occurs at 1.383 on the x-axis. Is there any way to come up with a confidence interval for this value?
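Yes: the peak of i + s*t + q*t^2 occurs at t* = -s/(2q), which is a nonlinear function of two estimated parameters, so the delta method applies. The sketch below uses the posted point estimates, but the sampling variances and covariance of S and Q are hypothetical placeholders to be replaced with the real values from TECH3. In Mplus itself, defining peak = -s/(2*q) as a NEW parameter in MODEL CONSTRAINT would give the standard error directly:

```python
import math

# Point estimates from the post; var/cov terms are hypothetical -- take
# the real ones from the TECH3 output for this class's S and Q means.
s, q = 9.496, -3.432
var_s, var_q, cov_sq = 0.80, 0.15, -0.20

# Peak of i + s*t + q*t^2: set the derivative s + 2*q*t to zero.
t_peak = -s / (2 * q)

# Delta method: gradient of t* = -s/(2q) with respect to (s, q).
ds = -1.0 / (2 * q)         # d t*/d s
dq = s / (2 * q * q)        # d t*/d q
var_peak = ds * ds * var_s + dq * dq * var_q + 2 * ds * dq * cov_sq
se = math.sqrt(var_peak)

print(f"peak at t = {t_peak:.3f}, "
      f"95% CI [{t_peak - 1.96 * se:.3f}, {t_peak + 1.96 * se:.3f}]")
```

A ratio of estimates can have a poorly behaved sampling distribution when q is imprecisely estimated, so a bootstrap interval is a reasonable cross-check on the delta-method result.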