Message/Author 

Anonymous posted on Tuesday, February 25, 2003 - 11:05 pm



Dear Linda & Bengt, I have used LGM to model children's academic skill development over a five-year period (one measurement during each year). In this context, I am particularly interested in examining the extent to which children's initial status (level of skill at the baseline) is related to their growth rate (linear / linear and quadratic trends). Consequently, I have fixed the first loading on the slope factor at zero. If I have understood right, the centering point defines the interpretation of the intercept factor. This is clear. Now, however, I have found arguments (mainly in the HLM literature) that centering at the first time point is not 'the ideal choice' because it produces a high correlation between the intercept and slope. In contrast, centering in the middle has been suggested to have various desirable effects. I am a little bit confused. I thought that centering at different time points changes the interpretation of the results concerning the intercept (including the covariance between the intercept and the slope) rather than invalidating them. I would appreciate any clarification on this issue. (1) Is there any reason to test the same model with different centering points if I am only interested in how the initial status is associated with the growth rate? (2) Are there any differences in this issue between HLM and LGM? (3) Does the quadratic term have anything to do with this? (4) What are the benefits of centering at the middle, or are there any? Thank you in advance.


There is a difference between how HLM and LGM handle time scores. In HLM, time scores are treated as data, whereas in LGM, time scores are treated as parameters. When time scores are treated as data, the problems that you have been reading about are seen particularly with quadratic growth. When time scores are parameters, these problems are not seen. This is discussed in the following paper: Muthén, B. & Curran, P. (1997). General longitudinal modeling of individual differences in experimental designs: A latent variable framework for analysis and power estimation. Psychological Methods, 2, 371-402.
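The algebra behind the correlation issue raised above can be sketched with a quick simulation (illustrative numbers only, not from either framework): for linear growth y_it = eta0_i + eta1_i * t, re-centering time at t = c redefines each person's intercept as eta0_i + c * eta1_i, so the intercept-slope covariance shifts by c * Var(slope) while the model itself is unchanged.

```python
import numpy as np

# Illustrative simulation: how the centering point moves the
# intercept-slope covariance without changing the growth model.
rng = np.random.default_rng(0)
n = 100_000
eta0 = rng.normal(0.0, 1.0, n)               # intercepts with time centered at t = 0
eta1 = 0.5 * eta0 + rng.normal(0.0, 1.0, n)  # slopes, correlated with the intercepts

c = 2.0                      # new centering point, e.g. the middle of five occasions
eta0_c = eta0 + c * eta1     # the same people's intercepts, re-expressed at t = c

cov_orig = np.cov(eta0, eta1)[0, 1]
cov_new = np.cov(eta0_c, eta1)[0, 1]
# Identity: Cov(eta0 + c*eta1, eta1) = Cov(eta0, eta1) + c * Var(eta1)
print(np.isclose(cov_new, cov_orig + c * np.var(eta1, ddof=1)))  # True
```

On this reading, a large intercept-slope correlation under first-occasion centering is a feature of the parameterization rather than a defect of the model; centering in the middle only changes which linear combination of the growth factors is called "the intercept."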


I have run 8-occasion growth models of self-perception with a time-varying predictor of depression. To ensure the level-1 effect of depression is truly within-person and not contaminated by between-person (BP) differences in depression, I centered the depression scores. If I pick an occasion and center the other occasions' scores on that occasion's depression level, letting the BP differences be carried at level 2 by the depression score for the year I pick, with the level-1 offsets from this score capturing the within-person (WP) fluctuations in depression, the model runs fine. However, there is no reason to pick any particular year. It seems more reasonable to compute the mean level of depression for each individual and center each individual's scores on that, where the mean carries BP differences at level 2 and the time-varying mean-offsets capture WP at level 1. But this model does not run successfully because of the linear dependency between the mean and the set of offsets; we get a poorly conditioned matrix: "THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NONPOSITIVE DEFINITE FISHER INFORMATION MATRIX." It does run if I leave out one year to eliminate the linear dependency. Is there an estimator that is insensitive to the condition of the information matrix? I tried ML, MLR, MLF. (All variables are continuous.) This model does run in the MLR framework in traditional packages.


Please send the files and your license number to support@statmodel.com. Also, specify what you mean by "This model does run in MLR framework in traditional packages." How does this differ from what you are doing?


Hi Linda, I am not free to send this model and data due to a consulting agreement, but I can illustrate. Consider Ex#6.10 in the manual. This LGM has time-invariant and time-varying covariates. If I wanted to follow the general recommendation in the books by Singer & Willett or Raudenbush, and mean-center the TV covariate within person, I would recompute a31-a34 as deviations from the mean of a31-a34 for each person. While this is not a problem in MLR (tall file) analyses, it is a problem in LGM/SEM because a31-a34 now sum to zero for each person. In the data for Ex#6.10, which has no missing values, I get a warning "WARNING: THE SAMPLE COVARIANCE OF THE INDEPENDENT VARIABLES IS SINGULAR. PROBLEM INVOLVING VARIABLE A34C", which makes sense since there is a dependency. But the model does terminate normally, and gives almost identical estimates to those from an equivalent specification in SPSS Mixed. In my dataset, with more occasions and missing data, I get the different warning about the Fisher information matrix quoted above, and no SEs. I am asking if there is an estimator, integration technique, tolerance setting, or something else that permits person mean-centering to be estimated in Mplus.


I just centered the time-varying covariates in Example 6.10, and I do not get any message. It is not possible to answer your questions without more information. These messages can be caused by a variety of problems.


I think you centered at some value but did not mean-center on each individual's own mean value. For example: DEFINE: xbar_a = (a31+a32+a33+a34)/4; a31mc = a31 - xbar_a; a32mc = a32 - xbar_a; a33mc = a33 - xbar_a; a34mc = a34 - xbar_a; I have sent my Ex#6 output to the support email. Whether I compute the mean-centered covariates in Mplus with DEFINE or outside of Mplus, the result is the same. Since the TV covariates now sum to zero, the message about the singular matrix makes sense. This Ex#6 model does run, but once missing data is in the picture the model does not run. I suspect person mean-centering just cannot be done in the SEM framework.
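The singularity being discussed here is easy to reproduce with generic numbers (a hypothetical sketch; the layout mirrors the a31-a34 columns in the thread, not the actual data):

```python
import numpy as np

# Hypothetical wide-format data: 4 occasions of a time-varying covariate.
rng = np.random.default_rng(1)
a = rng.normal(size=(500, 4))

# Person mean-centering: subtract each person's own mean across occasions,
# as in the DEFINE statements above.
a_mc = a - a.mean(axis=1, keepdims=True)

# The four deviations now sum to zero for every person...
print(np.allclose(a_mc.sum(axis=1), 0.0))  # True
# ...so their covariance matrix is rank-deficient (rank 3 instead of 4),
# which is exactly the singular / non-positive-definite condition reported.
print(np.linalg.matrix_rank(np.cov(a_mc, rowvar=False)))  # 3
```

Dropping one occasion removes the dependency, which is consistent with the model running when one year is left out.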

Kerry Lee posted on Tuesday, June 18, 2013 - 8:29 pm



Dear Drs. Muthén, I am trying to fit a growth model on a latent construct from four waves of data. The latent is indicated by three observed variables at each wave. When I first ran an associative model without random intercept or slope, the model converged, but the latent from one wave was correlated with the others at r > 1. Having done the due diligence regarding equality of factor loadings and intercepts, I found that a growth model with random intercept and slope would not converge. It did converge when I recentred from the first to the last time point. My question is why does recentering aid in convergence? Although not explicitly modelled, TECH4 indicates that the latent was still correlated with the others at r > 1. This seems to indicate that recentering did not "fix" the collinearity problem. Sincerely, Kerry.


You can try covarying the residuals of the same item at adjacent time points. Centering changes the path of the estimation and can help with convergence.


Given that I have a group of participants between 18 and 35 years of age, and I want to center my AGE variable at 18, is the following syntax correct: Define: center age (18);? Do I have to define a new variable and save the new variable? Thank you very much!


In DEFINE say AGE = AGE - 18;

Peter Lekkas posted on Thursday, December 10, 2015 - 3:07 am



Dear Drs Muthen, I aim to use a time-varying covariate 'a' in a LGCM. Moreover, I aim to center this time-varying covariate around the grand mean of all observations, such that the grand-mean-centered (GMC) time-varying covariate is calculated by subtracting the grand mean of all observations from each observation 'a' at time 't' for person 'i'. Given this, is the applicable statement CENTER a1-a3 (GRANDMEAN); ? Or does this statement only generate a 'within-occasion' GMC adjusted covariate?


Grandmean uses the overall mean for that covariate, so it is occasion-specific.

Peter Lekkas posted on Thursday, December 10, 2015 - 7:31 pm



Thank you Bengt. I remain confused. If, within the context of LGCM, the advice for centering a time-varying covariate is to 'center' it around "one common mean" (David Kenny; Betsy McCoach), and the covariate has been measured at, say, three occasions (e.g., a1, a2, a3), then what statement would be used to generate this GMC time-varying covariate using the 'one common' grand mean of all person-time observations? Or, is it the case that one only needs to GMC within each measurement occasion of the time-varying covariate based only on the occasion-specific mean?


I assume you take a "wide" approach so each TVC occupies one column in the data. To get centering by one common grand mean, you need to find what that mean is and then subtract it using DEFINE.
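In wide format, the two centerings differ like this (hypothetical data; in Mplus the pooled mean would be subtracted via DEFINE, as suggested above):

```python
import numpy as np

# Hypothetical wide-format TVC measured at three occasions (a1-a3)
# whose occasion-specific means drift upward over time.
rng = np.random.default_rng(2)
a = rng.normal(loc=[1.0, 2.0, 3.0], size=(200, 3))

# What CENTER a1-a3 (GRANDMEAN) does: subtract each column's own mean.
occasion_centered = a - a.mean(axis=0)

# "One common mean": pool all N*T person-time observations first.
common_mean = a.mean()
common_centered = a - common_mean

print(np.allclose(occasion_centered.mean(axis=0), 0.0))  # True: every occasion mean is 0
print(np.allclose(common_centered.mean(axis=0), 0.0))    # False: occasion means still drift
print(np.allclose(common_centered.mean(), 0.0))          # True: only the pooled mean is 0
```

The first variant removes occasion-level trend from the covariate; the second preserves it, which is the substantive difference between the two choices.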

Peter Lekkas posted on Saturday, December 12, 2015 - 11:48 pm



Thank you kindly Bengt, and yes, I have been using a 'wide' approach as compared to an archetypal long format.

Rachel posted on Monday, March 20, 2017 - 3:43 pm



Dear Dr. Muthén, I compared the estimates for a LGCM both with and without mean-centered predictors that are part of an interaction term, checking whether the estimates changed for the main effects to make sure I did it right. Estimates under "Model Results" are identical, except for "I" and "S" under "Intercepts." I was expecting the estimates for the main effects to change; their remaining stable suggests I coded something wrong. Perhaps it has to do with this warning: *** WARNING in DEFINE command: The CENTER transformation is done after all other DEFINE transformations have been completed. Here is my code: DEFINE: effct = MEAN (AFM ICM); efcXeu = effct * APM; CENTER APM (GRANDMEAN); CENTER effct (GRANDMEAN); If the centering transformation is done after everything else is completed, that sounds like it is mean-centering my interaction term, not mean-centering my X's prior to creating their product term. How do I code it to mean-center my X's prior to creating the interaction term? (Or can I define the interaction later, elsewhere in the code?) Thank you!


You should update to the current Mplus version. 

Rachel posted on Thursday, March 23, 2017 - 10:28 am



Thank you. I've rerun it after upgrading to 7.4, and found that although the output includes: Variables with special functions: Centering (GRANDMEAN) X1 X2, after running the code below: DEFINE: X1 = MEAN (V1 V2); X1X2 = X1 * X2; CENTER X1 (GRANDMEAN); CENTER X2 (GRANDMEAN); the parameter estimates for the main effects are identical between this version and a version run without the centering commands. Is this a sign that I coded something wrong?


Please send your output to Support along with your license number. 
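For what it's worth, the pattern Rachel expects can be checked outside Mplus with a generic least-squares sketch (simulated data, made-up names): centering the X's before forming the product changes the main effects, while centering variables after the product has already been built from uncentered X's leaves the main effects untouched.

```python
import numpy as np

# Simulated illustration of centering order in a model with an interaction:
# y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + e, with nonzero means on x1 and x2.
rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(2.0, 1.0, n)
x2 = rng.normal(3.0, 1.0, n)
y = 1.0 + 0.5 * x1 + 0.8 * x2 + 0.3 * x1 * x2 + rng.normal(0.0, 1.0, n)

def fit(u, v, prod):
    """OLS fit of y on an intercept, two main effects, and a product term."""
    X = np.column_stack([np.ones(n), u, v, prod])
    return np.linalg.lstsq(X, y, rcond=None)[0]

x1c, x2c = x1 - x1.mean(), x2 - x2.mean()

b_raw    = fit(x1, x2, x1 * x2)      # no centering at all
b_before = fit(x1c, x2c, x1c * x2c)  # center first, THEN form the product
b_after  = fit(x1c, x2c, x1 * x2)    # product was built from uncentered X's

# Centering before forming the product shifts each main effect (by b3 times
# the mean of the other X), so they SHOULD change...
print(not np.allclose(b_raw[1:3], b_before[1:3]))  # True
# ...whereas centering variables after the product already exists is just a
# reparameterization that moves the intercept, leaving main effects identical.
print(np.allclose(b_raw[1:3], b_after[1:3]))       # True
```

This is consistent with the warning quoted above: since CENTER runs after the other DEFINE transformations, the product is formed from uncentered variables, and centering afterward cannot change the main effects. Subtracting the sample means by hand before the product line (as in the AGE = AGE - 18; reply earlier in this thread) would.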


I'm running a basic growth model with 4 time points, and I'd like to center around the third time point based on extant theory, e.g., i s | ace1@-2 ace2@-1 ace3@0 ace4@4. When I do this, only about half of the 10,000 bootstrapped draws are completed (regardless of whether percentile or bias-corrected is used), and the parameter estimates look off. However, when I recenter to an earlier time point, the model runs successfully, the parameter estimates make sense, and I get a much higher completion percentage for the bootstraps (>90%). Model fit is good regardless of where I center the time scores. Changing starting values has not helped, and there are no modification indices recommended. I'm wondering why centering at one time point versus another would cause such problems? Is there a solution that would still allow me to center at the desired time point?


Do a non-bootstrap run with the centering at time 3 and check: is the i-s correlation close to 1? Are any residual variances negative? Also, check that you have computed the time scores correctly relative to the run with time score 0 at time 1.


I had not explicitly modeled the i-s correlation before. When I add it, it improves things slightly (60% completion of the bootstrap draws). The i-s correlation is quite low (.014). No negative residual variances. Time scores look OK to me: 0, 1, 2, 6 in one model and -2, -1, 0, 4 when centering on the third time point. Is there a rule of thumb on the percentage I want to see completed in order to feel good about the model?
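The time-score recoding itself can be sanity-checked in a couple of lines (illustrative):

```python
# Time scores centered at the first occasion vs. the third of four occasions:
# centering at occasion k just subtracts occasion k's score from every score.
scores_t1 = [0, 1, 2, 6]                          # centered at the first time point
scores_t3 = [t - scores_t1[2] for t in scores_t1]
print(scores_t3)  # [-2, -1, 0, 4]
```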
