If the main interest is not to model the nature of change in certain variables over time with LGM, but rather to examine the effects of certain predictors on the latent growth curve (i.e., on the level and slope factors), is it okay to use the time scores (three measurement occasions) as standardised distributions (mean = 0, SD = 1)? In this case, we have peer-nominated measures collected in schools, so the distributions are standardised within classes to control for the variation in class sizes. Even though this might seem "stupid" in the context of a modelling technique that specifically enables one to examine individual variation in a given measure over time, I am getting significant variation in both growth components, and the model(s) fit the data well. Do you see any problems with this, or do you have any suggestions concerning the estimators etc. in Mplus for modelling these kinds of scores?
I would not standardize the outcomes. The growth model assumes that the same outcome is measured at each timepoint and standardizing would violate that.
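To illustrate the point (a hypothetical simulation, not part of the original thread): z-scoring each wave separately forces every wave to mean 0 and SD 1, which erases exactly the mean growth and growing variance that a growth model is built to describe.

```python
# Hypothetical demo: per-wave standardization destroys the growth signal.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
times = np.array([0.0, 1.0, 2.0])  # three measurement occasions

# Simulate a linear growth model with random intercepts and slopes
intercepts = rng.normal(10.0, 2.0, n)
slopes = rng.normal(1.5, 0.5, n)
y = intercepts[:, None] + slopes[:, None] * times + rng.normal(0.0, 1.0, (n, 3))

print("raw wave means:", y.mean(axis=0).round(2))  # rise across waves
print("raw wave SDs:  ", y.std(axis=0).round(2))   # grow across waves

# Standardize within each wave, as described in the question
z = (y - y.mean(axis=0)) / y.std(axis=0)
print("z wave means:", z.mean(axis=0).round(2))    # all forced to 0
print("z wave SDs:  ", z.std(axis=0).round(2))     # all forced to 1
```

After the transformation the waves are no longer on a common metric, so the intercepts and loadings that a growth model constrains to be equal over time no longer refer to the same scale.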
zhenli posted on Wednesday, March 14, 2007 - 1:14 am
Dear Dr. Muthen, I have a similar question. I fit an LGM to five-wave data with both linear and quadratic models (with and without time-invariant and time-varying predictors). However, when I use the raw measurements the model fits terribly (e.g., RMSEA larger than 0.1), whereas when I standardize the outcome variable the model fits very well. I plotted a small random sample from the data to inspect the individual growth patterns. The patterns are diverse: some are close to linear, some are nonlinear, and some have data points all over the place. In this case, what should I do? I understand that standardized outcome variables are not appropriate, but I do not feel comfortable interpreting the parameters when the models do not fit well. Are there any ways to achieve better model fit using the raw measurements?
We are trying to run trajectory analyses but we are facing a problem:
We have measured our variable of interest at 4 different time points, but with 2 different instruments (one questionnaire was used for T1 and T2 and another questionnaire was used for T3 and T4). The two questionnaires measure the same construct, but do not use the same scales. Therefore, we are dealing with different means and variances. So, should we:
1- Standardize the variables before conducting the analyses?
2- Transform the variables prior to the analyses so as to have comparable means and variances?
3- Or, run the analyses directly with existing variables?
I am conducting a cross-lagged panel analysis. For this purpose, I am testing the measurement invariance of my latent factors. In doing so, I found that the model fit for strict measurement invariance was better when I used z-transformed indicators for one of my latent factors than when I used raw scores. Could you explain why this is the case? Or do you know of any literature related to this phenomenon?
Also, in the previous discussion you recommend using raw scores in longitudinal modeling. Would you recommend this also for indicators of latent factors in simple cross-lagged models?
Yes, I recommend raw scores for any comparisons - over groups or over time.
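A small hypothetical demo (not from the thread) of why standardization breaks comparisons: z-scoring within each group forces every group mean to 0 and SD to 1, so a real difference on the raw metric simply vanishes.

```python
# Hypothetical demo: within-group z-scoring removes real group differences.
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.normal(50.0, 10.0, 500)  # group 1: higher raw mean
g2 = rng.normal(40.0, 10.0, 500)  # group 2: lower raw mean
print("raw mean difference:", round(g1.mean() - g2.mean(), 2))  # near 10

# Standardize within each group separately
z1 = (g1 - g1.mean()) / g1.std()
z2 = (g2 - g2.mean()) / g2.std()
print("z mean difference:", round(z1.mean() - z2.mean(), 2))    # exactly 0
```

The same logic applies over time: standardizing within occasions removes mean change, which is usually the quantity of interest in longitudinal modeling.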
Seltzer, Frank, & Bryk (1994). The metric matters: The sensitivity of conclusions about growth in student achievement to choice of metric. Educational Evaluation and Policy Analysis, 16(1), 41-49.