Message/Author 


I am trying to run an unconditional growth model in a multilevel framework (long data file). The variable I am using is teacher analysis skill, collected over 3 time points (analyses nested within teachers). Each teacher has their own individual time points, since their rate of completing the intervention cycles varied slightly across the year depending on how quickly they completed them and/or how many they completed. The unconditional model shows nonsignificant variability at the within level (on average, teachers remain stable in their analysis skill); however, it shows significant variability at the between level with an estimate of 0.000. Unable to explain how individuals could differ significantly by 0.000, I standardized my teacher analysis variable. However, upon running the exact same model with the standardized variable, the results changed drastically: teacher growth rates no longer differed significantly at the between level, and the p-value went from 0.000 to 0.891. I am quite perplexed by this. I am using TYPE = TWOLEVEL RANDOM. I am curious why this is occurring, which set of results I should trust as the "true" result of my model, and whether there is something I am failing to account for once I standardize my teacher analysis variable. I would greatly appreciate your feedback. Thank you!


You should never standardize variables in a growth model. Standardizing variables changes their relationships to each other. See the following paper: Seltzer, M. H., Frank, K. A., & Bryk, A. S. (1994). The metric matters: The sensitivity of conclusions about growth in student achievement to choice of metric. Educational Evaluation and Policy Analysis, 16(1), 41-49.
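A minimal sketch of why this matters, using simulated data (an editor's illustration, not from the thread): standardizing the outcome within each time point forces every occasion to mean 0 and SD 1, which erases mean growth and distorts individual growth rates.

```python
# Illustration with simulated data: standardizing within each wave
# removes the growth that the model is supposed to estimate.
import numpy as np

rng = np.random.default_rng(0)
n, waves = 200, 3
intercepts = rng.normal(2.0, 0.5, n)
slopes = rng.normal(0.3, 0.1, n)          # true between-person slope variance
time = np.arange(waves)
y = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 0.2, (n, waves))

# Standardize within each wave ("z-scoring" the outcome per occasion)
z = (y - y.mean(axis=0)) / y.std(axis=0)

# Simple per-person OLS slopes, raw metric vs standardized metric
raw_slopes = np.polyfit(time, y.T, 1)[0]
std_slopes = np.polyfit(time, z.T, 1)[0]

print("mean slope, raw metric:", raw_slopes.mean())   # near the true 0.3
print("mean slope, standardized:", std_slopes.mean())  # essentially zero
```

Because each standardized wave has mean 0, the average slope in the standardized metric is forced to zero regardless of how much real growth there is, which is one concrete way the metric changes the conclusions.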


Thank you for your very quick and certainly helpful reply to my issue. I will not standardize my variable. However, the estimates I am getting are so small that I am having difficulty talking about them in a meaningful way. Is there something else I can try? Or is there a way to have Mplus report more than 3 decimal places?

Between-level variance of the growth rate:

Estimate   SE      Est./SE   p
0.000      0.000   12.925    0.000

My interpretation would be: there is significant variability in teacher analysis growth rates across persons. But individual growth rates differed from the mean growth rate by zero on average?
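A small sketch of why an estimate can print as 0.000 yet still have a large Est./SE: output rounded to three decimals can hide a nonzero value. The numbers below are invented for illustration, not the actual model output.

```python
# A variance of 0.0004 with SE 0.000031 both display as 0.000 at three
# decimals, but their ratio (the z statistic) is still large.
est, se = 0.0004, 0.000031
print(f"{est:.3f} {se:.3f} {est / se:.3f}")   # 0.000 0.000 12.903
```

So "0.000" here means "smaller than 0.0005 in this metric", not literally zero; the significance test is computed on the unrounded values.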


Check your variances using TYPE=BASIC or SAMPSTAT. If your variances are small, you can make them larger without changing the relationships among the variables by multiplying them by a constant using the DEFINE command. We recommend keeping variances of continuous variables between one and ten. 
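A quick numerical sketch of the rescaling advice (simulated numbers, not the poster's data): multiplying a variable by a constant c scales its variance by c squared but leaves its correlations with other variables unchanged.

```python
# Rescaling a variable by a constant: variance grows by c**2,
# correlations with other variables are untouched.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 0.26, 500)              # variance around 0.07, like the outcome here
w = 0.5 * x + rng.normal(0, 0.3, 500)     # some correlated second variable

c = 10
print(np.var(x), np.var(c * x))           # second value is 100x the first
print(np.corrcoef(x, w)[0, 1], np.corrcoef(c * x, w)[0, 1])  # identical
```

This is why the DEFINE rescaling changes the size of the printed variance estimates without changing the model's substantive relationships.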


The variance of my time variable is 7.27. The variance of my outcome variable (analysis) is 0.07. Just to clarify, are you suggesting I multiply the variable by a constant, or the variance by a constant? I used the DEFINE command to multiply my variable (analysis) by 10, and the significant between-person variability in growth was no longer significant. It went from:

Estimate   SE      Est./SE   p
0.000      0.000   12.925    0.000

to

Estimate   SE      Est./SE   p
0.010      0.062   0.168     0.867


Multiply the variable by a constant. This should not change your results. Please send the outputs with and without the DEFINE command, along with your license number, to support@statmodel.com.


Does the above recommendation not to standardize variables in a growth model also extend to using T-scores (e.g., in the case of CBCL data)?


I'm not sure I understand your question. The TSCORES option and AT are used in growth models with individually-varying times of observation.


I meant using Achenbach-based T-scores (rather than raw scores) as the observed indicators in the model, not time scores. In short, would it be problematic to use CBCL T-scores (as opposed to raw scores) as indicators in the model because they are standardized? I ask because I have a situation in which different forms of the CBCL were used over time. Thank you!


Not sure about those scores, but for a related discussion of the importance of using raw scores in growth modeling, see: Seltzer, M. H. (University of California, Los Angeles), Frank, K. A. (Michigan State University), & Bryk, A. S. (University of Chicago) (1994). The metric matters: The sensitivity of conclusions about growth in student achievement to choice of metric. Educational Evaluation and Policy Analysis, 16(1), 41-49.
