Message/Author 

Anonymous posted on Friday, September 09, 2005  8:53 am



My question is more methodological. I have a dataset that asks juveniles about their writing habits. The data are responses that individuals gave about writing in 1998. The subsequent waves, from 1999 to 2004, ask individuals the same set of questions about writing, but these individuals share demographics like age, race, and sex. I'm wondering whether my data are suited for latent growth curve modeling, and whether there are citations I could read to figure out how to argue for this analysis plan? 


I am not sure that I understand your question. Do you have a group of the same individuals who were asked the same questions in 1998, 1999, 2000, etc. up to 2004? What do you mean that they share demographics? 

Anonymous posted on Friday, September 09, 2005  4:25 pm



The individuals are different, but the questions are the same. Some of my colleagues have suggested a trend analysis. With this in mind, I'm wondering if latent growth curve analysis is possible? That is, from 1998 through 2004, the students are between 10 and 12 years old during each year. The other demographics, race and sex, are similar as well. Given these issues, I'm really wondering whether the data are in a proper format for latent growth curve analysis? 

Anonymous posted on Friday, September 09, 2005  4:36 pm



The individuals are not the same. The questions are the same. The demographics are the same (e.g., ages ranged from 10 to 12 for each year). I certainly hope that this is proper data for latent growth curve analysis. What do you think? 


You would need the same individuals measured repeatedly over time to estimate a random coefficient growth curve model. 
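To make the data requirement concrete, here is a small illustrative simulation (hypothetical numbers, not from the posted data) of the wide-format, same-individuals structure that a random coefficient growth curve model assumes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random-coefficient (latent) growth model needs repeated measures on the
# SAME individuals: one row per person, one column per wave ("wide" format).
n, waves = 500, 5                      # hypothetical sample: 500 people, 5 waves
intercepts = rng.normal(10.0, 2.0, n)  # person-specific starting level
slopes = rng.normal(0.5, 0.3, n)       # person-specific rate of change
time = np.arange(waves)                # time scores 0, 1, 2, 3, 4

# y[i, t] = intercept_i + slope_i * t + residual
y = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 1.0, (n, waves))

# Within-person change is identified because each row follows one individual
# over time; repeated cross-sections of *different* 10-12-year-olds each year
# cannot separate individual change from differences between the samples.
print(y.shape)  # (n persons, t waves)
```

With repeated cross-sections, as in the poster's design, only the wave means are comparable, which is why a trend analysis rather than a growth curve model was suggested.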

Anonymous posted on Saturday, September 10, 2005  6:00 am



Thank you. 


I have used latent growth curve analysis with data from 40 intervention-group individuals and 3 assessment time points. Despite the restrictions of a small sample size, the CFI and chi-square values indicate that the model is a good fit for most variables. However, for one variable, the CFI is 0.650 and chi-square is 11.50 (p = 0.003). I am unclear as to why the values for this variable are low, and also as to whether I can term this an acceptable or poor fit. I would be grateful if you could help me clear up my doubt. 


It is easier to get a good fit with a small sample, not harder. This is due to lower power to reject. A CFI of 0.65 is very poor and there should be ways to improve the model; there must be something different about this variable. 

RuoShui posted on Wednesday, November 20, 2013  9:31 pm



Hello Dr. Muthen, I am quite new to LGCM. I have a question regarding running LGCM. When I modeled only the growth of the factors across time points, there was a significant slope. However, after I brought in predictors of the slope and intercept, the slope became non-significant. I am not sure what this means. Specifically, if the predictors are observed scores, the slope becomes non-significant; however, if I specify the predictor as a factor with indicators, the slope remains significant and of similar size. What does this mean? Thank you very much for your time. 


I think you are talking about the mean of the slope growth factor. In a model with covariates, it is the intercept that is estimated, not the mean. It is often seen that latent variables explain less variance than observed variables. 

RuoShui posted on Thursday, November 21, 2013  1:43 pm



Thank you very much, Dr. Muthen. Yes, I did look at the intercept of the slope growth factor. Thank you for your explanation. It did seem that the latent variables explained less variance. But there is one thing I still don't understand. The slope growth factor was significant, so I used predictors to predict the slope. But what does it mean that the slope growth factor became non-significant? Is there still growth? Thank you very much. 


The slope growth factor is a variable. When you regress it on a covariate, the object is to explain the variance in the slope growth factor. You now have a conditional model where the intercept and residual variance of the slope growth factor are estimated. The mean and variance of the slope growth factor may have been significant but that does not mean that the intercept and residual variance are. These are two different models with different parameters. See the Topic 3 video and course handout on the website where growth modeling is discussed. 
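The distinction between the mean and the intercept of the slope growth factor can be shown numerically. A small sketch with assumed values (none taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# The slope growth factor s is a variable.  Unconditionally, its MEAN is
# estimated; once s is regressed on a covariate x, the model instead
# estimates an INTERCEPT:  s = alpha + gamma * x + zeta
n = 100_000
alpha, gamma = 0.10, 0.40          # hypothetical intercept and regression slope
x = rng.normal(2.0, 1.0, n)        # covariate with a nonzero mean
s = alpha + gamma * x + rng.normal(0, 0.5, n)

mean_s = s.mean()                  # what the unconditional model reports
implied = alpha + gamma * x.mean() # the conditional model's implied mean of s

# The intercept alpha (0.10) can be small and non-significant even though the
# mean of s (about 0.90 here) is clearly nonzero: different parameters.
print(round(mean_s, 2), round(implied, 2))
```

A non-significant intercept therefore does not mean there is no growth; it means the expected slope is near zero for cases with covariate values of zero.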

RuoShui posted on Thursday, November 21, 2013  7:29 pm



Thank you so much Dr. Muthen! 

xiaoyu posted on Sunday, March 02, 2014  2:07 pm



Dear Dr. Muthen, Can Mplus handle large data for latent growth curve modeling? I have 575,3068 observations and 3 variables. When I ran the latent growth curve model, it looks like Mplus keeps reading the data but makes no further progress. Thank you! 


Please send the input, data, and your license number to support@statmodel.com. This should not be a problem. 

CMP posted on Wednesday, March 26, 2014  8:48 am



Dear Dr. Muthen, I am new to latent growth modelling using Mplus. While studying a textbook on it I decided to try out some analyses with a simplified version of my data set. I tried conducting a first-order LGCM but kept getting an error message. Below are my input and the error message. The number of observations is not zero and I could not find any invalid symbol in the data. I do not understand this message. Thank you for your help.

data: file = T1_T2_T4_HP_no_id.dat;
variable: names = a1-a3;
    missing = all(99);
model: interc linear | a1@0 a2@1 a3@2;
output: sampstat stdyx;
plot: type = plot3;
    series = a1 (linear) a2 (linear) a3 (linear);

*** ERROR
The number of observations is 0. Check your data and format statement.
Data file: T1_T2_T4_HP_no_id.dat

*** ERROR
Invalid symbol in data file: "﻿2" at record #: 1, field #: 1 


Open your data set in the Mplus Editor and see what you find at record 1, field 1. 

CMP posted on Thursday, March 27, 2014  12:37 am



Thank you for your quick response. I did as advised and found the symbol "ï»¿" in the first line of my data set. It did not appear in SPSS or in Notepad. I tried several times saving the data set again, but each time the problem remained the same in Mplus. Where is this symbol coming from, and how can I get rid of it in order to proceed with my analyses? Thank you so much for your help. 


You should open the data set in the Mplus Editor where you can see it, delete the symbol, and then save the data. This has something to do with how SPSS saves the data in a recent release. 
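The stray symbol is the UTF-8 byte-order mark (BOM, bytes EF BB BF, often displayed as "ï»¿") that some SPSS releases prepend when saving text files. Besides deleting it in the Mplus Editor, it can be stripped programmatically; a sketch (adjust the filename to your own data file):

```python
import pathlib

def strip_bom(path: str) -> bool:
    """Remove a leading UTF-8 BOM from a file; return True if one was found."""
    p = pathlib.Path(path)
    data = p.read_bytes()
    if data.startswith(b"\xef\xbb\xbf"):
        # Rewrite the file without the 3 BOM bytes so Mplus no longer sees
        # an invalid symbol at record 1, field 1.
        p.write_bytes(data[3:])
        return True
    return False

# Example: strip_bom("T1_T2_T4_HP_no_id.dat")
```

Saving from SPSS or Notepad as plain ANSI/ASCII text (rather than "UTF-8 with BOM") avoids the problem at the source.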

CMP posted on Friday, March 28, 2014  7:23 am



I did as advised and this solved the problem. Thank you very much for your help. 


Hi, we want to model the relation between prenatal head growth and a behavioral trait. Fetal head size was measured at 3 time points and coded as gestational-age-adjusted z-scores. We want to adjust the regression of the behavioral trait on the intercept and slope of head growth for confounders. We used this script:

analysis: estimator = mlr;
model: i s | sdhcg1@0 sdhcg2@7.6 sdhcg3@17.5;
Z_srs_sqrt on i s gender ETHN_d1 ETHN_d2 mdrink_1 mdrink_2 mdrink_3 msmoke_1 msmoke_2 EDUCM5_1 EDUCM5_2 AGE_M;
gender ETHN_d1 ETHN_d2 mdrink_1 mdrink_2 mdrink_3 msmoke_1 msmoke_2 EDUCM5_1 EDUCM5_2 age_m;

This warning appeared in the output:

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.235D-19. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 122, AGE_M

The warning isn't related to a specific covariate (it refers to other covariates when 'problematic' covariates are removed) and occurs as soon as any covariate is entered, but not in the basic model. Could you please say whether the model is still trustworthy, or whether there are ways to remove the warning? Thanks! 


Please send an output and your license number to support@statmodel.com. 

AT Jothees posted on Saturday, April 22, 2017  12:25 pm



Dear Dr. Muthen, I read the Mplus book on higher-order growth modelling with great interest. It was wonderful, but I did not find any information on how missing data should be handled. As described in the book, I am trying to run a second-order factor-of-curves model with 10 variables (7 continuous and 3 ordered categorical) across 6 waves. My problem is that some variables in my model are missing for a full wave. I am not sure whether I should run multiple imputation or ML under the MAR assumption. Kindly advise me. I am a bit confused since I have mixed variable types in my model. Regards, J 


Just say Estimator = ML or MLR. This gives you good missing data handling (assuming you have missing data flags in your data); it is called ML under MAR (also called FIML). Multiple imputation is not needed. You can also use Estimator = Bayes. 
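A tiny simulated illustration of why ML under MAR improves on listwise deletion (assumed numbers throughout, with a simple regression adjustment standing in for full FIML):

```python
import numpy as np

rng = np.random.default_rng(2)

# MAR sketch: wave-2 scores go missing more often for people with LOW
# wave-1 scores, i.e. missingness depends only on observed data (wave 1).
n = 200_000
y1 = rng.normal(0.0, 1.0, n)
y2 = 0.6 * y1 + rng.normal(0.0, 0.8, n)        # true mean of y2 is 0
missing = rng.random(n) < np.where(y1 < 0, 0.6, 0.1)
y2_obs = np.where(missing, np.nan, y2)

# Listwise deletion keeps only complete cases, over-representing high-y1
# people and biasing the wave-2 mean upward.  Using the fully observed y1
# to adjust (the information FIML exploits) recovers the true mean.
listwise = np.nanmean(y2_obs)
obs = ~np.isnan(y2_obs)
b1, b0 = np.polyfit(y1[obs], y2_obs[obs], 1)   # slope, intercept on complete cases
adjusted = b0 + b1 * y1.mean()                 # predict for everyone

print(round(listwise, 2), round(adjusted, 2))  # listwise is biased; adjusted is near 0
```

The same logic is why declaring the missing data flags and letting ML/MLR use all available observations is preferable to analyzing complete cases only.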

AT Jothees posted on Sunday, April 23, 2017  11:50 am



Dear Bengt, thank you for the quick response. I tried Estimator = ML and MLR, but I get an error message. Please see the syntax below.

VARIABLE: NAMES ARE id memory1 memory2 memory3 verbal1 verbal2 verbal3 depress1 depress2 depress3;
USEVARIABLES = memory1 memory2 memory3 verbal1 verbal2 verbal3 depress1 depress2 depress3;
MISSING ARE all (999);
CATEGORICAL ARE depress1 depress2 depress3;
ANALYSIS: ESTIMATOR = ML; ! also tried MLR
MODEL:
IS1 BY depress1 memory1 verbal1;
IS2 BY depress2 memory2 verbal2;
IS3 BY depress3 memory3 verbal3;
[depress1-depress3@0];
[memory1-memory3];
[verbal1-verbal3];
depress1-depress3;
memory1-memory3;
verbal1-verbal3;
[IS1-IS3];
depress1 with depress2 depress3;
depress2 with depress3;
memory1 with memory2 memory3;
memory2 with memory3;
verbal1 with verbal2 verbal3;
verbal2 with verbal3;
OUTPUT: STANDARDIZED;

When I run this, I get the following error message. Kindly advise me.

*** ERROR
One or more variables in the data set have no non-missing values. Check your data and format statement.

Many thanks in advance, J 


Please send your files to Support along with your license number. 
