
Anonymous posted on Saturday, March 10, 2001  1:43 pm



Hi Bengt and Linda, I have 2 groups (males and females) and I'd like to compare their growth trajectories in math over grades 1-6. With two different trajectories for the two groups, is it possible to test whether the two expected scores (for the two groups) at each time point are significantly different? 


Typically in a 2-group growth model you want the time scores (the growth rate factor loadings) to be equal across groups, allowing for differences across groups in the intercept and growth rate factor means. So a test of equality of expected values at all time points could be done by testing equality of means of the growth factors. Testing instead the equality of the mean at a particular time point is a little more involved because the mean is a function of several parameters. You can do this using the "delta method", or you can do this by changing the centering point. The centering point is the time point that you use for determining the intercept factor (where the slope factor loading is zero). At the centering point, the observed mean is the intercept factor mean, so that gives you a test. 
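The centering-point approach could be sketched in Mplus syntax roughly as follows. This is a hypothetical fragment (the outcome names y1-y4 and the grouping variable are made up), assuming four time points with the centering point at the third occasion:

```
VARIABLE:
  GROUPING IS gender (1 = male 2 = female);
MODEL:
  ! time scores shifted so the slope loading is zero at time 3;
  ! the intercept factor mean is then the expected score at time 3
  i s | y1@-2 y2@-1 y3@0 y4@1;
  [i] (m1);
MODEL female:
  [i] (m2);
MODEL TEST:
  0 = m1 - m2;   ! Wald test of equal expected scores at time 3
```

Re-centering at each occasion in turn (or expressing each time point's mean via MODEL CONSTRAINT, i.e. the delta method) would give the test at every time point.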

dave posted on Tuesday, May 10, 2005  7:34 pm



If one has multiple cohorts, should these be modeled separately, or should they be modeled together in one input file even if their trajectories are hypothesized to be different? Also, can Mplus estimate S-shaped curves, or only linear and U-shaped? 

bmuthen posted on Wednesday, May 11, 2005  8:09 am



You may want to analyze multiple cohorts in a multiple-group run since often you want to test if development is similar or different. You can handle S-shaped curves by either estimating time scores (with some fixed for identification purposes) or fixing the time scores according to an S shape (small time score increases early on, larger in the middle, and small later on). 
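As a hypothetical illustration of the two options for six occasions (y1-y6 are made-up variable names):

```
MODEL:
  ! Option 1: fix the time scores to an S shape
  ! (small increments early and late, larger in the middle)
  i s | y1@0 y2@0.5 y3@2 y4@3.5 y5@4.5 y6@5;

  ! Option 2: estimate the time scores, fixing the first two
  ! for identification:
  ! i s | y1@0 y2@1 y3* y4* y5* y6*;
```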


Hi there - I am dealing with a similar scenario as the person who asked the above question. I am comparing the growth trajectories of learning scores (over 12 evenly spaced time points) of two groups of rats (control vs. experimental group). The change in learning scores is nonlinear; in fact, it looks hyperbolic with a stable asymptotic level. So far, I haven't come across any mention of hyperbolic growth curves in structural equation modelling. Would you be able to give me some pointers on the type of analysis that is most appropriate for my problem? Thank you very much for your time. 


Nonlinear growth curves have been discussed in the psychometric literature by Browne and du Toit and by Cudeck. The former discussed curves such as the Gompertz and was published in L. Collins & J. Horn (eds.), Best Methods for the Analysis of Change: Recent Advances, Unanswered Questions, Future Directions. Washington DC: American Psychological Association. The latter appeared not long ago (in MBR?). Some of these models can be made to fit into the SEM framework. Often, simple approximations would seem feasible, such as a square root function with a specific ceiling, but I am not sure here. More generally, growth models that are nonlinear in the growth factors have been discussed in books such as the Chapman-Hall book by Davidian and ? 


I appreciate your advice. Thanks for your time. 


Hi! I was in your short courses in Montreal!! Thank you for the seminar!! I've been doing a lot of things since Monday!! And I have a basic question: when comparing two groups with BIC and AIC for choosing how many latent classes, is there a maximum value of BIC that counts as a good value? I have a model with a BIC of 11,000 and I find that a large value... Thank you for your help. Annie 


You want the model with the lowest BIC. 


So it doesn't matter whether it is large or not? Annie 


When you choose the number of classes, you are comparing BIC and other measures for models with different numbers of classes. You want the model with the lowest BIC, among other things. You might find Bengt's paper in the Kaplan edited book, under Recent Papers on this website, helpful. 

Scott Ronis posted on Tuesday, January 16, 2007  5:50 am



Regarding your comment posted on 3/12/2001, how does one test the difference of growth factors (intercept & slope)? Should we do this by hand with a t-test or ANOVA? If so, should we use the standard errors of the intercept & slope from the output in the denominator? Thanks. 


Are you referring to testing across groups? 

Scott posted on Wednesday, January 17, 2007  8:04 am



Yes, across groups. 


You can do difference testing across groups using chi-square or the loglikelihood. This is discussed in Chapter 13 of the Mplus User's Guide at the end of the multiple group discussion. 


Hello: I'm examining whether stress and depression growth over time are similar for men and women. The error message is "no convergence, iterations exceeded." What am I doing wrong? I sent the raw dataset under 'stressfinal'. My syntax is as follows: 
TITLE: Growth Model by social status for stress measures; 
DATA: FILE is Z:\stressfinal.dat; FORMAT is FREE; TYPE is INDIVIDUAL; 
VARIABLE: NAMES ARE v103 v2618 v6618 stress1 stress2 stress3 socsum v10916; 
USEVARIABLES = v103 v2618 v6618 v10916 stress1 stress2 stress3 socsum; 
GROUPING IS v103 (1 = Male 2 = Female); 
ANALYSIS: TYPE = MEANSTRUCTURE; TYPE = MISSING H1; ESTIMATOR = ML; H1ITERATIONS = 900000; MITERATIONS = 900000; 
MODEL: i1 BY stress1 stress2 stress3 socsum@1; 
s1 BY stress1@0 stress2@1 stress3@2 socsum@3; 
i2 BY v2618 v6618 v10916@1; 
s2 BY v2618@0 v6618@1 v10916@2; 
s1 ON i2; s2 ON i1; 
[stress1-stress3@0 v2618-v10916@0 i1-s2]; 
OUTPUT: SAMPSTAT MODINDICES (10) RESIDUAL TECH3 TECH4 TECH5 CINTERVAL STANDARDIZED; 
Thank you! 


The problem could be caused by not fixing all of the loadings to one for the intercept growth factors. 


Hello, I am looking at PTSD symptomatology outcomes in three intervention groups - data was collected at three points. However, given that there are significant baseline differences in baseline scores among the three groups, I want to control for baseline scores. I tried two different ways of doing this (1 and 2 below); the model will not run with option 2, but it did with option 1. Am I specifying my model correctly in option 1? That is, does model 1 compare the growth in the three groups controlling for baseline scores (among other things)? 1. Model: iu su | PDStot21@0 PDStot22@1 PDStot23@2 PDStot21; iu su ON Dgroup1 Dgroup2 TxMOD raceeth1 raceeth2 Age controlled1 controlled2 controlled3; 2. Model: iu su | PDStot21@0 PDStot22@1 PDStot23@2; iu su ON Dgroup1 Dgroup2 PDStot21 TxMOD raceeth1 raceeth2 Age controlled1 controlled2 controlled3; Thank you! 


First a basic check of your setup - if you have 3 groups you want to use 2 dummy covariates, not 3. 


I am using only two dummy group covariates (Dgroup1 and Dgroup2); the others are variables I want to control for (TxMod = treatment modality, race and ethnicity, and being in a controlled environment at the three points of data collection 1, 2, and 3). 


So the baseline score is PDStot21. Is that a pre-intervention measure? If so, you can follow the Muthen-Curran (1997) Psych Methods approach. Your first alternative does not control for baseline. Your second alternative is rejected because it uses the baseline score both as a covariate and as an outcome. 


If one wants to control for initial status scores in intervention studies: I found two approaches on your Mplus short-course slides. One is to regress the intercept (initial status, centered at T1) on, let's say, "treatment". And another one is to regress the slope on the intercept. Are there any differences regarding these two approaches? 


The Muthen-Curran (1997) Psych Methods approach is to regress slope on intercept in the treatment group, where intercept is centered at the pre-intervention time point. I don't see how regressing intercept/initial status on treatment would make sense, and I don't recall describing such an approach - but perhaps I am misunderstanding. 


Sorry, I was wrong about the first approach. I overlooked that you did not center the intercept at T1. Does the second approach make sense when using two-part modeling? (I need the covariance of the intercepts of the two parts.) 


Yes, the Muthen-Curran approach would seem applicable also to two-part modeling. The two-group analysis would here be done via the KNOWNCLASS option. 


Hi Bengt and Linda, I have 2 groups (males and females), for each of which a linear function provided the best fit to the offending data. I also included several time-invariant covariates (all measured at baseline). I would like to compare whether there are gender differences in how these covariates are related to the growth factors. For example, delinquent friends were significantly associated with both the intercept and the slope of offending for both gender groups. Is it possible to test whether the strength of this association differs across gender groups? Thank you. 


You can do this using chi-square difference testing, which is described in Chapter 13, or using MODEL TEST. 
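A hedged sketch of the MODEL TEST route for this question (the outcome names y1-y4, the covariate name delfrnd, and the gender coding are all hypothetical):

```
VARIABLE:
  GROUPING IS gender (1 = male 2 = female);
MODEL:
  i s | y1@0 y2@1 y3@2 y4@3;
  i ON delfrnd (p1);   ! effects in the first (male) group
  s ON delfrnd (p2);
MODEL female:
  i ON delfrnd (p3);   ! freed and labeled in the female group
  s ON delfrnd (p4);
MODEL TEST:
  0 = p1 - p3;   ! joint Wald test of equal covariate
  0 = p2 - p4;   ! effects across the two groups
```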


Thank you, Linda. Can I ask a related question? I identified 4 subclasses of boys with distinct offending trajectories. Can I estimate the extent to which the intercepts and the slopes differ across the classes of boys? Which command should I use? Can I also estimate the extent to which time-invariant covariates are associated with the growth factors across latent classes of boys? Arina. 


To test the intercept and slope mean differences across the classes you want to use Wald chi-square testing via MODEL TEST. You can't do a likelihood-ratio chi-square test because the run holding them equal across classes will change the class formation. You can do the same to test class invariance of the influence of time-invariant covariates, although here I think it would be OK to alternatively do it via a likelihood-ratio test. 
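For instance, the Wald test across latent classes might be sketched like this (a 4-class model with made-up outcome names; classes 3 and 4 would be labeled the same way, and further pairwise statements added to MODEL TEST as needed):

```
VARIABLE:
  CLASSES = c(4);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  i s | y1@0 y2@1 y3@2 y4@3;
  %c#1%
  [i] (i1);
  [s] (s1);
  %c#2%
  [i] (i2);
  [s] (s2);
MODEL TEST:
  0 = i1 - i2;   ! Wald chi-square test of equal growth
  0 = s1 - s2;   ! factor means in classes 1 and 2
```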


I understand that in categorical LGMs the intercept mean is set to zero and modeled by threshold parameters instead. I have two trajectory classes in a binary growth mixture model and I want to determine whether the two intercept means are different. In one class the intercept mean is zero (the default) and in the other class significantly different from zero. Does that already suggest a significant mean difference, or do I have to conduct an equality test (set the intercept mean to zero in both classes to test for equality)? Many thanks! 


The z-test for the non-zero class tests against zero. So you can use the z-test to determine if the mean is different across classes. 

Victor Heh posted on Wednesday, October 28, 2009  7:03 am



How do I handle negative variance of a growth factor? 


This means your model is not admissible. You need to change the model. A very small insignificant negative variance is sometimes fixed to zero. 

Jim Thrasher posted on Thursday, November 12, 2009  7:33 am



Hi, I see that above you discuss establishing invariance of growth factor loadings before conducting multiple group comparisons of intercept and slope means. If the trajectory indicators are ordinal, should you first establish invariance or partial invariance of the thresholds across groups, then establish factorial invariance? 


You can look at the multiple indicator growth example in the Topic 2 course handout to see how measurement invariance is tested across time for continuous outcomes. For categorical outcomes, the invariance is related to thresholds and factor loadings. The models described in the user's guide on pages 399-400 for testing measurement invariance across groups for categorical outcomes can be applied across time. 


Hi, I have a conditional LGM where the intercept and slope are predicted by gender and the slope is predicted by treatment (randomized control group design). To get estimated growth curves separated by treatment and control, I estimated separate growth curves for each of treatment and control and fixed all parameters to the estimates of the overall model. Only the slope means were freely estimated in the separate growth models, to capture treatment effects. All works fine and the growth curves differ as expected, but the pretest mean estimates (intercepts) differ very slightly. This irritates me, because both treatment and control should have the same mean at pretest. Is it because the separated models are still conditional on gender (which influences the intercept at T1)? Does my approach correctly reflect the estimated growth curve of the overall conditional model? 


Yes, you are saying that gender influences the intercept growth factor mean, so T1 means will differ insignificantly over gender since the randomization presumably isn't perfectly balanced on gender. If you want estimated growth curves plotted, an easier way is to use the Mplus Adjusted means option with tx and gender as covariates. You can then, for instance, have 4 plots for tx by gender. 


OK, as far as I understand, because I have (for instance) a few more boys in treatment, the mean of aggression in the treatment group is slightly higher at T1 in my separated model? (Even though treatment does not influence the intercept.) Sounds reasonable when considering the regression equation. BTW, adjusted means was not available in my kind of analysis, so this approach was my only chance to get the trajectories. 


Maybe you have categorical outcomes inducing numerical integration for which the Adjusted means plot is not available with covariates. An alternative then is to use Type = Mixture with gender and tx as latent class variables that have known class membership. 
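The known-class alternative might look roughly like this, combining gender and treatment into a single 4-category known-class variable (the DEFINE coding and variable names are made up and assume 0/1 coding of gender and tx):

```
DEFINE:
  gxt = 2*gender + tx + 1;   ! 1-4 for the gender-by-tx cells
VARIABLE:
  CLASSES = cg(4);
  KNOWNCLASS = cg (gxt = 1 gxt = 2 gxt = 3 gxt = 4);
ANALYSIS:
  TYPE = MIXTURE;
```

The growth model would then go in %OVERALL%, with class-specific statements for any parameters allowed to differ across the gender-by-tx cells.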

Dan McDonald posted on Thursday, January 21, 2010  11:16 am



I am modeling a piecewise regression involving three groups. When I set up my model, I first tested it with all the data combined (disregarding the grouping factor). The chi-square value for model fit was 14.049, 15 df, p > .50. When I tested the three different groups, allowing their slopes to be different, I got an overall fit that was much worse, especially in relation to one group in particular. After doing some modification to that group's model, the best I get is chi-square = 52.56, 37 df, p = .0465. Am I correct in interpreting this as saying that the overall model fits much better than the model that allows variation between the groups? Thank you very much. I'm new to Mplus. 


I would say that the model that fits well in the total sample does not fit well for each group. I would start with each group and get a wellfitting model for each group. Only if the groups have the same growth model does it make sense to compare growth factor parameters across groups. 


Hello. I am estimating a growth model with categorical outcomes and 3 time points. Estimation of the model based on the overall sample terminates normally. However, when I try to estimate a two-group model (males and females), I get the following warning: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 49. THE CONDITION NUMBER IS 0.406D-17. Parameter 49 is the intercept term in the alpha matrix. What does this warning mean? How can I fix the problem? Thanks! 


Please send the full output and your license number to support@statmodel.com. 

Li Lin posted on Thursday, September 09, 2010  1:58 pm



Dr. Muthen, for a semicontinuous outcome, if we only have pre and post measurements, and we want to test proportion change in the binary part and mean change in the continuous part, what kind of model would you suggest? Thanks! 


You can use DATA TWOPART to create the continuous and binary parts of the variable. Then estimate the following model: 
MODEL: 
[y1] (p1); [y2] (p2); 
y1 WITH y2; 
[u1$1] (p3); [u2$1] (p4); 
f BY u1@1 u2; [f@0]; f@1; 
where the factor loading for u2 is the covariance between u1 and u2. 
MODEL CONSTRAINT: 
NEW (ydiff udiff); 
ydiff = p1 - p2; 
udiff = p3 - p4; 

Li Lin posted on Wednesday, September 15, 2010  10:38 am



Thank you very much! The program ran successfully, and both changes were significant, with ydiff = .4 and udiff = 13. In order to understand the model, I ran another one with the code "MODEL: [y1](p1); [y2](p2); f2 BY y1@1 y2; [f2@0]; f2@1; [u1$1](p3); [u2$1](p4); f BY u1@1 u2; [f@0]; f@1; f WITH f2;" This model's output had ydiff = .3 (p < .01) and udiff = 3 (p = .2). Why the first model instead of the second? The second one seems more direct to me, and I don't understand why no relationship is specified between the binary part and the continuous part in the first model. Could you elaborate on the reason behind the model choice? Thanks! 

Li Lin posted on Thursday, September 16, 2010  2:02 pm



Another question - what link function is used in the two-part model? Is it probit with MLR and logit with WLS for the binary part, and identity for the continuous part? Thanks! 


You are correct that the processes should be correlated. I would use the model with f1 WITH f2. The model should be estimated using maximum likelihood where the default link is logit. 

Li Lin posted on Friday, September 17, 2010  7:49 am



Thank you, Linda. If I use results from a pilot study to specify those parameters, can Monte Carlo simulation in Mplus accommodate this kind of model for sample size? I tried, but always got "THE POPULATION COVARIANCE MATRIX THAT YOU GAVE AS INPUT IS NOT POSITIVE DEFINITE AS IT SHOULD BE." 


If you get that message, you are either not specifying population parameter values for all parameters in the model, in which case the value zero is used, or the values you are using result in a population covariance matrix that is not positive definite. 

Li Lin posted on Friday, September 17, 2010  10:21 am



Thank you! After correction, I ran: MODEL POPULATION: [y1*0]; [y2*.4]; y1-y2*.8; f1 BY y1@1 y2*.6; [f1@0]; f1@1; [u1$1*3]; [u2$1*6]; f2 BY u1@1 u2*8; [f2@0]; f2@1; f1 WITH f2*.7; The output had a population value of 0 for both of the two intercepts y1 and y2 and both of the two thresholds u1$1 and u2$1, while the estimate averages were close to the values I gave. Also, the population value for udiff and ydiff was .5. What was wrong with my code? 


Please send your full output and license number to support@statmodel.com. 

Li Lin posted on Thursday, September 23, 2010  6:53 am



Still for the model posted above: since the research question is not about the factor loadings or the covariance between f1 and f2 or the residual variances, can I fix those values, i.e., use "@" instead of "*"? By doing so, power to detect udiff and ydiff should be increased. 


I would not do this. 

Jessica posted on Tuesday, March 15, 2011  1:08 pm



I am working with an existing data set to compare the growth trajectories of two groups of learners (impaired vs. typical). As the original data set stands, the impaired group is only about 11% of the population (n = 141). Would you suggest: 1) randomly selecting 10% of the typical population so the groups are even (though with a much smaller final n, which probably doesn't have sufficient power)? 2) using all participant data (n = 1228) despite having significantly unequal groups? 3) based on a power analysis (MacCallum, Browne, & Sugawara, 1996) for RMSEA between .05-.1, power .8, alpha .05, 4 dfs, 682 participants are needed - so using the 141 impaired and randomly selecting 541 typical... which still results in unequal groups? Thanks for your input! Any suggested references to back up the decision would be greatly appreciated. 


This question is not related to Mplus. I suggest posting it on a general discussion forum like SEMNET. 

Carolin posted on Monday, July 25, 2011  1:42 am



Dear Mrs. and Mr. Muthen, I have two questions concerning the comparison of two models: 1) When I compare a linear model with a quadratic model, I can refer to BIC, AIC and entropy, right? 2) I would like to compare an unconditional model with a model with covariates. How can I do this? I think I can't use BIC and AIC. Thanks a lot for your help! 

Carolin posted on Monday, July 25, 2011  1:46 am



... I'm analyzing a growth mixture model with 4 timepoints. 


BIC can be used to compare both nested and non-nested models. Entropy is not a test of fit. 

Carolin posted on Monday, July 25, 2011  11:44 pm



So if I compare an unconditional model with one with covariates and the latter has a lower BIC/AIC, I can assume that this indicates a better model fit? A related question: when do I do difference testing? Thanks! 


Yes, as long as the set of dependent variables is the same in both models. 

Anne Chan posted on Thursday, November 03, 2011  5:56 pm



Hello. I ran a one-slope latent growth curve model and then a two-slope latent growth curve model, using the same data. I don't think the two models are nested (they have the same df). May I ask how I should determine which one provides a better fit to the data? Thanks a lot! 


When you say a "two-slope" model, do you mean linear and quadratic slopes as in i s q | ... or do you mean a piecewise model with two separate linear slopes? 

Anne Chan posted on Thursday, November 03, 2011  9:09 pm



Hello! Thanks for your prompt reply! I mean a piecewise model with two separate linear slopes. The one-slope model is specified as: i s | Y1@0 Y2@1 Y3@2 Y4@4 Y5@7; The piecewise model is specified as: i s1 | Y1@0 Y2@1 Y3@2 Y4@2 Y5@2; i s2 | Y1@0 Y2@0 Y3@0 Y4@2 Y5@5; May I ask: (1) I think these models are not nested (they have the same df). Am I correct? (2) How can I determine which model provides a better fit to the data? Any statistical test? Thanks! 


1. Models with the same degrees of freedom are not nested. 2. I know of no statistical test of nonnested models. 

Anne Chan posted on Friday, November 04, 2011  1:00 pm



Hello, sorry, I just found that the dfs of the two models are different. The one-slope model is specified as (df = 10): i s | Y1@0 Y2@1 Y3@2 Y4@4 Y5@7; The piecewise model is specified as (df = 6): i s1 | Y1@0 Y2@1 Y3@2 Y4@2 Y5@2; i s2 | Y1@0 Y2@0 Y3@0 Y4@2 Y5@5; (1) Are the two models nested? (2) If there is no statistical test of non-nested models, how can I argue that one of the models offers a better fit? For example, by comparing their RMSEA, CFI, SRMR, etc.? 


What you really want to test is if [s1] = [s2] and this can be done using Model Constraint. 
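Using Anne's own piecewise specification, the test could be sketched as follows (the slope-mean labels m1 and m2 are added here):

```
MODEL:
  i s1 | Y1@0 Y2@1 Y3@2 Y4@2 Y5@2;
  i s2 | Y1@0 Y2@0 Y3@0 Y4@2 Y5@5;
  [s1] (m1);
  [s2] (m2);
MODEL CONSTRAINT:
  NEW (diff);
  diff = m1 - m2;   ! z-test of equal slope means
```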


Hi, I have a model in which I want to regress a growth process of depression (d) on a growth process of daily hassles (h): id ON ih; sd ON sh; sh ON ih; sd ON id; However, when I look under "model results" the intercept of depression seems very small and is much lower than the estimated mean of the intercept in the TECH4 section. However, when I allow for correlation between the latent factors of the two processes (instead of regression) the intercepts under "model results" and TECH4 are nearly the same. Do you have an explanation for the large difference found in the regression approach as opposed to the covariation approach? 


In a regression model y = a + b*x, the intercept a is equal to the mean of y minus b times the mean of x. Its size will depend on the size of the regression coefficient and the mean of x. 


Thank you. What I actually wondered about was the large difference between the intercept mean under "model results" compared to TECH4. Is it because the intercept mean in TECH4 is not affected by the predictors in the model? Are both ML estimates? Thank you, Karoline 


You do not get an intercept in TECH4; TECH4 gives the mean. That is, TECH4 gives the model-estimated means, variances, and covariances of the latent variables in the model. 

xybi2006 posted on Friday, January 16, 2015  2:54 pm



Dear Dr. Muthen, I ran three latent growth models: an intercept-only model, a linear LGM, and a quadratic LGM. In addition to using (a) the chi-square difference test and (b) AIC, BIC, and ABIC to select the model, can I also include a significant growth factor (i.e., linear slope, quadratic slope) as one of the criteria to select the model? If so, do you mention this in any of your publications so that I can cite it for the manuscript that I am working on? Thank you, 


Not sure I have anything you can cite, but it seems like BIC is useful here. If you start with a quadratic, the first thing that is often not needed according to BIC is the quadratic variance, while the quadratic mean may be significant. You can't do proper (z-score or) chi-square difference testing when one alternative has a non-zero variance and the other a zero variance (border problem). 

xybi2006 posted on Saturday, January 17, 2015  11:51 pm



So, normally the deviance statistic test is not appropriate for comparing a linear LGM with a quadratic LGM, if I understand your posting above correctly, and BIC is useful here. How about AIC? Also, are linear LGM and quadratic LGM nested models or not? Thanks much, 


AIC may be good also, but doesn't encourage parsimony of the models as much as BIC does. Linear is nested within quadratic, but the assumptions behind chi-square testing are not fulfilled. 

xybi2006 posted on Sunday, January 18, 2015  4:10 pm



Dear Dr. Muthen, thanks much! Some articles I read use deviance statistics to compare a linear LGM with a quadratic LGM. So these articles do not do the comparisons correctly, right? How about comparing a linear LGM with a piecewise LGM - do deviance statistics work, or do only AIC and BIC work? Thanks again, 


Whenever two models differ in terms of a variance being zero versus not, you have this potential problem. See e.g. Stoel et al. (2006) in Psych Methods. 


Is it possible to tell Mplus to analyse only 1 group in a data file? For example, my data file consists of 2 subsamples (coded 0 and 1) and I want Mplus to analyse only the subsample coded 0. If yes, what code should I use? Many thanks 


See the USEOBSERVATIONS option in the user's guide. 

EunJee Lee posted on Sunday, May 17, 2015  1:18 am



Hi, I have two basic questions about LGM and LCA models. 
1) I read your comments regarding the model comparison between linear and quadratic models. So far, I have figured out that linear is nested within quadratic, but the chi-square difference test is not appropriate for the model comparison because of the zero-constrained variance that is usual when analyzing quadratic models. In this case, I can compare with BIC and check the significance of the mean of the quadratic term. My question is: if I do not constrain any variance in the quadratic model, can I use the chi-square difference test when I make a decision about linear vs. quadratic models? Did I understand this right so far? 
2) I learned that it is wrong to use standardized values as indicators when analyzing LGM models. I wonder whether I can use z-scores when modeling LCA, or is that also inappropriate? I am trying to model an LCA with 7 items indicating levels of participation in social activities. I summed 3 of the 7 items because they are combined in the same category, theoretically. So 5 indicators were included, and all of them were transformed into z-scores because one of them was a sum of 3 items, so its scale is different from the others, and another indicator was measured in a different way from the others. It is confusing which is better to use, raw values with different scales or standardized values. Thank you. 


1) You can only use the chi-square difference test if the quadratic model has zero quadratic variance when comparing to the linear model. 2) Don't standardize. It is OK to have variables on different scales in LCA. 

EunJee Lee posted on Tuesday, May 19, 2015  6:51 pm



Thank you. Then is it OK to use z-scores when comparing each indicator's means within classes to name and characterize each class, after analyzing the LCA models with original scores? 


I would use the profile of means across the variables to interpret classes. See LCA applications in the journals. 

EunJee Lee posted on Wednesday, May 20, 2015  7:13 pm



I read several articles using latent profile analysis, but some used z-scores while others used raw data for analyzing and z-scores for interpreting, so I was confused and left messages for you. Could you please recommend a few articles using continuous variables measured on different scales as indicators of LPA, if you know of any? I tried to interpret the results with unstandardized means, but as mentioned above, one of the variables used a summed score of three activities, so I cannot say that someone with a higher score on this indicator participates in these kinds of activities more than others before comparing with standardized means. And when I looked at other comments you wrote, there was an answer saying that using z-scores is not appropriate because it analyzes not the covariance matrix but the correlation matrix. So if I didn't make any constraints like equal variance across classes, which means my model is scale free, is it okay to use z-scores? I really appreciate your taking the time. 

Aidan posted on Monday, February 20, 2017  9:46 am



Dear Dr Muthén - I have a question relating to the Muthén & Curran (1997) approach to estimating treatment effects for an intervention vs. control group (summarised above as "The Muthen-Curran (1997) Psych Methods approach is to regress slope on intercept in the treatment group, where intercept is centered at the pre-intervention time point."). If the two groups have undergone propensity score matching such that both groups in the matched dataset are balanced on relevant measured covariates and on the baseline (Time 1) measurements for the outcome of interest, is it still advisable to regress slope on intercept? If the purpose of this regression is to control for pre-existing differences between the two groups, that should be taken care of by the propensity matching, and so I wonder if the initial status is better specified as a correlate of the slope rather than as a predictor. I may be misunderstanding your technique. 


I think the propensity score acts in such a way that the modeling can be done as if randomization took place, and therefore the method would be applicable. I wouldn't correlate them, but would use initial status as a predictor (along with the propensity score). 

Aidan posted on Tuesday, February 21, 2017  4:40 am



Thank you. I'll admit that I'm still slightly uncertain about this - I think because I'm not sure about the specification of the treatment slope in a multigroup analysis, rather than simply including a dummy variable for treatment status as a predictor of intercept/slope. For example, I've used the example given in your Topic 4 handout (slides 69-77) to suggest the following model specification for my data (I have three timepoints): i s | y1@0 y2@1 y3@2; i t | y1@0 y2@1 y3@2; i (1); s (2); [i] (3); [s] (4); i WITH s; t ON i; I can see that the means and variances of i and s are constrained to equality. However, I'm not sure where in the example it is specified that the 't' treatment slope only applies to the treatment group - how do I do that? Is the model specified twice (once per group), with 't' omitted for the control group? Thank you for your patience. 


You see that the treatment slope "t" does not apply to the control group on slide 71 where the slope and the mean are zeroed out (the residual variance is already zeroed out on slide 70). 

Aidan posted on Thursday, February 23, 2017  1:50 am



Ah, I see. Thank you! 

gloria posted on Friday, March 24, 2017  12:59 pm



Hi! I have read previous messages on this thread suggesting that comparing a linear and a quadratic LGM is not appropriate whenever these two models differ in terms of a variance being zero versus not (Stoel et al., 2006). However, if I have a linear LGM with only the intercept variance specified and, as a second model, a quadratic LGM also with only the intercept variance, do you recommend using a chi-square difference test? The models are nested and the number of variance parameters is the same, so I do not see the problem with this. Is my thinking flawed? Thanks! 


You should post this question on a general discussion forum like SEMNET. 
