

What is the function of the COUNT option in the VARIABLE command? I ran a growth mixture model with and without the COUNT option for the variables in use, but found no difference between the two. I thought the COUNT option invoked a Poisson distribution instead of a multivariate normal distribution. Did I miss something?


With high counts, the Poisson distribution approaches the normal distribution. If you would like us to look at your analyses further, please send your inputs, data, outputs, and license number to support@statmodel.com. 
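The high-count approximation mentioned in the reply can be checked directly. A minimal Python sketch, with rates chosen only for illustration: it compares the exact Poisson probability at the mean with the density of a normal distribution having the same mean and variance.

```python
import math

def poisson_pmf(k, lam):
    # Exact Poisson probability P(X = k); lgamma keeps it numerically stable.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, mean, var):
    # Density of the approximating normal with matched mean and variance.
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# For a high rate (lam = 100) the Poisson pmf at its mean is very close to the
# N(lam, lam) density; for a low rate (lam = 2) the match is noticeably worse.
for lam in (2, 100):
    p = poisson_pmf(lam, lam)
    n = normal_pdf(lam, lam, lam)
    print(f"lambda={lam:>3}: Poisson={p:.5f}  Normal={n:.5f}  rel.err={abs(p - n) / p:.4f}")
```

This is why treating high-count outcomes as continuous can produce results nearly identical to a Poisson specification.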


It seems that a growth mixture model with count data uses MLR estimation, the same as a GMM with continuous data. Here are my updated questions about the COUNT option in GMM:
1. If count variables are specified in a GMM, does it use a Poisson distribution as well as MLR estimation?
2. If not, how can I specify an error distribution other than multivariate normal?
3. Does Mplus use a link function, such as the exponential link, for count data by default?
4. If not, is there any way to specify a link function for the count data in Mplus?


The answer to questions 1 and 3 is yes. Specifying the variable as a count produces the proper error distribution and the exponential link. The MLR estimator is the maximum-likelihood estimator with robust standard errors and can be used with any link/error type.


I ran a count-outcome growth model embedded in a continuous latent path model. I couldn't find any annotation as to which tests correspond to the independence model and/or the model-fit chi-squares. Any thoughts?

TESTS OF MODEL FIT
Loglikelihood
  H0 Value                          -40489.267
  H0 Scaling Correction Factor           1.756
    for MLR
Information Criteria
  Number of Free Parameters                112
  Akaike (AIC)                       81202.534
  Bayesian (BIC)                     81732.275
  Sample-Size Adjusted BIC           81376.600
    (n* = (n + 2) / 24)
Chi-Square Test of Model Fit for the Count Outcomes**
  Pearson Chi-Square
    Value                            3783.913
    Degrees of Freedom                   9912
    P-Value                            1.0000
  Likelihood Ratio Chi-Square
    Value                             873.551
    Degrees of Freedom                   9912
    P-Value                            1.0000
Chi-Square Test for MCAR under the Unrestricted Latent Class Indicator Model for the Count Outcomes
  Pearson Chi-Square
    Value                             389.968
    Degrees of Freedom                   2205
    P-Value                            1.0000
  Likelihood Ratio Chi-Square
    Value                             219.265
    Degrees of Freedom                   2205
    P-Value                            1.0000

Thanks! Susan


With count outcomes, you don't get the traditional chi-square and related fit statistics. Nested models are tested using -2 times the loglikelihood difference, which is distributed as chi-square. Instead of a difference in degrees of freedom, the difference in the number of free parameters is used.
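With the MLR estimator, the -2 loglikelihood difference described above additionally needs a scaling correction built from each model's scaling correction factor (the procedure documented on statmodel.com for loglikelihood-based difference testing). A sketch with made-up loglikelihoods, correction factors, and parameter counts:

```python
import math

def scaled_lr_test(L0, c0, p0, L1, c1, p1):
    """Scaled likelihood-ratio difference test for MLR loglikelihoods.

    L0, L1 : loglikelihoods of the nested (H0) and comparison (H1) models
    c0, c1 : their scaling correction factors (printed next to the loglikelihood)
    p0, p1 : their numbers of free parameters
    """
    df = p1 - p0                              # difference in free parameters
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)      # difference-test scaling correction
    trd = -2 * (L0 - L1) / cd                 # scaled chi-square statistic
    return trd, df

# Hypothetical values for a nested pair of models.
trd, df = scaled_lr_test(L0=-11750.0, c0=0.95, p0=10,
                         L1=-11739.9, c1=0.96, p1=12)
print(f"scaled chi-square = {trd:.2f} on {df} df")
```

The resulting statistic is referred to a chi-square distribution with df equal to the difference in free parameters.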


OK, this makes sense. Just one point to clarify. When I get the following output with a combined count/continuous outcome model:

TESTS OF MODEL FIT
Loglikelihood
  H0 Value                          -11739.929
  H0 Scaling Correction Factor           0.961
    for MLR

is H0 referring to the independence model, or to the actual hypothesized model? I just want to be sure that when I compare nested models I am comparing actual model fit. Thanks!


The H0 model is the model specified using the MODEL command. 


Hi, I need to estimate a simple unadjusted growth curve model with count indicators (number of chronic conditions over 9 waves of interviews). The model (intercept, slope, model fit, the estimated trajectory compared to sample means) looks fine if I specify the model as normal, that is, if I do not specify the indicators as COUNT. If I specify a Poisson distribution, the model converges but the results are strange: the intercept and slope make little sense, and the estimated trajectory, while reasonable in shape, is consistently above the sample means. I checked the input data, tried the MLR and WLSMV estimators, etc., with no difference. Do you know what might be going on? Many thanks in advance for your reply!


Remember that for counts, the estimated parameters, such as the means of the intercept and slope growth factors, are on the log-rate scale, where the rate is the Poisson mean. This implies that you have to exponentiate those estimated means to get them on the rate scale instead of the log-rate scale. The sample mean you refer to treats the outcome as continuous, not Poisson, I assume. Note also that the Poisson may not fit the data as well as, for instance, a negative binomial model or a zero-inflated Poisson (ZIP). You can try those models as well.
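The scale point above can be illustrated numerically; the growth-factor means below are hypothetical, not from any model in this thread. With the exponential link, the model-implied expected count at each time point is the exponentiated log rate, and the slope acts multiplicatively on the rate scale.

```python
import math

# Hypothetical growth-factor means on the log-rate scale, as reported by Mplus.
int_mean = 1.10    # intercept mean: log rate at time 0
slope_mean = -0.15 # slope mean: change in log rate per unit of time

# Exponentiate the model-implied log rate to interpret it as an expected count.
for t in range(4):
    log_rate = int_mean + slope_mean * t
    print(f"t={t}: log rate = {log_rate:.3f}  rate = {math.exp(log_rate):.3f}")

# On the rate scale the slope is a rate ratio: each unit of time multiplies
# the expected count by exp(slope_mean).
print(f"rate ratio per unit time = {math.exp(slope_mean):.3f}")
```

This is why raw intercept and slope estimates for a count model can look implausible until they are exponentiated.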


Dear Bengt, Many thanks for your reply! Anna 


I am running an LCGA for a count outcome using a ZIP model. When I look at the plots of the estimated trajectory values, they do not appear to follow the specified functional form. Instead they look more like observed rate values, although they do not perfectly match the sample means either. I am trying to figure out why this is the case. Does this have something to do with the inflation adjustment?


It's a difference in scales. The estimated model refers to the log rate developing, say, linearly over time. But the plot is for the rate, that is, the mean, which is exp(log rate). And then the rate gets multiplied by the probability of not being at zero (see our Topic 2 handout on ZIP).
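A small numeric sketch of the two-step transformation just described; both quantities are hypothetical values at a single time point:

```python
import math

# Hypothetical ZIP quantities at one time point.
log_rate = 1.2   # log rate from the count part of the model
p_zero = 0.30    # probability of being in the structural-zero (inflated) class

rate = math.exp(log_rate)       # Poisson rate for the non-inflated part
zip_mean = (1 - p_zero) * rate  # overall expected count, which is what the plot shows
print(f"rate = {rate:.3f}, ZIP mean = {zip_mean:.3f}")
```

Because the plotted value mixes the exponentiated trajectory with the zero-class probability, it need not look like the specified functional form on the log-rate scale.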


I am running a multiple-group latent growth model using a ZIP. I found syntax for running the model for multiple groups using the MIXTURE and KNOWNCLASS commands. However, I cannot figure out how to run a model where the slope and intercept factors are constrained to be equal, in order to test for significant differences in these parameters across groups. Any help would be appreciated. I have pasted my model syntax below. Thank you!

USEVARIABLES ARE HE12YQ33 HE13Q24D HE14Q24D HE15Q24D
    HE16Q24D HE17Q24D;
COUNT ARE HE12YQ33 HE13Q24D HE14Q24D HE15Q24D
    HE16Q24D HE17Q24D (i);
CLASSES = civlead (2);
KNOWNCLASS = civlead (LR29Q5TD=0 LR29Q5TD=1);
ANALYSIS:
TYPE = MIXTURE;
ALGORITHM = INTEGRATION;
MODEL:
%OVERALL%
INT SLOPE | HE12YQ33@0 HE13Q24D@1 HE14Q24D@2
    HE15Q24D@3 HE16Q24D@4 HE17Q24D@5;
INTI SLOPEI | HE12YQ33#1@0 HE13Q24D#1@1 HE14Q24D#1@2
    HE15Q24D#1@3 HE16Q24D#1@4 HE17Q24D#1@5;
slope@0 slopei@0;

Jon Heron posted on Wednesday, September 12, 2012  12:51 pm



Hi Laura, you need to add an extra bit:

MODEL civlead:
%civlead#1%
[int] (a1);
[slope] (a2);
[inti] (a3);
[slopei] (a4);
int WITH inti (a5);
int (a6);
inti (a7);
%civlead#2%
[int] (b1);
[slope] (b2);
[inti] (b3);
[slopei] (b4);
int WITH inti (b5);
int (b6);
inti (b7);

Then use MODEL TEST to compare the a and b parameters. Or, my preference would be to use MODEL CONSTRAINT to derive some new parameters that give the estimated difference (plus SE) between your two classes (since an SE is better than a p-value alone). Best wishes, Jon
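Jon's point about deriving a difference with its SE can be illustrated with a back-of-the-envelope Wald test. The estimates and SEs below are invented, and treating the two group estimates as independent is a simplification of what MODEL CONSTRAINT's delta method would use (it reduces to this when the covariance between the estimates is zero):

```python
import math

# Hypothetical intercept-mean estimates (log-rate scale) in the two known classes.
a1, se_a1 = 0.80, 0.10   # class 1 intercept mean and its SE
b1, se_b1 = 0.55, 0.12   # class 2 intercept mean and its SE

# Wald z for the difference, assuming the two estimates are independent.
diff = a1 - b1
se_diff = math.sqrt(se_a1 ** 2 + se_b1 ** 2)
z = diff / se_diff
print(f"difference = {diff:.3f}, SE = {se_diff:.3f}, z = {z:.3f}")
```

Reporting the difference with its SE, rather than a p-value alone, also conveys the magnitude and precision of the group effect.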
