Process Mixture
 Liuben Siarov posted on Friday, April 08, 2011 - 4:17 am
Dear Mplus team and community,

I have encountered the following problem: I would like to decompose an observed sequence of variables into latent component processes with known shapes (but with parameters estimated within the model). The model itself is simple (with I_{i,1} being an indicator variable denoting whether subject i is affected by process 1; I use 2 processes for simplicity):

Y_{i,t} = I_{i,1} P1_{t} + I_{i,2} P2_{t}

If P1 and P2 are, e.g., Brownian motion or latent growth curves (LGC), then the error variances for Y_{t} follow. Neither process has individually varying parameters.
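Depending on the indicator pattern, the expected value of Y_{i,t} therefore takes one of three forms: P1_{t} (only process 1 active), P2_{t} (only process 2 active), or P1_{t} + P2_{t} (both processes active).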

I have attempted to estimate this as a mixture with 3 latent classes (either one or both indicators equal to one), but since Mplus apparently attempts to estimate parameters for BOTH processes in class 3, the model fails. So, my question is: is there any way to prevent estimation in class 3 of all parameters except the covariance of the processes, and instead maximize the likelihood over class membership given the parameter estimates from classes 1 and 2? I.e., I would like the estimation to proceed sequentially.

Can this be done in Mplus? Any ideas would be greatly appreciated!
 Bengt O. Muthen posted on Friday, April 08, 2011 - 8:48 am
Perhaps you mean that class 3 should have the process-specific parameters held equal to those of classes 1 and 2, respectively, and only estimate as a new parameter the covariance between the processes. If so, that can be accomplished using equality constraints on the parameters.
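For concreteness, here is a minimal sketch of the equality-constraint idea for the mean structure. It is not a tested input: it assumes 4 repeated measures y1-y4 and a linear growth curve for each process, so the variable names, time scores, and parameter labels are all hypothetical; the DATA command is omitted and the residual/covariance structure implied by P1 and P2 is left at the Mplus defaults.

VARIABLE:
  NAMES = y1-y4;
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  i1 BY y1-y4@1;                ! process 1: intercept and linear slope
  s1 BY y1@0 y2@1 y3@2 y4@3;
  i2 BY y1-y4@1;                ! process 2: intercept and linear slope
  s2 BY y1@0 y2@1 y3@2 y4@3;
  [y1-y4@0];                    ! outcome means carried by the growth factors
  i1@0 s1@0 i2@0 s2@0;          ! no individually varying process parameters
  %c#1%                         ! only process 1 active
  [i1] (a1);  [s1] (b1);
  [i2@0  s2@0];
  %c#2%                         ! only process 2 active
  [i1@0  s1@0];
  [i2] (a2);  [s2] (b2);
  %c#3%                         ! both processes; shared labels = equality constraints
  [i1] (a1);  [s1] (b1);
  [i2] (a2);  [s2] (b2);
  ! the class-3 covariance between the processes would be added here,
  ! in whatever form the chosen P1/P2 specification implies

The shared labels (a1, b1, a2, b2) impose the equality constraints, so class 3 adds no new mean parameters; the covariance between the processes would then be the only additional class-3 parameter, specified according to how P1 and P2 are parameterized.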
 Liuben Siarov posted on Friday, April 08, 2011 - 9:33 am
Dear Prof. Muthen,

Thank you for your prompt feedback; I have done this already. However, the algorithm apparently attempts to jointly maximize the likelihood with respect to the sought parameter in *all* classes (weighted, I assume, by the posterior probability of belonging to the respective class), and this fails to converge in class 3.

As an analogy, it is equivalent to estimating a three-class mixture with two free means: class 1 has mean 1, class 2 has mean 2, and class 3 has their sum. Running this model results in non-convergence, since in class 3 the model still attempts to estimate the two means. I am also aware that we can fix this with MODEL CONSTRAINT, but it quickly becomes very complex in my case (with n component processes) - not only algebraically, but also, I think, in terms of fiddling with the starting values.
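For the simple analogy above, the MODEL CONSTRAINT version might look like the following sketch (not a tested input; it assumes a single outcome y, and the variable name and parameter labels are hypothetical):

VARIABLE:
  NAMES = y;
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  [y];             ! mean of y, class-varying
  %c#1%
  [y] (m1);        ! class 1 mean
  %c#2%
  [y] (m2);        ! class 2 mean
  %c#3%
  [y] (m3);        ! class 3 mean, constrained below
MODEL CONSTRAINT:
  m3 = m1 + m2;    ! class 3 mean is the sum of the other two

With n component processes there would be one such constraint for every process-specific parameter in every multi-process class, which is where the algebra and the starting values become painful.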

I hope I've made my predicament clearer. Thank you again!
 Bengt O. Muthen posted on Friday, April 08, 2011 - 8:49 pm
I am not sure I can be helpful, but let me ask:

Regarding your second paragraph, yes, if the class 3 mean is the sum of the means of the other classes you would handle it by Model Constraint. But it didn't sound like this was the situation for your model (although you surely know that better than I) - I thought the 3rd class had mostly the same parameters as the first 2 classes, and added another mathematically independent parameter.

Regarding your first paragraph, yes, there is a joint maximization going on. How do you see that it is class 3 that causes the non-convergence?