Anonymous posted on Monday, December 01, 2003 - 7:08 am
I am trying to run a simple linear growth model for repeated latent measures:

y1 BY x1 x2 x3;
y2 BY x4 x5 x6;
y3 BY x7 x8 x9;
y4 BY x10 x11 x12;
However, the model is not converging. I tried freeing the last two time scores, but it still wouldn't converge. I also tried increasing the iterations, with no luck. Could you please let me know what the problem might be? Thanks.
I am a little confused because you mention freeing the last two time scores, but I don't see a growth model, which is what that implies to me. I see four factors, which I assume are the repeated measures over time. If your model does not converge, see the suggestions in the Mplus User's Guide, pages 160-162. Also, see Example 22.4 on page 218 of the Mplus User's Guide. It shows a multiple indicator growth model, which is what I think you want to do.
If you have not done so, you should fit the CFA model at each time point separately before you put them together. You need measurement invariance over time if you want to study development of the factors over time.
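A multiple indicator linear growth specification along these lines might be sketched as follows. This is only a sketch using the factor and indicator names from the original post, not the exact manual example; the shared equality labels impose loading and intercept invariance across time, and the linear time scores 0-3 are an assumption:

```
MODEL:
  ! loadings held equal across time via the shared labels (1-2)
  y1 BY x1
        x2-x3 (1-2);
  y2 BY x4
        x5-x6 (1-2);
  y3 BY x7
        x8-x9 (1-2);
  y4 BY x10
        x11-x12 (1-2);
  ! indicator intercepts held equal across time
  [x1 x4 x7 x10] (3);
  [x2 x5 x8 x11] (4);
  [x3 x6 x9 x12] (5);
  ! linear growth on the factors; factor intercepts and the i mean fixed at 0
  i s | y1@0 y2@1 y3@2 y4@3;
  [y1-y4@0 i@0 s];
```

Fitting the CFA at each time point first, as suggested above, and only then adding the equality labels and the growth part, makes it easier to see which step introduces the nonconvergence.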
Anonymous posted on Monday, December 08, 2003 - 6:25 am
Following up on the previous mail, I am still not sure how to go about this. I have 3 latent variables measured at 4 time points. Before running the growth model, I would like to fit the latent variables X(1-4), Y(1-4), and Z(1-4) as z ON x y; y ON x;. Do I first have to fit x, y, and z at each time point simultaneously and see if the models are similar? Also, do you mean that I have to show that all the manifest variables for x, y, and z load onto their latent variables equally (measurement invariance of factor loadings, intercepts, and latent factor means)? Wouldn't it be adequate to just show that each set of manifest variables predicts its corresponding latent variable to a certain level, for example R-square invariance or something (regardless of the exact values of factor loadings, intercepts, etc.)?
If x, y, and z represent the same constructs over time, which I assume they do or you would not want to do a growth model, you would definitely want to establish measurement invariance over time with respect to intercepts and factor loadings. If the constructs are not the same, then modeling their development would not make sense.
I am attempting to identify latent growth factors for PTSD symptom counts in youth over the course of 4 timepoints. Relevant input instructions are:
COUNT ARE t1 t2 t3 t4;
CLASSES = c(4);
ANALYSIS:
  TYPE = MIXTURE MISSING;
  STARTS = 1000 20;
  STITERATIONS = 20;
  ALGORITHM = INTEGRATION;
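For reference, this fragment might sit in a complete input along the following lines. The DATA and MODEL parts here are assumptions for illustration only (the file name, linear time scores 0-3, and growth specification are hypothetical, not from the original post):

```
DATA:     FILE IS ptsd.dat;            ! hypothetical file name
VARIABLE: NAMES = t1 t2 t3 t4;
          COUNT ARE t1 t2 t3 t4;
          CLASSES = c(4);
ANALYSIS: TYPE = MIXTURE MISSING;
          STARTS = 1000 20;
          STITERATIONS = 20;
          ALGORITHM = INTEGRATION;
MODEL:    %OVERALL%
          i s | t1@0 t2@1 t3@2 t4@3;   ! assumed linear growth for the counts
OUTPUT:   TECH1;                       ! TECH1 numbers the parameters
```

Requesting TECH1 is useful here because the error messages below refer to parameters by number.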
Even after increasing the starts I got the following error message:
WARNING: WHEN ESTIMATING A MODEL WITH MORE THAN TWO CLASSES, IT MAY BE NECESSARY TO INCREASE THE NUMBER OF RANDOM STARTS USING THE STARTS OPTION TO AVOID LOCAL MAXIMA.
After adding the quadratic, I get the following error message:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.308D-11. PROBLEM INVOLVING PARAMETER 12.
Entropy and BIC values generally improve for the 3-class, 4-class, and quadratic solutions, but I am concerned about this error message. Any suggestions? The Baltimore workshop was incredibly helpful, and I have read through the book and GMM articles but haven't been able to determine how to move forward with the analyses. Thank you in advance for your assistance!
The WARNING is always issued as a reminder that you should have several replicated best loglikelihood values in your output.
The STANDARD ERROR message must be addressed - for example, parameter 12 (which you can find in Tech1) may be specific to a class with too few subjects to support its estimation. It is hard to say anything more general about this without seeing it. So if my comments don't help, send input, output, data, and license number to email@example.com.
Thank you so much for your prompt and helpful response! If solutions with more than 1 class repeatedly result in this WARNING even after increasing STARTS, would you typically conclude that the model is not valid (even if entropy and BIC improve), qualify your reporting in the results section by noting the warning, constrain parameters, or accept the 1 class solution?
First, the WARNING is always issued, so if you use STARTS = 1000 20; as you do and you get replicated best loglikelihoods, then you are OK and can ignore the warning.
Second, the STANDARD ERROR/non-identification message does not tell you how many classes you should work with. You have to understand why it happens before you can go on and decide on the number of classes.
That definitely clears things up! I was just confused about the WARNING message--I didn't realize it was always issued. Your STANDARD ERROR comments also make perfect sense and are in keeping with what I had read previously on the message boards.
In the context of a fully latent LGM (as in the "multiple indicator" example at the top of manual page 546), the usual specification is to fix the intercepts of f1-f4 and the mean of the LGM intercept factor to zero: [f1-f4@0 i@0 s];
This is because f1-f4 are estimated on the basis of the y11-y24 intercepts all being freely estimated (and constrained to equality across time points). This implies that at least one of the f1-f4 means should be constrained to zero for identification (given the equality constraints).
However, for the CFA part of the model, a referent indicator can be chosen for each factor and its intercept fixed to zero (the same indicator whose loading is fixed to 1), thus allowing the f1-f4 means to be freely estimated. Then, since this is an LGM, the f1-f4 means are still constrained to 0, but in this case the LGM intercept mean can be estimated.
What I would like to know is whether there are any inconveniences or limitations to doing it this way versus doing it as on page 546. To be clear, only two lines of the full input would change: [y11@0 y12@0 y13@0 y14@0]; and [f1-f4@0 i s];. Thank you very much in advance.
What I don't get in this case is what the substantive advantage would be of working with the LGM intercept (i) constrained to zero when this parameter can be freely estimated. Or am I missing something obvious?
You get the same information in both approaches. There is no substantive advantage or disadvantage to either approach. You don't learn anything new by the alternative approach of estimating the i mean [i]. Typically [i] carries no important information anyway but is only related to the scale of the variable (say 1-10 or 10-100). Think of the single-indicator growth model - the i mean is simply the same as the outcome mean at the time point with time score 0. Here the outcome intercept, say [y], is typically fixed at zero at all time points, but you could instead hold them equal across time points and fix [i@0] and get the same results. - What have you gained by estimating [i]? Nothing - you have simply moved [y] into [i].
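The two equivalent parameterizations described here can be sketched for a single-indicator linear growth model (the variable names y1-y4 and the time scores are hypothetical):

```
! Parameterization 1 (typical): outcome intercepts fixed at 0, [i] estimated
i s | y1@0 y2@1 y3@2 y4@3;
[y1-y4@0];
[i s];

! Parameterization 2: outcome intercepts held equal (label 1), [i] fixed at 0
i s | y1@0 y2@1 y3@2 y4@3;
[y1-y4] (1);
[i@0 s];
```

Both versions fit identically; the estimated [i] in the first is simply the common [y] intercept of the second.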
The default approach in Mplus is used because it is in line with the classic Sorbom multiple-group approach where one group is a reference group with factor means zero and intercepts are held equal across groups. In the growth context, group is replaced by time point.
Your example with the single-indicator growth model is very helpful. But let me push it further, if anyone has been following this.
Doing what you propose, the information that you get in [i] is the same information that you obtain in the constrained y intercepts from the second case (with [i@0]). In other words, [i] in the first (typical) case is equal to the [y]s in the second case.
What I was missing is that in the default fully latent LGM (with [i@0]) you do get the same information in the intercept of the referent indicator of the factors... Sorry for that.
I am having a surprisingly difficult time finding a reference for addressing my specific issue, perhaps because I am not using the best search terms to find it.
I have 4 measurement occasions of a latent construct with 6 indicators. I also have 3 experimental conditions. Comparing a single time point across groups is very easy, but I also want to compare changes over time, with Time 1/condition 1 as the reference point:
Time 2 vs Time 1 (for condition 2 vs condition 1 and condition 3 vs condition 1)
Time 3 vs Time 1 (for condition 2 vs condition 1 and condition 3 vs condition 1)
Time 4 vs Time 1 (for condition 2 vs condition 1 and condition 3 vs condition 1)
I don't want to do a single trajectory because each time point is marked by an experimentally induced event, which can cause abrupt increases and decreases in scores.
Should I do latent change scores for each comparison with a grouping analysis? Where can I find example syntax? I'm even having a difficult time finding an example of comparing two measurement occasions without a grouping variable, from which I think I would easily be able to generalize.
Why not do a 2-time point factor analysis and test for factor mean differences?
That means you have 6+6 indicators and 2 factors. Hold the measurement parameters equal across time, let the factors correlate, fix the factor mean at zero for the first time point, and estimate the factor mean for the second time point.
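That setup might be sketched as follows, with u1-u6 as the time-1 indicators and v1-v6 as the same items at time 2 (these variable and factor names are hypothetical):

```
MODEL:
  f1 BY u1
        u2-u6 (1-5);     ! loadings held equal across time via shared labels
  f2 BY v1
        v2-v6 (1-5);
  [u1 v1] (6);           ! item intercepts held equal across time
  [u2 v2] (7);
  [u3 v3] (8);
  [u4 v4] (9);
  [u5 v5] (10);
  [u6 v6] (11);
  f1 WITH f2;            ! let the factors correlate
  [f1@0 f2];             ! time-1 factor mean fixed at 0; time-2 mean estimated
```

The estimated [f2] is then the latent mean change from time 1 to time 2. Combining this with a GROUPING option for the three experimental conditions would allow that change to be compared across conditions.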
As a followup to my question above, I found a paper that says a latent difference approach is "...less restrictive with regard to [Measurement Invariance] than the latent means model..." (Geiser, Burns & Servera, 2014). I can only get partial scalar invariance across time points, so I think I should use a latent difference approach.