Erika Wolf posted on Wednesday, May 14, 2008 - 1:31 pm
Hi, I've been running CFAs and SEMs for a while now but am new to running biometric analyses. I'd like to run a standard ACE model on two latent factors, each with multiple categorical indicators. All of the syntax examples that I'm finding seem to show how to run an ACE model on single indicators. Can you point me in the direction of an example of syntax for an ACE model on latent constructs? I know this is what the Harden paper did, but there are no related syntax examples. Thanks!
I don't have an example but you would treat the factors in the same way as the observed variables are treated in Example 5.18. You would just have one extra level.
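A hedged sketch of what that extension might look like, following the structure of Example 5.18 with one extra level (all variable names here are hypothetical, and the scale setting shown is only one possible choice):

```
TITLE:   ACE model for a latent factor (sketch; names hypothetical)
VARIABLE:
  NAMES = u11-u15 u21-u25 zyg;
  CATEGORICAL = u11-u15 u21-u25;
  GROUPING = zyg (1 = mz  2 = dz);
MODEL:
  INT1 BY u11-u15* (3-7);            ! twin 1 factor, loadings labeled
  INT2 BY u21-u25* (3-7);            ! twin 2 loadings equal via shared labels
  A1 BY INT1* (1);  A2 BY INT2* (1); ! additive genetic paths, held equal
  C1 BY INT1* (2);  C2 BY INT2* (2); ! shared environment paths, held equal
  A1-C2@1;                           ! standardize the ACE factors
  A1 WITH A2@1;  C1 WITH C2@1;       ! MZ group: genetic correlation 1
  A1 WITH C1@0;  A1 WITH C2@0;       ! A and C uncorrelated
  A2 WITH C1@0;  A2 WITH C2@0;
  INT1@1;  INT2@1;                   ! factor residuals play the role of E
MODEL dz:
  A1 WITH A2@.5;                     ! DZ group: genetic correlation .5
```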
Erika Wolf posted on Wednesday, May 14, 2008 - 6:32 pm
I've simplified my analysis to just one latent factor (INT, derived from 5 categorical indicators) with the AC model until I get this figured out. When I run the analysis I get no parameter estimates for the C1 BY INT1 and C2 BY INT2 statements (asterisks appear for the standard errors and everything else is 0).
I set each threshold to be freely estimated and held equal across twins. Each indicator loading is also freely estimated and held equal across twins (omitted from the syntax below).
MODEL:
  A1 BY INT1* (11);  A2 BY INT2* (11);
  C1 BY INT1* (12);  C2 BY INT2* (12);
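For reference, the equality constraints described above (the part omitted from the syntax) typically look something like this; this is only a sketch, and the indicator names and labels are hypothetical:

```
  INT1 BY u11-u15* (5-9);        ! twin 1 loadings, labeled
  INT2 BY u21-u25* (5-9);        ! twin 2 loadings equal via shared labels
  [u11$1] (t1);  [u21$1] (t1);   ! first threshold held equal across twins
  [u11$2] (t2);  [u21$2] (t2);   ! and so on for the remaining thresholds
```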
Erika Wolf posted on Wednesday, May 21, 2008 - 4:07 pm
With your help, I was able to make my model run and work well. So I proceeded to the analysis I really wanted to run, an ACE model on two different latent factors (INT and EXT), one with 5 categorical indicators and the other with 3 categorical indicators. I ran the ACE model first on INT and EXT separately, and both models worked well in separate analyses. However, when I combined them into one analysis (so there are 4 A, C, and E factors -- one each for INT Twin 1 and Twin 2 and for EXT Twin 1 and Twin 2), I get one negative genetic loading (A loads on INT negatively). The loading is the same value as when I ran the model with just the INT factor, but it is negative. Any suggestions on how to remedy this? Thanks!
I'm running ACE analyses on variables that are factors, with 3 items per factor as indicators. The items are ordinal (5-point scale). Initially I treated the items as if they were continuous and tested the ACE, AE, CE, and E models using ML. The chi-squares and dfs made sense; that is, df increased by 1 with each parameter that was set to zero, and chi-square tended to increase as parameters were dropped.

But now I'm running into difficulty when I re-do the analyses using MLR and treating the indicators as categorical (ordinal) variables. My script is essentially identical to Example 7.29 (p. 181) of the fifth edition of the Mplus manual (two-group IRT model for factors with categorical factor indicators using parameter constraints). The script runs OK and all of the results make sense -- except for the chi-square values and dfs, which seem to be wrong:

Model   Chi-sq      df
ACE     1013.945    31183
AE      1022.960    31185
CE       997.175    31182
E        986.820    31181
Any advice would be greatly appreciated. Steven Taylor
I'm not sure I see the problem. The model with the highest degrees of freedom is the most restrictive and should have the highest chi-square. Ordering your results from highest to lowest df -- AE (31185, 1022.960), ACE (31183, 1013.945), CE (31182, 997.175), E (31181, 986.820) -- chi-square decreases as degrees of freedom decrease, as expected.
I am also trying to conduct a univariate twin analysis using binary indicators to find the best-fitting ACE model. I have looked through and replicated the syntax/results from the Prescott (2003) example. My question: with categorical data you do not define an "E" component in the syntax, and she specifies that to compare the full ACE model with the CE model, for example, you just remove the lines of syntax that define A. This was specified in the context of continuous indicators. Is this also true of categorical indicators (so that you would basically just be specifying an "A" model)? Do you then use the DIFFTEST option to compare models and find the best-fitting one? Finally, if this protocol is correct, how do you then define just the "E" model? If anyone has any advice, or if there is a reference I can be pointed toward, I would really appreciate it.
In regard to my post above, I have been able to successfully estimate the ACE, AE, CE, and E models using categorical data, but cannot get model comparison estimates using the DIFFTEST option. I saved the derivatives from the ACE model and used this file in the ANALYSIS section of the syntax for the AE and CE models. The error I am receiving is:
THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL MAY NOT BE NESTED IN THE H1 MODEL. DECREASING THE CONVERGENCE OPTION MAY RESOLVE THIS PROBLEM.
The only change that I made when going from the ACE model to the AE model, for example, was removing all syntax specifying the C component. Thus the df increases by 1 in the AE and CE models. Is there any clear reason why I am getting this error message?
There are two requirements for nestedness: fewer parameters (higher df) and a higher fitting-function value (worse fit). You can check the latter by requesting TECH5 and looking at the last value in the left column. Perhaps you have switched the less restrictive and the more restrictive models.
If this doesn't clear it up, send relevant info to support.
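For reference, the two-step DIFFTEST setup looks like this; the derivatives are saved from the less restrictive model and read by the more restrictive one (the filename here is hypothetical):

```
! Step 1: run the less restrictive model (e.g., ACE) and save derivatives
ANALYSIS:  ESTIMATOR = WLSMV;
SAVEDATA:  DIFFTEST IS deriv.dat;

! Step 2: run the nested model (e.g., AE) against the saved derivatives
ANALYSIS:  ESTIMATOR = WLSMV;
           DIFFTEST = deriv.dat;
OUTPUT:    TECH5;    ! check the final fitting-function value here
```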
Thank you again for your help. The problem was that although the AE model met the nestedness criterion of having fewer parameters, its fitting-function value was slightly lower than the ACE model's. I think this is because the C parameter in the ACE model was almost zero.