Hi, and thanks again for the answer to my previous post. I had a quick question that I think merited another thread, so I'll pose it here; it might be simple, it might not.
Currently, I'm working on an analysis where I compare the fit of different specifications concerning the number and structure of latent variables for the same set of indicators. Essentially, it investigates whether all the indicators measure the same thing, or whether they measure different, associated latent constructs. As such, it's a mixture of confirmatory and exploratory methods, with the exploratory part coming in when maximizing the fit by increasing the number of classes on each variable.
Previously, I've specified the models with multiple latent variables first, and pushed up the number of latent classes in each simultaneously until the fit stopped improving. I wondered recently whether this was an appropriate approach, or whether it would be better to assess the number of classes in each variable independently first, and then include them in the multiple-latent-variable models used to assess fit.
This would be easier than the first method, but I am concerned that the number and structure of classes identified separately would not adequately capture each latent construct as it is associated with the others.
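Just to make the procedure concrete for anyone else reading, here is a toy sketch in Python of the enumeration loop I described, with simulated data standing in for my indicators. Everything here (the EM routine, the item counts, the class counts, the use of BIC) is illustrative, not my actual analysis:

```python
# Toy sketch: fit a latent class model to dichotomous indicators with EM,
# then compare BIC as the number of classes grows. Hypothetical example,
# not the real analysis; real work would use dedicated software.
import numpy as np

def fit_lca(data, n_classes, n_iter=200, seed=0):
    """EM for a latent class model with binary indicators.
    Returns (log-likelihood, number of free parameters)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Random start: class weights and item-response probabilities.
    weights = np.full(n_classes, 1.0 / n_classes)
    probs = rng.uniform(0.25, 0.75, size=(n_classes, p))
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent.
        log_lik_nk = (data @ np.log(probs).T
                      + (1 - data) @ np.log(1 - probs).T
                      + np.log(weights))
        log_norm = np.logaddexp.reduce(log_lik_nk, axis=1, keepdims=True)
        post = np.exp(log_lik_nk - log_norm)
        # M-step: update weights and conditional response probabilities.
        weights = post.mean(axis=0)
        probs = (post.T @ data) / post.sum(axis=0)[:, None]
        probs = np.clip(probs, 1e-6, 1 - 1e-6)
    loglik = log_norm.sum()
    n_params = (n_classes - 1) + n_classes * p
    return loglik, n_params

# Simulate two latent classes with distinct response profiles on 6 items.
rng = np.random.default_rng(1)
true_class = rng.integers(0, 2, size=500)
item_p = np.where(true_class[:, None] == 0, 0.8, 0.2)
data = (rng.uniform(size=(500, 6)) < item_p).astype(float)

# Push up the number of classes until the penalized fit stops improving.
for k in (1, 2, 3):
    ll, npar = fit_lca(data, k)
    bic = -2 * ll + npar * np.log(len(data))
    print(f"{k} classes: BIC = {bic:.1f}")
```

With well-separated simulated classes like these, the 2-class BIC should come out lowest; my question is essentially whether this loop should be run per latent variable in isolation or within the full multi-variable model.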
Any thoughts on this would be greatly appreciated.
When you say "classes in each variable", I assume you mean "classes for each latent variable". So, this sounds like a factor mixture analysis. I wonder which of the many versions of that analysis you are using. I am just writing an overview for the Maryland mixture conference, trying to summarize the various ways to go about this. Also, I assume that you are working with multiple latent class variables, one for each latent variable (factor).
The answer depends on how you think the latent classes come about. If it is the profiles for the items loading on a specific factor that determine the latent classes for that factor, you have one answer. If it is the combined profile across different items sets, you have another answer.
Hi, thanks for the input. I'm not sure which version one would label it. I'd like to see your paper if you're OK with letting me see it; it might help.
I have a conceptual model with competing theoretical expectations about how many factors there are for a given set of indicators, and am running a confirmatory analysis. I'm taking a set of indicators and running a model with, say, all of them loading on one factor, then other models with them loading differentially on different numbers of factors (which I allow to be associated), as specified theoretically. Then, based on model fit between the various models, I want to make a judgment about which theoretical model fits the data best.
If I understand you correctly, the first generating process is the one where I would assess the number of classes in each variable independently, and the second is where I would look at the number in each variable simultaneously across the multiple factors? For example, let's say I had an analysis of obesity (I'm not a health researcher, I'm completely making this up ad hoc) and had indicators for, say, genetic elements (family history), childhood experiences, and adult socioeconomic status. My first model says they're all part of the same factor for obesity, and all of them load on one factor. My second model says they are all separate, and each set of indicators loads on its own, associated latent variable. So, since I think the factors are different but influence each other, I should assess the maximum number of classes for each factor in the context of the three-variable model, correct?
That was a bit convoluted; I hope it made sense. Perhaps there's a better example.
I don't hear any reason for using latent classes - perhaps you are using the term latent classes in another way than I am. Take a look at my recent factor mixture analysis papers posted on our web site under Recent Papers.
Hi Bengt: Thanks again. I took a look at your papers on factor mixture analysis, and I don't think this is quite what I'm after. If I read them right, the continuous factor represents a scaling within each class, which does provide an interesting path for future work.
In my project, I have a list of theoretical models concerning how many latent variables there should be. What I'm interested in doing here is testing, as in a confirmatory factor analysis, which one is best, that is how many latent variables represent the data best.
Since the indicators are dichotomous, latent trait analysis is an option, but I wanted to avoid (rightly or wrongly) assuming continuous latent variables and using tetrachoric correlations (does Mplus use the two-step WLS technique for this? I perused my Version 3 User's Guide but did not see it).
So, my idea for the multiple-latent-variable models was to simply specify multiple latent variables using LCA, allowing the classes in each to be associated. This leads to the problem of deciding on the number of classes in each variable.
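For reference, here is roughly what I have in mind, sketched in Mplus-style syntax for the obesity example above. I'm writing this from memory rather than from the Version 3 User's Guide, so the exact statements, the LOGLINEAR parameterization, and the variable names are all assumptions on my part:

```
TITLE:     Sketch: three associated latent class variables (hypothetical)
VARIABLE:  NAMES = u1-u9;
           CATEGORICAL = u1-u9;
           CLASSES = c1(3) c2(3) c3(3);   ! class counts to be decided
ANALYSIS:  TYPE = MIXTURE;
           PARAMETERIZATION = LOGLINEAR;
MODEL:     %OVERALL%
           c1 WITH c2;  c1 WITH c3;  c2 WITH c3;  ! class associations
MODEL c1:  %c1#1%
           [u1$1 u2$1 u3$1];   ! thresholds for c1's indicators vary
                               ! across c1's classes; similarly u4-u6
                               ! for c2 and u7-u9 for c3
```

The open question is then whether the class counts in the CLASSES statement should be chosen for each latent class variable in isolation or within this joint model.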
If all this is wildly inappropriate, I could just go back to the continuous factor model.
You don't have to use tetrachorics when analyzing dichotomous indicators of continuous latent variables. Mplus also offers standard ML estimation in line with conventional Item Response Theory.
If you switch to LCA and categorical latent variables, you are asking a different question involving finding groups of individuals instead of finding dimensions of variation. That's ok, and you can work with several latent class variables.