I have modelled 3 formative latent factors caused by 6 latent variables (which are reflective, each with its own indicators). The three formative factors predict another latent factor, which is again reflective.
All indicators are 5-point Likert scales.
1- I test the model twice: once with all indicators specified as categorical and the WLSMV estimator, and once with nothing declared categorical. Results are similar, but factor loadings are slightly higher in the categorical model. Could you please explain why?
2- How would I interpret the standardised regression weights in the categorical model? Do I interpret them as odds, as with a binary dependent variable, or do I read them the same way as in the non-categorical model?
3- Do I report the correlation matrix between latent variables the same way as for the non-categorical model? Should I change anything, or mention anything?
p.s. I thought categorical variables could only be dependent variables, but my independent variables (the indicators of the reflective factors) can also be defined as categorical! Am I missing something?
1. I assume you are comparing standardized coefficients, given the differences in metrics. See also
Muthén, B., & Kaplan, D. (1985). A comparison of some methodologies for the factor analysis of non-normal Likert variables. British Journal of Mathematical and Statistical Psychology, 38, 171-189.
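A quick numeric illustration of why the categorical (polychoric-based) loadings tend to come out higher: Pearson correlations computed on coarsened Likert scores are attenuated relative to the correlation of the underlying continuous responses, and WLSMV works from polychoric correlations that estimate the latter. A minimal numpy sketch, with an illustrative correlation and equal-probability thresholds (not your data):

```python
# Sketch: Pearson correlations (and hence linear-CFA loadings) are
# attenuated when 5-point Likert items coarsen continuous latent
# responses. Numbers and thresholds here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_r = 0.60  # correlation of the continuous latent response variables

# Bivariate normal latent responses y1*, y2*
cov = np.array([[1.0, true_r], [true_r, 1.0]])
y_star = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Coarsen each y* into 5 ordered categories (equal-probability cutpoints)
cuts = [np.quantile(y_star[:, j], [0.2, 0.4, 0.6, 0.8]) for j in range(2)]
likert = np.column_stack(
    [np.digitize(y_star[:, j], cuts[j]) + 1 for j in range(2)]  # codes 1..5
)

pearson_latent = np.corrcoef(y_star.T)[0, 1]   # close to true_r
pearson_likert = np.corrcoef(likert.T)[0, 1]   # noticeably attenuated
print(f"latent r = {pearson_latent:.3f}, Likert Pearson r = {pearson_likert:.3f}")
```

The polychoric correlation that WLSMV analyzes aims to recover the latent value, which is why loadings rise when the indicators are declared categorical.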
2. They are not odds, because WLSMV uses a probit link, not logit. You can interpret them the same way as for continuous variables if you think in terms of the latent response variable underlying each categorical DV. See the Topic 2 handout and video on our website, and also Chapter 5 of our new book.
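To make the probit point concrete, here is a small stdlib sketch with a hypothetical threshold and slope (not your estimates): under probit, a coefficient shifts the z-score of the latent response, and probabilities follow from the normal CDF, so there is no constant odds ratio to quote.

```python
# Probit vs. logit interpretation (illustrative numbers only).
# Under probit, slope b shifts the z-score of the latent response y*,
# so effects translate into probabilities via the normal CDF, not odds.
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

tau, b = 0.0, 0.5      # hypothetical threshold and probit slope
for x in (0.0, 1.0, 2.0):
    p = phi(b * x - tau)        # P(y = 1 | x) under a probit model
    odds = p / (1.0 - p)
    print(f"x={x:.0f}: P={p:.3f}, odds={odds:.3f}")
# Unlike logit, where exp(b) is a constant odds ratio, the implied
# odds ratio here changes as x changes.
```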
Re your p.s.: for IVs (observed covariates) you can ignore that they are categorical and just treat them as continuous. Factor indicators, by contrast, are dependent variables in the model, which is why they can be declared categorical.
There is another concern about the model, a recurring comment from peers:
People have asked me why I am not looking at alternative models in which there is an indirect effect from the first-level latent constructs on disengagement. People have also asked about the mediating role of the three formative constructs, saying that I should run the appropriate tests (such as bootstrapping, the Sobel test, or Hayes's approach).
I have tried to explain that the formative latent construct is a composite proposed on theoretical grounds and "is merely a set of dimensions combined" (Cadogan & Lee, 2013, p. 244). Nevertheless, people aren't convinced, and I find it difficult to get my message across.
Am I making a mistake? And if not, could you please suggest a way to communicate this issue that makes more sense?
Finally, in your opinion, how receptive are journal editors toward formative constructs?
Your model seems to make sense. I can't follow your quote because I don't know your variable names, but it is understandable if I change "indirect effect" to "direct effect": direct effects from the 6 factors to the distal outcome factor can be estimated. You can use bootstrapping for the indirect effects if they are important. I would think journal editors are fine with formative factor modeling; there have been several papers by Ken Bollen, for instance. But these general matters are better discussed on SEMNET.
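If reviewers press for a formal mediation test, the bootstrapped indirect effect is straightforward to explain. A minimal percentile-bootstrap sketch on simulated data (the variable names and effect sizes are illustrative, not your model; in Mplus the same logic runs via MODEL INDIRECT with bootstrapping):

```python
# Percentile-bootstrap sketch for an indirect effect a*b (x -> m -> y)
# on simulated data; names and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # a-path = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # b-path = 0.4, direct = 0.2

def indirect(x, m, y):
    # a from regressing m on x; b from regressing y on m and x
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

# Resample cases with replacement and re-estimate a*b each time
boot = np.array([
    indirect(x[i], m[i], y[i])
    for i in (rng.integers(0, n, size=n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If the percentile interval excludes zero, the indirect effect is supported without assuming normality of the a*b product, which is the usual argument for bootstrapping over the Sobel test.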
Estimating the direct effects of the 6 first-level latent variables on the distal outcome: would I run one model in which all 6 variables have direct effects simultaneously, or would I estimate one direct effect at a time?