I am performing a latent profile analysis (LPA) of 15 items selected from 3 different questionnaires. The items are based on Likert scales with 4, 5, and 6 response options respectively.
How does Mplus handle items measured on different scales? I am concerned that the means and variances that make up each class's response pattern would differ across classes not only because of genuine inter-class differences in response patterns, but also because of the differing scale ranges.
If this is indeed a problem that the user needs to address, how would you suggest normalising/standardising each item's data before inputting it into Mplus? Or does Mplus perform some form of item normalisation/standardisation for the user?
Most of my item data shows a skewed distribution (in the univariate/total-sample sense), so I am not sure how I could normalise it without changing its overall shape, which (I assume) I want to retain.
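One point worth noting here: z-standardisation is a linear transformation, so it changes only the mean and variance of an item; shape features such as skewness are preserved exactly. A minimal Python sketch (with simulated Likert data, not the actual questionnaire items) illustrating this:

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: mean of (x - m)^3 divided by sd^3."""
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

random.seed(0)
# Simulate a positively skewed 5-point Likert item
# (responses pile up at the low end of the scale).
item = random.choices([1, 2, 3, 4, 5], weights=[50, 25, 15, 7, 3], k=1000)

# z-standardise: subtract the mean, divide by the standard deviation.
m, sd = statistics.fmean(item), statistics.pstdev(item)
z = [(x - m) / sd for x in item]

# Skewness is unchanged (up to floating-point error); only the
# location and scale have moved to 0 and 1.
print(abs(skewness(item) - skewness(z)) < 1e-9)  # True
```

So standardising to put the items on a common metric would not destroy the skewed shape of each item's distribution, if that is the concern.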
If you are not comparing LPA results across the 3 questionnaires, I don't see that the differences in scales matter. But you are right that having different scales makes it hard to compare LPA results across questionnaires.
Yes, defining the variables is something the user has to do.
You can treat the variables as categorical (ordinal) to deal with the skewness (perhaps there are also floor/ceiling effects, which would be handled that way too).
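As an illustration, a sketch of the relevant Mplus input (hypothetical variable and file names; adapt to your data) declaring the items as categorical in a 2-class mixture model:

```
DATA:     FILE IS items.dat;
VARIABLE: NAMES ARE y1-y15;
          USEVARIABLES ARE y1-y15;
          CATEGORICAL ARE y1-y15;
          CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
```

If you did want to standardise continuous items yourself, the DEFINE command also has a STANDARDIZE option, but with the CATEGORICAL treatment above the scale-range issue does not arise in the same way, since thresholds rather than means and variances are estimated.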
Just to clarify, this LPA model contains 15 items derived from three different questionnaires.
For this set of items, I am not comparing alternative LPA models (e.g. 2-class vs 3-class, or a constrained vs an unconstrained 2-class model). I have accepted that a 2-class model is the best fit, based on theory.
So, do I not need to standardise the variables?
What I will be doing is comparing this 2-class LPA model to a completely different 2-class LPA model (based on items from a single scale). It turns out that the two classes in both models classify essentially the same cases and relate to the same theoretical latent variable.
In comparing the two LPA models (three-questionnaire model vs one-questionnaire model), I will only be comparing the distributions of posterior probabilities within classes. It is a test of the measurement precision of the classes in each model, using different indicators.
So, can you confirm that I don't need to normalise/standardise the items in the 2-class LPA model composed of items from 3 questionnaires?