LPA with covariates / Multiple group LPA
Mplus Discussion > Latent Variable Mixture Modeling
 Rebecca  posted on Thursday, November 20, 2003 - 5:18 am
Hello. I have been able to develop an interesting 5-class latent profile model in the analyses I've been running. Now I am thinking of adding covariates to the model. To see if this might be worthwhile, I quickly ran a few chi-squares to see if at this level there would be statistically significant differences (e.g., race/ethnicity by the 5 classes). Unfortunately, I did not find significant differences in these analyses. My question is this: might it still be worthwhile to run an LPA with covariates? Would this type of multivariate analysis find nuanced significant differences that I am not picking up in the straightforward chi-square? Thanks in advance for any guidance you can provide.
 Linda K. Muthen posted on Thursday, November 20, 2003 - 9:14 am
I think what you did is classify people into their most likely class and then do a chi-square test of class membership by each covariate. If so, and you found no significance, there probably is not any. Doing it stepwise like that, you obtain standard errors that are too small, so you would tend to falsely find significance. You can always add covariates to the LPA model and see what happens.
 Rebecca posted on Thursday, November 20, 2003 - 10:43 am
Yes, this is exactly what I did. Thank you for your guidance. As you suggest, I think I will add a covariate to the LPA model and see what happens, because I have substantive questions that this type of analysis may help answer.

Now I have one follow-up question based on your comment about the too-small standard errors and false significance:

In order to assess the LPA model's interpretability and usefulness, I have been running follow-up analyses (e.g., MANOVAs) using substantively relevant auxiliary variables as outcomes to determine if there are significant differences among the classes on these variables. (I've run these after creating the LPA model and confirming that I do not have a local solution.) Is this appropriate? Or will I also have too-small standard errors for such analyses because I am using a stepwise procedure here as well? If it helps to know, I am using cross-sectional data for these analyses.

Again, thank you for your helpful guidance and speedy response!
 Linda K. Muthen posted on Thursday, November 20, 2003 - 1:16 pm
Whenever you assign a person to the most likely class and treat class membership as a given, i.e., ignore sampling variability, you are going to have bias in your standard errors. They will be too small. It is always best to estimate the entire model at the same time. You can add your auxiliary variables to your analysis as covariates or distal outcomes, whichever is most appropriate.
 Rebecca posted on Thursday, December 04, 2003 - 2:14 pm
Thanks for your comments from a few weeks ago. I have continued to work on the latent profile analysis and now have a follow-up question.

I have three continuous background variables that I wanted to add to the LPA to determine if class membership varies as a function of these background variables. I have been using the latent class analysis with covariates example (Example 25.10 in the Mplus manual) as my guide in this analysis, although my class indicators are interval, not binary (i.e., this is an LPA model, not an LCA). Is this acceptable? I have been able to obtain an identified model, but I want to make certain that I am on the right track. And if I am on the right track, can I interpret the output for the LPA in the same way as I would for an LCA with covariates? That is, is this still a multinomial logistic regression?

In advance, thanks very much for your help!
 Linda K. Muthen posted on Thursday, December 04, 2003 - 2:27 pm
Yes and yes.
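In Mplus terms, the model just confirmed can be sketched as below (a minimal sketch; the file name, indicator names y1-y3, and covariate names x1-x3 are hypothetical stand-ins). The only change from the LCA-with-covariates example is that the continuous indicators are not listed on a CATEGORICAL statement; c ON x is still a multinomial logistic regression.

```
TITLE:    LPA with covariates (sketch; all names hypothetical)
DATA:     FILE = mydata.dat;
VARIABLE: NAMES = y1-y3 x1-x3;
          USEVARIABLES = y1-y3 x1-x3;
          CLASSES = c(5);
ANALYSIS: TYPE = MIXTURE;
MODEL:    %OVERALL%
          c ON x1-x3;  ! multinomial logistic regression of class on covariates
```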
 Anonymous posted on Wednesday, August 18, 2004 - 1:02 pm
I have created latent classes using factor mixture modeling. When I add covariates (children's scores on mental health measures), the classes change. I want to continue to examine the scores as covariates (rather than including them as indicators), as this fits best with my theory. In other words, children's scores are not part of the latent construct I wish to model, but I am interested to know how scores vary according to latent class probability.

Am I going about this the correct way? Thank you.
 bmuthen posted on Wednesday, August 18, 2004 - 1:06 pm
Covariates can and should influence the class formation, not only the indicators. Think of it this way: any observed variable correlated with class membership carries information about class membership. In factor analysis with covariates you have the same situation; in fact, ETS uses an extensive list of covariates to produce their factor scores (called "proficiencies" and printed in your morning paper now and then). The issue of class membership changing due to covariates is discussed explicitly in Muthen (2004), which is available as a PDF on the Mplus home page.
 Sarah Schmiege posted on Thursday, February 01, 2007 - 2:49 pm
I've run a latent profile model and end up with a 3 class solution. I’ve also included covariates to predict class membership, but am unsure of how to interpret the covariate output. In the output below, my initial assumption was that the first column represents the parameter estimate, the second a standard error, and the third a test statistic. If this is correct, my second question is in regard to interpretation of the test statistic. I was originally thinking it could be evaluated on a z distribution (e.g., absolute values of 1.96), but am now confused because this is a multinomial regression, right? For example, from this output, my interpretation was that both Class 2 and Class 3 had significantly lower scores on the “lastsex1” variable compared to Class 1 and that Class 3 had significantly lower scores than Class 2, but that none of the three classes differed on the “relation” variable. Could you please tell me if that is an accurate assessment, or if this should be interpreted differently?

Parameterization using Reference Class 1

C#2 ON
LASTSEX1 -1.526 0.570 -2.680
RELATION -0.211 0.464 -0.456

C#3 ON
LASTSEX1 -3.710 0.520 -7.141
RELATION -0.665 0.462 -1.439

Parameterization using Reference Class 2

C#3 ON
LASTSEX1 -2.184 0.579 -3.771
RELATION -0.454 0.501 -0.905
 Linda K. Muthen posted on Friday, February 02, 2007 - 9:09 am
The test statistic is a z-score. Your interpretation sounds correct, but it is not lower values; it is less likely membership.
 Sarah Schmiege posted on Tuesday, February 13, 2007 - 3:56 pm
Thanks for your reply - that makes perfect sense and thank you for clarifying that I'm still dealing with likelihoods. As a quick follow-up question, I was wondering if it would be appropriate to calculate the odds ratios and confidence limits from the parameter estimates and standard errors to report in the manuscript I am writing.
 Linda K. Muthen posted on Tuesday, February 13, 2007 - 6:07 pm
We give odds ratios as part of the results and if you ask for CINTERVAL in the OUTPUT command, you will obtain confidence intervals.
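The back-calculation is just exponentiation of the logit estimate and its Wald limits; a quick sketch using the LASTSEX1 coefficient for C#2 vs. reference Class 1 from the output above:

```python
import math

def odds_ratio_ci(est, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logit estimate and its SE."""
    return math.exp(est), math.exp(est - z * se), math.exp(est + z * se)

# LASTSEX1 coefficient for C#2 vs. reference Class 1
or_, lo, hi = odds_ratio_ci(-1.526, 0.570)
# The odds ratio is below 1: membership in Class 2 (vs. Class 1)
# is less likely at higher values of LASTSEX1.
```
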
 Elizabeth K. Ward posted on Tuesday, August 04, 2009 - 7:40 am
Good morning,

I am conducting an LPA with 4 classes and 2 continuous predictor variables. I would like to change the order of the classes so that I have a different reference class and can obtain the corresponding odds ratios. I have read numerous threads in the discussion, and I know that I need to use the ending values of the desired reference class as starting values for the last class. I also know that these values can be found in the output. I have two questions: 1) Which values do I use? and 2) What input syntax do I need to use? It seems that Example 7.10 is the closest to what I want to do. I have included my syntax below.

TITLE: PE LPA with gender as a covariate

DATA:
FILE IS D:\data\Masterdata.dat;

VARIABLE:
NAMES ARE EUID CMExp CSExp D1stGen
CertType YOB Ethnic Gender Mk12ex
Sk12ex MPtotal SPtotal k12ex Cex
hstotal hmtotal cmtotal cstotal
pmte1 mtoe1 pste1 stoe1 pmte3
mtoe3 pste3 stoe3;
IDVARIABLE = EUID;
MISSING are all(-9);

CLASSES = c(4);
USEVAR = Gender pmte1 pste1;

DEFINE:
IF (Gender eq 1) THEN Male = 0;
IF (Gender eq 2) THEN Female =1;


ANALYSIS:
TYPE = mixture;
STARTS = 0;

MODEL:
%OVERALL%
c#1 ON Gender;
c#2 ON Gender;
c#3 ON Gender;

Thanks for your help!
 Linda K. Muthen posted on Tuesday, August 04, 2009 - 11:43 am
You use the values under means and the syntax shown in Example 7.10.
 Elizabeth K. Ward posted on Tuesday, August 04, 2009 - 12:58 pm
Linda,

Thanks so much for the help! One point of clarification: do I want to use the means from the baseline model (the 4-class LPA without covariates) or the means from the first run with a particular covariate?
 Linda K. Muthen posted on Tuesday, August 04, 2009 - 3:13 pm
You want to use the means for the analysis for which you want to change the reference class.
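As a sketch of what that input can look like (following Example 7.10; the numeric starting values shown are hypothetical and would in practice be the estimated means of the desired reference class, copied from the output of the covariate run):

```
MODEL:    %OVERALL%
          c#1 ON Gender;
          c#2 ON Gender;
          c#3 ON Gender;
          ! Give the estimated means of the desired reference class
          ! as starting values for the LAST class:
          %c#4%
          [pmte1*2.5 pste1*3.1];  ! hypothetical values from the prior run
```

With STARTS = 0, these user-supplied values determine which profile ends up as the last (reference) class.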
 Melinda Gonzales-Backen posted on Thursday, September 20, 2012 - 11:23 am
I am running a multigroup LPA model using the KNOWNCLASS command. I've run the groups separately and in both cases a 3-profile solution was the best fit, based on the VLMR. The interpretation of profiles was the same across groups as well. These profiles were also the same for the total sample.

Is there a way to get the VLMR for a multigroup LPA model? I get the following warning: "TECH11 option is not available for TYPE=MIXTURE with the TRAINING option. Request for TECH11 is ignored."

Is there a way to confirm the number of profiles that are the best fit in a multigroup LPA model?

Thanks.
 Linda K. Muthen posted on Thursday, September 20, 2012 - 3:00 pm
I would do the LPA with KNOWNCLASS for 2, 3, 4, etc. classes and look at BIC to see which is best.
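The BIC comparison can be done by hand from each run's loglikelihood, number of free parameters, and sample size; a sketch with hypothetical numbers (lower BIC is better):

```python
import math

def bic(loglik, n_params, n):
    """Schwarz BIC: -2 logL + p * ln(n); lower is better."""
    return -2.0 * loglik + n_params * math.log(n)

# Hypothetical results from KNOWNCLASS runs with 2, 3, and 4 classes:
# {number of classes: (loglikelihood, free parameters)}
runs = {2: (-3110.4, 13), 3: (-3051.2, 18), 4: (-3040.8, 23)}
n = 408
bics = {k: bic(ll, p, n) for k, (ll, p) in runs.items()}
best = min(bics, key=bics.get)  # class count with the lowest BIC
```
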
 Melinda Gonzales-Backen posted on Friday, September 21, 2012 - 6:10 am
Thank you!
 Christine McWayne posted on Thursday, March 14, 2013 - 8:58 am
We are using MPlus to run a LPA to see if different profiles of family engagement exist and if there are relations between these profiles and child and parent demographic characteristics and child outcomes.

When we looked at the results, all but 2 of the auxiliary variables were not in the expected metric. When we looked at the class membership information that was saved, we also found the variables did not seem to be in the order identified in the output.

Can you help us understand why this happened and how this can be resolved?

Thanks!
 Linda K. Muthen posted on Thursday, March 14, 2013 - 9:32 am
Are the variables in the NAMES statement in the order of the columns of the data set? That is the first thing I would check. Also, is the number of variable names in the NAMES statement the same as the number of columns in the data set? It sounds like you may be reading the data incorrectly. Use TYPE=BASIC with no MODEL command to investigate this.
 anonymous posted on Tuesday, March 19, 2013 - 12:10 pm
When including covariates in an LPA, is there ever a time when you would interpret the intercepts that are presented in the output below the covariate information? (For example):

Categorical Latent Variables

C#1 ON
GRADE -0.174 0.231 -0.754 0.451
SEX 0.287 0.502 0.572 0.567

C#2 ON
GRADE 0.347 0.355 0.978 0.328
SEX 1.662 0.950 1.749 0.080

C#3 ON
GRADE -0.054 0.249 -0.215 0.830
SEX -0.121 0.520 -0.233 0.816

Intercepts
C#1 1.754 1.180 1.482 0.138
C#2 -4.329 1.545 -2.801 0.005
C#3 1.970 1.274 1.547 0.122
 Linda K. Muthen posted on Tuesday, March 19, 2013 - 12:14 pm
No, these would not be interpreted. They are simply related to the class probabilities, which you already know.
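For intuition only: with the last class as the reference (logit fixed at 0), the intercepts map to the class probabilities at covariate values of zero through the usual multinomial logit transformation, which is why they restate what is already known. A sketch using the intercepts above:

```python
import math

def class_probs(logits):
    """Multinomial logits (reference class has logit 0) -> probabilities."""
    exps = [math.exp(a) for a in logits] + [1.0]  # append reference class
    total = sum(exps)
    return [e / total for e in exps]

# Intercepts for C#1-C#3 from the output above; C#4 is the reference
probs = class_probs([1.754, -4.329, 1.970])
```
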
 Jonathan Steinberg posted on Friday, March 22, 2013 - 1:32 pm
I have recently completed an LPA with auxiliary variables and am trying to obtain more detailed information from my model results. My specific questions are:

1. Since my auxiliary variables are categorical, I know I can’t show means on these added variables, but how can I show frequency distributions on each variable by class membership produced from the LPA?

2. Do you have any recommendations on how to show or discuss the chi-square results? My output shows p-values between pairs of classes, but I can't infer any differences in representation that are greater than expected as you would say from standardized residuals in a chi-square analysis.

3. How do these chi-square tests differ methodologically from a standard chi-square test based on class membership? I only ask because the chi-square values I produced when trying to get answers using SPSS were much larger for one of my variables, almost by a factor of 4.

Thanks so much,
Jonathan Steinberg
 Bengt O. Muthen posted on Friday, March 22, 2013 - 2:21 pm
Please send your output to Support@statmodel.com.
 Andrea Norcini Pala posted on Wednesday, October 01, 2014 - 11:51 am
Dear Professors Muthen,
I have run a Latent Profile Analysis with 8 variables. The sample is composed of two subsamples (one recruited online and one recruited offline). The two subsamples differ on 2 of the 8 variables (p<.001). Therefore, I have run a multiple-group LPA in order to account for the subsample differences (the known classes correspond to the online/offline subsamples). Does this make sense to you, or would you suggest another solution?
Thank you very much!
Andrea
 Bengt O. Muthen posted on Wednesday, October 01, 2014 - 12:39 pm
When you say "differ on 2 of the 8 variables", do you mean that the means of those variables are different or that the variables themselves are different?

And when you do the knownclass run, how are the known and unknown class variables specified to be related?
 Andrea Norcini Pala posted on Thursday, October 02, 2014 - 9:52 am
Hi,
Thank you for the prompt reply.
Yes, their means differ (I have run a t-test).
The variables are the same in the two groups.

If I regress the unknown on the known, I find a significant association.
 Bengt O. Muthen posted on Thursday, October 02, 2014 - 5:04 pm
You may also want to explore direct effects from the Knownclass variable to those 2 variables, i.e. mean variation for the 2 variables as a function of Knownclass classes.

Instead of Knownclass you can use an observed binary covariate x, with c on x and y on x for those 2.
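The second option can be sketched as below, with x standing in for the observed binary online/offline indicator and y1, y2 for the two variables whose means differ (names hypothetical):

```
MODEL:    %OVERALL%
          c ON x;    ! class membership varies with the subsample
          y1 ON x;   ! direct effects: within-class means of the two
          y2 ON x;   ! variables vary with the subsample
```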
 S Ernestus posted on Sunday, March 15, 2015 - 12:52 pm
Hello,

I am having trouble understanding how to interpret and report covariate effects for an LCA. For a 4 class solution with 2 covariates:

C#1 ON
STRESS1 -5.087 7.635 -0.666 0.505
RACE -0.225 0.630 -0.357 0.721

C#2 ON
STRESS1 -0.674 0.322 -2.094 0.036
RACE -0.342 0.255 -1.343 0.179

C#3 ON
STRESS1 -0.607 0.410 -1.483 0.138
RACE -0.416 0.439 -0.948 0.343

Intercepts
C#1 2.036 0.744 2.737 0.006
C#2 2.109 0.670 3.145 0.002
C#3 0.731 0.856 0.855 0.393

From what I understand, each of these is providing the statistics to compare each class against the reference class. So for example, when compared to class 4, the probability of being in class 1 decreases as stress increases but this is not significant. Is that correct?

Is there a way to interpret or report the significance of the covariate influence overall (e.g. overall did stress predict class membership)?

Thank you so much for your time.
 Bengt O. Muthen posted on Sunday, March 15, 2015 - 3:05 pm
Q1. Yes.

Q2. You can use Model Test to test if the 3 stress coefficients are jointly zero.
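MODEL TEST gives a joint Wald test of labeled parameters; a sketch using the covariate names from the output above (the labels p1-p3 are arbitrary):

```
MODEL:      %OVERALL%
            c#1 ON stress1 (p1);
            c#1 ON race;
            c#2 ON stress1 (p2);
            c#2 ON race;
            c#3 ON stress1 (p3);
            c#3 ON race;
MODEL TEST: 0 = p1;
            0 = p2;
            0 = p3;
```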
 Katrina Brewsaugh posted on Thursday, January 28, 2016 - 1:28 pm
I've read through a lot of posts regarding when covariates should be included in the models, but I'm still confused.

Setup: Develop sexism profiles (LV) as measured by 4 sexism scales (continuous). Determine if sexism profiles are a greater predictor of attitudes towards father involvement than demos.

Original plan: Conduct an LPA using sexism scales as observed y's to determine the best-fitting model. Assign cases to classes based on post probs, examine differences in the demographics of the classes using cross-tabs, etc. Ultimately class assignment will be entered as the first block in a HMR, with demos in a second block, to examine relationship with father involvement attitudes.

Alternative plan: Conduct an LCA using sexism scores as y's and the demos as u's. After selecting the best fitting model, examine classes to determine if demo differ between classes. Then continue with the HMR as planned.

This is where I start to get confused, because some posts say remove covariates (e.g. demos) one at a time and see if it changes the model solution while others say to add them in one at a time. Which is the best/accepted practice?

Also, I know that when doing an LPA, I have to run each class enumeration 4 times to account for each of the main within-class var/cov structures. Can I still do this with the y's if I'm including categorical u's?
 Bengt O. Muthen posted on Friday, January 29, 2016 - 6:16 pm
I am guessing that "demos" are demographic variables. If you don't want to do a single-step analysis, I would follow the manual 3-step approach of Section 3.2 in the paper on our website:

Asparouhov, T. & Muthén, B. (2014). Auxiliary variables in mixture modeling: Using the BCH method in Mplus to estimate a distal outcome model and an arbitrary second model. Web note 21.
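For reference, the automatic counterparts of these procedures are requested through the AUXILIARY option; in this sketch (all variable names hypothetical), R3STEP treats a variable as a predictor of class membership and BCH treats one as a distal outcome, in both cases without letting them affect class formation. Section 3.2 of Web Note 21 describes the manual setup, which is needed when the auxiliary model is more complex than the automatic options allow.

```
VARIABLE: NAMES = sex1-sex4 demo1 outcome1;
          USEVARIABLES = sex1-sex4;
          CLASSES = c(3);
          AUXILIARY = demo1 (R3STEP) outcome1 (BCH);
ANALYSIS: TYPE = MIXTURE;
```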
 Kim, Ae-Kyoung posted on Wednesday, February 03, 2016 - 2:46 am
Hello! I'm new to Mplus and getting the help of your homepage a lot. Thank you so much. However, there are still problems I'm dealing with and they are as follows.

My research model - conditional LPA model - contains
(1) 3 variables used for the LPA: these are continuous variables and are correlated with each other.
(2) 8 covariates.
1. Is it possible to use 3 variables from different time points for the LPA? For example, among the 3 variables used for the LPA in my study, one was measured in 2014 and the others were measured in 2004.
2. I think the data used in the analysis are truncated because I have only chosen the data of people who have jobs and did list-wise deletion of the others. I understand that this can create a selection-bias problem, as in OLS regression. In this case, do I need to use a Heckman model in my analysis?
3. Should all variables in LPA meet the normality assumption?

Thank you very much in advance for your help!
 Bengt O. Muthen posted on Thursday, February 04, 2016 - 7:00 pm
1. No problem if they are measured on the same sample.

2. That would be too complex to try. Just do your inference (draw your conclusions) for your particular, selected sample.

3. No; a mixture implies non-normality for the observed variables.
 Chee Wee Koh posted on Tuesday, February 23, 2016 - 1:25 pm
Hi,

I intend to use BCH for a LPA with auxiliary variables (antecedents/distal outcomes). Three questions:

1. Does the "mixture implies no normality" argument apply only to indicator variables of LPA, or does it apply to all variables in the model?

2. I have an outcome variable (salary) whose distribution is negatively skewed, leptokurtic, and possibly contains outliers. Would you recommend any pre-processing?

3. I read in several parts of this forum that as long as the antecedent has a direct effect on any of the profile indicators, there would not be measurement invariance. However, according to Lubke & Muthen (2005), if the direct effect does not vary across profiles, MI will still hold - is this correct?

Thank you.
 Bengt O. Muthen posted on Tuesday, February 23, 2016 - 6:20 pm
1. Only to variables influenced by the latent class variable, so not the x's in c ON x.

2. Only if there are clear outliers that you don't want a class devoted to.

3. Did we really say that? Seems like there would be measurement noninvariance in all classes. I am thinking of measurement noninvariance as a difference in response mean/prob across x values even when conditioning on latent class - and that's the direct effect.
 Chee Wee Koh posted on Tuesday, February 23, 2016 - 7:34 pm
Thank you. Page 29 of the article states that when the direct path is specified to be class invariant, the latent classes can still be compared in a straightforward way... I just took that to mean MI but I see your point now.

1. My model has 2 latent profile indicators and the covariate (gender) in my model appears to have a profile invariant effect on one of the indicators when I explored the data using R3STEP (Appendix O to Webnote 15). Will I still be able to use BCH in this case?

2. If I can, do I need to take any special care when I interpret the inter-profile differences on distal outcomes?

3. Gender correlates with some of my distal outcomes as well so I also intend to regress outcome on it. This is like the example in section 3.2 of Webnote 21 - but I also want to specify a profile-invariant direct effect u1 on x2 (where x2 is a duplicate of x). Is this feasible?

Thank you!
 Chee Wee Koh posted on Wednesday, February 24, 2016 - 3:48 pm
Dear Dr Muthen,

I have done some further reading (Kankaras, Moors, & Vermunt, 2010) and analyses, and I would appreciate your advice on the way forward please:

1. I ran LPA for males and females separately. A 4-profile model is the best fit for both groups.

2. As mentioned, I have two indicator variables. The indicator means within each profile did not replicate across the gender groups. Also, the entropy in the male group was higher (.79 vs. .71).

3. I ran LPA for the pooled sample (with gender as a covariate). Now a 5-profile model provides the best fit (the increase in the number of profiles with the pooled sample is consistent with the example in Kankaras et al.). The model with gender influencing class assignment only (structural equivalence, MI) has a BIC of 11785, whereas the model with a direct effect from gender to one of the indicator variables (no MI, just metric equivalence) has a slightly better BIC (11780).

My questions remain the same as those in my earlier post. My study is exploratory. I am more interested in the impact of profiles on distal outcomes rather than gender differences but if the non-MI means that I should analyze male and female data separately, I can go that way too.

Thank you!
 Bengt O. Muthen posted on Wednesday, February 24, 2016 - 4:48 pm
First post:

1. I don't know what this means:

" a profile invariant effect on one of the indicators when I explored the data using R3STEP"

But I don't think you can use BCH correctly when there are indicator effects.

3. I think BCH assumes conditional independence of the indicators and other variables given the latent class variable, so no.

Second post:

Try posting this more general analysis question on SEMNET.
 Harmen Zoet posted on Tuesday, October 25, 2016 - 1:43 am
I'm planning to do an LPA of treatment outcome, after which I will try to find predictors of class membership (using continuous as well as categorical predictors). Is there basically a difference between inserting my predictors in the LPA as covariates vs. first conducting an LPA and then using the latent variable as the dependent variable in a multinomial logistic regression, which is conducted afterwards?
 Bengt O. Muthen posted on Tuesday, October 25, 2016 - 10:06 am
See

Nylund-Gibson, K. & Masyn, K. (2016). Covariates and mixture modeling: Results of a simulation study exploring the impact of misspecified effects on class enumeration. Structural Equation Modeling: A Multidisciplinary Journal, DOI: 10.1080/10705511.2016.1221313
 Stephanie Craig posted on Tuesday, December 20, 2016 - 2:08 pm
I completed an LPA on a mixed-gender sample (n=408, 50% male). Using the steps in Finch, 2015, I then checked to see if the 4-class model I found was consistent across males and females. I got the same 4 class profiles for both males and females (and for the sample as a whole). When doing this, someone suggested I check whether people "jumped" groups when the analyses were done together versus by gender. Although the girls did not jump much, about 50 boys moved groups, some from an extreme group into a non-extreme group. My questions are 1) is this a valid analysis, and 2) does this mean anything at a statistical level, or is the important part that the same 4 groups were found?
 Bengt O. Muthen posted on Tuesday, December 20, 2016 - 6:12 pm
I think it is important that boys jumped when you analyzed them together with girls compared to analyzing them alone, even if the 4 classes and perhaps also the class percentages are the same. That seems to speak to model misfit in the joint analysis. I assume you checked measurement invariance in the joint analysis.
 Stephanie Craig posted on Tuesday, December 20, 2016 - 6:25 pm
I tried to do the gender invariance using
cg (2) c(4)
and
c(4) cg (2)
but I am not sure how to interpret these analyses. I cannot seem to find any reference that tells me whether to look at BIC differences, model fit, or something else.

For the parameters free, the BIC is 7918 with an entropy of .866.

For the parameters constrained, the BIC is 7923 and the entropy is .853.

These are so similar that, from what I can tell, the models are the same and therefore there is no variance across gender. Am I understanding that correctly?
 Bengt O. Muthen posted on Wednesday, December 21, 2016 - 5:49 pm
By default c and cg are uncorrelated. If you say

c on cg;

you allow the c class percentages to vary with cg.
 Stephanie Craig posted on Thursday, December 22, 2016 - 8:29 am
How would I determine whether I have gender invariance with that analysis? I still come out with 4 groups for both males and females. I have been looking for resources on how to determine this and I cannot find anything.
 Bengt O. Muthen posted on Thursday, December 22, 2016 - 10:10 am
You can use the dot command

%cg#1.c#1%

etc. for the combinations of classes. Then, within each such combination, impose equality constraints on the outcome means/thresholds across the cg classes. This is the run for measurement invariance across cg classes. Then run without those equalities. Then create the usual likelihood ratio chi-square test from the two loglikelihood values.
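The final step can be sketched as below with hypothetical loglikelihoods; df is the number of equality constraints released (e.g., 4 c classes x 5 indicator means freed in the second cg class):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function (p-value), closed form for even df."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

# Hypothetical loglikelihoods from the constrained (invariant) and free runs
ll_constrained, ll_free = -3961.5, -3940.2
stat = 2.0 * (ll_free - ll_constrained)
df = 20  # e.g., 4 c classes x 5 means freed across the 2 cg classes
p = chi2_sf_even_df(stat, df)  # reject invariance if p < .05
```
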
 Stephanie Craig posted on Thursday, December 22, 2016 - 4:40 pm
As suggested I now have the following.

MODEL:
%overall%
c ON cg;
%cg#1.c#1%
%cg#1.c#2%
%cg#1.c#3%
%cg#1.c#4%
%cg#2.c#1%
%cg#2.c#2%
%cg#2.c#3%
%cg#2.c#4%

How do I constrain these to be equal? What is the best place for me to find literature on how to do this? I have been using the mplus user guide and these forums and I am feeling really lost.
 Bengt O. Muthen posted on Thursday, December 22, 2016 - 5:35 pm
Say that you have 5 latent class indicators y1-y5. Measurement invariance across the cg classes is then stated via equalities of parameters labels p for the y means as

%cg#1.c#1%
[y1-y5] (p1-p5);
%cg#1.c#2%
[y1-y5](p11-p15);
%cg#1.c#3%
[y1-y5] (p21-p25);
%cg#1.c#4%
[y1-y5] (p31-p35);
%cg#2.c#1%
[y1-y5] (p1-p5);
%cg#2.c#2%
[y1-y5](p11-p15);
%cg#2.c#3%
[y1-y5] (p21-p25);
%cg#2.c#4%
[y1-y5] (p31-p35);

The no-measurement-invariance model is obtained by deleting the parameter labels in parentheses.
 Fan Xizhen posted on Wednesday, February 22, 2017 - 1:27 am
Dear Prof. Muthen,
I'm working on a multigroup profile analysis with known classes in two cross-national samples, and I'm confused about how to interpret the analyses. I ran 4 MLPA models: (1) a completely unrestricted MLPA in which within-profile means and variances were allowed to vary freely over groups, in addition to profile size; (2) a semi-constrained model-1 in which profile size was still allowed to vary freely, but conditional means and variances were constrained to be equal across groups; (3) a semi-constrained model-2 in which profile size and variances were still allowed to vary freely, but conditional means were constrained to be equal across groups; and (4) a fully constrained model, in which profile size and within-profile means and variances were all fixed to be equivalent across groups. I used BIC to select the best-fitting model, and it favored semi-constrained model-2 (i.e., profile size and variances allowed to vary freely, conditional means constrained to be equal across groups). So, does this mean there was no measurement variance across nations? What is the difference between semi-constrained model-1 and semi-constrained model-2? Do I have to constrain the means and variances at the same time to obtain measurement invariance?
Thanks in advance, your reply would be highly appreciated!
 Bengt O. Muthen posted on Wednesday, February 22, 2017 - 12:01 pm
Knownclass is a way to do multiple-group analysis in the mixture context.

Regarding MLPA, I assume you impose measurement invariance - which implies that you should use these results for 3-stepping.
 Julie Nguyen posted on Wednesday, September 06, 2017 - 9:34 pm
I ran a latent profile analysis without covariates using the the entire sample. I found 4 profiles to be the best model. Now I want to determine if there are group differences but I have 4 groups I want to compare.

1) Am I able to compare the 4 latent profiles across the four groups in one model using KNOWNCLASS, or can I only compare two groups in a model at a time? I've only found examples with two groups.

2) Rather than comparing the same number of latent profiles (4) for each group in a model, is there a way I can allow the number of latent profiles to vary (be estimated separately) per group for 4 different groups, but done all in one model (i.e., in one analysis/model output, group 1 could have 4 profiles, group 2 could have 5 profiles, etc.)? Or is it best to run the latent profile analysis for each group separately? I want to be able to have 4 groups in one model to later see if there are differences in covariates relating to a profile per group.
 Bengt O. Muthen posted on Thursday, September 07, 2017 - 3:16 pm
1) Yes, 4 groups is not a problem.

2) If you have no parameters held equal across groups, a joint analysis of all groups gives the same results as separate analysis of each group. So nothing is gained.
 Julie Nguyen posted on Thursday, September 07, 2017 - 3:59 pm
Thank you! I have some follow up questions.
1) Would I need to dummy code the 4-group variable?

2) What changes would I make to a multigroup latent profile syntax to run a joint analysis with all 4 groups in the model without specifying the number of profiles, but rather exploring how many latent profiles there are per group?
 Bengt O. Muthen posted on Sunday, September 10, 2017 - 1:10 pm
1) No, you can use Define to create a grouping variable which you then use for Knownclass.

2) See a UG example that has cg in it, such as ex 8.8.
 Mitchell Rhodes posted on Tuesday, July 17, 2018 - 11:07 am
Good morning,

I am running an LPA with covariates. I have found that a 4-class solution best fits the data. When I include the covariates, the proportions in each class change substantially. I read in another discussion that when this happens, the disparity in class proportions calls for direct effects between class indicators and covariates. When I include these direct effects, the class proportions are still different. Do you have any suggestions to help fix this issue?

Thank you for your help.
 Bengt O. Muthen posted on Wednesday, July 18, 2018 - 6:59 am
You don't include direct effects for the results to be the same as with no covariates; you include them to get good estimates for the model. You can also use R3STEP if you don't want the covariates to influence the classification.
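A minimal R3STEP sketch (indicator and covariate names hypothetical); the covariate then predicts class membership without influencing the classification:

```
VARIABLE: NAMES = y1-y4 x1;
          USEVARIABLES = y1-y4;
          AUXILIARY = x1 (R3STEP);
          CLASSES = c(4);
ANALYSIS: TYPE = MIXTURE;
```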
 Abeer Alamri posted on Saturday, December 15, 2018 - 5:34 am
Hello,
I am doing an LPA (5 profiles) and using gender as a covariate. I selected 5 profiles and used R3STEP. It seems the algorithm cannot converge on an estimate for one of the levels (*****). In the first set, it cannot come up with an estimate for class 4 vs. class 5.
We need to figure out what is happening here, as we want to speak about gender.
1) Could you please explain why we have **** and can't have an estimate?
2) How we could interpret this? Thank you.

Here is a part of the output:
THE 3-STEP PROCEDURE

                   Estimate       S.E.  Est./S.E.  Two-Tailed
                                                      P-Value
 C#4      ON
    G             *********      0.000    999.000      0.000

Parameterization using Reference Class 1
 C#4      ON
    G             *********      1.303  -9834.578      0.000

Parameterization using Reference Class 2
 C#4      ON
    G             *********     24.407   -527.947      0.000
 Bengt O. Muthen posted on Saturday, December 15, 2018 - 10:54 am
Maybe the variable G has a very small scale (very small variance) or maybe class 4 is very small or maybe G is binary and has nobody or only 1 person in class 4.

If this doesn't help, send your output to Mplus Support along with your license number.
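One quick way to check the last possibility is to save the most likely class membership and cross-tabulate it against G outside of Mplus; a sketch (the file name is hypothetical):

```
SAVEDATA:
  FILE = cprobs.dat;
  SAVE = CPROBABILITIES;   ! appends posterior probabilities and most likely class
```

An empty or near-empty G-by-class cell would explain the ********* estimate, since the corresponding logit is pushed toward plus or minus infinity.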
 Jilian Halladay posted on Tuesday, April 23, 2019 - 9:08 am
I am trying to test for invariance across gender within a multilevel LPA with 7 indicators and 5 profiles. Could I have someone confirm the following coding to ensure I am interpreting the forum correctly? Thanks!

cluster = idschl;
usevariables are v1 v2 v3 v4 v5 v6 v7;
KNOWNCLASS = cg (fem=0 fem=1);
classes = cg(2) c(5);
missing are all (999);

Analysis:
type = mixture complex;
starts = 1000 100;
stiterations = 50;

Model: !for Constrained parameters
%overall%
C on cg;

%cg#1.c#1%
[v1-v7] (p1-p7);
%cg#1.c#2%
[v1-v7] (p8-p14);
%cg#1.c#3%
[v1-v7] (p15-p21);
%cg#1.c#4%
[v1-v7] (p22-p28);
%cg#1.c#5%
[v1-v7] (p29-p35);
%cg#2.c#1%
[v1-v7] (p1-p7);
%cg#2.c#2%
[v1-v7] (p8-p14);
%cg#2.c#3%
[v1-v7] (p15-p21);
%cg#2.c#4%
[v1-v7] (p22-p28);
%cg#2.c#5%
[v1-v7] (p29-p35);

Model: !for FREE parameters
%overall%
C on cg;

%cg#1.c#1%
[v1-v7] ;
%cg#1.c#2%
[v1-v7];
%cg#1.c#3%
[v1-v7] ;
%cg#1.c#4%
[v1-v7];
%cg#1.c#5%
[v1-v7] ;
%cg#2.c#1%
[v1-v7];
%cg#2.c#2%
[v1-v7];
%cg#2.c#3%
[v1-v7];
%cg#2.c#4%
[v1-v7];
%cg#2.c#5%
[v1-v7];
 Bengt O. Muthen posted on Tuesday, April 23, 2019 - 3:09 pm
Looks correct.
 Jilian Halladay posted on Wednesday, April 24, 2019 - 11:30 am
Hi Dr. Muthen,

Thank you so much for confirming. Prior to testing the multi-group differences, I ran the LPAs without defining sex/gender groupings and found that the 5-profile LPA fit best. Then I stratified the sample and conducted the LPAs separately: a 5-profile model fit best for males (similar to the pooled sample) and a 6-profile model for females (although 5 of the 6 profiles were similar to the pooled/male profiles, and the 6th may not be distinct enough to be meaningful).

I have now also run 4 models using the above code.
1. 5 profile LPA constrained
2. 5 profile LPA freed
3. 6 profile LPA constrained
4. 6 profile LPA freed

I am wondering how to compare the best-fitting model. In my readings, it was suggested to calculate a change in G2 (delta G2 = G2 for model 2 minus G2 for model 1, with df = df2 - df1), and that model fit cannot be directly compared because the models estimate different numbers of parameters. My output does not seem to have G2 or any X2 results. Is there a particular TECH option I need to request for this, or does Mplus not provide these statistics for multigroup LPAs? If the latter, is there a way to use the loglikelihood and the scaling correction factor (even though they are technically nested models)?

It is also suggested to look at BIC: my BIC continues to decrease with every parameter added (since I have such a large sample size). Is there a way to test for significant reductions in BIC?

Thanks in advance!
 Bengt O. Muthen posted on Wednesday, April 24, 2019 - 4:52 pm
G2 and X2 (if I understand your notation) are only for categorical outcomes, but you say you have continuous ones. I would use BIC. There is no way to test for a significant reduction in BIC. You can instead try to find a minimum BIC by adding WITH statements where needed.
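As an illustration of the last suggestion, within-class covariances (fixed at zero by default in LPA) can be freed with WITH statements, either for all classes at once or class-specifically; which pair to free is an assumption here and should be guided by theory and the TECH output:

```
MODEL:
  %OVERALL%
  v1 WITH v2;        ! frees the v1-v2 covariance, held equal across classes

  %cg#1.c#1%
  v1 WITH v2;        ! alternatively, free it in a specific class only
```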
 Jilian Halladay posted on Thursday, April 25, 2019 - 9:01 am
Hi Dr. Muthen,

The predictors are continuous but the profiles are categorical and I am trying to compare the fit of the profiles across gender/sex. When I run the model without multi-group, I do get both G2 and X2.
 Bengt O. Muthen posted on Thursday, April 25, 2019 - 5:38 pm
Send the 2 outputs - the one that has G2 and X2 and the one that doesn't - to Support along with your license number.
 Jill Rabinowitz posted on Sunday, July 14, 2019 - 6:25 pm
Hi,

I am interested in running an LPA and controlling for a covariate that I suspect will influence the profiles observed. I also would like to examine whether some predictors predict profile membership. When I run the syntax below, the LPA that is being generated is not taking into account the covariate. In other words, the independent variables are predicting profiles generated from an LPA that does not control for the covariate. What do I need to change in the syntax below to make sure that when estimating the relationship between the predictors and profile membership, the LPA that’s being generated reflects one that has accounted for the covariate?

USEVARIABLES ARE ind1 ind2 ind3 ind4
covariate1
predictor1
predictor2;

classes=c(4);

MODEL:
%OVERALL%
c#1-c#3 ON covariate1;
c ON
predictor1
predictor2;

ANALYSIS:
TYPE=mixture;
 Bengt O. Muthen posted on Monday, July 15, 2019 - 5:08 pm
The LPA you specify does control for the covariate in that it is included among the predictors of c. But perhaps you have something else in mind such as

Step 1: Estimate the LPA with the covariate included.

Step 2: Fix the classes from Step 1 and see how predictors influence the class membership.

If this is what you have in mind, see our web notes 15 and 21.
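A sketch of Step 1 of that manual approach (file and variable names taken from the post above, otherwise hypothetical): estimate the LPA with the covariate and save the classification; in Step 2, the saved most likely class is then used as a nominal indicator with its logits fixed at values computed from the Step 1 classification table, as described in web notes 15 and 21.

```
! Step 1: LPA including the covariate; save the classification
VARIABLE:
  USEVARIABLES = ind1 ind2 ind3 ind4 covariate1;
  CLASSES = c(4);

ANALYSIS:
  TYPE = MIXTURE;

MODEL:
  %OVERALL%
  c ON covariate1;

SAVEDATA:
  FILE = step1save.dat;
  SAVE = CPROBABILITIES;   ! writes posterior probabilities and most likely class
```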
 Stephanie posted on Wednesday, January 15, 2020 - 11:51 am
Hi there,

we want to analyse measurement invariance (multiple groups: intervention vs. control) in an LPA with class-varying, diagonal within-class covariance matrices (5 profiles, 6 indicators).
Unfortunately, we were not able to replicate the best loglikelihood value. We used the following input commands:

Type = mixture complex;
Starts = 5000 100;
STITERATIONS = 200;

model: %Overall%
c#1 on cg#1; c#2 on cg#1;
c#3 on cg#1; c#4 on cg#1;

%cg#1.c#1%
[CCPT1 INT1 Einst1 Kraft1 SR1 HrFK1] (m1-m6);
CCPT1 INT1 Einst1 Kraft1 SR1 HrFK1 (v1-v6);
....

Do you have a suggestion how to deal with this problem?
Thank you very much in advance.
 Bengt O. Muthen posted on Wednesday, January 15, 2020 - 1:03 pm
We need to see your full output and, preferably, the data as well - send to Support along with your license number.