Latent Profile Analysis
Mplus Discussion > Latent Variable Mixture Modeling >
 Monica Oxford posted on Monday, March 19, 2001 - 4:25 pm
Hi. I have a question regarding Latent Profile Analysis. I have several measures of child "executive function" that include behavior (e.g., impulsivity and attention) and language (e.g., expressive and reflective) that I am using in a profile analysis. These measures are popular in the field and are measured on different scales, and thus have different variances. I was wondering if there is a general rule about the degree of difference (between smallest and largest) in variances for continuous items in a latent profile analysis (I know this is an issue in other forms of "profile analysis," e.g., Tabachnick & Fidell, and an issue in SEM, e.g., Kline or Bentler). Is it a requirement that all items be measured on the same scale and have similar variances? Thanks in advance.
 Linda K. Muthen posted on Thursday, March 22, 2001 - 9:39 am
No, it is not a requirement that all items be measured on the same scale and have similar variances. Putting items on the same scale may, however, help convergence.
 Monica posted on Monday, May 07, 2001 - 1:41 pm
Hi again. Continuing the discussion on "child executive function" from the earlier post (3/19). I tested the adequacy of my three-class latent profile model by giving each class different start values to make sure the solution I got was the "right" one. The model appeared stable (results and log likelihood). Next, I simply changed the order of classes 1 and 2 and kept the same start values (so, in my mind, the results shouldn't have changed, just the order of the results -- what were class "2" results should have become class "1" results).

I ended up with different results, both in the means within class and in the class sizes (previously I had 27%, 36%, 36% and now have 37%, 37%, and 25%, where the third class [the same class in both analyses] that was 36% is now 25%, and class one that was 27% is now 37%), leading to different conclusions, which makes me a little concerned.

Is there something I have overlooked or should be concerned about given the changes in results? My operating assumption was that the start values were important, not the order of the classes (by the way, I am using actual start values instead of 1 and -1; it helps with convergence because the items I am using are on different scales). Advice?

Thanks in advance.
 Linda K. Muthen posted on Tuesday, May 08, 2001 - 7:48 am
I would need to see both of the outputs to answer this question. Please send them to support@statmodel.com.
 Monica posted on Monday, May 21, 2001 - 3:15 pm
Thanks for the offer to look at my output. However, I discovered that the issue I raised was a mistake on my part. The models are the same even though I changed the order. Thanks anyway.
 Beth McCreary posted on Wednesday, November 21, 2001 - 10:26 am
I've completed a confirmatory factor analysis with three continuous LVs, each represented by a set of indicators (which are items on a paper-and-pencil scale), and the fit appears acceptable. The indicators are each scored from "1" to "4" in a Likert-type response format. I have a preconceived hypothesis that the participants should fall into five separate categories based on their scores on the three continuous factors. For example, those "high" on the first factor and "low" on the other two will form one group, those "high" on the second and third factors will form a second group (regardless of scores on the first factor), etc. In addition, I expect to see certain gender differences in the proportions of participants assigned to each category. Is there a way to conduct a confirmatory latent profile analysis to test this hypothesis? Would this be an appropriate thing to do? If so, could you please route me to a reference and/or example in Mplus 2? Thank you and happy Thanksgiving!
 Bmuthen posted on Thursday, November 22, 2001 - 8:55 am
You may be interested in a new paper by Lubke, Muthen, Larsen (2001), Global and local identifiability of factor mixture models. This can be requested from bmuthen@ucla.edu by mentioning paper 94.
 Anonymous posted on Friday, March 08, 2002 - 10:00 am
Hi -- I am doing a latent profile analysis, using six indicators of "social capital," each measured on a 1-10 Likert-type scale. My model converges with all variances constrained (the BIC continues to decrease up to a six-class model, but a 3-class model fits better with theory, gives better class probabilities, and the entropy measure is higher). When I free variances past a 1-class model I have a variety of problems -- including within-class means that are outside the scale, and/or at the min or max, and with 0 variance (and a variety of error messages about the model not converging). Also, the class sizes change and the patterns among the variable means within classes change when any variances are freed. My data are negatively skewed (less skewed within the classes than in the full group), but within the limits recommended by Kline. Should I trust my results with the variances constrained? Or can you recommend how to proceed?
 bmuthen posted on Friday, March 08, 2002 - 5:47 pm
Latent profile analysis can have these types of behaviors when the variances are allowed to vary across classes. The literature so far seems to have little guidance to offer in this area. You may want to consider the following approach. Using the model with class-invariant variances, you can classify individuals into the latent classes using their posterior probabilities. You can then go back to the raw data and study the variation of each variable in each class. If a variable is considerably more or less variable in a certain class, you can modify the model to allow that variable to have a class-specific variance for that class.
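[A rough Mplus sketch of this suggestion, with hypothetical variable names y1-y4, three classes, and the variance of y2 found to differ in class 2:]

```
VARIABLE:  NAMES = y1-y4;
           CLASSES = c(3);
ANALYSIS:  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  ! variances are held equal across classes by default
  %c#2%
  y2;        ! mentioning y2's variance here frees it in class 2 only
```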
 Anonymous posted on Monday, June 10, 2002 - 3:43 pm
I want to classify respondents from several ethnic groups into classes, three classes for each ethnic group. There are 24 5-point Likert variables (never to always) that measure five latent constructs. The classification will be based on the five latent constructs. The minimum and maximum subsample sizes are 190 and 300, totaling about 1,000. I want to see how the class proportions differ from one group to another.
Could you give some guidance on how to run this analysis? Thanks!
 bmuthen posted on Tuesday, June 11, 2002 - 9:31 am
Let me first ask you if by classes you refer to a latent class (latent profile; LPA) analysis using the 5 latent constructs? If so, have you done preliminary LPA analyses of the factor scores within each ethnic group?
 Anonymous posted on Tuesday, June 11, 2002 - 10:04 am
I have tried LPA with the factor scores for one subsample and the result looked OK! I am not quite sure if I should proceed with this approach. Should I obtain the factor scores from a multigroup CFA or from a single-group CFA? If multigroup CFA is preferred, what constraints are needed on what parameters?
 bmuthen posted on Tuesday, June 11, 2002 - 11:26 am
A multiple-group analysis is very valuable to do first because you want to make sure that you have a sufficient degree of measurement invariance before you compare the latent variables (or classes derived from them) across groups. You should use the default Mplus setup for a multiple-group meanstructure analysis, which holds intercepts and loadings equal across groups. You can then look at modification indices to see if some items are not invariant with respect to either parameter type (intercept or loading).
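[A minimal sketch of such a multiple-group meanstructure setup; the factor and item names and the group codes are hypothetical, and the Mplus defaults hold intercepts and loadings equal across groups:]

```
VARIABLE:  NAMES = y1-y10 ethnic;
           GROUPING = ethnic (1 = g1 2 = g2);  ! hypothetical group codes
MODEL:
  f1 BY y1-y5;
  f2 BY y6-y10;
OUTPUT:
  MODINDICES;   ! flags items that are not invariant
```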
 Anonymous posted on Saturday, February 28, 2004 - 6:51 pm
Hello
I am running a Latent Profile Analysis using a set of 15 behavioral characteristics. Some of the characteristics are highly correlated (e.g., .7 to .8), but the majority of characteristics have moderate to low relationships. Only 10 pairs of variables from the entire correlation matrix showed correlations above .7. Also, all variables are on the same metric (T scores).

In one run, the variables were considered independent, where the latent class variable was driving the relationship between the observed variables. In a second run, those variables which were highly related were allowed to correlate (using the WITH statement).
In the run which considered the variables to be independent, the results were much more meaningful (e.g., lower BIC, higher entropy, MUCH easier to interpret) than the results in which the selected variables were correlated.

Can the LPA solution which considers the variables to be independent be interpreted? Or is this solution 'invalid' due to the high correlations between some of the variables? How strong is the assumption of independent variables when running/interpreting LPAs?

Thank you for your comments and also for MPLUS.
 bmuthen posted on Sunday, February 29, 2004 - 7:49 am
The sample correlations should be significant for LPA; it is the within-class correlations that are zero. Although LPA specifies zero within-class correlations among the variables, it reproduces correlations among the variables because the variables are all influenced by the latent class variable, so the variables become correlated when mixing across the classes. If some variables correlate more than others, this can be because those variables differ more in means across the classes than other variables do. This means that you don't have to include WITH statements to make your model fit. Perhaps you need to include more classes, which have particularly high across-class mean differences on the highly correlated variables. It is also the case that IF you allow WITH for some variables, you may be able to use a smaller number of classes and still get the same model fit. WITH represents within-class correlation and should have a well-interpretable substantive meaning, such as a measurement methods effect. So, to some extent classes and WITHs have similar effects on model fit, and substantive arguments will have to be brought in to make a choice. Related to this, you may also study chapter 3 of the Hagenaars-McCutcheon latent class book of 2002, published by Cambridge University Press, "Applied Latent Class Analysis".
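[For example, a within-class covariance for one pair of variables (hypothetical names y1, y2) is added in the class-invariant part of the MODEL command:]

```
MODEL:
  %OVERALL%
  y1 WITH y2;   ! within-class covariance, e.g., a shared method effect
```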
 Tom Hildebrandt posted on Tuesday, November 30, 2004 - 10:25 am
As I've seen LPA used and described as a way to identify homogeneous populations within a larger heterogeneous population, indicator variables are usually either all continuous or all binary/categorical. What are the potential problems of combining binary/categorical indicators and continuous indicators in the use of LPA?
 Linda K. Muthen posted on Tuesday, November 30, 2004 - 10:46 am
This should not present a problem. It can be done in Mplus.
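[A sketch of a mixture model combining the two indicator types, with hypothetical names; u1 and u2 get class-specific thresholds (endorsement probabilities), y1 and y2 get class-specific means:]

```
VARIABLE:  NAMES = y1 y2 u1 u2;
           CATEGORICAL = u1 u2;   ! binary indicators
           CLASSES = c(2);
ANALYSIS:  TYPE = MIXTURE;
```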
 Tom Hildebrandt posted on Tuesday, November 30, 2004 - 10:54 am
Thank you for the quick response.

Do you know of a good example where this mixed model has been applied, using LPA to describe subpopulations within a heterogeneous group? I'm curious how descriptions of the differences between groups on the indicator variables are made (means for continuous indicators and item endorsement probabilities for binary/categorical ones).
 Linda K. Muthen posted on Tuesday, November 30, 2004 - 10:57 am
I don't know of any reference for this.
 bmuthen posted on Tuesday, November 30, 2004 - 11:00 am
Some of this is discussed in the Vermunt-Magidson chapter 3 in the Hagenaars-McCutcheon book Applied Latent Class Analysis.
 Tom Hildebrandt posted on Tuesday, November 30, 2004 - 11:48 am
Thank you both again.

I'm still waiting for the book to arrive. I'm anxious to get a chance to read through it, given your previous recommendations for LCA-related questions.
 Scott Roesch posted on Tuesday, November 30, 2004 - 6:25 pm
Can anyone point me to a resource in which latent profile analysis was used with MPlus, and/or a general introduction to latent profile analysis including a description of the parameters that the analysis generates to determine these profiles? Thanks!
 bmuthen posted on Tuesday, November 30, 2004 - 7:49 pm
Although not using Mplus, the Vermunt-Magidson chapter 3 in the Hagenaars-McCutcheon book Applied Latent Class Analysis is useful in this regard. An introduction using Mplus has yet to be written.
 Scott C. Roesch posted on Friday, January 28, 2005 - 4:41 pm
We have just run a latent profile analysis using Mplus. We have 18 variables that are continuous in nature and 1 variable that is categorical with 4 levels or groups. With respect to the output, we understand how to interpret the output for the 18 continuous variables. However, the output for the 1 categorical variable is unclear to us. Values for this variable are listed under the heading Means, and give us values for only 3 of the 4 groups that compose this categorical variable. Our questions include (a) why are these categories listed under Means? (b) shouldn't we be getting proportions for this variable since it is categorical? and (c) in general, if these means are interpretively meaningful, what do negative means tell us? Thank you for any help you can provide.
 Linda K. Muthen posted on Friday, January 28, 2005 - 8:13 pm
If this is an observed categorical variable, then you should get thresholds. This variable should be on the CATEGORICAL list. If this is a categorical latent variable, you should get means. I think you mean the former but am not totally certain.
 Scott C. Roesch posted on Saturday, January 29, 2005 - 9:18 am
I have now changed the categorical variable to be listed as CATEGORICAL rather than NOMINAL, and received the thresholds. I guess I am still confused about why I did not receive probabilities for these as well, like one receives in an LCA. Thanks!
 Linda K. Muthen posted on Saturday, January 29, 2005 - 1:46 pm
With binary outcomes, CATEGORICAL and NOMINAL should yield the same results. I suggest that you send the two outputs and data to support@statmodel.com to be checked. You may not be using the most recent version of Mplus or there may be another explanation. I would need more information to determine this.
 JJ posted on Friday, February 04, 2005 - 4:34 pm
I have a question regarding the determination of the appropriate number of classes in an LPA. For example, if the Vuong-Lo-Mendell-Rubin likelihood ratio test is not significant for a 3-class solution (compared to a 2-class solution) but the BIC is smaller for the 3-class solution, which should trump? Meaning... how do I go about evaluating whether the 2- or 3-class solution is superior?
 bmuthen posted on Friday, February 04, 2005 - 6:03 pm
This does not have a simple answer. BIC and LMR can disagree. You may also want to consider sample-size-adjusted BIC which has shown superior results in some studies. When fit indices do not give a clear answer I would go with interpretability - often a k-class solution is merely an elaboration of a (k-1)-class solution, not a contradictory finding.

Also, are you sure you are interpreting the MLR p value correctly? See the User's Guide.
 JJ posted on Monday, February 14, 2005 - 8:30 pm
Could you tell me how Mplus sorts results files (.dat)? I have imported a results file into SPSS and want to be able to link subjects to their original case IDs -- I should be able to do this if I can figure out how Mplus is sorting the file. Just so you have a little background (if necessary to answer the question), the LPA that I conducted includes only a subset of the total subjects in the original data file. The original file includes 3 sets of subjects, and I used the command syntax to include only those subjects with a code=1 on a categorical variable in the data set. Thus, the LPA was only conducted on these subjects in this specific analysis.
 Linda K. Muthen posted on Tuesday, February 15, 2005 - 6:57 am
Sorting varies. If you are saving individual data, you should be able to use the IDVARIABLE option of the SAVEDATA command.
 JJ posted on Tuesday, February 15, 2005 - 9:34 pm
I have now been able to save the ID, but it is being saved like this: 10.000**********, which is not the proper format. The subject IDs are supposed to look like this: 030100102. Can you suggest how I might change the commands so that the subject IDs are accurately saved?
 Linda K. Muthen posted on Wednesday, February 16, 2005 - 7:21 am
See the Mplus User's Guide, where it states that the length of the ID variable cannot exceed seven. You will have to shorten this variable. There is usually a unique part that does not exceed seven digits.
 Anonymous posted on Thursday, February 17, 2005 - 4:56 pm
Regarding the interpretation of the MLR discussed on Feb. 4... If the p value of the MLR is less than .05, this means that the k-class solution is superior to the k-1 class solution? Conversely, if the p value is greater than .05, the k-1 class solution is superior. Is this correct? Thank you.
 bmuthen posted on Thursday, February 17, 2005 - 5:08 pm
You mean LMR (Lo-Mendell-Rubin). Yes, your description is correct.
 Anonymous posted on Saturday, March 19, 2005 - 1:51 pm
I am running a latent profile analysis (LPA) of four count variables that index health care utilization (e.g., # of ER visits). Initially I plunged ahead and did the LPA and found that a two-class solution was indicated by the Vuong-Lo-Mendell-Rubin and Lo-Mendell-Rubin likelihood tests (i.e., the two-class solution was superior to the one-class solution, and the three-class solution did not improve on the two-class solution). At the same time, the BIC argued for a single class. I became concerned with the inconsistency and (as I should have done originally) investigated the "Poisson-ness" of the utilization variables. On convexity plots, three of the four variables showed deviations from Poisson-ness. I suppose my initial question is, "In the latent mixture model context of an LPA, how robust are findings to violations of dispersion for count (Poisson) variables?" I took the additional step of running LPAs with inflation parameters. This showed more consistent results in terms of the likelihood ratio tests and the BIC, and argued for the existence of three groups. The problem with this is that I cannot seem to test 3 vs. 4 groups in order to establish this classification scheme with more certainty. I am receiving several error messages and do not think I am going to get the model to run. So I suppose my next question is this: "Assuming that I do not get the 3 vs. 4 class model to run, would it be reasonable to acknowledge the existence of three classes, establish that the three-class solution is a variation on the two-class (k-1) model, and move on with my analyses using three groups?"
 bmuthen posted on Saturday, March 19, 2005 - 4:12 pm
It sounds like you needed the zero-inflated version of the Poisson model. But you say you don't get a solution for 4 classes - or perhaps you don't get a tech11 (LMR) result in the 4-class run; I am not sure from your message. If you have tried to use many random starts (say starts = 100 5) and still fail, it may be due to 4 classes being too ill defined in these data, and staying with 3 is the way to go. So my inclination would be to say yes to your last question.
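[A sketch of the zero-inflated Poisson LPA with the suggested random starts; the variable names are hypothetical, and the (i) specification requests the inflation part:]

```
VARIABLE:  NAMES = u1-u4;
           COUNT = u1-u4 (i);   ! (i) = zero-inflated Poisson
           CLASSES = c(4);
ANALYSIS:  TYPE = MIXTURE;
           STARTS = 100 5;      ! 100 random start sets, 5 final-stage optimizations
```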
 Anonymous posted on Friday, April 22, 2005 - 10:02 am
Hello
I have run a k-means cluster analysis and an LPA on the same set of data. I found an 8-cluster solution that made sense, but with LPA I only found 4 classes (a 5-class model would not converge).
I've tried varied start values for the LPA, allowed variables to correlate within class, etc. in attempts to try to get the same number of groups across both methods.

My question is: should I expect the procedures to uncover the same number of classes/clusters or could I find different solutions because one method is uncovering latent groups/subpopulations and one method is working more on the observed level?
 bmuthen posted on Friday, April 22, 2005 - 11:07 am
I think k-means clustering uses a more restrictive model than LPA - doesn't it also assume equal variances across variables (in addition to the assumption of equality of variances across clusters)? See, for example, McLachlan's new Wiley book on microarray analysis. In Mplus you can add the equal-variance restriction.
 Anonymous posted on Friday, April 22, 2005 - 11:51 am
Thank you for your reply.
You're correct - in k-means, variances should be roughly equal across variables.
I was wondering if the difference in solutions was related to the "level" of results (latent classes vs. observable clusters). In general, I haven't seen LPA models uncover as many groups as cluster analysis (mainly 2-4 classes found). I know that hierarchical cluster methods (e.g., Ward's) let you 'see' the different cluster solutions and was wondering if this was similar to the differences between k-means and LPA.

In Mplus, the default is equal variances across classes, correct? Is this relaxed with the WITH statement to allow correlations between variables?
 bmuthen posted on Friday, April 22, 2005 - 3:20 pm
I don't see the "level" of results as being different between the two approaches. You can "visualize" the LPA results by using Mplus to plot the observed variable mean profiles for the different classes. You probably get more LPA classes when you hold variances equal across variables (try it). Yes, the Mplus default is equal variances across classes (but not across variables). And adding WITH statements relaxes the conditional independence assumption, allowing correlations. See also the Vermunt-Magidson chapter in the Hagenaars-McCutcheon Applied LCA book (Mplus web site refs).
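[The profile plot can be requested with something like the following, where the variable names are hypothetical and the (*) lists the variables as a series in profile order:]

```
PLOT:
  TYPE = PLOT3;
  SERIES = y1-y5 (*);   ! mean profiles of the indicators by class
```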
 Anonymous posted on Monday, April 25, 2005 - 8:03 am
Thank you again for your reply.

How do you hold the variances equal across variables?
I'm not sure if this is needed, since I am dealing with T-scores, but the variances should differ by class.

Also, on p. 121 of the Mplus (Version 3) manual, an example mentions that by mentioning the variances of the latent class indicators, the default equality constraint of equal variances (across classes) is relaxed. Will this allow for estimates of different variances within each class as well as different variances for individual variables?

However, to compare to k-means, which creates groups based on minimizing within-cluster error, shouldn't the Mplus default be imposed?
 Linda K. Muthen posted on Monday, April 25, 2005 - 2:56 pm
To hold variances equal across variables, give the variable names (which is how you refer to variances) and use parentheses with a number inside to represent equality:

y1 y2 y3 (1);

holds the variances of y1, y2, and y3 equal.

In the example, the equality constraint on a regression slope is relaxed. If you want to relax the equality constraint on another parameter such as a variance, then you would mention that parameter.

If you want to compare to k-means, then you should place the same constraints as k-means does.
 bmuthen posted on Monday, April 25, 2005 - 3:01 pm
You mention T scores so it sounds like you are standardizing your observed variables. This may be necessary for k-means clustering. I would, however, recommend not doing that in the LPA - and if the variables have different metrics then also not hold the variances equal across variables (only across classes).
 Anonymous posted on Tuesday, April 26, 2005 - 1:36 pm
Regarding yesterday's discussion about comparisons between LPA & k-means - Thank you very much. You both cleared up a lot of questions.

Dr. Muthen, you mentioned that I would probably get more LPA classes when variances were held equal across variables (4/22 note) -- and this did produce results very similar to k-means (up to 8 classes found before nonconvergence).

However, when variances were allowed to vary across classes (but not across variables), fewer classes were found (up to 4).

Why would relaxing an assumption lead to finding fewer classes?
Thanks again for your assistance
 bmuthen posted on Tuesday, April 26, 2005 - 2:02 pm
The more flexible the model is for each class, the better it can fit the data, and therefore the fewer classes you need. Your finding suggests that the "true" classes have different variances (across classes). If class-varying variances are the true state of nature and you force classes to have equal variances in your analysis, you have to have more classes in order to fit the data. The same is true if the true state of nature is within-class covariance - if you force classes to be formed with uncorrelated variables within class, then you need more classes to fit the data (this can be visualized if you draw a 2-dimensional plot with a single correlated pair of variables - that 1-class data situation needs 2 or more uncorrelated classes to be fit).
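[Freeing the variances across classes is done by mentioning the variances within each class-specific part of the MODEL command; the variable names and two-class setup here are hypothetical:]

```
MODEL:
  %OVERALL%
  %c#1%
  y1-y5;   ! mentioning the variances in a class frees them
  %c#2%    ! from the default across-class equality
  y1-y5;
```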
 Anonymous posted on Wednesday, April 27, 2005 - 1:07 pm
Re: yesterday's conversation: Thank you very much.
So, if I have this right, with LPA we may want to start with a restrictive model (essentially k-means) and systematically "relax" assumptions (allow different variances across classes, allow covariances within class) until we find the model that fits best in terms of parsimony, interpretability, and fit indices -- correct?

Is there any reference for this procedure or is it just standard practice?
Thanks again -- this conversation has been most helpful.
 BMuthen posted on Wednesday, April 27, 2005 - 5:58 pm
Sounds correct.

See Chapter 3 by Vermunt and Magidson, Latent cluster analysis, in Hagenaars and McCutcheon's book Applied Latent Class Analysis.
 Anonymous posted on Saturday, May 21, 2005 - 5:29 pm
I am trying to specify a latent profile analysis with covariates. I want the latent class variable to be measured by one set of variables, and class membership to be "predicted" using a *different* set of variables. Most of the examples in Chapter 7 of the User's Guide have the covariates ALSO affecting (or covarying with) the indicators of class membership.

I've tried this:

model: %overall%
c#1 by Fsamed FsameBm AshrCC blauCCm blauCCv;
c#1 on meanphd acadappl psoc pmale quant sameIB samephdB;

But Mplus output tells me this is no longer allowed, and that I should see Chapter 9, which is about multilevel modeling and complex data ... I couldn't see the connection. Can you tell me how to model this?
 Linda K. Muthen posted on Saturday, May 21, 2005 - 5:45 pm
The BY option was used in Version 1 for latent profile analysis. It is no longer used. See Example 7.12 for the Version 3 specification. Just delete the CATEGORICAL option because your indicators are continuous and delete the direct effect u4 ON x; from the MODEL command.
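[Adapting Example 7.12 as described, with hypothetical indicator and covariate names, the specification reduces to something like:]

```
VARIABLE:  NAMES = y1-y4 x1-x3;
           CLASSES = c(2);
ANALYSIS:  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  c#1 ON x1-x3;   ! covariates predict class membership only,
                  ! no CATEGORICAL list, no direct effects on indicators
```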
 Anonymous posted on Sunday, May 22, 2005 - 6:26 pm
Thanks for the quick reply, that worked!
I am now wondering about how to get all possible contrasts for the multinomial logistic regression of the latent class variable on the covariates. I am working with 3 classes.

When I type:

c#1 on meanphd acadappl psoc pmale quant sameIB samephdB;
c#2 on meanphd acadappl psoc pmale quant sameIB samephdB;

MPLUS appears to give me the effect that each of these covariates has on the probability of being in the stated class (1 or 2) relative to being in class 3. But what about the probability of being in class 2 relative to class 3? MPLUS would not allow me to make any reference to the "last" class (#3) at all.
 Linda K. Muthen posted on Sunday, May 22, 2005 - 6:39 pm
c#2 on meanphd acadappl psoc pmale quant sameIB samephdB;

gives the probability of being in class 2 relative to class 3. You can't make reference to the last class. It is the reference class with coefficients zero. See Chapter 13 of the Version 3 Mplus User's Guide for a description of multinomial logistic regression.
 Anonymous posted on Thursday, May 26, 2005 - 2:52 pm
oops, sorry, I wasn't clear.

c#1 on meanphd acadappl psoc pmale quant sameIB samephdB;
gives the probability of being in class 1 relative to class 3.


c#2 on meanphd acadappl psoc pmale quant sameIB samephdB;
gives the probability of being in class 2 relative to class 3.

How do I get the probability of being in class 1 relative to class 2? (In Stata, "Mcross" gives you such results.)

thanks!
 Linda K. Muthen posted on Thursday, May 26, 2005 - 4:18 pm
You would have to make class 2 the last class to do this. You can do this by using the old class 2 ending values as user-specified starting values for class 3 in the run where you want to compare class 1 to class 3.
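[For instance, if the old class 2 had ending means of 1.2 and 3.4 on two indicators (values and names purely illustrative), those would be supplied as user-specified starting values for the last class:]

```
MODEL:
  %c#3%
  [y1*1.2 y2*3.4];   ! ending values from the old class 2, used as starts,
                     ! so the old class 2 becomes the reference class 3
```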
 Stephen Gilman posted on Tuesday, September 20, 2005 - 8:05 am
Hello, I am considering estimating a latent profile analysis using a set of behavior ratings measured on a 5-point Likert scale. An alternative to this would be treating the items as ordinal and estimating a latent class analysis. Another alternative is to consider the items as nominal. Is there any empirical way to determine which parameterization is most appropriate? The BIC from the 3 models is: 289368.688 from the LPA of the ratings treated as continuous indicators; 290569.173 from the LCA of the ratings treated as ordinal/categorical; and 290953.619 from the LCA of the ratings treated as nominal (2-class solution for each model). Thanks for your advice.
 Linda K. Muthen posted on Wednesday, September 21, 2005 - 7:37 am
I don't think you can make this determination by comparing BICs. I would need to know more about these variables to answer this, but basically if a variable is ordered polytomous, it is best to treat it that way. If it does not have strong floor or ceiling effects, you may be able to treat it as continuous. I am not sure why you would want to treat it as nominal.
 Sandra posted on Thursday, October 20, 2005 - 7:41 am
Hello,

I’m working on a latent profile analysis using seven scales which measure different life goals. I tried some mixture models where I allowed variables to be correlated within classes and with variances allowed to vary across classes.

My problem is that even in the two-class solution I obtain a class in which one scale (a_aibz) has a variance of zero. This scale measures relationship goals and already has a very small variance in the empirical data set. Fixing the variance to zero in one class does not solve the problem, because Mplus tells me that the covariance matrix could not be inverted.

Is there anything I can do to avoid this? Shall I drop the scale from the analysis? If not, what is the reason for this problem?

I attach the output of the two class mixture solution with variances set free across the classes:

VARIABLE: Names are
a_aipw a_aibz a_aigs
a_aige a_aiws a_airu a_aiat a_aihe;
Usevar are a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat;
missing are all (-99);
classes = c(2);
Analysis: Type = mixture;
start = 50 10;
miterations = 1000;
Model:
%overall%
a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat;
%c#1%
a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat;
%c#2%
a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat;

Output:
sampstat tech1 tech2 tech3 tech11 tech13 stand;
Plot:
Type = plot3;
Series = a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat(*);
SAVEDATA:
file is AI_2cl.dat;
save = cprobabilities;

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A CHANGE IN THE
CLASS COUNTS DURING THE LAST E STEP.

AN INSUFFICENT NUMBER OF E STEP ITERATIONS MAY HAVE BEEN USED. INCREASE
THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE CLASS COUNTS CHANGED IN THE LAST EM ITERATION FOR CLASS 1.

FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASSES
BASED ON THE ESTIMATED MODEL

Latent Classes
1 2591.21830 0.60727
2 1675.78170 0.39273

FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASS PATTERNS
BASED ON ESTIMATED POSTERIOR PROBABILITIES

Latent Classes
1 2591.21830 0.60727
2 1675.78170 0.39273

CLASSIFICATION OF INDIVIDUALS BASED ON THEIR MOST LIKELY LATENT CLASS MEMBERSHIP

Class Counts and Proportions
Latent Classes
1 2593 0.60769
2 1674 0.39231

Average Latent Class Probabilities for Most Likely Latent Class Membership (Row)
by Latent Class (Column)
1 2
1 0.999 0.001
2 0.001 0.999

MODEL RESULTS

Estimates

Latent Class 1

Means
A_AIPW 3.692
A_AIBZ 4.000
A_AIGS 3.137
A_AIGE 3.567
A_AIWS 2.663
A_AIRU 2.047
A_AIAT 2.692

Variances
A_AIPW 0.089
A_AIBZ 0.000
A_AIGS 0.280
A_AIGE 0.128
A_AIWS 0.453
A_AIRU 0.380
A_AIAT 0.383

Latent Class 2

Means
A_AIPW 3.358
A_AIBZ 3.497
A_AIGS 2.828
A_AIGE 3.195
A_AIWS 2.605
A_AIRU 1.945
A_AIAT 2.402



Variances
A_AIPW 0.204
A_AIBZ 0.196
A_AIGS 0.319
A_AIGE 0.261
A_AIWS 0.420
A_AIRU 0.333
A_AIAT 0.377

Categorical Latent Variables

Means
C#1 0.436



I’m looking forward to hearing from you, thank you very much.
 Linda K. Muthen posted on Thursday, October 20, 2005 - 9:53 am
Your options are to increase the MITERATIONS as the error message suggests, hold the variance of the problem variable equal across classes, or remove the variable from the analysis.
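For reference, a minimal sketch of the second option, based on the input posted above: in a mixture model, variances are held equal across classes unless they are mentioned in a class-specific part, so simply omitting the problem variable from the class-specific variance lists keeps its variance class-invariant.

```
MODEL:
%OVERALL%
a_aipw a_aibz a_aigs a_aige a_aiws a_airu a_aiat;
%c#1%
! a_aibz is omitted here, so its variance stays equal across classes
a_aipw a_aigs a_aige a_aiws a_airu a_aiat;
%c#2%
a_aipw a_aigs a_aige a_aiws a_airu a_aiat;
```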

As a rule, if it is necessary to show output to describe a problem, you should send your input, data, output, and license number to support@statmodel.com. We try to reserve Mplus Discussion for shorter posts.
 Kris Anderson posted on Monday, March 20, 2006 - 2:23 pm
I would like to run an LPA on personality trait data we have collected. The data include both probands and siblings. I would like to examine the latent profiles, but I feel the sibling relations should be modeled. How would I best do this?
 Bengt O. Muthen posted on Monday, March 20, 2006 - 3:44 pm
See the LCA section of my paper under Recent Papers on our web site:

Muthén, B., Asparouhov, T. & Rebollo, I. (2006). Advances in behavioral genetics modeling using Mplus: Applications of factor mixture modeling to twin data. Forthcoming in the special issue "Advances in statistical models and methods", Twin Research and Human Genetics.
 Kris Anderson posted on Wednesday, April 05, 2006 - 11:30 am
I have downloaded this paper and am trying to recreate these models. However, I am new to LPA, and I am unsure how to account for the presence of two latent class variables (and two groups of individuals) in the model. Is there another resource you might recommend?
 Linda K. Muthen posted on Thursday, April 06, 2006 - 8:36 am
See Example 7.18 in the Version 4 Mplus User's Guide which is available on the website. You can also email bmuthen@ucla.edu to request the inputs.
 Kris Anderson posted on Thursday, April 06, 2006 - 10:10 am
Thank you. I haven't had much luck using the sample for this model. I'll e-mail him.
 Michael Beets posted on Wednesday, August 02, 2006 - 5:05 am
I am running an LTA for two time points on 10 Likert-scale items at each time point and have arrived at a 4-class model (2 at each wave). I am attempting to run the model with the variances estimated separately for each class. I am unsure whether I have specified the correct model commands to request this.

Model c1:

%c1#1%
[s3ptp1-s3ptp12*] ;

s3ptp1-s3ptp12 ;

%c1#2%
[s3ptp1-s3ptp12*] ;

s3ptp1-s3ptp12 ;


Model c2:

%c2#1%
[s4ptp1-s4ptp12*] ;

s4ptp1-s4ptp12 ;


%c2#2%
[s4ptp1-s4ptp12*] ;

s4ptp1-s4ptp12 ;

Further, when I run this I receive the following error message:

THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.

WARNING: WHEN ESTIMATING A MODEL WITH MORE THAN TWO CLASSES, IT MAY BE NECESSARY TO INCREASE THE NUMBER OF RANDOM STARTS USING THE STARTS OPTION TO AVOID LOCAL MAXIMA.

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.

Any suggestion would be appreciated.
 Linda K. Muthen posted on Wednesday, August 02, 2006 - 9:40 am
I would need to see your input, data, output, and license number at support@statmodel.com to answer this.
 Phil Herzberg posted on Monday, October 02, 2006 - 1:30 am
Hello,

I have two questions concerning LPA:
1) In the LCA & Cluster Analysis discussion, bmuthen posted on Wednesday, February 08, 2006 - 6:29 pm that the LRT can be bootstrapped in Mplus 4. How do I bootstrap the LRT (I assume it is not possible with the MLR estimator)?

2) In the output we got the message that IT MAY BE NECESSARY TO INCREASE THE NUMBER OF RANDOM STARTS USING THE STARTS OPTION TO AVOID LOCAL MAXIMA.
We have 5 continuous indicators with a range from 1 to 5. What is a good way to obtain starting values, and are the starting values means in this case?
Can you give an example?

Any suggestion would be appreciated.
 Bengt O. Muthen posted on Monday, October 02, 2006 - 2:42 pm
1) Use TECH14 in the OUTPUT command.

2) See the "STARTS" option in the version 4.1 UG on our web site.
 Kelly Hand posted on Tuesday, October 03, 2006 - 6:57 pm
Hello

I have run both a latent class analysis and a latent profile analysis using 5 ordinally scaled items (a 5-point agreement scale, with 3 indicating "mixed feelings") about mothers' attitudes to employment and child care, to create a typology of mothers' employment "preferences". I plan to test this typology with a subsample of qualitative interviews.

I have found that the LPA solution is easier to interpret and is a better solution (although they are both good), but I am concerned that it may not be acceptable to use 5 ordinal items in this manner. Is this OK to do in your opinion?

Unfortunately the survey only used a very limited number of items about this topic so I am unable to include any more items or create a scale.

I have also tried to search for a reference to support this but have had no luck. If you think it is an appropriate approach to take do you have any suggestions for a reference I could include in my paper?
 Bengt O. Muthen posted on Tuesday, October 03, 2006 - 8:16 pm
This boils down to the usual choice of treating ordinal variables as categorical or continuous. I think treating them as continuous, using linear models, is often reasonable unless you have strong floor or ceiling effects. I would, however, worry if the two approaches gave different interpretations - if they do, I would be more inclined to rely on the categorical version. I would also make sure that a sufficient number of random starts (STARTS=) has been used, so that you have obtained the correct maximum likelihood solution. I can't think of relevant literature here.
 Phil Herzberg posted on Wednesday, October 04, 2006 - 6:37 am
Dear Dr. Muthén,

We would like to test four models (4 variables):
1) Variances are held equal across classes, covariances among latent class indicators are fixed to zero
2) allowed for class-dependent variances but constrained covariance terms to zero
3) allowed for class-dependent variances and held selected covariances equal across classes
4) allowed for class-dependent variances and allowed free estimation of selected covariance estimates within class

Are these the corresponding model-inputs?

Ad 1) %OVERALL%
Ad 2) %OVERALL%

%c#1%
y2 y3 y4 y5;

%c#2%
y2 y3 y4 y5;

We have no idea how to write the syntax for models 3 and 4, respectively. Could you help with an example for models 3 and 4?


For model 2 we get the message: "All variables are uncorrelated with all other variables within class. Check that this is what is intended." Does the model 2 syntax correspond to what is intended by hypothesis 2?

Thank you very much in advance,
Phil
 Bengt O. Muthen posted on Thursday, October 05, 2006 - 7:02 am
Class-specific variances are obtained by mentioning them within each class, e.g.

%c#1%
y2-y5;

Free covariances are obtained by saying e.g.

y2 with y3;
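Assembling these fragments, one possible sketch of models 3 and 4 from the post above (y2-y5 stand in for your variables; this is a hypothetical layout, not verified against your data). A WITH statement in %OVERALL% is held equal across classes unless it is repeated within a class:

```
! Model 3: class-varying variances, selected covariance equal across classes
%OVERALL%
y2 with y3;        ! held equal across classes by default
%c#1%
y2-y5;             ! class-specific variances
%c#2%
y2-y5;

! Model 4: additionally free the selected covariance within each class
%OVERALL%
y2 with y3;
%c#1%
y2-y5;
y2 with y3;        ! mentioning it here frees it in this class
%c#2%
y2-y5;
y2 with y3;
```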
 Phil Herzberg posted on Thursday, October 05, 2006 - 8:24 am
Dear Dr. Muthén,

Thank you very much for your help.
Can we ignore the message for model 2 (does the model 2 syntax correspond to what is intended by hypothesis 2)?

All variables are uncorrelated with all other variables within class. Check that this is what is intended.

Thank you!
 Linda K. Muthen posted on Thursday, October 05, 2006 - 9:45 am
Yes, you can ignore it if that is what you intended.
 Phil Herzberg posted on Tuesday, October 24, 2006 - 3:25 am
Dear Linda,

Thank you for your help. I was successful in reordering the classes while maintaining all other parameters. My last question is how to reorder (last class as the largest) a model with this structure:

MODEL:
%OVERALL%

What happens to the bootstrapped Lo-Mendell-Rubin likelihood ratio test when the last class is not the largest one (for this model and in general)?

Thanks again, this conversation has been most helpful.
 Linda K. Muthen posted on Tuesday, October 24, 2006 - 7:34 am
If you do not have class-specific MODEL parts, then you can't make the largest class last.

We delete the first class when testing the k and k-1 classes. This is why we suggest putting the largest class last. You would not want it to be deleted.
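To illustrate what a class-specific MODEL part can look like for this purpose, one common approach is to supply class-specific starting values near the means of the class you want in each position, and turn off random starts so they are used (a hypothetical sketch; y1, y2, and the starting values are placeholders):

```
ANALYSIS:
TYPE = MIXTURE;
STARTS = 0;           ! use only the user-supplied starting values
MODEL:
%OVERALL%
%c#1%
[y1*0.5 y2*0.6];      ! start values near the smaller class's means
%c#2%
[y1*2.0 y2*2.2];      ! start values near the largest class's means
```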
 Bruce A. Cooper posted on Tuesday, February 20, 2007 - 5:31 pm
I'd like to obtain an LPA but allow correlations/covariances within class to be nonzero. Is this the way to do it?
E.G.:
...
CLASSES = c(3);
ANALYSIS:
TYPE = MIXTURE ;
MODEL:
%OVERALL%
y1 WITH y2 y3 y4 y5 y6 y7 y8 ;
y2 WITH y3 y4 y5 y6 y7 y8 ;
y3 WITH y4 y5 y6 y7 y8 ;
y4 WITH y5 y6 y7 y8 ;
y5 WITH y6 y7 y8 ;
y6 WITH y7 y8 ;
y7 WITH y8 ;
 Thuy Nguyen posted on Wednesday, February 21, 2007 - 11:12 am
Yes, this will free the covariances within class while holding them equal across class.
 anonymous posted on Thursday, March 29, 2007 - 11:41 am
Good day,

I'm sorry in advance if my question appears naive; I am new to these methods and in a geographic area where few "coaches" exist.

I am trying to allow for conditional dependence (within-class correlations) in a latent profile analysis of seven different variables. I found at least four different ways of allowing conditional dependence:

(1) Including within-class WITH statements between all of my indicators.
(2) Running a model with conditional independence and relying on modification indices to allow for partial conditional independence (including within-class WITH statements between the variables that "could" be correlated according to the modification indices).
(3) Doing a factor mixture model without within-class BY statements (fixing the factor loadings to remain equivalent across classes).
(4) Doing a factor mixture model with within-class BY statements (allowing for differential factor loadings across classes).

I believe that the main advantage of models 3 and 4 is that they result in fewer parameters being estimated.

However, I believe that the real "essence" of conditional dependence is more clearly captured by models 1 or 2. Am I right?

Are there any other arguments, or advantages and disadvantages, of doing it one way or the other?

Thank you very much for your time.
 Bengt O. Muthen posted on Thursday, March 29, 2007 - 8:59 pm
1. Leads to an unstable model in line with the Everitt-Hand book that we cite under Mplus Examples - not recommended.

2. MI's don't work very well with mixture models, probably due to non-smooth likelihood surface - not recommended

3. Good idea; works well

4. Ok; not always needed beyond 3. Class-varying factor variances can be introduced instead.
 anonymous posted on Friday, March 30, 2007 - 3:20 am
Thank you very much for this answer.

It clarifies things a lot.

Could you please expand a bit on your answer to 4 (or suggest a reading on this topic)? I'm not sure that I properly understand why freeing up the within-class factor variance would be equivalent to the model with free within-class "BY" statements, or why the more complex model will not be needed beyond model 3.

Best regards
 Bengt O. Muthen posted on Friday, March 30, 2007 - 8:34 am
Letting factor variances vary across classes is not the same as letting factor loadings vary across classes. However, I have found that a model with class-invariant loadings and class-varying variances often is suitable. I have tried several variations on the factor mixture modeling theme in my articles listed under "Papers" on our web site - see especially articles under the topics General Mixture Modeling and Factor Mixture Analysis.
 Alex posted on Tuesday, April 03, 2007 - 9:17 pm
My goal is to do an LPA (with two classes) of 3 variables (XX, XY, XZ). After trying the classical model, I can restrict the indicator variances to be equal within class. Then I can try a less restricted model by allowing the variances to vary between classes.

Following on the previous discussion, if I want to allow for conditional dependence, I should rely on a factor mixture model letting the factor variances (and maybe the loadings) vary across classes.

My question is how I can combine conditional dependence (factor mixture) with the previous modifications of equal within-class variances (A) and of unequal between-class variances (B). Can I use commands such as these, or is there any additional "twist"?
A:
%OVERALL%
f BY XX XY XZ ;
[f@0];
%c#1%
f;
[XX XY XZ];
XX (1);
XY (1);
XZ (1);
%c#2%
f;
[XX XY XZ];
XX (2);
XY (2);
XZ (2);

B:
%OVERALL%
f BY XX XY XZ ;
[f@0];
%c#1%
f;
[XX XY XZ];
XX XY XZ;
%c#2%
f;
[XX XY XZ];
XX XY XZ;

If these commands are right, would it mean that Example 7.27 reflects a traditional LCA with conditional dependence, equal between-class variances, and unequal within-class variances?
 Bengt O. Muthen posted on Wednesday, April 04, 2007 - 7:10 am
Yes, you can do this. A couple of comments:

- it seems unusual to restrict variances to be equal across variables within classes as you do in Model A. Typically, the variables are different and therefore variances are not comparable

- your equality statements in Model A can be simplified to, say

xx xy xz (1);

- the residual variance differences across classes that you specify in Model B may not be easy to achieve because this is a less well-defined model

- Ex 7.27 is different in that it has class-varying loadings. This may not be needed, however.
 Alex posted on Wednesday, April 04, 2007 - 7:32 am
Thank you very much for this answer.
In fact, the variance restrictions do indeed fit less well, suggesting differences.
 Michael Giang posted on Saturday, April 28, 2007 - 12:24 am
I ran an LPA with 6 continuous indicators (values ranging between -2 and +5). Here's the dilemma:

The LMR indicates a 5 class model, and this makes substantive sense.

However, the AIC/BIC/ABIC values continue to decline (never rising), and I've tested this up to an 8-class model. BUT, like a scree test, the differences in IC values between models do decline greatly after the 5-class model.

In addition, the BLRT remains non-significant at every step/model.

No warnings were found, and I used STARTS = 500 20;

I'm in the process of correlating the variables (which I am not a fan of), but thus far, no resolution.

What do I make of this? Advice/suggestions?
 Linda K. Muthen posted on Saturday, April 28, 2007 - 8:46 am
Sometimes statistics do not provide a clear indication of the number of classes. In this case, you need to rely on substance. It may be that LPA is not the best model for the data.
 Michael Giang posted on Saturday, April 28, 2007 - 11:10 am
I'm satisfied with the 5-class model. In addition to substantive sense, I chose it based on 1) the LMR being and remaining non-significant after the 5-class model, and 2) the IC values beginning to level off after the 5-class model. And what do I make of the BLRT being non-significant at all steps? I plan on reporting the BLRT, but indicating that it is a potential limitation of the study. Is this all sufficient?
 Bengt O. Muthen posted on Saturday, April 28, 2007 - 12:06 pm
Your statement that the BLRT is non-significant for all classes confuses me. In a k-class run, the BLRT gives a p value for a (k-1)-class model being true versus the k-class model. So, a non-significant result (p > 0.05) says that the (k-1)-class model is acceptable. So your statement implies that the 1-class model is acceptable as judged by the BLRT. Is this what you mean? If so, I would think the BLRT is not applied correctly, because it would imply that your variables are uncorrelated.
 Michael Giang posted on Saturday, April 28, 2007 - 12:13 pm
Apologies. I meant that the BLRT remains significant for all classes/models, tested up to the 8-class model.
 Bengt O. Muthen posted on Saturday, April 28, 2007 - 12:39 pm
If BLRT is correctly applied (no warning messages), that could be a sign of having a lot of power due to a large sample size, in which case I would rely on the substantive reasons for choosing number of classes.
 Michael Giang posted on Saturday, April 28, 2007 - 1:29 pm
Thanks for the quick replies. The sample size was large (2000+). However, there was one warning: "to increase the number of random starts using the starts option to avoid local maxima". I got this warning even after increasing the starts (500 20, and 1000 20), but I have read that this warning is typically issued?

So the take-home messages are to rely on substantive reasons, report the LMR and IC values for statistical support, and also report the BLRT (being significant at each model) while noting that it is sensitive to large sample size (thus power)?
 Linda K. Muthen posted on Saturday, April 28, 2007 - 3:11 pm
The starts referred to in the warning are those of the LRTSTARTS option, not the STARTS option. The default is LRTSTARTS = 0 0 20 5; You might try LRTSTARTS = 0 0 40 10;

I don't consider a sample size of 2000+ to be that large.

I assume that you have replicated the loglikelihood of your analysis model.
 Michael Giang posted on Saturday, April 28, 2007 - 3:57 pm
I've tried increasing to LRTSTARTS = 0 0 40 10, and the warning remains. No change in the IC/LMR/loglikelihood values.
 Linda K. Muthen posted on Saturday, April 28, 2007 - 4:33 pm
This sounds like a problem that is specific to your model and data. If you would like us to look at it further, please send your input, data, output, and license number to support@statmodel.com.
 Sanjoy Bhattacharjee posted on Wednesday, May 09, 2007 - 1:10 pm
Dear Dr. Muthen(s),

We have Y11 ... Y1T, Y21 ... Y2T, ..., Yn1 ... YnT;
Yit is continuous and Yit = f(Xit), where "i" indexes the unit and T is the Tth time period.

We want to extract the possible grouping using the Y's as indicators.

I believe our panel-mixture analysis will be along the lines of a latent profile mixture analysis (with covariates) rather than a latent class mixture analysis, since the Y's are continuous. Am I right? However, there is serial correlation, or at least it is likely, and we need to test that.

Q1. Could you kindly suggest any established research on panel-mixture analysis (where the rho across the error terms has to be calculated)?
Q2. Could we estimate the model using MPlus?

Thanks and regards
Sanjoy
 Linda K. Muthen posted on Wednesday, May 09, 2007 - 2:35 pm
Yes, if the outcomes are continuous it is referred to as a Latent Profile Analysis rather than a Latent Class Analysis.

Q1. I don't know of any literature.
Q2. Yes.
 Sanjoy Bhattacharjee posted on Wednesday, May 09, 2007 - 2:59 pm
Thank you Madam.
Sanjoy
 Alex posted on Tuesday, May 22, 2007 - 3:01 pm
Greetings,

I'm doing a latent profile analysis (7 indicators) with covariates (4). I'm running models with conditional independence and models with conditional dependence (CD). For the CD models, I rely on factor mixture models with class-varying intercepts (only).

Are there any problems if I run these analyses with standardized variables (indicators and covariates)?

Thanks in advance.
 Linda K. Muthen posted on Tuesday, May 22, 2007 - 3:23 pm
I would work with the raw data. I don't know why you want to standardize. If it is because the variables have large variances, I would rescale them by dividing them by a constant. A constant is not sample dependent as are the mean and standard deviation used for standardizing.
 Alex posted on Tuesday, May 22, 2007 - 8:38 pm
Greetings and thanks for the fast answer,

In fact, we have already done the analyses with standardized variables, following the suggestion of a colleague (because it made it easier to compare the latent classes).
What kind of problems might this cause?

Thank you very much in advance.
 Linda K. Muthen posted on Wednesday, May 23, 2007 - 12:56 pm
When you standardize variables, you are analyzing a correlation matrix not a covariance matrix. This is fine if your model is scale free but not if it is not. One example of a model that is not scale free is a model that holds variances equal across classes. If a model is scale free, the same results will be obtained whether a correlation or covariance matrix is analyzed. If I were you, I would rerun the analysis using the raw data.
 Alex posted on Wednesday, May 23, 2007 - 3:06 pm
Thank you again, so much for our laziness...

I imagine that the default LPA model is not scale free, since variances are held equal between classes.

Does this hold for the covariates, the indicators, or both?
 Linda K. Muthen posted on Wednesday, May 23, 2007 - 3:20 pm
I believe if the model is not scale free, all results would be affected. Analyzing raw data is your solution.
 Alex posted on Wednesday, May 23, 2007 - 3:29 pm
Thank you again. The day we manage to get these analyses done, Mplus support will clearly be at the beginning of our thank-you list.
 Michael P. Marshal posted on Monday, June 18, 2007 - 12:11 pm
Hello Bengt and Linda,

Thanks once again for this valuable resource and for the workshops you have conducted. I am attempting to estimate an LPA with four continuous indicators which are age variables ranging from (age) 5 to 40. The variables represent how old participants were when they reached each of four developmental milestones. Our analyses are attempts to identify sub-groups of individuals who progress through these milestones at different paces. Not an ideal way to model developmental phenomena, of course, but the best we can do with the cross-sectional data we have. My question is whether or not this seems conceptually and statistically reasonable (assuming the models fit well, etc.) and if you know of any other published data that uses similar (age) indicator variables in LPA? I'm a little concerned with the validity of our approach.

Mike
 Linda K. Muthen posted on Tuesday, June 19, 2007 - 8:17 am
Bengt and I discussed this and see no objections. It sounds like an interesting approach to looking at developmental milestones. Neither of us knows of any articles that use age in this way.
 Michael P. Marshal posted on Tuesday, June 19, 2007 - 12:38 pm
Thanks for your quick response. Glad to hear that you don't see any major red flags! Whew. We passed the first test... :-)
 Selahadin Ibrahim posted on Tuesday, August 07, 2007 - 2:04 pm
Hello,

My statistician is helping me with a latent class analysis. We are looking at latent classes in a group of workers with low back pain. The variables we are using to distinguish between the classes are pain, functional status, depression, fear, and some workplace factors. We used age, duration of complaint, and time on the job as predictors of class membership. We know that pain, functional status, depression, and fear probably correlate, so we added these correlations to the model. In the output I see that there is (among others) a significant covariance between pain and functional status within the 1st class (in a 2-class solution): Estimate: 24.463, SE: 5.0079, Est./SE: 4.816. How should I interpret this? Are assumptions violated?

We also saw in earlier analyses that a 4 class solution turned out to be the best fit.

Help is much appreciated, with kind regards,

Ivan Steenstra
 Matthew Cole posted on Tuesday, August 07, 2007 - 3:57 pm
Hi Selahadin,

The covariance suggests that among the members of class 1, there is a relationship between pain and functional status. If the correlation is negative, then it's a relationship moving in different directions. If the correlation is not significant in class 2, or if it's in a different direction than in class 2, then that's a really neat finding and supports the contention that there is heterogeneity in your sample.

Regarding your 4-class solution question: are you saying that in an earlier analysis without the covariates you found that a 4-class solution fit best, but that after adding the covariates only the 2-class solution fit? Bengt notes that class fit will change when covariates are added, and he has advocated that you should consider using the solution obtained when covariates are added.

Matt
 Selahadin Ibrahim posted on Wednesday, August 08, 2007 - 7:38 am
Hi Matt,

The correlations are as expected, somewhat different between the classes. In a few cases a correlation is present in one class and not in the other.

We haven't done the 3-, 4-, (and 5-) class analyses yet. (In previous analyses the 6-class solution didn't converge.) We realised we should do the analyses with the covariates added after reading Bengt's opinion on it, and his point makes sense to us. (Starting out with SPSS k-means, it's getting better all the time :-) We expect that again the 4-class solution will be best, since the individual memberships don't change that much, but it gives great info on how the constructs fit together. I expect to find more heterogeneity in the four-class solution. We were a bit worried about our n (approx. 400) when adding all these extras. We might want to look into a subgroup in our next step.

Thanks,

This is great help.

Selahadin and Ivan
 Bengt O. Muthen posted on Tuesday, August 14, 2007 - 6:51 pm
Note that whenever there are at least 2 latent classes, the observed variables will correlate. If in a latent class analysis you choose in addition to correlate variables *within* classes, saying e.g.

%overall%
y1 with y2;

then this means that your y1 and y2 variables correlate more than their common influence from the latent class variable can explain - so it is like a residual correlation. Often this comes about due to similar question wording or variables logically tied to each other.

Note that this is not a model violation. Although the standard LCA assumption of "conditional independence" no longer holds, you are using a perfectly legitimate generalized latent class model.
 Selahadin Ibrahim posted on Wednesday, August 15, 2007 - 12:16 pm
Dear Bengt,

OK, thanks very much. This helps a lot. The variables seem to be logically tied together. The LCA shows that some are correlated in certain classes and some are not, which is good information.

A different question. We are now looking into latent classes within a subgroup (from n=441 to n=183). Only those people who hadn't returned to work at the baseline interview are now included in the LCA. Same variables, same observed independent variables, but without the (residual?) correlations in the model. The 4-class solution now doesn't converge (the minimum number in one class is 33). I expected the 3-class solution to be optimal, because the "low risk" class seemed to overlap with those who had returned to work, and that's an easier variable compared to the 5 we've used to determine classes. Unfortunately we now don't get information on model fit. Is there a way to get around this? Or does it just tell us that the results should be interpreted with caution?

With kind regards,

Ivan
 Linda K. Muthen posted on Wednesday, August 15, 2007 - 2:52 pm
I would not expect a four-class solution to be optimal if you have basically removed one of the classes. I would expect the three-class solution to be better. You will not get any fit statistics if the model does not converge. Is this what you mean?
 Selahadin Ibrahim posted on Thursday, August 16, 2007 - 9:14 am
Hi Linda,

I also expected the 3 class solution to be the best fit.

Yes, that is what I mean. In the previous analysis (n=441) we could also get fit statistics for a model with 5 classes (optimal fit + 1 class). I was hoping to get them (for the 4-class model) in this analysis (n=183) as well. We changed the setting to 500 iterations, but that doesn't help. Well, it's a sensitivity analysis anyway.

Thanks for the great support!

Ivan
 Linda K. Muthen posted on Friday, August 17, 2007 - 8:44 am
Try STARTS = 8000 800; Five hundred starts may not be enough.
 Selahadin Ibrahim posted on Monday, November 19, 2007 - 7:55 am
OK, that worked, thanks. When submitting a paper, a reviewer might ask: why 8000 iterations? Any suggestions? By the way, it seems that the SEs decrease.

Another question:
One of the people in the team asked me what the main drivers for the class solution where, so we now have a description of what the classes look like and we know that the total model has a better fit in the 3 (and the 4) class solution, but which factors predict or drive the class membership. "Predict" might be a bit confusing since we also are using some counfounding variables as "predictors" . We where think of using a multinomial logistic regression to get the estimates. Do you have any other sugggestions, perhaps how we should model this using M-Plus?

Thanks,

Selahadin and Ivan
 Linda K. Muthen posted on Monday, November 19, 2007 - 11:01 am
It is not 8000 iterations. It is 8000 sets of initial random starts and 800 solutions carried out completely. Read in the user's guide under STARTS. You may not need that many. You should not compare standard errors from a local solution to a replicated solution.

The model is estimated with the objective of conditional independence of the latent class indicators within each class. If you want to use covariates to predict latent class membership, you can regress the categorical latent variable on a covariate or set of covariates.
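As a sketch of that last suggestion, the regression of the categorical latent variable on covariates goes in the %OVERALL% part of the MODEL command (the covariate names below are hypothetical stand-ins for age, duration of complaint, and time on the job):

```
VARIABLE:
CLASSES = c(3);
ANALYSIS:
TYPE = MIXTURE;
MODEL:
%OVERALL%
c ON age duration timejob;   ! multinomial logistic regression of class membership on covariates
```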
 Selahadin Ibrahim posted on Thursday, November 22, 2007 - 8:23 am
OK, thanks. It's now quite obvious that it would be useful to do the course next year :-). You have been a great help.

Ivan
 Vilma posted on Tuesday, November 27, 2007 - 6:35 am
I was running an LPA with 3 continuous indicators. My sample is 210. According to the fit criteria, it seems that there could be 5 profiles, but one of the profiles has only a few people. My choice was 4 profiles (it makes more sense from a theoretical point of view). These profiles differ from each other. The reviewers are giving me a hard time about LPA with a small sample and only 3 indicators. Basically, they said that I cannot get 4 profiles from 3 indicators (it is stretching the data too far). That may be true, but how could I check it? Or would it be better to have more indicators?
 Bengt O. Muthen posted on Tuesday, November 27, 2007 - 4:57 pm
I think what you are trying to do is possible, although the solution may not be very stable. The classic Fisher's iris data had n=150 with 4 continuous indicators and 3 latent classes - see the Everitt & Hand (1981) book. That model did not use the LPA assumption of zero correlations within classes and so is harder to fit. Perhaps the reviewers are thinking of LCA with binary indicators, in which case only 2 classes can be obtained with 3 indicators.

To convince the reviewers (and yourself) you can do two things. You can use the Mplus Tech14 facility to test for the number of latent classes. You can also use the Mplus Monte Carlo facility to simulate data with exactly your parameter values and see how well or poorly the model is recovered.
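A hypothetical Monte Carlo sketch in the spirit of the mixture examples in the User's Guide - the means, variances, and class-proportion logits below are placeholders, to be replaced with your own estimates:

```
MONTECARLO:
NAMES = y1-y3;
NOBSERVATIONS = 210;
NREPS = 500;
GENCLASSES = c(4);
CLASSES = c(4);
ANALYSIS:
TYPE = MIXTURE;
MODEL POPULATION:
%OVERALL%
[c#1*0.5 c#2*0.3 c#3*0.1];   ! logits implying your class proportions
y1-y3*1;                      ! within-class variances
%c#1%
[y1*2 y2*2 y3*2];             ! class means from your solution
%c#2%
[y1*1 y2*1 y3*1];
%c#3%
[y1*0 y2*0 y3*0];
%c#4%
[y1*-1 y2*-1 y3*-1];
MODEL:
%OVERALL%
[c#1*0.5 c#2*0.3 c#3*0.1];
y1-y3*1;
%c#1%
[y1*2 y2*2 y3*2];
%c#2%
[y1*1 y2*1 y3*1];
%c#3%
[y1*0 y2*0 y3*0];
%c#4%
[y1*-1 y2*-1 y3*-1];
```

The summary over replications then shows how well the 4-class, 3-indicator model is recovered at n=210.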

Having more indicators, however, certainly helps.
 Vilma posted on Wednesday, November 28, 2007 - 12:49 am
Thanks a lot!
 Julie Mount posted on Friday, May 16, 2008 - 2:54 pm
I have a question related to variable types (continuous, categorical, count) within LCA/LPA. I'm working with 11 variables that are neuropsychological test subscales. These subscale scores are really sums of successes on a number of binary items (e.g., remember name y/n). The subscale variables range in levels from 2 to 37, and some are very skewed. I fit an initial LCA model with binary variables, dichotomising the subscale scores at the median values in this sample. A 4-class model seemed to fit the data well (with interesting results), but the modelling approach was criticised for not using all the information available in the data. I then tried modeling all the variables as count variables but ran into problems with the variables with fewer than 3 levels. A mixed categorical and count variable model ran without errors, but the results are difficult to interpret (the variables are on pretty different scales) and not terribly interesting; plus I have concerns about model fit (one class with very low probability, and the Lo-Mendell-Rubin and BIC fit results conflict).

In short, I’m more comfortable with the initial binary model. My question, then, is whether I really should be concerned over potential loss of information in the binary classification model – how much value does using the full scales really add? Could my binary model findings be invalid?
Apologies if this is obvious; I’m new to latent variable modelling and Mplus.
 Bengt O. Muthen posted on Friday, May 16, 2008 - 5:26 pm
No obvious decision here. Seems to me that the reduced-information binary approach is fine as long as the 11 subscales are a good summary. You could finesse it by using ordered polytomous representations of the number of successes (e.g. low, medium, high). Or you could stay with binary items, but go fancy by working with the original, total set of binary items used to create all of the 11 subscales. That large original set can be used for (1) LCA, or (2) factor analysis to see if 11 dimensions - or fewer - turn out, and then perhaps do LCA on the factors (either in 2 steps, or better still, by having a mixture for the factor means).
 Julie Mount posted on Thursday, May 29, 2008 - 3:15 am
Thanks very much for your response.

Unfortunately I don't have access to the original item-level questionnaire responses, only the 11 aggregated subscale scores. Am sure there would be an interesting underlying factor structure as many of the questions could measure multiple domains of cognition. May aim to look at this in a replication analysis!
 devin terhune posted on Tuesday, July 29, 2008 - 11:53 am
I am running an LPA that is very similar to Example 7.9, with only minor changes. I keep getting an error that reads: Mplus was unable to start. Please try again.

I have double-checked to make sure that my .dat file and my data path are acceptable. Any solutions would be great. Many thanks in advance.
 Linda K. Muthen posted on Tuesday, July 29, 2008 - 1:07 pm
This means that the directory where Mplus.exe is stored is not part of your path environment variable. See System Requirements - The Path Environment Variable.
 Mark LaVenia posted on Sunday, September 21, 2008 - 5:37 pm
We are running a latent profile analysis with 12 continuous variables. The Tech11 and Tech14 p values are very dissimilar:

Profiles   TECH11   TECH14
2          0.027    0.000
3          0.498    0.000
4          0.197    0.000
5          0.773    0.000
6          0.467    0.000
7          0.722    0.000

Are these results interpretable, that is, do they suggest going with 2 profiles, 3 profiles, or continuing until Tech14 is no longer significant? More to the point, do these divergent results suggest that there is something fundamentally wrong with our data or input specifications?

Gratefully yours, Mark
 Linda K. Muthen posted on Monday, September 22, 2008 - 8:42 am
If you are not using Version 5.1, I would do that. If you are, please send your files and license number to support@statmodel.com.
 Mark LaVenia posted on Saturday, September 27, 2008 - 8:07 am
Dear Dr. Muthen - Thank you for your reply. I regret to report that we ran it on version 4. Could I bother you for a brief explanation on why 5 might give different results? Gratefully yours, Mark
 Linda K. Muthen posted on Saturday, September 27, 2008 - 9:07 am
We are constantly making improvements to the program and correcting problems.
 Kelly Schmeelk posted on Tuesday, January 13, 2009 - 12:06 pm
I have found the proportions of membership in each class under the output for the LPA, but I was wondering how I could find out where each case was placed among the classes. Is there specific syntax for output I could request? Thanks!
 Linda K. Muthen posted on Tuesday, January 13, 2009 - 1:02 pm
See the CPROBABILITIES option of the SAVEDATA command.
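As a minimal sketch (the file name is hypothetical), the request would look like:

```
SAVEDATA: FILE = cprobs.dat;
          SAVE = CPROBABILITIES;
```

The saved file contains each case's posterior probability for every class plus its most likely class membership.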
 Alexander Kapeller posted on Thursday, February 05, 2009 - 5:38 am
small number of observations

Hi,
I am performing an LCA with items measured on a 6-point Likert scale. My number of observations is 70 for one sample and 170 for another. How can I check via Monte Carlo whether the model is appropriate? (You stated in an answer above: "You can also use the Mplus Monte Carlo facility to simulate data with exactly your parameter values and see how well or poorly the model is recovered.")

thanks
Alex
 Linda K. Muthen posted on Thursday, February 05, 2009 - 6:47 am
You can use the parameter values from an LCA as population values in a Monte Carlo study with sample size 70 for the parameter values from the analysis of the sample with 70 observations and sample size 170 for the parameter values from the analysis of the sample with 170 observations. Use mcex7.6.inp as a starting point.
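A minimal sketch of such a Monte Carlo run, assuming 5 binary indicators and 2 classes with placeholder threshold values (in a real study these would be replaced by the estimates from the analysis of the actual data):

```
MONTECARLO: NAMES = u1-u5;
            GENERATE = u1-u5(1);        ! binary indicators
            CATEGORICAL = u1-u5;
            GENCLASSES = c(2);
            CLASSES = c(2);
            NOBSERVATIONS = 70;         ! repeat with 170 for the second sample
            NREPS = 500;
ANALYSIS:   TYPE = MIXTURE;
MODEL POPULATION:
            %OVERALL%
            [c#1*0];
            %c#1%
            [u1$1-u5$1*-1];             ! placeholder thresholds
            %c#2%
            [u1$1-u5$1*1];
MODEL:      %OVERALL%
            [c#1*0];
            %c#1%
            [u1$1-u5$1*-1];
            %c#2%
            [u1$1-u5$1*1];
```

The output then shows parameter and standard error bias and coverage, indicating how well the model is recovered at that sample size.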
 Alexander Kapeller posted on Friday, February 06, 2009 - 5:24 am
HI Linda,

sorry that I didn't express myself precisely.
Which indicators in the results should I especially look at? Or is it enough to look at the power indicator?

From the power studies with MC that I have read, I have trouble interpreting the different outcomes for, e.g., 70 observations:

a) the coefficient is significant and power is over 0.8 --> everything is fine

b) the coefficient is significant and power is below 0.8 --> an increase in power is necessary, by reducing complexity or adding observations

c) the coefficient is not significant and power is over 0.8 --> no clue

d) the coefficient is not significant and power is below 0.8 --> no clue

Could you give me a hint whether these interpretations are correct and/or how to interpret these cases?

thanks in advance
 Linda K. Muthen posted on Friday, February 06, 2009 - 11:09 am
The type of power study I was referring to is described in the following paper which is available on the website:

Muthén, L.K. & Muthén, B.O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599-620.

I am unclear about what the rest of your message means.
 Alexander Kapeller posted on Tuesday, February 10, 2009 - 2:47 pm
hi Linda,

I'll try to explain with an example; I hope to be clearer now.
I conducted an SEM with sample size 65. The SEM results are:

V25 ON      Estimate   S.E.    Est./S.E.   P-Value
LOY_ACC     -0.210     0.424   -0.495      0.621
REPURCH      1.318     0.551    2.394      0.017
AUSDEHN     -0.817     0.458   -1.786      0.074

To check power I ran a Monte Carlo with NOBSERVATIONS = 400 and NREPS = 500. Here are the results for % Sig Coeff:

V25 ON
LOY_ACC 0.054
REPURCH 0.106
AUSDEHN 0.251

the interpretation gives me some headaches:
For LOY_ACC and AUSDEHN: a non-significant path with low power, so an increase in sample size might help - this is clear to me - is this right?
For REPURCH: a significant path, but also low power: this is puzzling me.

Shouldn't a significant path also have high power, or is the path significant in only 10.6% of the replications and so nearly an artefact?
And what about the case of a non-significant path in the SEM but a power of more than 0.8? Would that then be one of the 20% of cases of not rejecting H0 although it is wrong?

I would be really glad if you could shed some light on that.

Thanks in advance

Alexander
 Linda K. Muthen posted on Wednesday, February 11, 2009 - 10:04 am
The method we proposed is used to assess the power of a single parameter not a set of parameters.

It sounds like you are making a mistake in your setup. Please send your output and license number to support@statmodel.com.
 Anne Chan  posted on Monday, September 21, 2009 - 4:23 am
Hello! I am doing a study which compares boys and girls in relation to their motivation, parental support and their learning outcomes. I have two questions:

1) I applied LPA to classify students into different motivational groups. With Gender and Parent Support included as covariates, the 6-class solution is perfect, both in terms of model fit and theoretical meaning. However, if I run the LPA without covariates, no theoretically meaningful solutions can be generated. How should I interpret these results?

2) I am planning to save the LPA class membership (with Gender and Parent Support) of individuals and conduct further analyses to study the differences between boys and girls, both within and between classes. However, I am still a bit confused about how to understand having gender as a covariate in an LPA. Is it appropriate for me to use Gender as a covariate in LPA, particularly when the goal of my study is to compare the two genders? I mean, if gender is included in the LPA, then gender will affect the classification result, and is it methodologically inappropriate to use this “biased” classification to conduct subsequent gender comparisons? Or, instead of thinking of the classification as “biased”, is it actually more robust to use Gender as a covariate, as it can more accurately reflect the data?
 Bengt O. Muthen posted on Tuesday, September 22, 2009 - 10:52 am
1) Differences in latent classes when using and not using covariates are usually a sign that there are direct effects of the covariates on the outcomes, not only indirect effects via the latent class variable. Try exploring direct effects (you cannot identify all of them). Although that may move you away from your favorite solution, your 2 runs (without and with covariates) may then agree more.

2) My opinion is that if Gender influences class membership you are fine including it in the model - the estimates will be better. The same is true for factor scores in MIMIC models.

However, doing analyses in several steps is not always desirable, particularly not with low entropy. Why not do your "further analysis" as part of this model?

For related topics, see also

Clark, S. & Muthén, B. (2009). Relating latent class analysis results to variables not included in the analysis. Submitted for publication.

under Papers, Latent Class Analysis on our web site.
 Anne Chan  posted on Thursday, September 24, 2009 - 4:20 am
Thanks a lot for your kind suggestion. As a follow-up of question (1), I will explore the direct effects. May I ask how can I do it? Can you please kindly point me to some examples or references? Thank you very much!
 Linda K. Muthen posted on Thursday, September 24, 2009 - 10:56 am
A direct effect is the regression of an outcome on a covariate. I would do each outcome one at a time to see if there is significance.
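As a sketch with hypothetical names (covariate x, outcomes y1-y4), a direct effect for one outcome is added in the overall part of the mixture model:

```
VARIABLE: NAMES = y1-y4 x;
          CLASSES = c(3);
ANALYSIS: TYPE = MIXTURE;
MODEL:    %OVERALL%
          c ON x;        ! covariate predicting class membership
          y1 ON x;       ! direct effect of the covariate on one outcome
```

Repeating the run with y2 ON x, y3 ON x, etc., checks the outcomes one at a time.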
 Matt Thullen posted on Friday, October 16, 2009 - 10:45 am
Hello-

I am running an LPA with 12 dimensions of relationship quality (N = 180), using a common set of 6 dimensions for two different relationships to assess patterns across the relationships.

I have a problem similar to others I have seen posted where BIC, adjBIC, and BLRT are not useful in selecting class solutions. The LMR is providing some guidance but interpretation is also an issue.

class BIC aBIC LMR BLRT
2 6203.87 6086.69 <.001 <.001
3 6070.96 5911.67 .0577 <.001
4 5975.35 5775.83 .145 <.001

I figured I could justify a 3-class solution given the close-to-significant LMR, but the means for the 3-class solution provide almost no differentiation among the indicators for one relationship. I am not expecting much variation in terms of shape, but I think at least two levels is more accurate. LPA for the six indicators for that relationship alone provides a fairly clear (again, LMR only) 2-class solution.

The means for the 4-class solution provide some differentiation among the indicators for that relationship and a more interpretable set, though there is a small group (n = 10). I recognize the small N overall and in one class and the possibility that the LMR tends to overestimate (Nylund et al., 2007), but I am considering using the 4-class solution based on substantive meaning. Could I please have any insights you may have?
Thank you
 Bengt O. Muthen posted on Saturday, October 17, 2009 - 12:27 pm
The non-significance (p=0.0577) for LMR in the 3-class run says that 2 classes cannot be rejected in favor of 3 classes.

Personally, I tend to often simply listen to what BIC says, in a first step. In your case it suggests to me that because you don't have a minimum BIC you may not be in the right model ball park. Perhaps you need to add a factor to your LPA ("factor mixture analysis") and then you might find a BIC minimum.
 Matt Thullen posted on Saturday, October 17, 2009 - 7:18 pm
Thanks Bengt-

What do you mean by "add a factor"? I am basically familiar with how FMA integrates factors and classes, but did you mean something specific other than "try FMA"?

I do question whether a latent variable approach is appropriate here. The dimensions are rather skewed in the positive direction for one relation and mostly bipolar for the other. With a relatively small sample for LCA this is probably why a 2-class solution emerges.

Also, I've run separate LPAs for each relationship to look at combinations of classes as cross-relationship patterns. With these I get a clear 2-class solution for one relationship and a clear 3-class solution for the other, but again there is no lowest BIC, the BLRT is not useful (just .000), and the LMR is my only solid indicator, with more definitive p-values this time.

So if I continue to not find any lowest BIC is that evidence that latent variable approach may not be appropriate even if the LMRs are suggesting reasonable classes?

I did find that k-means cluster analyses provided an almost identical set group as the 4-class run (means and proportions) but not the 3-class run.

thanks for your help
mjt
 Bengt O. Muthen posted on Sunday, October 18, 2009 - 12:44 pm
Yes, I meant try FMA. Such as a 2-class, 1-factor FMA where the item intercepts vary across the classes (factor means fixed for identification). So you could try 1-4 classes and see if you find a BIC minimum (where 1 class is a regular 1-factor model).
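A sketch of such a 2-class, 1-factor FMA, with hypothetical indicators y1-y6. Mentioning the intercepts class-specifically lets them vary across classes, while the factor mean is fixed at zero in both classes for identification:

```
VARIABLE: NAMES = y1-y6;
          CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
MODEL:    %OVERALL%
          f BY y1-y6;
          %c#1%
          [y1-y6];       ! class-specific intercepts
          [f@0];         ! factor mean fixed for identification
          %c#2%
          [y1-y6];
          [f@0];
```

Changing CLASSES = c(2) to c(3), c(4), etc., gives the series of runs to compare by BIC.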
 Matt Thullen posted on Sunday, October 18, 2009 - 7:25 pm
I ran FMA with 1 factor for classes 1-5. Still no lowest BIC, though after 3-class it decreases much less in each subsequent run. LMR points to 3-classes with this approach

2 questions:
1) Should factor means be fixed across classes? If so where?
2) How would using FMA with the one factor change my interpretation of the classes compared to LPA?

thank you
 Bengt O. Muthen posted on Sunday, October 18, 2009 - 8:17 pm
Will shortly be able to send you a new paper which answers your questions. Email me your email address.
 Matt Thullen posted on Monday, October 19, 2009 - 7:46 am
Thanks Bengt - I emailed you and look forward to reading that paper.

As a backup plan do you have any concerns about doing separate LPA for each relationship and looking at how combinations of classes are associated with an outcome.

The LPAs in that strategy give definitive LMR for 2-classes for one relation and 3-classes for the other which makes more sense substantively but still no lowest BIC.

thanks
 Bengt O. Muthen posted on Monday, October 19, 2009 - 10:16 am
That may be a reasonable approximation.
 Matt Thullen posted on Tuesday, October 27, 2009 - 8:05 pm
Bengt - that paper you sent on FMA was great...very helpful.

One question I have is what to make of residual variances greater than 1.

I searched the board and did not find anything about this. It comes up for one indicator in a 2f/2c FMM-5.

thanks
 Alexandre Morin posted on Wednesday, October 28, 2009 - 7:31 am
Greetings,
Would it be possible to also receive a copy of this paper.
Thank you in advance.
 Linda K. Muthen posted on Wednesday, October 28, 2009 - 7:50 am
It will be posted on the website in the next week.
 Alexandre Morin posted on Wednesday, October 28, 2009 - 8:11 am
Thanks Linda,
What would be the reference so I can find it.
By the way, I just realized that Matt Thullen did ask another question in his last post. I just wanted to make sure that my posting did not "erase" this question.
Thanks again
 Linda K. Muthen posted on Wednesday, October 28, 2009 - 8:40 am
The topic is Factor Mixture. The authors are Clark and Muthen.
 Bengt O. Muthen posted on Wednesday, October 28, 2009 - 10:53 am
Answering Matt Thullen - residual variances can be greater than 1 when they correspond to raw estimates.
 Robin Segerer posted on Monday, April 26, 2010 - 8:10 am
Hi,

I'd like to build my doctoral thesis on Latent profile Analysis, but I don't know whether I have enough statistical power to identify all relevant classes.
So I'm looking for any recommendations about sample size in Latent Profile Analysis.
Are there any articles discussing that issue? Thank you very much in advance and
best greetings from Germany...

Robin
 Linda K. Muthen posted on Tuesday, April 27, 2010 - 10:14 am
You may find something in the following paper which is available on the website:

Marsh, H.W., Lüdtke, O., Trautwein, U., & Morin, A.J.S. (2009). Classical latent profile analysis of academic self-concept dimensions: Synergy of person- and variable- centered approaches to theoretical models of self-concept. Structural Equation Modeling, 16:2,191-225.

Sample size depends on the characteristics of the data and the separation of the classes. Doing a Monte Carlo study may be helpful.
 Linda K. Muthen posted on Tuesday, April 27, 2010 - 11:19 am
Here are a few references you may find useful:

Lubke, G. & Muthén, B. (2007). Performance of factor mixture models as a function of model size, covariate effects, and class-specific parameters. Structural Equation Modeling, 14(1), 26–47.

Lubke, G.H. & Muthén, B. (2005). Investigating population heterogeneity with factor mixture models. Psychological Methods, 10, 21-39.

Nylund, K.L., Asparouhov, T., & Muthén, B. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling, 14, 535-569.
 Mike Gillespie posted on Tuesday, November 02, 2010 - 10:44 am
I used two sets of four dichotomous items as measures of two factors per set (for a total of four factors) in seven-class factor mixture models in separate analyses of data from seven national election surveys. (I didn't combine the surveys in a known-group analysis because I believed the computation would be too heavy, and, in any event, I also conducted a separate stacked SEM of the same data.)
The factors have no variance, so I assume that I did an LPA? My computer doesn't have enough memory to estimate the variances (using ALGORITHM = INTEGRATION), so I guess I'm stuck with LPA.
Three (more) questions:
(1) Is the Vermunt-Magidson article still the best reference? In particular, has anyone else used probit factor analysis in a mixture model?
(2) How does one interpret a factor with no variance? Would it be correct to say that the factor means + the (largely invariant) item intercepts determine the class-specific probability distribution of an item associated with the two factors it measures?
(3) How does one interpret the variance and covariance of the two continuous variables that I also used (along with some additional categorical variables)? Ideally, I would like to interpret these quantities as the measurement error in these variables.
 Jason Chen posted on Tuesday, November 02, 2010 - 12:32 pm
I would like to conduct a Latent Profile Analysis to form clusters of students based on 4 variables (x1, x2, x3, and x4). These 4 variables are considered "sources" of another variable (y). There are theoretical arguments that another variable (m1) might moderate the relationship between the sources and y. I would like to use a person-centered approach because these 4 sources do not operate in isolation. However, if I wanted to test whether m1 moderated the relationship between the sources (x1-x4) and y, how would I test that if the sources are clustered within a person?

In regression, I could compute y = x1 + m1 + x1*m1. And if the interaction term was significant, that would be evidence of moderation. But If I'm clustering the 4 sources and exploring how m1 moderates the relationship between these clusters and y, how could that be done?
 Bengt O. Muthen posted on Wednesday, November 03, 2010 - 12:31 pm
It sounds like you want the latent class variable (say c) behind the x's to influence y. With a continuous y this implies that the mean of y changes across latent classes.

If you have a binary moderator m1 you can simply use that to form a Knownclass latent class variable (say cg) and let the y means change over both latent class variables (that is the default) - and then use Model Test to see if the y means for the c classes are the same across the cg classes.
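A sketch with hypothetical names (binary moderator m1, distal outcome y, LPA indicators x1-x4): the KNOWNCLASS option forms cg from m1, and MODEL TEST compares, say, the class-1 mean of y across the two cg groups:

```
VARIABLE: NAMES = x1-x4 y m1;
          CLASSES = cg(2) c(3);
          KNOWNCLASS = cg (m1 = 0 m1 = 1);
ANALYSIS: TYPE = MIXTURE;
MODEL:    %cg#1.c#1%
          [y] (m11);
          %cg#2.c#1%
          [y] (m21);
MODEL TEST:
          0 = m11 - m21;
```

Analogous labels and tests can be added for the other c classes to test moderation of each class-specific y mean.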
 Bengt O. Muthen posted on Wednesday, November 03, 2010 - 12:43 pm
Answer to Mike Gillespie:

Seven classes and 2 factors is a lot of latents. Typically, when factors are added to a latent class model you don't need as many latent classes. Conversely, if you have a lot of latent classes, the factor variances can go to zero. I would use BIC to compare the alternative models, varying the number of classes and factors.

1) You might consider my overview:

Muthén, B. (2008). Latent variable hybrids: Overview of old and new models. In Hancock, G. R., & Samuelsen, K. M. (Eds.), Advances in latent variable mixture models, pp. 1-24. Charlotte, NC: Information Age Publishing, Inc.

which is on our web site under Papers.

2) Typically a model with no intercept (or threshold) invariance across classes has a much better BIC than letting only the factor mean vary. If you don't have factor variance, the factor is not motivated except as a non-parametric device for describing the factor distribution by a mixture - and again it may be due to having too many classes.

3) With LPA there is no within-class covariance between the continuous outcomes. The variance is a within-class variance, but not necessarily measurement error, perhaps just "severity variation".
 Jason Chen posted on Wednesday, November 10, 2010 - 9:11 am
Thanks very much for the reply, Bengt. If my moderator (m1) is not binary, I'm assuming that there is no other way to test for this moderation effect other than artificially creating one on my own (e.g., median splits?).
 Bengt O. Muthen posted on Wednesday, November 10, 2010 - 12:58 pm
Continuous moderation (m1) of the effect of a latent class variable on a distal y? Can't you think of that as the m1 influence on y varying over the latent classes (at the same time as the latent classes influence y by the y means varying over the classes)? So a c-m1 interaction. That's doable in Mplus.
 luke fryer posted on Monday, November 29, 2010 - 6:55 am
Dr. Muthen would you please expand on your comment from " on Saturday, October 17, 2009 - 12:27 pm...":

"Personally, I tend to often simply listen to what BIC says, in a first step. In your case it suggests to me that because you don't have a minimum BIC you may not be in the right model ball park. Perhaps you need to add a factor to your LPA ("factor mixture analysis") and then you might find a BIC minimum."

I am facing a problem similar to the original post - not arriving at a minimum BIC for my analysis - the BLRT is also not proving to be useful, and entropy is only occasionally useful. Would it be worth adding categorical variables (gender, department, etc.) to my LPA in order to create a more decisive model? What other alternatives might I have?

Thank you,

Luke
 luke fryer posted on Monday, November 29, 2010 - 7:16 am
Dr.s Muthen,

One more question... At what point does the software's request for more starts - WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS. - start to be an indication of anything other than "time to increase the number of starts"? The Mplus manual gives clear advice up to 500 starts. If the warning persists, does one just continue to increase the number of starts? I have never had an analysis fail to converge, but I consistently get this warning.

thank you

luke
 Linda K. Muthen posted on Monday, November 29, 2010 - 9:53 am
Not being in the right model ballpark means that LPA might not be an appropriate model for your data. Perhaps a factor analysis model or a factor mixture model is more appropriate.

You should increase your starts, keeping the second number at about 1/4 of the first, until the best loglikelihood is replicated. It may be that you have the wrong model.
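For example, if the default number of starts is not enough, the ANALYSIS command can be revised along these lines (and increased further, e.g. to 4000 1000, if the warning persists):

```
ANALYSIS: TYPE = MIXTURE;
          STARTS = 2000 500;   ! second number kept at about 1/4 of the first
```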
 luke fryer posted on Monday, December 06, 2010 - 10:02 pm
Dr. Linda Muthen,


Could you point me, and anyone else in a similar predicament, in the right direction with regard to making this distinction between LPA and Factor Mixture Models (when using continuous variables).


I consistently fail to get a minimum BIC...


Thank you

luke
 Linda K. Muthen posted on Tuesday, December 07, 2010 - 1:49 pm
It seems like the models you are trying do not capture your data. This can happen with some data.
 Elizabeth Bell posted on Saturday, April 16, 2011 - 11:01 am
Hello,
I have a question in response to Robin Segerer's post on Monday, April 26, 2010 - 8:10am about determining sample size for an lpa.

I am submitting a grant for funding for my dissertation work. I will be conducting a latent profile analysis using continuous indicators of children's behavior and using demographic covariates to predict class membership. In addition, I plan to simultaneously estimate the lpa as well as a lgca of longitudinal distal outcomes to examine mean differences across the profiles in intercept and slope parameters of this distal outcome.

I read through the articles that you suggested for Robin but am wondering how to determine sample size needed for estimating the lpa and lgca simultaneously. In addition, are there any examples that you know of where this has been done to determine profile differences in intercept and slope parameters of a distal outcome?

I would greatly appreciate any guidance.
Elizabeth Bell
 Linda K. Muthen posted on Sunday, April 17, 2011 - 2:17 pm
I don't have such an example. You would need to put the two together yourself. Start with the Monte Carlo counterpart inputs for the two user's guide examples that come closest to your LPA and LCGA.
 Alma Boutin-Martinez posted on Tuesday, May 03, 2011 - 1:57 pm
I'm running a cross-sectional LCA with continuous (4-point Likert) and dichotomous items. In this analysis, when I specify a 4-class solution, the mean of one of the continuous variables is fixed for three of the four classes. Why would this happen? I know that with dichotomous variables, when the logit is very small it is fixed at -15 or +15; is this similar to what is happening with these continuous variables? If so, how is the value it is fixed at chosen? Is this problematic?

Below is the output with one mean fixed for class 2.

Latent Class 2

Means
B2E 2.833 0.028 99.436 0.000
B2G 2.703 0.034 79.621 0.000
B2D 3.000 0.000 ********* 0.000
B2B 4.087 0.062 65.951 0.000
 Linda K. Muthen posted on Tuesday, May 03, 2011 - 2:42 pm
I think this happens when there is no variability for the item in a class. All members have the same value.
 Alma Boutin-Martinez posted on Thursday, May 05, 2011 - 3:30 pm
Thank you Dr. Muthen.
 Stata posted on Friday, June 17, 2011 - 9:29 am
Dr. Muthen,

Is it possible to use ordinal variable for latent trait analysis? Thank you.
 Linda K. Muthen posted on Friday, June 17, 2011 - 1:09 pm
Yes.
 James L. Lewis posted on Wednesday, July 06, 2011 - 10:21 am
Hello,
I am doing an LPA with N=2000 and 6 continuous indicators. Each indicator is an item parcel created by taking the mean of three 5-point Likert items. In most solutions, certainly in all interpretable solutions, I have modification indices indicating residual within-class correlations among parcels. This would seem to indicate that conditional independence is violated.

My question is whether modification indices are the only way to get a look at conditional independence when doing LPA in Mplus. Clearly if I had categorical items I could use TECH10, but is there anything like that for the continuous-indicator case, or is there otherwise another way within Mplus? I have considered freeing within-class bivariate correlations, but it seems from a previous correspondence that this was not recommended in general as a way of modeling conditional dependence (note these models are largely exploratory) -- [see post from anonymous on March 29, 2007, 11:41 a.m.]. When I allow residual correlations in %OVERALL% (which I believe constrains the correlations to be equal across classes), the within-class correlations often remain to a large extent in the modification indices. Note I have also attempted FMA but would prefer to stay with "manifest parcels" if possible.

In a related question, is there a recommendable way to create categorical indicators using the parcels (15 levels seems too many)?

Thanks much.
 James L. Lewis posted on Wednesday, July 06, 2011 - 10:29 am
Sorry I may have been a bit unclear above -- each parcel consists of taking the mean of three 5-pt Likert items.

Also, when I say at the end that "15 levels seems too many", I was thinking in terms of summing (which I guess would give a maximum of 12 levels - still too many), but I did not mean to suggest there may not be a recommendable way using the means of the items within parcels or some other way as well.

Thanks again.
 Bengt O. Muthen posted on Wednesday, July 06, 2011 - 12:53 pm
It sounds like you have 6 variables, and each variable is created as the mean of three 5-point Likert items. So you have 18 Likert items - is that right?
 James L. Lewis posted on Wednesday, July 06, 2011 - 1:41 pm
Yes that is correct. Thanks.
 Bengt O. Muthen posted on Wednesday, July 06, 2011 - 1:46 pm
But you see within-class correlations among the 6 variables, not among the 18?
 James L. Lewis posted on Wednesday, July 06, 2011 - 1:48 pm
I am doing the LPA based on the 6 parcels - so I mean within-class corrs between the parcels (in other words lack of conditional indpendence).
 Bengt O. Muthen posted on Wednesday, July 06, 2011 - 3:42 pm
You can use the estimated LPA model and classify subjects according to it. Then for each class see how correlated the variables are - and for which pairs.

But if the model which allows class-invariant within-class covariances has a much better BIC, or if a one-factor FMM has a much better BIC, then it is questionable to stay with the conditionally independent LPA.
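The class-invariant within-class covariance model mentioned above can be sketched by freeing the covariances in the overall part only, so they are held equal across classes (y1-y6 are the hypothetical parcel names):

```
MODEL: %OVERALL%
       y1-y6 WITH y1-y6;   ! within-class covariances, equal across classes
```

Comparing this model's BIC with the conditionally independent LPA (and with a one-factor FMM) indicates whether the independence assumption is tenable.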
 James L. Lewis posted on Wednesday, July 06, 2011 - 5:15 pm
Thanks. I will try this. I of course do not mind modeling the dependencies, as long as it doesn't mean my class membership assignments and probabilities are untrustworthy.

Did you have any thoughts on whether there is a recommendable way to create categorical indicators using the parcels?

Thanks very much.
 Jean Christophe Meunier posted on Sunday, July 31, 2011 - 7:40 am
I am doing an LPA with N=900 and 4 continuous indicators on parenting, to explore profiles of parents on both parenting and differential parenting. As parenting is intrinsically related to children's characteristics, it seems that some child variables (for ex. age) must be incorporated as endogenous covariates. However, the literature on covariates in LPA is quite vague. Here are some options that seem OK to me; I would be pleased to have your advice about the best option:
1.Making latent cl regress on the cov.
The most commonly used approach. However, the covariates I’d like to incorporate are uniquely related to some indicators but not to others (for ex, child’s age is related to parenting whereas age gap between siblings is related to differential parenting). I’m wondering if it wouldn’t be better to ‘link’ more specifically the covariates to their relevant indicators. Other options that I imagine :
2.Making each indicator regress on its ‘relevant’ covariates.
3.Making both the latent and the indicators regress on the covs (latent on each covariate ; indicators on their specific covariates)
4.Residualizing the indicators after regressing them on their specific covs.
Also I’d like to test the role of some predictors (not endogenous) on the latent classes.
What is the best way to test models that would include both covariates and predictors ?
Thanks a lot in advance.
 Linda K. Muthen posted on Sunday, July 31, 2011 - 9:42 am
I would choose 3.
 Melinda Gonzales-Backen posted on Monday, August 01, 2011 - 10:02 pm
Hi,
I am looking at a 5 factor, 4 cluster solution. I requested the cluster membership variable using
SAVE = CPROBABILITIES;

I then read it into SPSS and examined the descriptives of each cluster.

First, the number of cases assigned to each cluster differs substantially from the class counts in the Mplus output. In addition, although I would interpret each cluster based on the estimated parameters given in Mplus, when I select each cluster and examine the means, the interpretation would be much different. For example, clusters that were once classified as "low" on variable X are now classified as "high" on variable X.

Any help would be greatly appreciated.
 Linda K. Muthen posted on Tuesday, August 02, 2011 - 8:04 am
You won't get the same results unless your entropy is extremely high. You are using most likely class membership, not the fractional class probabilities used in the analysis.
 Melinda Gonzales-Backen posted on Tuesday, August 02, 2011 - 8:39 am
Thank you, Dr. Muthen. So this is not a problem at all, correct? I should just be using the estimated parameters from the LPA model, correct?
 Linda K. Muthen posted on Tuesday, August 02, 2011 - 8:43 am
Yes.
 Meredith O'Connor posted on Monday, August 22, 2011 - 4:24 pm
Hi everyone,
I recently conducted an LPA and identified six profiles of positive functioning in my sample of 19 year olds. I then used MANOVA to compare the profiles on a number of variables measured when they were 17 years old.
A journal editor has asked me to consider using "conductional LPA analyses" rather than MANOVA, but I am unfamiliar with this technique. Would anybody know of a paper that would point me in the right direction?
Thank you!
 Bengt O. Muthen posted on Monday, August 22, 2011 - 5:13 pm
I have not heard of this technique.
 Meredith O'Connor posted on Monday, August 22, 2011 - 5:19 pm
Thank you for replying, Dr. Muthen; this clarifies for me that it must be a little-used technique.
Thanks again!
Meredith
 Bengt O. Muthen posted on Monday, August 22, 2011 - 5:22 pm
It doesn't Google either.
 Anto John Verghese posted on Tuesday, September 06, 2011 - 1:11 pm
Dear Dr.Muthen,

I am using 22 continuous predictors (based on a 5 point Likert scale) to carry out latent profile analysis.

Is there any way to obtain the probability of each item (indicator) to each class?

Thanks!
 Linda K. Muthen posted on Tuesday, September 06, 2011 - 1:45 pm
If you treat the variables as continuous, probabilities are not relevant. If you treat them as categorical, you will obtain the probabilities automatically.
 Bengt O. Muthen posted on Tuesday, September 06, 2011 - 2:01 pm
I assume that when you say 22 predictors, these are actually latent class indicators. You can get a plot of the means for the indicators.
 Anto John Verghese posted on Tuesday, September 06, 2011 - 2:34 pm
Thanks!
 Julia Lee posted on Wednesday, September 07, 2011 - 8:15 pm
I am new to latent class analysis. I have been reading about the issue of 'minimum BIC' on the discussion board. I have n = 521. I conducted an LPA with 5 indicators (all continuous variables).

My interpretation of the fit indices below suggests that the 4-class model is the best model. I used the VLMR and LMR to help me make the final decision on the number of classes because the BLRT was significant for models 2 to 6. The VLMR and LMR suggested the 4-class model was better than the 3-class model. In addition, the entropy for the 4-class model seems closest to 1.

One researcher in one of the posts mentioned his concern about continually declining BIC/AIC/ABIC values. What does minimum BIC mean? My BIC values declined all the way from model 2 to 6. I did not check a 7-class model because it didn't make sense to continue without a substantive reason to do so. Should I be concerned about my results and consider using FMA? I'm unclear whether I am on the right track.

classes loglikelihood AIC BIC ABIC BLRT VLMR LMR Entropy
2 -2836.91 5705.82 5773.91 5723.13 0.000 0.000 0.000 0.925
3 -2518.82 5081.63 5175.26 5105.42 0.000 0.604 0.608 0.893
4 -2297.61 4651.22 4770.38 4681.50 0.000 0.003 0.003 0.908
5 -2198.93 4465.85 4610.55 4502.63 0.000 0.117 0.121 0.903
6 -2120.59 4321.18 4491.41 4354.45 0.000 0.142 0.147 0.901

Thank you, in advance, for your expert advice!
 Bengt O. Muthen posted on Wednesday, September 07, 2011 - 8:50 pm
It looks like you are not reading the LMR results correctly. The first instance that you get a high p-value implies that one less class should be chosen. If I am reading your table correctly, you have the p-values 0.000 (for 2 classes), and 0.608 (for 3 classes), which then implies that you should choose 2 classes.

In my experience, when BIC continues to decline with increasing number of classes without hitting a minimum, better models can be found. For instance, an FMA should be explored.
 Julia Lee posted on Thursday, September 08, 2011 - 11:53 am
Dr. Muthen, thank you for your feedback regarding my fit indices. I will try FMA. Is the syntax similar to the CFA Mixture Modeling syntax in Example 7.17 of the Mplus Version 6 manual? The factor mean in this example for class 1 of c was fixed at 1. The factor mean was fixed for identification purposes, correct?
 Bengt O. Muthen posted on Saturday, September 10, 2011 - 8:49 am
For ex 7.17, the factor mean is fixed only in the last class. In the first class the factor mean is given a starting value of 1 to show that it is free in this class. Note that @ means fixed and * means free.
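To illustrate the @/* distinction with a hypothetical two-class fragment (f is the factor; the names are made up, not taken from Example 7.17):

%c#1%
[f*1];   ! * = free parameter, here with a starting value of 1
%c#2%
[f@0];   ! @ = parameter fixed at 0 in the last class for identification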
 craig neumann posted on Thursday, September 22, 2011 - 11:40 am
Are there any materials which discuss conducting LPA when the sample is restricted to only those with high scores?

Given that some clinical measures have cut-offs, some might argue that only those above the cut should be examined to explore subtypes. However, I am wondering about finding spurious latent classes when sample variance is significantly restricted by using only cases with high scores. Put another way, shouldn't the high-score classes naturally emerge when the entire sample is used?

My take from the Bauer and Curran papers is that LPA (and more generally LCA) could result in spurious latent classes if high score only samples result in nonnormal data.

Any help and guidance would be appreciated.
 Bengt O. Muthen posted on Friday, September 23, 2011 - 9:44 pm
I don't know of any papers on this, but we have had similar concerns in analyzing ADHD symptoms in general population surveys versus treatment samples. It seems that when a treatment sample is used, you get subclasses of ADHD such as hyperactive only, inattentive only, whereas with a population sample some of that detail gets lost due to broader distinctions being made.

I wonder what would happen if you oversampled the high scorers.
 Li xiaomin posted on Monday, October 03, 2011 - 9:07 pm
Dear Dr. Muthen,
I have a question. Suppose there are 3 data files, named "file1.dat", "file2.dat", and "file3.dat", and 3 input files, "file1/2/3.inp". How can I use Mplus to analyze the 3 data sets automatically and generate the associated output files (file1/2/3.out)?

Thank you in advance!
 Linda K. Muthen posted on Tuesday, October 04, 2011 - 6:50 am
You cannot do this from a single input. You could create a bat file with the set of inputs that you want, and you will receive a set of outputs. You may want to check whether MplusAutomation can help you. See the website under Using Mplus Via R.
 Li xiaomin posted on Saturday, October 08, 2011 - 8:10 pm
Thanks for the suggestions!
 Junqing Liu posted on Thursday, October 27, 2011 - 11:56 am
Dr. Muthen,

I used the following command to save the class membership from an LPA into a separate dataset.

SAVEDATA: SAVE=CPROBABILITIES;
FILE IS ebppostprobs.dat;

I need to do analyses using the class membership and some other variables that are included in the original dataset but not in the class membership dataset.

How may I merge the two datasets, or possibly directly save the class membership into the original dataset? I am new to Mplus. Thanks a lot!
 Linda K. Muthen posted on Thursday, October 27, 2011 - 1:29 pm
You should use the AUXILIARY option for the variables from the original data set that were not used in the analysis. Then they will be saved also.
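A sketch of this setup with hypothetical names (x1-x6 as the profile indicators; gender and income as the extra variables to carry along):

VARIABLE:
NAMES = id gender income x1-x6;
USEVARIABLES = x1-x6;
AUXILIARY = gender income;
IDVARIABLE = id;
CLASSES = c(3);
ANALYSIS:
TYPE = MIXTURE;
SAVEDATA:
FILE IS ebppostprobs.dat;
SAVE = CPROBABILITIES;

The AUXILIARY variables and the ID are then written to the saved file along with the posterior probabilities and most likely class, so no merging is needed.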
 AnneMarie Conley posted on Monday, January 09, 2012 - 3:58 pm
(I apologize if this has been answered elsewhere, but I can't figure it out from the userguide.)

Is there an easy way to save the class means (and variances) from an LPA? I'm using 6.1 on a mac and need to export the means to plot the solution in a separate program. I know I can save the parameter estimates to an outfile using the ESTIMATES option, but it's not an ideal way to extract just some of the parameter estimates. If there's a faster way to do this, I'd appreciate knowing about it.

Thanks,

AnneMarie
 Linda K. Muthen posted on Monday, January 09, 2012 - 4:00 pm
No, there is no way to export only certain parameter estimates. You need to save them all.
 Claudia Recksiedler posted on Thursday, January 19, 2012 - 3:05 am
Dear Dr. Muthén,

I have a question concerning the treatment of missing cases in Latent Profile. I am dealing with cross-sectional data of three same-aged birth cohorts (18-29 years old) on four transitions marking adulthood: moving out of the parental home, starting the first job, getting married, and becoming a parent. For each transition, I have a status variable stating if a person already experienced the respective transition and if yes, the precise age. First, I analyzed the timing of the transitions separately using Cox regression because of the large number of censored cases for marriage and first child.
Second, I am interested in looking at all four transitions simultaneously to explore different pathways/patterns into adulthood using Latent Profile (or Latent Class Analysis) for each cohort separately in Mplus.
I am just concerned about the large number of missing cases, because many subjects have not married or had children yet. Is mixture modeling capable of handling the censored cases, or do I need to address this specifically in the program? Moreover, is it possible to run a latent profile analysis based on the precise ages at transition, or do I have to run a latent class analysis based on the categorical status variables?

Thank you and kind regards,
Claudia
 Bengt O. Muthen posted on Thursday, January 19, 2012 - 8:55 am
You can do LCA/LPA with continuous age and/or categorical status - that is, you can mix scale types in Mplus mixture modeling.

Mixture modeling does not handle censored cases as in survival analysis. It seems complicated to come up with a model that both determines when an event happens and then applies LPA/LCA to it, so some simpler approach is needed. For instance, restrict your analysis of marriage and child timing to the older subjects to reduce the amount of missing data.
 Melinda Gonzales-Backen posted on Thursday, January 26, 2012 - 2:47 pm
I am running an LPA with 6 indicators. I want to see if the profiles differ based on ethnicity and gender. Can I use the KNOWNCLASS command for this? Should I run separate LPA models for each group first to make sure that they have a similar latent profile structure?
Thank you!
 Bengt O. Muthen posted on Thursday, January 26, 2012 - 4:15 pm
Yes, you should first run separate group analyses. You can use Knownclass, but it is somewhat simpler to have the 2 variables be covariates. If the covariates influence only the latent class variable ("c ON x" in Mplus language), then you have measurement invariance, that is, the same profiles - but you allow for different class prevalences. If you have some direct effects from the covariates to the LPA indicators, then you don't have measurement invariance. The covariate analysis also shows you the class-specific means of the covariates.
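A sketch of the covariate approach (hypothetical names: u1-u6 are the LPA indicators, gender and ethnic the covariates):

MODEL:
%OVERALL%
c ON gender ethnic;   ! covariates influence class membership only:
                      ! same profiles, different class prevalences
! u1 ON gender;       ! a direct effect like this would relax
                      ! measurement invariance for u1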
 Melinda Gonzales-Backen posted on Saturday, January 28, 2012 - 10:17 am
Thank you so much for your response. In the case that the groups have different structures (example: I just ran the LPA for one group and found that a 2-profile model was the best fit, whereas when all data are used, an 8-profile model is the best fit), I would not use KNOWNCLASS or the covariate method, correct? In this instance, I assume it would be most appropriate to discuss these as separate models from separate subsamples, correct? Thanks so much for your help!
 Bengt O. Muthen posted on Saturday, January 28, 2012 - 11:15 am
Right.
 Melissa Kimber posted on Wednesday, February 15, 2012 - 9:00 am
Hello,
Is there an input example of a latent profile analysis somewhere on the website? I have 3 continuous indicators that I would like to run an LPA on.
Thank you.
 Linda K. Muthen posted on Wednesday, February 15, 2012 - 10:15 am
See Example 7.9 in the user's guide on the website. LPA is LCA with continuous latent class indicators.
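A minimal input along those lines (hypothetical file and variable names), assuming continuous indicators as in Example 7.9:

DATA: FILE IS mydata.dat;
VARIABLE:
NAMES = y1-y3;
CLASSES = c(2);
ANALYSIS:
TYPE = MIXTURE;
! With continuous indicators and no further MODEL statements, the default
! model estimates class-varying means and class-invariant variances for y1-y3.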
 Anthony Rosellini posted on Tuesday, February 21, 2012 - 9:49 am
Hello,
I am using LPA to examine 7 indicators coming from a variety of self-report and clinical interview data (i.e., scales in different metrics) and had a few questions.

1) Under what circumstances would one override the assumption of conditional independence and allow freely estimated indicator covariances within classes?
Should this decision be made primarily based on model fit (e.g., if the conditional dependence model provides lower BIC)?

Based on prior posts, it sounds like conditional dependence should be specified if method effects are suspected, and not solely because of high correlations between indicators. What are other circumstances in which conditional dependence would be an appropriate approach?

2) It seems like the syntax below can be used to specify a model with conditional dependence (3-class model):

MODEL:
%OVERALL%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;

However, is it necessary to also specify freely estimated indicator covariances within each class, or would this be redundant coding e.g.,

MODEL:
%OVERALL%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#1%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#2%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#3%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;

Thank you for the help,
Anthony
 Melinda Gonzales-Backen posted on Tuesday, February 21, 2012 - 10:10 am
Hi,
I have run an LPA model in which 2 profiles emerged. I would like to see if these profiles predict a continuous outcome and if this association is moderated by a continuous variable.

My entropy is only .68, so I don't think a classify-analyze strategy would be particularly appropriate here. Is there a way to look at this interaction within the LPA framework (e.g., by specifying the model when I specify the 2-profile solution)?

Thank you!
-Mindy
 Julia Lee posted on Tuesday, February 21, 2012 - 5:59 pm
I had my prospectus defense recently, and a committee member asked me to check the data set for nonlinearity. I am conducting LPA and LTA using Mplus to answer my research questions. I read several book chapters and papers related to LPA and LTA prior to my proposal defense, but I did not come across the issue of checking for nonlinearity. Is checking for nonlinearity an assumption of these two statistical techniques? Thanks.
 Bengt O. Muthen posted on Tuesday, February 21, 2012 - 6:21 pm
I would say no. To me, nonlinearity is something that is relevant for the regression of a continuous variable on other continuous variables or with regular Pearson Product-Moment correlations. The LPA model does not consider such regressions because the continuous latent class indicators are related to a categorical (latent) variable. Nor are correlations analyzed or fitted.
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 10:14 am
Answer to Anthony:

1) Note that LPA describes the correlations among the indicators. It does so as soon as you have more than one latent class. So conditional non-independence is a correlation among the indicators that is beyond what is explained by the latent class variable.

I would explore conditional non-independence if I had a priori reasons such as the methods effects that you refer to, or similar question wording.

2) It is not redundant coding; it says that you believe the within-class correlations to be different in different classes. I would not recommend within-class WITH statements as a starting point - this perhaps gives too much flexibility and may result in an unstable model (hard to replicate the best logL).
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 10:21 am
Answer to Melinda:

You can do this in a single analysis. Say that you have latent class variable c influencing continuous outcome y, moderated by continuous predictor x. Moderation is handled by letting y ON x be different in the different c classes. This is so, because moderation is an interaction between c and x.
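In Mplus terms, a sketch with hypothetical names (y the outcome, x the moderator, c with 2 classes):

MODEL:
%OVERALL%
y ON x;
%c#1%
y ON x;   ! slope estimated separately in class 1 ...
%c#2%
y ON x;   ! ... and in class 2; class-specific slopes carry the c-by-x interaction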
 AnneMarie Conley posted on Wednesday, February 22, 2012 - 1:01 pm
I really want to use LPA for a person-centered analysis I'm doing, but I'm having trouble getting it to perform as well as cluster analysis. This is frustrating, as I find Mplus so much easier to use than programs like Sleipner. I have tried a variety of ways of specifying the LPAs (including fixing and freeing variances across classes and variables). After I decide on the number of classes, I compare the LPA solution with a cluster solution from a recently published paper using the same data. In all cases, across multiple outcome variables, the cluster solution does a better job of explaining variability.

Can you help me figure out what I'm doing wrong? Based on the readings suggested on this site, I feel like the right specification of a latent variable model should be as good (if not better) at producing a useful classification system. I understand the problems inherent in the clustering algorithms, especially in the presence of heterogeneity of variances. Still, why would the CA produce a more precise classification?

If you have any ideas or people I could talk to about this (I'm local), I'd appreciate it. Thanks.
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 1:20 pm
Doesn't one of the early chapters in the 2002 LCA book by Hagenaars & McCutcheon claim that LPA with equal variances across variables is similar to k-means clustering?

Regarding the published paper, how do you know which solution is better - what does "explaining variability" mean? That sounds like a principal component criterion.

You could do a small Monte Carlo simulation using Mplus and then see which approach best recovers the model.
 AnneMarie Conley posted on Wednesday, February 22, 2012 - 3:50 pm
Yes on the Hagenaars & McCutcheon book. I have it on my desk right now (open to Vermunt & Magidson's chapter 3). (You recommended it on this site. It was very helpful).

Regarding the published paper (mine--out this month in Jrnl of Ed Psych) I used CA to describe patterns of motivation at time 1, then tested for differences between clusters in affect and achievement outcomes at a later wave. Like others, I argue that a person-centered approach gives a better picture of what motivation is than traditional variable-centered approaches. However, CA is very time consuming and (based on the readings suggested here) likely to make assumptions about the data that may not be appropriate.

My next step is to use a similar approach to describe developmental changes in patterns of motivation by conducting CA (or LPA) by grade for kids in grades 7-12. I want some evidence that the LPA solution is as trustworthy (and useful) as the CA solution. It seems like it should be at least as good, but I'm having trouble finding evidence of it. Can you direct me to other ways of establishing the utility of an LPA solution (as compared with CA)? Or is that not even a question I need to answer, in your opinion? Thanks for your help, by the way.
 Bengt O. Muthen posted on Wednesday, February 22, 2012 - 4:28 pm
For a description of the advantages of LCA/LPA over k-means clustering, see also Magidson, J. and Vermunt, J.K. (2002). Latent class modeling as a probabilistic extension of K-means clustering. Quirk’s Marketing Research Review, March 2002, 20 & 77-80. (pdf)

with pdf at http://spitswww.uvt.nl/~vermunt/quirk2002.pdf

I have also seen more recent papers comparing the two approaches, with LCA/LPA not always winning, but I can’t find those published. Anyone else?


My prior would be to go with LPA instead of cluster analysis. Particularly if you want to study changes - I don't know of a longitudinal cluster analysis procedure that relates clusters over time.
 AnneMarie Conley posted on Thursday, February 23, 2012 - 9:42 am
Thanks for that 2002 ref. It does a great job summarizing reasons for preferring LCA/LPA over clustering.

Bergman, Magnusson, and El-Khouri (2003) describe a few procedures for longitudinal CA (e.g., LICUR), but I agree with you that LPA is preferable. I think a better test may be to use the posterior probabilities instead of the most likely class when computing time 2 means for the LPA solution. In that way I could take advantage of the probability-based classification (the first point in the article you posted). If you can think of any papers taking this approach, I'd be grateful for the direction. Thanks again for the help. If you are ever in Orange County I'll gladly buy you lunch.
 Alexandre Morin posted on Thursday, February 23, 2012 - 10:30 am
Hi Dr. Muthen,
Is this the paper? Steinley & Brusco (2011). Evaluating mixture modeling for clustering: Recommendations and cautions. Psychological Methods, 16(1), 63-79.
It shows that CA can outperform LPA. The full PM issue also includes replies by McLachlan and Vermunt. Things are tricky even with clean simulated data. In my experience, mixture analysis of real, messy data always involves interacting with the data through error messages, relaxing restrictions, etc., to arrive at a final model that is never more than the best “approximation” of reality. In the end, I think the decision is practical. I prefer the flexibility of mixture models since they are part of the generic latent variable family. Assumptions can be relaxed and imposed, and fully latent models can be specified (with various degrees of class invariance - see the 2011 special ORM issue on latent class procedures, which includes illustrations of CFA invariance testing across unobserved subgroups), as can factor mixture models and even cross-group LPA invariance. These can be implemented in mixture models (and yield substantively interesting new parameter estimates), but not in CA. This is especially true of growth mixture models, where the "developmental trends" cannot be clearly taken into account in CA models (see the recent Morin et al. article in SEM, 2011, 18, 4, pp. 613+ on the advantages of this flexibility).
 AnneMarie Conley posted on Thursday, February 23, 2012 - 11:18 am
Alexandre, this is really helpful and answers many questions. Thank you. Offer of lunch extended to you, too.
 Bengt O. Muthen posted on Thursday, February 23, 2012 - 11:23 am
Yes, that's one of the ones I was thinking of.
 Anthony Rosellini posted on Tuesday, February 28, 2012 - 6:10 am
Hello,
I have conducted an LPA on 7 indicators in a sample of 1200 individuals and found that the BIC continues to decline up until a 9-class model (the BIC increases for the 10- and 11-class models). However, the LMR indicates a 4-class solution (i.e., the first non-significant LMR was found for the 5-class model). It is noteworthy that I am using a clinical sample and that the majority of the indicators are positively skewed.

1) Should I be concerned that LPA may not be the appropriate model given that the BIC continues to decline up until a 9-class model while the LMR indicates a 4-class solution? Or is it safe to assume that LPA is appropriate given that the BIC eventually did reach a minimum? I have also conducted a single-factor FMA and found that the BIC declines up until a 6-class solution, but the LMR still indicates a 4-class solution.

2) Is the interpretation of the LMR influenced by the size of my sample or by the fact that my indicators are positively skewed? Many LPA papers I have read seem to reach convergence in deciding the number of classes using BIC and LMR; however, many of these studies used much smaller samples (e.g., N = 200 or 300). I also found one study using a larger sample that rescaled positively skewed indicators into ordered categories. Would you recommend doing something like this?

Thanks for the help!
 Linda K. Muthen posted on Wednesday, February 29, 2012 - 6:07 pm
1. Sometimes statistics cannot guide you. I would let the substantive interpretation of the classes guide me. Many of the class profiles may be very similar.

2. Not that we know of. I would not rescale skewed indicators into ordered categories.
 Anthony Rosellini posted on Thursday, March 22, 2012 - 9:48 am
I have arrived at a 6 class solution for my latent profile analysis, but have noticed that the saved class probabilities differ in the solution with random starting values vs. the solution in which I specify the last class to be the largest (e.g., to interpret tech11 and tech14).

Is this discrepancy expected?

Which class probabilities should I use for secondary analyses?
 Linda K. Muthen posted on Thursday, March 22, 2012 - 2:16 pm
Make sure you have replicated the best loglikelihood several times in the first analysis. When you specify the largest class to be the last class, be sure you obtain that loglikelihood.
 Julia Lee posted on Saturday, March 24, 2012 - 5:06 pm
I am conducting:

an LPA on cross-sectional data (spring of first grade) and an LTA on longitudinal data (fall and spring of first grade).
a) Are LPA and LTA robust to floor effects and outliers? Is this an issue, since a mixture distribution is allowed but normality within each latent class is assumed? Is there a way to check for normality within each subgroup, or should I assume in theory that normality was met for each subgroup? Because this is an unselected sample of first graders, some of the variables were positively skewed and there were outliers.
 Bengt O. Muthen posted on Saturday, March 24, 2012 - 5:54 pm
No, LPA and LTA are not robust to floor effects and outliers. As in my other reply to you, it may be better to treat the outcomes as censored-normal (there are other alternatives too).

Outliers can be detected by several methods in Mplus, for example via each observation's likelihood contribution - see the UG.
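One way to request this is through the PLOT command (my reading of the UG; check the option names there):

PLOT:
TYPE = PLOT2;
OUTLIERS = LOGLIKELIHOOD;   ! each observation's contribution to the loglikelihood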
 Anthony Rosellini posted on Wednesday, April 04, 2012 - 1:31 pm
I am trying to include covariates in my latent profile analysis in order to evaluate meaningful between-class differences (e.g., multinomial logistic regressions) on various outcomes. I am noticing that the classes change substantially when regressed onto continuous covariates that are closely related to the latent profile indicators (I am using one self-report measure of depression as a profile indicator and regressing the classes onto a different self-report measure of depression). In contrast, the classes do not change in a meaningful way when I regress them onto related categorical covariates (e.g., a depression diagnosis).

Is it appropriate to model direct effects between class indicators and closely related covariates in a situation such as this?

It seems like you sometimes recommend modeling direct effects between covariates and class indicators (i.e., if classes are changing substantially after including covariates). However, in other posts you also caution against accepting a mixture solution that changes substantially with the addition of covariates.
 Linda K. Muthen posted on Wednesday, April 04, 2012 - 1:39 pm
You might find the following paper which is available on the website helpful:

Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.
 Junqing Liu posted on Tuesday, April 10, 2012 - 10:23 am
I ran the following syntax in Mplus 6 to conduct an LPA. All the observed variables are continuous. I kept getting the error message “ERROR in VARIABLE command:
CLASSES option not specified. Mixture analysis requires one categorical latent variable.” How may I fix the problem? Thanks a lot.

VARIABLE:

Usevariables =turnov1 turnov2 turnov3 transf1 transf2 transf3 transf4 late1 late2r absent1 absent2r absent3 absent4 behave1 behave4 behave6 behave7 behave8 behave9 behave10 search1r search2r search3 search4 county;
Classes=c(3);
Cluster=county;
Within=turnov1 turnov2 turnov3 transf1 transf2 transf3 transf4 late1 late2r absent1 absent2r absent3 absent4 behave1 behave4 behave6 behave7 behave8 behave9 behave10 search1r search2r search3 search4;
Idvariable=pin;


ANALYSIS:
type = mixture twolevel;
Processors=8(STARTS);

Model:
%WITHIN%
%OVERALL%
%BETWEEN%
%OVERALL%
C#1; C#2; C#1 WITH C#2;

output:

tech11 tech14;

SAVEDATA: SAVE=27var3cl2lvPROB;
 Julia Lee posted on Tuesday, April 10, 2012 - 2:06 pm
I have a question about LPA. Because I have missing data on the covariates, I used multiple imputation. However, Tech 11 and Tech 14 are not available with multiple imputation. Is there some other way I can get the VLMR, LMR, and BLRT p-values? If these are not available, does this mean that entropy and looking for a reduction in LL, AIC, BIC, and ABIC would be the only ways to decide how many classes best fit the data and to compare that to substantive theory? Thanks. I appreciate your response.
 Bengt O. Muthen posted on Tuesday, April 10, 2012 - 6:21 pm
Answer to Junqing Liu:

You must include a NAMES= statement that tells Mplus about the variables in your data. USEV is for the variables to be used in the analysis.
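An abbreviated sketch (only some of the variables shown; the NAMES list must match the columns of the data file, in order):

VARIABLE:
NAMES = pin county turnov1-turnov3 transf1-transf4;   ! ... plus the remaining variables
USEVARIABLES = turnov1-turnov3 transf1-transf4 county;
CLASSES = c(3);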
 Bengt O. Muthen posted on Tuesday, April 10, 2012 - 6:22 pm
Yes, information criteria would be the only available approach. But BIC isn't bad.
 J.D. Haltigan posted on Thursday, April 19, 2012 - 12:07 pm
Quick question re: LPA that may have already been answered ad infinitum; I just have not had a moment to do a thorough search.

In the case of a three-class solution with 8 continuous indicators: how is it that the estimated mean parameter for a given indicator yields a significant z-value in the LPA framework, yet when I use the resultant groups to compare between-group differences on the indicators in the ANOVA framework, the groups do not significantly differ from one another on a subset of the indicators that yielded significant z-values in the LPA framework?

Does this have to do with the local independence assumption? Or is it that the z-statistic tells one that the estimated mean for that latent class is significantly different from zero (yet may not be significantly different between the classes)?

Many thanks
 Linda K. Muthen posted on Thursday, April 19, 2012 - 12:24 pm
I assume that when you use the ANOVA framework, you are using most likely class membership. This is not what is used in the LPA, where each person is in each class proportionally. Depending on entropy, these can be different.
 J.D. Haltigan posted on Thursday, April 19, 2012 - 12:47 pm
Thanks, Linda. Yes, in the ANOVA framework, I am using most likely class membership. When you say that in the LPA each person is in each class proportionally what do you mean exactly?
 Linda K. Muthen posted on Thursday, April 19, 2012 - 2:54 pm
A posterior probability is estimated for each person in each class. After model estimation, most likely class membership is determined by examining these model estimated posterior probabilities.
 J.D. Haltigan posted on Thursday, April 19, 2012 - 4:32 pm
Yes, I understand that the posterior probabilities are used to derive class assignment. So I guess my question is still: why would the derived latent classes (used in an ANOVA framework as a manipulation check) show no significant between-group differences on a given indicator that has a significant estimated mean parameter in the LPA itself? Apologies if I am missing the obvious.
 Linda K. Muthen posted on Thursday, April 19, 2012 - 6:14 pm
Because in the LPA the means are not compared across the most likely class a person is in but the posterior probabilities for all classes are used for each person. Only if classification is perfect will they be the same. What is your entropy?
 J.D. Haltigan posted on Thursday, April 19, 2012 - 8:13 pm
Entropy is .921
 Bengt O. Muthen posted on Thursday, April 19, 2012 - 9:09 pm
Your initial message talked about significant z-values for the indicators in an LPA. I assume that you meant significant differences in indicator means across classes? If so, I think you need to send relevant files to be able to answer this.
 J.D. Haltigan posted on Friday, April 20, 2012 - 9:11 am
Thank you. I actually think Linda's answer above is what I am trying to ask. Perhaps if I restate my question more clearly just to be sure.

I have k = 8 indicators, all continuous. I fit a 3-class model (as well as 2 and 4). 3 seems best from the perspective of all of the fit indicators available.

The profile plot clearly shows that it is the latter 4 indicators that best separate the groups (for the first four, the lines are tightly packed together). It gives the impression of a lightning bolt across the sky.

The estimated means for each of the three classes all have significant z-value parameters for the first four indicators (the ones whose lines are tightly packed in the plot).

I then ran the ANOVA on the derived classes and, sure enough, the three groups did not show b/w group differences on the first four indicators but did on the latter four (as the plot would suggest). This got me confused as to the following:

Why would the estimated mean parameter values in the LPA for the first four indicators be significant, yet fail to reveal these differences in the context of the ANOVA (as a manipulation check)? If the significance of the estimated means in the LPA is a function of the posterior probabilities (rather than the most likely class membership), I follow. If not, I am still conceptually unclear.
 Linda K. Muthen posted on Friday, April 20, 2012 - 10:00 am
Yes, the LPA is a function of the posterior probabilities not the most likely class membership.
 J.D. Haltigan posted on Friday, April 20, 2012 - 1:34 pm
So, a significant z-value for an indicator in a given class in the output means that within that class the estimated mean for that indicator is significantly different from zero? In other words, what exactly does the significant mean estimate 'technically' mean as a function of class membership (particularly in the context of my current situation, where the estimated means by class are significant yet the resultant classes themselves do not differ [ANOVA] on a subset of the indicators)?
 Linda K. Muthen posted on Friday, April 20, 2012 - 1:44 pm
Please send files that show exactly what you are comparing and your license number to support@statmodel.com.
 J.D. Haltigan posted on Saturday, April 21, 2012 - 1:59 pm
I actually was able to reach out to a friend who clarified my question for me... Simply put (and perhaps my question was not clear), the significance value for a mean estimate for a given indicator references whether that mean is significantly different from 0. In the context of an ordinary LPA (no covariates, no grouping variable), is this interpretation correct?
 Bengt O. Muthen posted on Saturday, April 21, 2012 - 2:18 pm
z values in the Mplus output are always testing against zero. That this does not test class differences in means is what I was trying to say in my response.
 J.D. Haltigan posted on Saturday, April 21, 2012 - 2:26 pm
I got it. Apologies for the lack of clarity in my question(s). I was switching from LCA to LPA and got a bit bewildered in the process moving from item response probabilities and thresholds to means and variances.
 Katy Roche posted on Monday, April 23, 2012 - 9:27 am
What is the best approach for conducting latent profile analysis with 20 imputed data sets (created in SPSS)? Do I need to create one combined data file from those in order to conduct the LPA?
 Linda K. Muthen posted on Monday, April 23, 2012 - 1:50 pm
They should be in separate files. See Example 13.13. Note that you can impute data in Mplus. See DATA IMPUTATION and examples in Chapter 11 of the user's guide.
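A minimal sketch of this setup (file and variable names are placeholders): the DATA command points to a text file that lists the 20 imputed data files, one per line.

```
DATA:     FILE = implist.dat;    ! text file listing the 20 imputed data sets
          TYPE = IMPUTATION;     ! analyze each set and pool the results
VARIABLE: NAMES = y1-y6;
          USEVARIABLES = y1-y6;
          CLASSES = c(3);
ANALYSIS: TYPE = MIXTURE;
```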
 Maartje Basten posted on Friday, May 11, 2012 - 9:24 am
Dear Dr. Muthen,

I performed a latent profile analysis with 6 continuous variables, N=5000.
I examined 1 to 4 classes. For the 2,3 and 4 class solutions there were a
number of starting value runs that did not converge. In addition, for the
3 and 4 class solutions I got the following warnings:

THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-POSITIVE
DEFINITE FISHER INFORMATION MATRIX. CHANGE YOUR MODEL AND/OR STARTING
VALUES.

THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE
OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH.
THE CONDITION NUMBER IS -0.136D+00.
THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE
MCONVERGENCE OR LOGCRITERION OPTIONS.

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE
COMPUTED. THIS IS OFTEN DUE TO THE STARTING VALUES BUT MAY ALSO BE
AN INDICATION OF MODEL NONIDENTIFICATION. CHANGE YOUR MODEL AND/OR
STARTING VALUES. PROBLEM INVOLVING PARAMETER 2.
RESULTS ARE PRESENTED FOR THE MLF ESTIMATOR.

I increased the STARTS to 500 50 and the MITERATIONS to 5000, but this did
not help. I found that one of the variables is causing these problems, but
I do not want to exclude this variable. Do you know how I could solve
these problems? Thank you.
 Linda K. Muthen posted on Friday, May 11, 2012 - 10:24 am
Please send your output and license number to support@statmodel.com.
 Anat Zaidman-Zait posted on Wednesday, June 06, 2012 - 10:42 am
Hello, I am conducting a Latent Profile Analysis using a set of 8 behavioral characteristics. Based on the results I have identified 5 classes. Currently, I am interested in including covariates in the model. When I run the model with the covariates, with 5 classes, I end up with a different number of participants in each of the classes in comparison to my initial analysis. Hence, I thought to fix the class means for each of the variables based on the initial analysis results. How do I set the class means for each class in the syntax?
Thank you.
 Linda K. Muthen posted on Wednesday, June 06, 2012 - 4:21 pm
When this happens, it points to the need for direct effects. See the following paper on the website for more information:

Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.
 deana desa posted on Friday, June 08, 2012 - 8:12 am
Hello Dr. Muthen,

I have fundamental questions regarding latent class analysis and latent profile analysis, which is confusing me.

Here is the situation. The data that I have measure 6 behaviors. Each behavior has 9 categories and is evaluated (scored) for 12 different cases. Thus, the cases can be considered cross-sectional(?) measures.

Is LCA a correct analysis to profile the data I have?

Thanks! I appreciate it.

Do you have a recommendation for literature for me to start with this LCA for profile analysis?
 Linda K. Muthen posted on Friday, June 08, 2012 - 11:24 am
The basic distinction between LCA and LPA is the scale of the latent class indicators. In LCA, they are categorical. In LPA, they are continuous. If you treat your latent class indicators as categorical, you have an LCA.

See the Topic 5 course handout and video on the website. There are many references there.
 Ting posted on Thursday, June 21, 2012 - 8:48 am
Hi Drs. Muthen,

I am interested in the latent profiles of students (n = 488). I have two questions about LCA:

1) If the variables (Likert-scale questions on epistemic beliefs) I use to generate the latent classes are multi-dimensional (i.e., I ran an EFA with them and three factors were extracted), then should I still use LCA (Mplus ex 7.9)? If not, what model (and Mplus example) should I use?

2) The smallest BIC is when modeling 6 classes (class sizes ranging from 23 to 100). Is the 6-class result trustworthy?

Thanks!
 Linda K. Muthen posted on Friday, June 22, 2012 - 9:50 am
1. You could consider 7.17.

2. I would consider other factors. See the Topic 5 course handout and video.
 Robert Young posted on Friday, June 29, 2012 - 8:32 am
Hello,
I have a question concerning Latent Profile Analysis (LPA). J.D. Haltigan touched on the issue in an earlier post.

I have conducted an LPA of 12 different coping methods (12 items), each measured on a 1-5 scale. All are fairly normally distributed.

In LCA the output reports which indicators differ between the latent classes (latent class 1 vs. latent class 2 might significantly differ on three items). Is there an equivalent output for LPA? If not, is there an easy way to make this comparison? I suppose a series of equality constraints could be used, but that seems incredibly cumbersome.

A related question: how can I determine which item contributes most to discriminating between the latent classes? In other words, I want to find out which indicators contribute most to discriminating between the groups and which are minor; ideally I would like to rank the importance/contribution of each item.

Thanks in advance.
Robert
 Bengt O. Muthen posted on Friday, June 29, 2012 - 1:24 pm
I don't know what you mean when you say that LCA output reports which indicators differ between the latent classes.

For both LCA and LPA the interpretation of the classes is most easily obtained by the PLOT command, asking for SERIES and getting the mean/probability profiles over the indicators for each class.

You can, as you say, test differences across classes between the means in the LPA, but that is cumbersome. There is not an automatic way that I know of to determine which indicators best discriminate between classes; one has to look at which indicators' mean estimates differ the most across classes. Some indicators may be good at discriminating between some classes and other indicators good for discriminating other classes, so it would be hard to get a simple summary of this, I would think. To get at the significance you can use Model Test, but again it is cumbersome to do it for all possible differences.
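A sketch of the Model Test approach for a single indicator in a 3-class LPA (the labels and the indicator name y1 are placeholders): label the class-specific means, then state the equalities to be tested jointly as a Wald test.

```
MODEL:
%C#1%
[y1] (m1);
%C#2%
[y1] (m2);
%C#3%
[y1] (m3);

MODEL TEST:        ! joint Wald test that the y1 means are equal across classes
0 = m1 - m2;
0 = m1 - m3;
```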
 J.D. Haltigan posted on Friday, June 29, 2012 - 1:35 pm
Just as an addendum to Robert's post, the issue that I finally got straight after figuring out how to articulate my question properly is that the significance test of the indicator in the LPA analyses tests whether the indicator is significantly different from zero with respect to a given class. This is usually of little substantive import although I guess one could make the case that if the indicator is not significantly different from zero (for all classes?) then perhaps it could be dropped from the indicator set?
 Bengt O. Muthen posted on Friday, June 29, 2012 - 1:53 pm
Mplus gives the test of significantly different from zero as a standard and in some cases this test is not of interest at all - this is one such case. The means don't have to be significantly different from zero for the indicators to be useful.
 Robert Young posted on Monday, July 02, 2012 - 2:33 am
Dear Dr Muthen (and J.D. Haltigan), Thank you both for the comments. Very helpful.

I suppose one way would be to center and/or standardize the variables - then at least that way I will know which items in any latent class differ from the average/typical response!

RE: 'I don't know what you mean when you say that LCA output reports which indicators differ between the latent classes.'

Perhaps I am misinterpreting or misrepresenting this, but Mplus latent class analysis provides a test of differences in odds ratios between classes:

e.g.
http://www.ats.ucla.edu/stat/mplus/seminars/introMplus_part2/lca.htm

*******************************
LATENT CLASS ODDS RATIO RESULTS

Latent Class 1 Compared to Latent Class 2
*******************************

Regards

Robert
 Bengt O. Muthen posted on Monday, July 02, 2012 - 7:50 am
I see what you mean about the odds ratios - yes, something analogous can be added for continuous indicators.
 Maria Kapantzoglou posted on Tuesday, July 03, 2012 - 8:06 am
Hello,
I conducted a LPA and I was wondering if you could explain why:

(i) the FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASS PATTERNS
BASED ON ESTIMATED POSTERIOR PROBABILITIES

and

(ii) the CLASSIFICATION OF INDIVIDUALS BASED ON THEIR MOST LIKELY LATENT CLASS MEMBERSHIP

provide different estimates. I am not sure which one to report. Both are very similar but not the same.

Thank you,
Maria
 Linda K. Muthen posted on Tuesday, July 03, 2012 - 10:53 am
The first one is based on the model where individuals have a posterior probability for class membership in each class. The second is based on the largest class membership for each individual. I would report the first one.
 Mario Mueller posted on Thursday, August 09, 2012 - 5:15 am
Dear Linda,

I performed an LPA in a sample of about n = 9,000 and got 2 classes. Then, based on a subsample of selected high scorers (top 10%), I performed another LPA and got 3 classes, of which two were similar to the total-sample solution, plus one more class that was differently shaped.
I'm interested in whether these two similar classes of the two models are comparable. I tried to compare proportions via crosstabs, but I'm uncertain what that reveals. Any suggestions?

Thanks, Mario
 Linda K. Muthen posted on Thursday, August 09, 2012 - 10:35 am
I would look at the profiles of the means/thresholds and compare them visually. You can use the PLOT command with the SERIES option to plot the profiles.
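The plot request looks something like this (the indicator names are placeholders):

```
PLOT: TYPE = PLOT3;
      SERIES = y1-y8 (*);   ! mean/threshold profile over the indicators by class
```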
 Mario Mueller posted on Monday, August 13, 2012 - 4:50 am
Dear Linda,

Thank you for your reply.
We have already used the PLOT command to visualize the profiles of both models.
Visually, the profiles of the two classes in solution #1 (all participants) indeed look very similar to the first two classes of solution #2 (subgroup of high scorers only).
We are just wondering whether there is a test to examine whether this similarity can be statistically proven? For example, could test X tell us that, yes, the two solutions, albeit stemming from different samples, are not significantly different?

Thank you,
Mario
 Linda K. Muthen posted on Monday, August 13, 2012 - 9:09 am
I don't know of any such test.
 Adam Myers posted on Thursday, August 30, 2012 - 11:53 am
In LPA, how important is it that the continuous variables used to estimate the class solution approximate a normal distribution? Is it customary to run the typical diagnostics (histograms, etc.) and correct for non-normality by taking the logs of the variables, etc.? Does doing this sort of thing make an important difference? I haven't been able to find advice on this matter in the literature. Your input would be much appreciated. Thanks in advance.
 Linda K. Muthen posted on Thursday, August 30, 2012 - 1:25 pm
I would deal with non-normality by using the MLR estimator which is robust to non-normality.
 Susan Pe posted on Thursday, September 20, 2012 - 12:10 pm
I am doing a Latent Profile Analysis. Other than using the Vuong-Lo-Mendell-Rubin and Lo-Mendell-Rubin adjusted LRT tests and the parametric bootstrapped likelihood ratio test, someone recommended that I also check with MANOVA to make sure the groups differ, as people do with cluster analysis. Does that make sense for LPA? Thank you.
 Linda K. Muthen posted on Thursday, September 20, 2012 - 2:55 pm
To do a MANOVA you would need to use most likely class membership. I think a better approach is to test if the means are different across classes in your LPA model using MODEL TEST.
 Jung-Ah Choi posted on Friday, September 21, 2012 - 3:51 am
Dear Linda,

Thanks, as always, for your help. I'm doing Latent Profile Analysis. A 4-class solution was most appropriate. Next, I added predictors of the classes. However, all the coefficients were the same across classes. I'm not sure what the problem was. My syntax and output follow.

Syntax: ...

MODEL:%OVERALL%
Zdep WITH Zanx Zagg;
Zanx WITH Zagg;
c#1 ON grade se sef fb fa sex school;

output :

Parameterization using Reference Class 1

C#2 ON
GRADE 0.079 0.029 2.760 0.006
SE -0.450 0.055 -8.114 0.000
SEF 0.033 0.060 0.547 0.585
FB -0.362 0.044 -8.166 0.000
FA -0.303 0.045 -6.787 0.000
SEX -0.042 0.051 -0.821 0.412
SCHOOL -0.201 0.091 -2.212 0.027

C#3 ON
GRADE 0.079 0.029 2.760 0.006
SE -0.450 0.055 -8.114 0.000
SEF 0.033 0.060 0.547 0.585
FB -0.362 0.044 -8.166 0.000
FA -0.303 0.045 -6.787 0.000
SEX -0.042 0.051 -0.821 0.412
SCHOOL -0.201 0.091 -2.212 0.027
......
 Linda K. Muthen posted on Friday, September 21, 2012 - 6:08 am
These coefficients are held equal across the classes as the default. You need to mention the ON statement in the class-specific parts of the MODEL command to relax this equality.
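One way this is often written, as a hedged sketch using the poster's variable names (check the current defaults in the user's guide): regress the latent class variable itself, rather than a single logit, so that each logit gets its own coefficients.

```
MODEL: %OVERALL%
Zdep WITH Zanx Zagg;
Zanx WITH Zagg;
! with 4 classes, this expands to separate regressions for the
! three logits (c#1, c#2, c#3) relative to the reference class:
c ON grade se sef fb fa sex school;
```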
 Vinay K. posted on Monday, September 24, 2012 - 7:12 am
Hello Drs. Muthen,

I ran an LPA model where latent clusters were extracted from two latent variables (say, depression and anxiety), each of which consists of three item scales.

The three-cluster solution was judged the best according to LMR-LRT test and
other fit indices as well as meaningfulness of the cluster profiles.

A journal reviewer asked me to test the conditional independence assumption
and to report pairwise residuals.
So I requested TECH10 in the Mplus OUTPUT command, but it gave me the warning
"TECH10 option is only available with categorical or count outcomes.
Request for TECH10 is ignored."

So it seems that Tech10 cannot be used for continuous variables. What
should I do to get pairwise residuals?

I have not used Mplus a lot. I'd appreciate it if you could help me out on
this.
 Linda K. Muthen posted on Monday, September 24, 2012 - 11:51 am
TECH10 is available for categorical outcomes.
 Jung-Ah Choi posted on Monday, October 01, 2012 - 12:26 am
Dear Linda,

I always do appreciate your help.
I'm running an LPA. I'd like to examine the effects of the classes (4 classes) on one outcome variable (a continuous variable). Is it possible to analyze the classes as a predictor? I got error messages when I used a MODEL command like "sa (continuous outcome variable) ON c(4);". Would you tell me how to specify the syntax if this is possible? Thanks in advance.
 Linda K. Muthen posted on Monday, October 01, 2012 - 6:16 am
This effect is found in the varying of the means or thresholds across classes. You don't use an ON statement.
 Oxnard Montalvo posted on Wednesday, October 31, 2012 - 6:55 pm
Hi,
I am running an LPA on 9 continuous observed variables. What is the implication for my results if I allow the variance of the observed variables to vary across the classes?

And is it correct that there would be no 'equivalent' to this in LCA (i.e. equivalent to freeing the variance of indicators across classes in LPA), since the indicators are binary?

Thanks
 Linda K. Muthen posted on Thursday, November 01, 2012 - 11:38 am
If you relax the equalities of the variances across classes, the model is less stable and it may be more difficult to replicate the best loglikelihood. You can look at profiles of the indicators for each class to assess how much within class variability there is and relax the necessary variances.
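Relaxing the equalities looks like this (a sketch with placeholder indicator names): mentioning a variable by itself in a class-specific part of the MODEL command frees its variance in that class.

```
MODEL:
%OVERALL%
%C#1%
y1-y9;    ! class-specific variances for the 9 indicators in class 1
%C#2%
y1-y9;    ! and in class 2; repeat for further classes as needed
```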
 John G. Orme posted on Saturday, December 29, 2012 - 9:37 am
Hi Linda,

Suppose that you are doing a latent class analysis with standardized measures that have arbitrary and different scales (e.g., a standardized measure of marital satisfaction with a potential range from 0 to 100, and a measure of marital conflict with a potential range of 0 to 20). Also, suppose that you allow the means and variances of the indicators to vary across classes. Would there be a problem with transforming the raw scores to standard scores in this situation? I wonder because it seems like there are advantages to doing this (e.g., it makes it a lot easier to interpret the profile plot because you can interpret differences between classes and other differences as differences in standard deviation units).

Thanks for any advice you can give me about this. My apologies if I’m missing the obvious here!
 Bengt O. Muthen posted on Saturday, December 29, 2012 - 2:33 pm
I don't think standardization would be problematic here. Your modeling is not making comparisons across variables (or the same variable across time as with growth), but only across classes.
 Yan Liu posted on Saturday, December 29, 2012 - 8:26 pm
Just want to follow up this question. What if I were doing a latent transition profile analysis? Would standardization work across time? Thanks!
 Linda K. Muthen posted on Sunday, December 30, 2012 - 6:29 am
No, you should not standardize when you are comparing across time.
 Yan Liu posted on Sunday, December 30, 2012 - 8:51 am
Is that because we will not be able to compare the changes of means over time after standardization? Thanks.
 Linda K. Muthen posted on Sunday, December 30, 2012 - 4:47 pm
Yes.
 Niamh Bennett posted on Friday, March 08, 2013 - 10:52 am
Hello,

I have run the following model and am wondering how to interpret the value of [gpa2] for each class. Are these values simply the mean of gpa2 for each class, while holding sex1 and gpa1 at the level of the sample mean?

MODEL:

%OVERALL%

c on sex1 gpa1;

%C#1%
[papp pavoid efficacy mastery];
[gpa2];


%C#2%
[papp pavoid efficacy mastery];
[gpa2];

%C#3%
[papp pavoid efficacy mastery];
[gpa2];
 Linda K. Muthen posted on Friday, March 08, 2013 - 11:20 am
If gpa2 is a continuous variable, this is a mean.
 Niamh Bennett posted on Monday, March 11, 2013 - 4:59 pm
Hello again,

Yes, gpa2 is a continuous variable, and I understand that [gpa2] is the code to request a mean. However, what is unclear to me is how the variables in the "c on sex1 gpa1" portion of the model affect the estimated means for each of the latent classes. In my situation, are the means for each class estimated while holding gpa1 and sex1 at the sample mean?
 Linda K. Muthen posted on Tuesday, March 12, 2013 - 9:44 am
Sex1 and gpa1 predict the classes. Gpa2 varies across the classes that are predicted by sex1 and gpa1. There is no direct relationship between gpa2 and the other variables.
 Jason A Chen posted on Thursday, March 14, 2013 - 7:17 am
Dear Drs. Muthen,

I am forming profiles based on four variables (Immersion, Interest, Usefulness, and Relatedness). These items were assessed immediately following (T2) a technology activity that students participated in. I would like to see if certain pre-intervention variables (T1) predict membership in these profiles, and whether these profiles are related to outcomes that I assessed after the intervention was over (T4). In Mplus, I know that there are the AUXILIARY (e) and AUXILIARY (r) options, which serve this purpose. Does it make sense to use them given that the variables predicting latent class membership occur at Time 1 (T1) and the correlates of latent class membership occur at Time 4 (T4)? Or is there some other code that I should enter to account for this difference in time?

Thank you!
 Kari Visconti posted on Thursday, March 14, 2013 - 11:31 am
Hello!
I am currently running an LPA with four indicators of class membership. I am interested in including a control variable to directly predict one of these class indicators. Is it possible to simply include an "ON" statement in the overall model command? If so, what are the implications for interpreting output? For example, the indicator that is being predicted by another variable is presented in the output within each class as an intercept rather than a mean.
Thank you!
 Linda K. Muthen posted on Thursday, March 14, 2013 - 2:34 pm
Jason:

I don't see a problem with this. See Web Note 15 which shows a new 3-step approach. It currently needs to be done manually.
 Linda K. Muthen posted on Thursday, March 14, 2013 - 2:35 pm
Kari:

Yes, you can include an ON statement in the overall part of the MODEL command. And yes, you will then be estimating an intercept instead of a mean.
 Christine McWayne posted on Monday, March 18, 2013 - 4:05 pm
Hi Dr. Muthen,

We are using Mplus to run an LPA to see whether different profiles of family engagement exist and to examine the relations between these profiles and demographic characteristics and child outcomes.

When we looked at the results, all but 2 of the auxiliary variables were not in the expected metric. We then looked at the class membership information that was saved, and also found the variables not in the order identified in the output.

Can you help us understand why this happened and how this can be resolved? I tried looking at the forum but couldn't seem to find anything about this.
 Linda K. Muthen posted on Monday, March 18, 2013 - 4:08 pm
Please send the files and your license number to support@statmodel.com. Please show in the output what you mean.
 Niamh Bennett posted on Thursday, March 21, 2013 - 10:58 am
I have a three-class model with distal outcomes. Is there any way in Mplus to test the effect of class membership on the distal outcomes while controlling for other variables? In particular, I'm interested in knowing whether class membership is related to a distal outcome even when controlling for prior levels of that outcome.
 Linda K. Muthen posted on Thursday, March 21, 2013 - 1:37 pm
You can regress the distal outcome on the control variables. The relationship between class membership and the distal outcome is then the varying of the intercept rather than the mean of the distal outcome across classes.
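A sketch of this specification (variable names are placeholders): the distal outcome is regressed on the controls in the overall part, and its intercept is mentioned in each class so it can vary.

```
MODEL:
%OVERALL%
distal ON pretest x1;   ! controls; distal's mean becomes an intercept
%C#1%
[distal];               ! class-varying intercepts carry the class effect
%C#2%
[distal];
%C#3%
[distal];
```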
 Andreas Mokros posted on Friday, April 12, 2013 - 12:50 am
Dear Sir/Madam:
I noticed that in LPA the means and variances for the latent classes differ from the means/variances that would result if one computed them solely based on the most likely class a person is in. As you mentioned in response to an earlier posting, this is due to the fact that "the posterior probabilities for all classes are used for each person". Now I am wondering which values to report in a paper. Wouldn't it be easier for the purposes of replication to focus solely on the means/variances implied by the most likely class instead of the posterior probabilities for all classes? Especially since we would like to provide a Bayesian classification function for assigning new cases to the classes. It would be most appreciated if you pointed us in the right direction.
Thank you and kind regards,
Andreas
 Linda K. Muthen posted on Friday, April 12, 2013 - 10:31 am
I would not use most likely class membership. I would use the model-estimated values. And for prediction, I would use the SVALUES option of the OUTPUT command to obtain the input with ending values as starting values. I would change the asterisks to @ symbols, which fixes the parameter values, and use that input as a prediction mechanism.
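The request itself is just an OUTPUT option; a sketch of the workflow (the parameter value shown is made up):

```
OUTPUT: SVALUES;
! Mplus prints the model with ending values as starting values, e.g.
!   [y1*2.345];
! changing * to @ fixes the parameter at that value:
!   [y1@2.345];
```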
 Ting Dai posted on Thursday, April 25, 2013 - 7:56 am
Dear Drs. Muthen,

I have 2 measures, each with 16 items (continuous variables), and there are 3 latent factors for each measure.

If I want to see the classification of individuals with these 32 items, what LPA model should I use?

I thought about a regular LPA with all 32 items (ex7.9), but because there are 2 measures I think perhaps I should do a LPA model with two latent class variables (ex7.14)?

A general but related question is:
If the observed indicators are known to be multidimensional (i.e., loaded on multiple factors), should LPA/LCA be used to do classification at all?

Thanks in advance for your reply!
 Bengt O. Muthen posted on Thursday, April 25, 2013 - 9:04 pm
You can do many different model variations for this, either letting latent class variables influence the factors or the items directly. In the former case, you can specify a latent class variable for each set of 3 factors for the 2 measures.

You have models of this kind shown in UG examples 17, 26, 27.
 Carey posted on Monday, June 24, 2013 - 2:21 pm
I am trying to decide whether to use LPA or a cluster analysis with my data, but am having trouble finding resources that may help me decide. Is the simplified answer that a profile analysis is used more when you are looking at several variables that are somehow related (e.g., scales on one measure like the MMPI), whereas in a cluster analysis you can use several different measures and kinds of measures?

My research question involves looking at several risk and protective factors (individual, family, environmental) which I hypothesize will create several distinct classes that differ on their potential for "resilience" (e.g., high on risk factors, few protective factors in one group; low on risk factors, high on protective factors in another). I do not believe that my factors are necessarily related as latent constructs, so am not sure if LPA is the right approach.

Finally, I will have two data points and am interested in looking at how the classes predict to outcomes.

Thank you!
 Bengt O. Muthen posted on Tuesday, June 25, 2013 - 9:02 am
I would say that the two methods can be used for the same sort of data and research questions.

Here is a good paper comparing K-mean clustering and mixture modeling:

Vermunt, J.K. (2011). K-means may perform as well as mixture model clustering but may also be much worse: Comment on Steinley and Brusco (2011). Psychological Methods, 16, 82-88.

With two time points and an interest in classes predicting outcomes, I would choose LPA and LTA. See also our Web Note 15.
 Carey posted on Tuesday, June 25, 2013 - 1:36 pm
Thank you! Is there a limit to the number of measures that one can use for LPA? And should the measures used to predict the classes be related?

I ask because I am using several individual level variables (self-esteem, IQ), family level variables (parental monitoring, quality of home life), peer level (friendships), and environmental (neighborhood) to predict my classes. Does this limit the usefulness of LPA since I do not expect these variables to necessarily "hang together" as latent constructs?
 Bengt O. Muthen posted on Tuesday, June 25, 2013 - 2:00 pm
No, no, and no.

The predictors of your latent class variable do not have to be thought of as a construct. The class variable is the construct.

If you have many predictors, you may want to put the predictors on the auxiliary list and request R3STEP. See UG and Web Note 15.
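The R3STEP request goes on the AUXILIARY list in the VARIABLE command; a sketch with placeholder names:

```
VARIABLE: NAMES = y1-y8 x1-x6;
          USEVARIABLES = y1-y8;
          CLASSES = c(3);
          AUXILIARY = x1-x6 (R3STEP);   ! 3-step test of each predictor of c
```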
 Wilfried Smidt posted on Tuesday, July 09, 2013 - 6:46 am
Dear Prof. Muthen,

I have conducted a Latent Profile Analysis and I am considering the possibility of rejecting a model due to conditional probabilities of 1 or 0, as suggested by some researchers. Is this an appropriate way to deal with this problem?
Thank you very much.

Wilfried
 Linda K. Muthen posted on Wednesday, July 10, 2013 - 2:32 pm
You should not reject a model due to probabilities of 1 or 0. This can help define the classes.
 Reem Saeed posted on Monday, October 14, 2013 - 8:21 am
Hello,

New to Mplus and LCA, I am trying to come up with latent classes of socioeconomic status in my country. The groups I have (i.e., IDs or areas) number 118. I've done the analysis with the indicators as proportions calculated from the total population of each area, and have also tried it using simple counts, e.g., the number of people who are illiterate as an indicator.

Input:

Variable:
Names are ..... (all variables in data);
usevariables = ..... (variables I have chosen to use, includes the ID or area);
classes = c(2);
Analysis:
Type = mixture ;
starts = 500 100;
stiterations = 50;

output:
WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE
SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA ...

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE
TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE
FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING
VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE
CONDITION NUMBER IS -0.185D-16. PROBLEM INVOLVING PARAMETER 61.

ONE OR MORE PARAMETERS WERE FIXED TO AVOID SINGULARITY OF THE
INFORMATION MATRIX. THE SINGULARITY IS MOST LIKELY BECAUSE THE
MODEL IS NOT IDENTIFIED, OR BECAUSE OF EMPTY CELLS IN THE JOINT

Advice please?
Thanks
 Linda K. Muthen posted on Tuesday, October 15, 2013 - 10:39 am
Please send your output and license number to support@statmodel.com.
 emmanuel bofah posted on Wednesday, October 30, 2013 - 7:40 pm
Chapter 7 of the user's guide states that, "In contrast to factor analysis, however, LCA provides classification of individuals". Does this mean that if I am saving my SPSS data into a .dat file I should flip the data so that individuals become the columns and variables become the rows?
 Bengt O. Muthen posted on Wednesday, October 30, 2013 - 8:30 pm
No, you are thinking of Q factor analysis, which is something different. In LCA the latent variable is nominal, that is, unordered categorical; people get placed in categories.
 emmanuel bofah posted on Wednesday, October 30, 2013 - 9:20 pm
OK, thank you very much. I was thinking of the relation between Q factor analysis and latent class profiling. Is Q factor analysis possible in Mplus?
 Linda K. Muthen posted on Thursday, October 31, 2013 - 10:24 am
Mplus does not have an option for this. You can try changing the data to have variables as rows and observations as columns. Using that for a CFA is Q factor analysis. I'm not sure if having more columns than rows will create an estimation problem.
 Kelly DeMartini posted on Thursday, November 07, 2013 - 1:20 pm
Good afternoon,

I have conducted an LPA with a 10-item scale. A reviewer would like us to assess the local independence assumption of our LPA and assess the bivariate residuals. As Tech10 is not available for continuous data, is there a way for me to get this information? If there is not, how should I best assess this assumption? Thanks for your help.
 Bengt O. Muthen posted on Thursday, November 07, 2013 - 2:41 pm
You can request RESIDUAL. Although no test is provided, this might give you an idea for which within-class covariances you want to let free. You can also look at Modindices to guide you in this way. TECH12 is a further possibility. And if you have a high entropy (> 0.8, say), you can divide subjects into classes using most likely class and compute within-class correlations.
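The classify-then-examine idea in the last sentence can be approximated outside Mplus once the posterior class probabilities have been saved. A minimal sketch in Python, assuming hypothetical arrays (in practice `post` would come from the SAVEDATA CPROBABILITIES output):

```python
import numpy as np

def within_class_correlations(y, post):
    """Assign each subject to the most likely class, then correlate the
    continuous indicators within each class.

    y    : (n, p) array of continuous indicators
    post : (n, k) array of posterior class probabilities
    Returns a dict mapping class index -> (p, p) correlation matrix.
    """
    modal = post.argmax(axis=1)          # most likely class per subject
    out = {}
    for c in np.unique(modal):
        yc = y[modal == c]
        if len(yc) > 1:
            out[c] = np.corrcoef(yc, rowvar=False)
    return out

# Toy illustration: two well-separated classes, three indicators
rng = np.random.default_rng(0)
y = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
post = np.vstack([np.tile([0.95, 0.05], (50, 1)),
                  np.tile([0.05, 0.95], (50, 1))])
corrs = within_class_correlations(y, post)
```

As Bengt notes, this shortcut is only reasonable when entropy is high (say > 0.8), since it ignores the classification uncertainty that the model-based alternatives (RESIDUAL, Modindices, TECH12) take into account.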
 Kelly DeMartini posted on Friday, November 08, 2013 - 6:27 am
Thanks very much. That's very helpful. I have requested the RESIDUAL output. Is there a rule of thumb that can be used to determine whether a within-class covariance is too high and/or how many covariances can be freed before the model is invalid?
 Bengt O. Muthen posted on Friday, November 08, 2013 - 8:17 am
Short answer: No.

The too high question may be informed by the Modindices.

With continuous indicators you can add all covariances as in UG ex 7.22.
 Adar Ben-Eliyahu posted on Thursday, November 14, 2013 - 10:16 am
Dear Linda, I was told that in order to provide support that the best-fitting model for LCA/LPA is “good enough” to justify the use of MANOVA, I should use the 3-step procedure available in Version 7 of Mplus (Asparouhov & Muthén, 2013), which allows researchers to examine the relation between the latent profile variable and the other variables of interest independently while still incorporating the classification uncertainty associated with the latent profile models.
As I was searching what this meant, I came across webnote 15. However, I am having difficulties running this analysis because 1) the variables used to categorize individuals into latent classes are not categorical, they are continuous and 2) I was unsure what was meant by auxiliary variables - would that be the "outcome" or "dependent" variable of interest?
I think that this advice might be for CFAs or for categorical grouping variables, not necessarily for the data I was using. Could you possibly advise?
Thank you
 Brianna H posted on Friday, November 15, 2013 - 9:11 am
Hello Drs. Muthen-- In the output of a latent profile analysis with four continuous indicators (all ratio-scored), I received the following warning message for the input instructions: "All variables are uncorrelated with all other variables within class. Check that this is what is intended."

(1) The observed indicators are, in fact, correlated with each other. Does this message refer to the default in Mplus that covariances among latent class indicators are fixed at zero? Could you please advise about what this message means and possible ways to proceed?
(2) Is there a citation that you recommend for the use of ratio-scoring in latent profile analyses or SEM generally? Thank you.
 Bengt O. Muthen posted on Friday, November 15, 2013 - 4:26 pm
Answer to Adar

3-step modeling can be used also with continuous latent class indicators.

An auxiliary variable can be a predictor of the latent class variable or a distal outcome, that is, a variable influenced by the latent class variable.
 Bengt O. Muthen posted on Friday, November 15, 2013 - 4:30 pm
Answer to Brianna

(1) It means that within class the variables are specified to be uncorrelated. You can make them correlated by using WITH. The fact that the variables are correlated in the sample is captured by the variables being influenced by the same latent class variable, so a within-class correlation parameter is not necessarily needed.

(2) No, I can't think of any. Others?
 Adar Ben-Eliyahu posted on Friday, November 15, 2013 - 6:54 pm
Thank you so much for your prompt reply Dr. Muthen!
I was able to get this to run, but am now struggling with a MANOVA as the 3rd step. I entered all the DVs as the auxiliary variables and it seemed alright, but then I could not find the actual across-group comparison. Please find my code below:
Thank you!

VARIABLE:
NAMES ARE
ID Q911A1 RQ911A2 Q911A3 Q911A4 Q946A1 Q946A2 Q946A3 Q946A4 ZQ911A1 ZQ911A3 ZQ911A4 ZQ946A1 ZQ946A2 ZQ946A3 ZQ946A4;

USEVAR ARE ID ID Q911A1 RQ911A2 Q911A3 Q911A4 Q946A1 Q946A2 Q946A3 Q946A4 ZQ911A1 ZQ911A3 ZQ911A4 ZQ946A1 ZQ946A2 ZQ946A3 ZQ946A4;
IDVARIABLE IS ID;
missing are BLANK ;
class=C(3);
Auxiliary=ZQ911A1 ZQ911A3 ZQ911A4 ZQ946A1 ZQ946A2
ZQ946A3 ZQ946A4(R3STEP);
MODEL:
ANALYSIS: type = mixture;
starts = 200 10;
%overall%
Q911A1(1)
Q911A3(2)
Q911A4(3)
Q946A1(4)
Q946A2(5)
Q946A3(6)
Q946A4(7);

%C#2%
Q911A1(1)
Q911A3(2)
Q911A4(3)
Q946A1(4)
Q946A2(5)
Q946A3(6)
Q946A4(7);

%C#3%
Q911A1(1)
Q911A3(2)
Q911A4(3)
Q946A1(4)
Q946A2(5)
Q946A3(6)
Q946A4(7);
[Q911A1 Q911A3 Q911A4 Q946A1 Q946A2 Q946A3 Q946A4];
 Ginnie posted on Friday, November 15, 2013 - 10:22 pm
Hello Dr. Muthen,

I am conducting a LPA, and would like to incorporate additional observable (continuous) variables to predict latent profiles.

I am thus wondering how I can run this kind of analysis in MPlus.

Thanks,
Ginnie
 Linda K. Muthen posted on Saturday, November 16, 2013 - 11:26 am
Ginnie:

See Example 7.12.
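For readers without the UG at hand, the heart of that approach is a multinomial regression of the latent class variable on the covariates in the overall part of the model. A minimal sketch, with hypothetical variable names (the UG example is the authoritative syntax):

```
VARIABLE:
  NAMES = y1-y4 x1 x2;         ! y1-y4: continuous profile indicators
  USEVARIABLES = y1-y4 x1 x2;
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  c ON x1 x2;                  ! covariates predict class membership
```

Note that in this one-step setup the covariates can influence class formation; the R3STEP auxiliary approach discussed earlier in this thread avoids that.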
 Bengt O. Muthen posted on Saturday, November 16, 2013 - 6:12 pm
Answer to Adar:

Please send your output to Support.
 Brianna H posted on Sunday, November 17, 2013 - 2:23 pm
Thank you for your replies.
 Tan Bee Li posted on Sunday, January 05, 2014 - 9:49 am
Hi,

1. Will there be an issue if the covariates and the indicators consist of both discrete and continuous data?

2. Distal outcomes are similar to the indicators but are identified based on time. Does Mplus require specific research designs, such as repeated measures, for this analysis?

3. Within-class correlations: Since 5 of my indicators are derived from the same construct, they are correlated (these are facets within a particular construct). Moreover, the executive function skills that make up my 6th through 9th indicators were found to be associated with some of these five indicators, and may be skills that contribute to their development, and vice versa. Would it be redundant then for me to use LPA?
Is there a hierarchical version of LPA, as there is for LCA?

4. I have one questionnaire (Likert) that measures 5 facets that make up a construct. I am hoping to use the composite score for facet A as indicator 1, the composite score for facet B as indicator 2, etc. Is that allowable?

5. What is the minimum sample size Mplus requires for meaningful analysis? Or is it determined by the number of indicators?

Based on my description, if LPA is not suitable, could you recommend a better model?

Thanks.
 Bengt O. Muthen posted on Monday, January 06, 2014 - 8:29 am
1. No. But remember that covariates should not be put on the CATEGORICAL= list.

2. No.

3. No. Yes.

4.Yes.

5. No general rule, but you want more observations than parameters generally.

I don't want to make a general recommendation because it would require a much deeper understanding of what you do.
 Tan Bee Li posted on Tuesday, January 07, 2014 - 10:50 pm
Thanks for your response.

What would be the hierarchical version of LPA?

Also, the EF skills (indicators 6 to 9) have also been proposed to be outcomes rather than predictors (outcomes of the 5 facets; predictors 1-5). Is there a meaningful way for me to examine whether indicators 6 to 9 are better treated as predictors vs. distal outcomes?

Thanks.
 Bengt O. Muthen posted on Wednesday, January 08, 2014 - 10:43 am
By a hierarchical version of LPA I assume you mean two-level LPA, in which case you want to look at:

Henry, K. & Muthén, B. (2010). Multilevel latent class analysis: An application of adolescent smoking typologies with individual and contextual predictors. Structural Equation Modeling, 17, 193-215.
 Carrere posted on Wednesday, March 05, 2014 - 9:45 am
Hi,

I am running an LPA and would like to free the variances across classes. It would be great if you could let me know what code has to be added.

Here is my current program:
VARIABLE:
NAMES ARE id icog scog ihea shea ieng seng;
USEVARIABLES ARE icog scog ihea shea ieng seng;
CLASSES = group(3);

ANALYSIS:
TYPE IS MIXTURE;
LOGHIGH = +15;
LOGLOW = -15;
UCELLSIZE = 0.01;
ESTIMATOR IS ML;
LOGCRITERION = 0.0000001;
ITERATIONS = 1000;
CONVERGENCE = 0.000001;
MITERATIONS = 500;
MCONVERGENCE = 0.000001;
MIXC = ITERATIONS;
MCITERATIONS = 2;
MIXU = ITERATIONS;
MUITERATIONS = 2;

STARTS=100 10;
MODEL:
%OVERALL%
icog with scog ihea shea ieng seng;
scog with ihea shea ieng seng;
ihea with shea ieng seng;
shea with ieng seng;
ieng with seng;


OUTPUT: STANDARDIZED TECH11;

Many thanks!
 Linda K. Muthen posted on Wednesday, March 05, 2014 - 10:45 am
You mention the variances in the class-specific parts of the MODEL command, for example,

%group#1%
icog scog ihea shea ieng seng;
%group#2%
icog scog ihea shea ieng seng;
 Carrere posted on Thursday, March 06, 2014 - 2:56 am
Many thanks!
I have one additional question. What option has to be checked to get the class number each participant has been assigned?
 Linda K. Muthen posted on Thursday, March 06, 2014 - 5:59 am
Use the CPROBABILITIES option of the SAVEDATA command. See the user's guide for more information.
 Joshua Wilson posted on Friday, March 14, 2014 - 11:03 am
Hello,

I'm running a LPA model with a four-class solution, and would like to reorder the classes so that the largest class is last. To do this, I'm taking the SVALUES from the first output file and then reordering the class labels so the largest class is last. I'm setting STARTS = 0 and using the OPTSEED option. This process worked fine for reordering the 2-class and 3-class solution, but it isn't working for the 4-class solution.

The problem that I'm having is that this changes the entire model. It doesn't reproduce the first model's H0 likelihood value, changes the entropy and other fit statistics, and has very different class assignment values.

Is there anything you can suggest to rectify this?
 Bengt O. Muthen posted on Friday, March 14, 2014 - 12:48 pm
When you use SVALUES as start values with Starts=0 you should not also use OPTSEED.
 Joshua Wilson posted on Friday, March 14, 2014 - 1:09 pm
Thank you, Dr. Muthen.

That worked to rearrange the classes--now the largest class is last. But, it is still not reproducing the original H0 likelihood value and the original class counts.

FYI--I ran the original model twice, once with STARTS = 1000 200; and again with STARTS = 2000 500; to ensure that the issue wasn't one of a local maximum. In both cases, I got the same H0 value and it was replicated dozens of times.

Is there anything else I can try?

Thanks.
 Joshua Wilson posted on Friday, March 14, 2014 - 1:23 pm
Hello again,

I figured it out.

The problems arise when I try to reorder more than two classes at a time. Switching class labels of two classes at a time results in no errors.

Thanks!
 Joshua Wilson posted on Wednesday, March 19, 2014 - 11:18 am
Hello,

I have a question about interpreting the output of LPA.

I have three continuous indicator variables which were used to estimate a 3-class solution.

Are the class-specific means of the indicator variables interpreted as the actual means for that class? Or are they interpreted as the 'mean difference' in that indicator between that class and the reference class?

For example, suppose Indicator1 (i.e., Y1) has an estimated mean of -0.831 (p < .001) for class 1. Is this interpreted to mean that the actual estimated mean of Y1 for class 1 is -0.831, and that this value is statistically significantly different from zero?

Or is it interpreted as: the mean of Y1 for class 1 is 0.831 units less than the mean of Y1 for class 3, and this mean difference is statistically significantly different from zero?

Thanks for your help!
 Joshua Wilson posted on Wednesday, March 19, 2014 - 2:23 pm
Also,

How do I interpret the latent categorical means in LPA? What is their 'meaning'?

Thanks!
 Bengt O. Muthen posted on Wednesday, March 19, 2014 - 4:29 pm
First post: They are actual means. The test concerns whether it is statistically different from zero.

Second post: These are logit values which correspond to the class probabilities printed at the top of the results (Model-estimated...).
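As a check on that correspondence: with the last class as reference (its logit fixed at 0), the class probabilities are the softmax of the latent class mean logits. A small sketch with made-up logit values:

```python
import math

def class_probabilities(logits):
    """Convert latent class mean logits (last class as reference, logit 0)
    into class probabilities via the softmax."""
    full = list(logits) + [0.0]           # reference class logit is 0
    exps = [math.exp(v) for v in full]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical [c#1] and [c#2] values from a 3-class model
probs = class_probabilities([0.5, -0.2])
```

The resulting probabilities should match the model-estimated class proportions printed near the top of the Mplus output.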
 Joshua Wilson posted on Wednesday, March 19, 2014 - 5:56 pm
Thank you, Bengt!

That really helps.
 Brianna H posted on Wednesday, April 09, 2014 - 4:58 pm
Hello -- I have been working on a latent profile analysis following the instructions of Asparouhov and Muthen (2012) for using TECH 11 and TECH 14 to determine the optimal number of classes (link below).

In my 5-class solution TECH 14 output, I receive the warning message "WARNING: THE BEST LOGLIKELIHOOD VALUE FOR THE 4 CLASS MODEL FOR THE GENERATED DATA WAS NOT REPLICATED IN 5 OUT OF 5 BOOTSTRAP DRAWS[...]"

Although I have increased the number of random starts in LRTSTARTS (to 0 0 3600 720), I still receive this warning message. Asparouhov and Muthen (2012) state that this warning message should go away if the number of LRTSTARTS is increased.

I am wondering if this issue might be occurring because the 5-class solution has two classes with less than 5% of the sample in the class. Could the model be unstable because of the low # of cases in each class?

Although the 4-class solution is easier to interpret (and has only 1 class with <5% of the sample), most of the model fit statistics (e.g. Entropy, BLRT, LL, AIC, BIC, adjusted BIC, and Tech 11 output) favor the 5-class solution.

Thank you for your help.

http://www.statmodel.com/examples/webnotes/webnote14.pdf
 Brianna H posted on Wednesday, April 09, 2014 - 5:07 pm
Actually, all of the fit statistics *except* Entropy favor the 5-class model.

Entropy of the 4-class model is 0.944 and of the 5-class model is 0.939.

BLRT of the 5-class model is 923.04, p<.001.

Thank you.
 Bengt O. Muthen posted on Friday, April 11, 2014 - 6:04 am
If you have followed the suggestions of web note 14 and still have problems I would simply go with BIC. I would not use entropy to choose the number of classes. Entropy is a description of the usefulness of the latent class model, not a measure of how well it fits the data.
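For reference, BIC can be computed from the loglikelihood by hand, which makes it easy to tabulate across class solutions; the model with the smaller BIC is preferred. A minimal sketch with hypothetical numbers:

```python
import math

def bic(loglik, n_params, n_obs):
    """BIC = -2*logL + p*ln(n); smaller is better."""
    return -2.0 * loglik + n_params * math.log(n_obs)

# Hypothetical 4- vs 5-class comparison (values made up for illustration)
bic4 = bic(-5120.3, 21, 600)
bic5 = bic(-5080.1, 26, 600)
better = 4 if bic4 < bic5 else 5
```

The loglikelihood, number of free parameters, and sample size needed for this are all reported in the Mplus output, which also prints BIC directly.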