Hi. I have a question regarding Latent Profile Analysis. I have several measures of child "executive function" that include behavior (e.g., impulsivity and attention) and language (e.g., expressive and reflective) that I am using in a profile analysis. These measures are popular in the field and are measured on different scales and thus have different variances. I was wondering if there is a general rule about the degree of difference (between smallest/largest) in variances for continuous items in a latent profile analysis (I know this is an issue in other forms of "profile analyses," e.g., Tabachnick & Fidell, and an issue in SEM, e.g., Kline or Bentler). Is it a requirement that all items are measured on the same scale and have similar variances? Thanks in advance.
Hi again. Continued discussion on "child executive function" from the earlier post (3/19). I tested the adequacy of my three-class latent profile model by giving each class different start values to make sure the solution I got was the "right" one. The model appeared stable (results and log likelihood). Next, I simply changed the order of classes 1 and 2 and kept the same start values (so, in my mind, the results shouldn't have changed, just their order: what were class "2" results should have become class "1" results).
I ended up with different results, both for the within-class means and the class sizes (previously I had 27%, 36%, 36%; now I have 37%, 37%, and 25%, where the third class [the same class in both analyses] went from 36% to 25% and class one went from 27% to 37%), leading to different conclusions, which makes me a little concerned.
Is there something I have overlooked or should be concerned about given the changes in results? My operating assumption was that the start values were important, not the order of the classes. (By the way, I am using actual start values, instead of 1 and -1, for convergence; it helps because the items I am using are on different scales.) Any advice?
I've completed a confirmatory factor analysis with three continuous LVs, each represented by a set of indicators (which are items on a paper-and-pencil scale), and the fit appears acceptable. The indicators are each scored from "1" to "4" in a Likert-type response format. I have a preconceived hypothesis that the participants should fall into five separate categories based on their scores on the three continuous factors. For example, those "high" on the first factor and "low" on the other two will form one group, those "high" on the second and third factors will form a second group (regardless of scores on the first factor), etc. In addition, I expect to see certain gender differences in the proportions of participants assigned to each category. Is there a way to conduct a confirmatory latent profile analysis to test this hypothesis? Would this be an appropriate thing to do? If so, could you please route me to a reference and/or example in Mplus 2? Thank you and happy Thanksgiving!
Bmuthen posted on Thursday, November 22, 2001 - 8:55 am
You may be interested in a new paper by Lubke, Muthen, Larsen (2001), Global and local identifiability of factor mixture models. This can be requested from email@example.com by mentioning paper 94.
Anonymous posted on Friday, March 08, 2002 - 10:00 am
Hi -- I am doing a latent profile analysis, using six indicators of "social capital", each measured on a 1-10 Likert-type scale. My model converges with all variances constrained (the BIC continues to decrease up to a six-class model, but a 3-class model fits better with theory, gives better class probabilities, and the entropy measure is higher). When I free variances past a 1-class model I have a variety of problems -- including within-class means that are outside the scale, and/or at the min or max, and with 0 variance (and a variety of error messages re. the model not converging). Also, the class sizes change and the patterns among the variable means within classes change when any variances are freed. My data are negatively skewed (less skewed within the classes than in the full group), but within the limits recommended by Kline. Should I trust my results with the variances constrained? Or can you recommend how to proceed?
bmuthen posted on Friday, March 08, 2002 - 5:47 pm
Latent profile analysis can have these types of behaviors when the variances are allowed to vary across classes. The literature so far seems to have little guidance to offer in this area. You may want to consider the following approach. Using the model with class-invariant variances, you can classify individuals into the latent classes using their posterior probabilities. You can then go back to the raw data and study the variation of each variable in each class. If a variable is considerably more or less variable in a certain class, you can modify the model to allow that variable to have a class-specific variance for that class.
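A minimal sketch of this two-step approach in Mplus input syntax (all variable names, the number of classes, and the choice of y4/class 2 below are hypothetical placeholders):

```
! Step 1: fit the class-invariant model and save posterior probabilities
! for classification; step 2 (shown commented out) frees one variable's
! variance in one class after inspecting the raw data within classes.
VARIABLE:   NAMES = y1-y6;
            CLASSES = c(3);
ANALYSIS:   TYPE = MIXTURE;
MODEL:      %OVERALL%
!           %c#2%
!           y4;       ! mentioning y4's variance here would free it from the
!                     ! across-class equality default, for class 2 only
SAVEDATA:   FILE = lpa_classes.dat;
            SAVE = CPROBABILITIES;   ! posterior class probabilities
```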
Anonymous posted on Monday, June 10, 2002 - 3:43 pm
I want to classify respondents from several ethnic groups into classes, three classes for each ethnic group. There are 24 5-point Likert variables (never to always) that measure five latent constructs. The classification will be based on the five latent constructs. The minimum and maximum subsample sizes are 190 and 300, totaling about 1,000. I want to see how the class proportions differ across groups. Could you give some guidance on how to run this analysis? Thanks!
bmuthen posted on Tuesday, June 11, 2002 - 9:31 am
Let me first ask you if by classes you refer to a latent class (latent profile; LPA) analysis using the 5 latent constructs? If so, have you done preliminary LPA analyses of the factor scores within each ethnic group?
Anonymous posted on Tuesday, June 11, 2002 - 10:04 am
I have tried LPA with the factor scores for one subsample and the result looked ok! I am not quite sure if I should proceed with this approach. Should I obtain the factor scores from a multigroup CFA or from a single-group CFA? If multigroup CFA is preferred, what constraints are needed on what parameters?
bmuthen posted on Tuesday, June 11, 2002 - 11:26 am
A multiple-group analysis is very valuable to do first because you want to make sure that you have a sufficient degree of measurement invariance before you compare the latent variables (or classes formed from them) across groups. You should use the default Mplus setup for a multiple-group meanstructure analysis, which holds intercepts and loadings equal across groups. You can then look at modification indices to see if some items are not invariant with respect to either parameter type (intercept or loading).
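As an illustration, a multiple-group CFA along these lines might be set up as follows (the group labels, item names, and factor structure are hypothetical; the Mplus multiple-group meanstructure defaults hold intercepts and loadings equal across groups):

```
VARIABLE:   NAMES = v1-v24 ethnic;
            USEVARIABLES = v1-v24;
            GROUPING = ethnic (1 = g1  2 = g2  3 = g3  4 = g4);
MODEL:      f1 BY v1-v5;
            f2 BY v6-v10;
            ! ... remaining factors
OUTPUT:     MODINDICES;   ! flags items whose intercepts or loadings
                          ! may not be invariant across groups
```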
Anonymous posted on Saturday, February 28, 2004 - 6:51 pm
Hello I am running a Latent Profile Analysis using a set of 15 behavioral characteristics. Some of the characteristics are highly correlated (e.g., .7 to .8), but the majority of characteristics have moderate to low relationships. Only 10 pairs of variables from the entire correlation matrix showed correlations above .7. Also, all variables are on the same metric (T scores).
In one run, the variables were considered independent, where the latent class variable was driving the relationship between the observed variables. In a second run, those variables which were highly related were allowed to correlate (using the WITH statement). In the run which considered the variables to be independent, the results were much more meaningful (e.g., lower BIC, higher entropy, MUCH easier to interpret) than the results in which the selected variables were correlated.
Can the LPA solution which considers the variables to be independent be interpreted? Or, is this solution 'invalid' due to the high correlations between some of the variables? How strong is the assumption of independent variables when running/intepreting LPAs?
Thank you for your comments and also for MPLUS.
bmuthen posted on Sunday, February 29, 2004 - 7:49 am
The sample correlations should be significant for LPA. It is the within-class correlations that are zero. Although LPA specifies zero within-class correlations among the variables, it reproduces correlations among the variables because the variables are all influenced by the latent class variable, so the variables become correlated when mixing across the classes. If some variables correlate more than others, this can be because these variables differ more in means across the classes than other variables do. This means that you don't have to include WITH statements to make your model fit. Perhaps you need to include more classes, which have particularly high across-class mean differences on the highly correlated variables. It is also the case that IF you allow WITH for some variables, you may be able to use a smaller number of classes and still get the same model fit. WITH represents within-class correlation and should have a well-interpretable substantive meaning, such as a measurement methods effect. So, to some extent classes and WITHs have similar effects on model fit, and substantive arguments will have to be brought in to make a choice. Related to this, you may also study chapter 3 of the Hagenaars-McCutcheon latent class book of 2002 published by Cambridge Univ Press, "Applied Latent Class Analysis".
As I've seen LPA used and described as a way to identify homogeneous populations within a larger heterogeneous population, indicator variables are usually either all continuous or all binary/categorical. What are the potential problems of combining binary/categorical indicators and continuous indicators in the use of LPA?
Do you know of a good example of where this mixed model has been applied using LPA to describe subpopulations within a heterogeneous group? I'm curious how descriptions of the differences between groups among the indicator variables are made (means for continuous and item endorsement probabilities for binary/categorical indicators).
Can anyone point me to a resource in which latent profile analysis was used with MPlus, and/or a general introduction to latent profile analysis including a description of the parameters that the analysis generates to determine these profiles? Thanks!
bmuthen posted on Tuesday, November 30, 2004 - 7:49 pm
Although not using Mplus, the Vermunt-Magidson chapter 3 in the Hagenaars-McCutcheon book Applied Latent Class Analysis is useful in this regard. An introduction using Mplus has yet to be written.
We have just run a latent profile analysis using Mplus. We have 18 variables that are continuous in nature and 1 variable that is categorical with 4 levels or groups. With respect to the output, we understand how to interpret the output for the 18 continuous variables. However, the output for the 1 categorical variable is unclear to us. Values for this variable are listed under the heading Means, and give us values only for 3 of the 4 groups that compose this categorical variable. Our questions are: (a) why are these categories listed under Means? (b) shouldn't we be getting proportions for this variable since it is categorical? and (c) in general, if these means are interpretively meaningful, what do negative means tell us? Thank you for any help you can provide.
If this is an observed categorical variable, then you should get thresholds. This variable should be on the CATEGORICAL list. If this is a categorical latent variable, you should get means. I think you mean the former but am not totally certain.
I have now changed the categorical variable to be listed as CATEGORICAL rather than NOMINAL, and received the thresholds. I guess I am still confused why I did not receive probabilities for these as well, like one receives in an LCA. Thanks!
With binary outcomes, CATEGORICAL and NOMINAL should yield the same results. I suggest that you send the two outputs and data to firstname.lastname@example.org to be checked. You may not be using the most recent version of Mplus or there may be another explanation. I would need more information to determine this.
I have a question regarding the determination of the appropriate number of classes in an LPA. For example, if the Vuong-Lo-Mendell-Rubin likelihood test is not significant for a 3-class solution (compared to a 2-class solution) but the BIC is smaller for the 3-class solution, which should trump? Meaning... how do I go about evaluating whether the 2- or 3-class solution is superior?
bmuthen posted on Friday, February 04, 2005 - 6:03 pm
This does not have a simple answer. BIC and LMR can disagree. You may also want to consider sample-size-adjusted BIC which has shown superior results in some studies. When fit indices do not give a clear answer I would go with interpretability - often a k-class solution is merely an elaboration of a (k-1)-class solution, not a contradictory finding.
Also, are you sure you are interpreting the MLR p value correctly? See the User's Guide.
Could you tell me how MPlus sorts results files (.dat).? I have imported a results file into SPSS and want to be able to link subjects to their original case id’s—I should be able to do this if I can figure out how Mplus is sorting the file. Just so you have a little background (if necessary to answer the question), the LPA that I conducted includes only a subset of the total subjects in the original data file. The original file includes 3 sets of subjects, and I used the command syntax to include only those subjects with a code=1 on a categorical variable in the data set. Thus, the LPA was only conducted on these subjects in this specific analysis.
I have now been able to save the ID, but it is saved like this: 10.000**********, which is not the proper format. The subject IDs are supposed to look like this: 030100102. Can you suggest how I might change the commands so that the subject IDs are accurately saved?
See the Mplus User's Guide, which states that the length of the ID variable cannot exceed seven. You will have to shorten this variable. There is usually a unique part that does not exceed seven characters.
Anonymous posted on Thursday, February 17, 2005 - 4:56 pm
Regarding the interpretation of the MLR discussed on Feb. 4: if the p value of the MLR is less than .05, this means that the k-class solution is superior to the k-1 solution? Conversely, if the p value is greater than .05, the k-1 solution is superior. Is this correct? Thank you.
bmuthen posted on Thursday, February 17, 2005 - 5:08 pm
You mean LMR (Lo-Mendell-Rubin). Yes, your description is correct.
Anonymous posted on Saturday, March 19, 2005 - 1:51 pm
I am running a latent profile analysis (LPA) of four count variables that index health care utilization (e.g., # of ER visits). Initially I plunged ahead and did the LPA and found that a two-class solution was indicated by the Vuong-Lo-Mendell-Rubin and Lo-Mendell-Rubin likelihood tests (i.e., the two-class solution was superior to the one-class solution, and the three-class solution did not improve on the two-class solution). At the same time, the BIC argued for a single class. I became concerned with the inconsistency and (as I should have done originally) investigated the "Poissonness" of the utilization variables. On convexity plots, three of the four variables showed deviations from Poissonness. I suppose my initial question is, "In the latent mixture model context of an LPA, how robust are findings to violations of dispersion for count (Poisson) variables?" I took the additional step of running LPAs with inflation parameters. This showed more consistent results in terms of the likelihood ratio tests and the BIC, and argued for the existence of three groups. The problem with this is that I cannot seem to test 3 vs. 4 groups in order to establish this classification scheme with more certainty. I am receiving several error messages and do not think I am going to get the model to run. So I suppose my next question is this: "Assuming that I do not get the 3 vs. 4 class model to run, would it be reasonable to acknowledge the existence of three classes, establish that the three-class solution is a variation on the two-class (k-1) model, and move on with my analyses using two groups?"
bmuthen posted on Saturday, March 19, 2005 - 4:12 pm
It sounds like you needed the zero-inflated version of the Poisson model. But you say you don't get a solution for 4 classes - or perhaps you don't get a tech11 (LMR) result in the 4-class run; I am not sure from your message. If you have tried to use many random starts (say starts = 100 5) and still fail, it may be due to 4 classes being too ill defined in these data, and staying with 3 is the way to go. So my inclination would be to say yes to your last question.
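A sketch of a zero-inflated Poisson LPA with many random starts might look like this (the variable names and number of classes are placeholders):

```
VARIABLE:   NAMES = u1-u4;
            COUNT = u1-u4 (i);    ! (i) adds the zero-inflation part
            CLASSES = c(3);
ANALYSIS:   TYPE = MIXTURE;
            STARTS = 100 5;       ! 100 random starts, 5 final-stage optimizations
OUTPUT:     TECH11;               ! LMR test of k vs. k-1 classes
```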
Anonymous posted on Friday, April 22, 2005 - 10:02 am
Hello, I have run a k-means cluster analysis and an LPA on the same set of data. I found an 8-cluster solution that made sense, but with the LPA I only found 4 classes (5 classes would not converge). I've tried varied start values for the LPA, allowed variables to correlate within class, etc., in attempts to get the same number of groups across both methods.
My question is: should I expect the procedures to uncover the same number of classes/clusters or could I find different solutions because one method is uncovering latent groups/subpopulations and one method is working more on the observed level?
bmuthen posted on Friday, April 22, 2005 - 11:07 am
I think k-means clustering uses a more restrictive model than LPA - doesn't it also assume equal variances across variables (in addition to the assumption of equality of variances across clusters)? See for example McLachlan's new Wiley book on microarray analysis. In Mplus you can add the equal variance restriction.
Anonymous posted on Friday, April 22, 2005 - 11:51 am
Thank you for your reply. You're correct: in k-means, variances should be roughly equal across variables. I was wondering if the differences in solutions were related to the "level" of results (latent classes vs. observable clusters). In general, I haven't seen LPA models uncover as many groups as cluster analysis (mainly 2-4 classes found). I know that hierarchical cluster methods (e.g., Ward's) let you 'see' the different cluster solutions and was wondering if this was similar to the differences between k-means and LPA.
In Mplus, the default is equal variances across classes, correct? Is this relaxed with the WITH statement to allow correlations between variables?
bmuthen posted on Friday, April 22, 2005 - 3:20 pm
I don't see the "level" of results as being different between the two approaches. You can "visualize" the LPA results by using Mplus to plot the observed variable mean profiles for the different classes. You probably get more LPA classes when you hold variances equal across variables (try it). Yes, the Mplus default is equal variances across classes (but not across variables). And adding WITH statements relaxes the conditional independence assumption, allowing correlations. See also the Vermunt-Magidson article in the Hagenaars-McCutcheon Applied LCA book (Mplus web site refs).
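The profile plot mentioned above can be requested with the PLOT command; a sketch, assuming eight indicators with the hypothetical names y1-y8:

```
PLOT:   TYPE = PLOT3;
        SERIES = y1-y8 (*);   ! plots the estimated means of y1-y8 by class
```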
Anonymous posted on Monday, April 25, 2005 - 8:03 am
Thank you again for your reply.
How do you hold the variances equal across variables? I'm not sure if this is needed, since I am dealing with T-scores, but the variances should differ by class.
Also, on p.121 of the MPLUS (Ver 3) manual, an example mentions that by mentioning the variances of the latent class indicators, the default equality constraint of equal variances (across classes) is relaxed. Will this allow for estimates of different variances within each class as well as different variances for individual variables?
However, to compare to k-means, which creates groups based on minimum w/in cluster error, shouldn't the MPLUS default be imposed?
To hold variances equal across variables, list the variable names (which is how you refer to their variances) followed by a number in parentheses to represent the equality constraint. For example,
y1 y2 y3 (1);
holds the variances of y1, y2, and y3 equal.
In the example, the equality constraint on a regression slope is relaxed. If you want to relax the equality constraint on another parameter such as a variance, then you would mention that parameter.
If you want to compare to k-means, then you should place the same constraints as k-means does.
bmuthen posted on Monday, April 25, 2005 - 3:01 pm
You mention T scores so it sounds like you are standardizing your observed variables. This may be necessary for k-means clustering. I would, however, recommend not doing that in the LPA - and if the variables have different metrics then also not hold the variances equal across variables (only across classes).
Anonymous posted on Tuesday, April 26, 2005 - 1:36 pm
Regarding yesterday's discussion about comparisons between LPA & k-means - Thank you very much. You both cleared up a lot of questions.
Dr Muthen, you mentioned that I would probably get more LPA classes when variances were held equal across variables (4/22 note)-- and this did produce results very similar to k-means. (up to 8 classes found before nonconvergence)
However, when variances were allowed to vary across classes (but not across variables), fewer classes were found (up to 4).
Why would relaxing an assumption lead to finding fewer classes? Thanks again for your assistance
bmuthen posted on Tuesday, April 26, 2005 - 2:02 pm
The more flexible the model is for each class, the better it can fit the data, and therefore the fewer classes you need. Your finding suggests that the "true" classes have different variances (across classes). If class-varying variances are the true state of nature and you force classes to have equal variances in your analysis, you have to have more classes in order to fit the data. Same thing if the true state of nature is within-class covariance: if you force classes to be formed with uncorrelated variables within class, then you need more classes to fit the data (this can be visualized if you draw a 2-dimensional plot with a single correlated pair of variables; that 1-class data situation needs 2 or more uncorrelated classes to be fit).
Anonymous posted on Wednesday, April 27, 2005 - 1:07 pm
Re: yesterday's conversation: Thank you very much. So, if I have this right, with LPA we may want to start with a restrictive model (essentially k-means) and systematically "relax" assumptions (allow different variances across classes, allow covariances w/in class) until we find the model that fits the best in terms of parsimony, interpretability, and fit indices -- correct?
Is there any reference for this procedure or is it just standard practice? Thanks again -- this conversation has been most helpful.
BMuthen posted on Wednesday, April 27, 2005 - 5:58 pm
See Chapter 3 by Vermunt and Magidson, Latent cluster analysis, in Hagenaars and McCutcheon's book Applied Latent Class Analysis.
Anonymous posted on Saturday, May 21, 2005 - 5:29 pm
I am trying to specify a latent profile analysis with covariates. I want the latent class variable to be measured by one set of variables, and class membership to be "predicted" using a *different* set of variables. Most of the examples in Chapter 7 of the User's Guide have the covariates ALSO affecting (or covarying with) the indicators of class membership.
I've tried this:
model: %overall%
c#1 BY Fsamed FsameBm AshrCC blauCCm blauCCv;
c#1 ON meanphd acadappl psoc pmale quant sameIB samephdB;
But the Mplus output tells me it is no longer allowed, and that I should see Chapter 9, which is about multilevel modeling and complex data... I couldn't see the connection. Can you tell me how to model this?
The BY option was used in Version 1 for latent profile analysis. It is no longer used. See Example 7.12 for the Version 3 specification. Just delete the CATEGORICAL option because your indicators are continuous and delete the direct effect u4 ON x; from the MODEL command.
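A sketch of the Version 3 style specification with continuous indicators and covariates predicting class membership only (the variable names are the poster's; the number of classes is a placeholder):

```
VARIABLE:   NAMES = Fsamed FsameBm AshrCC blauCCm blauCCv
                    meanphd acadappl psoc pmale quant sameIB samephdB;
            CLASSES = c(2);
ANALYSIS:   TYPE = MIXTURE;
MODEL:      %OVERALL%
            c#1 ON meanphd acadappl psoc pmale quant sameIB samephdB;
```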
Anonymous posted on Sunday, May 22, 2005 - 6:26 pm
Thanks for the quick reply, that worked! I am now wondering about how to get all possible contrasts for the multinomial logistic regression of the latent class variable on the covariates. I am working with 3 classes.
When I type:
c#1 on meanphd acadappl psoc pmale quant sameIB samephdB; c#2 on meanphd acadappl psoc pmale quant sameIB samephdB;
MPLUS appears to give me the effect that each of these covariates has on the probability of being in the stated class (1 or 2) relative to being in class 3. But what about the probability of being in class 2 relative to class 3? MPLUS would not allow me to make any reference to the "last" class (#3) at all.
c#2 on meanphd acadappl psoc pmale quant sameIB samephdB;
gives the probability of being in class 2 relative to class 3. You can't make reference to the last class. It is the reference class with coefficients zero. See Chapter 13 of the Version 3 Mplus User's Guide for a description of multinomial logistic regression.
Anonymous posted on Thursday, May 26, 2005 - 2:52 pm
oops, sorry, I wasn't clear.
c#1 on meanphd acadappl psoc pmale quant sameIB samephdB; gives the probability of being in class 1 relative to class 3.
c#2 on meanphd acadappl psoc pmale quant sameIB samephdB; gives the probability of being in class 2 relative to class 3.
How do I get the probability of being in class 1 relative to class 2? (In STATA, "Mcross" gives you such results.)
You would have to make class 2 the last class to do this. You can do this by using the old class 2 ending values as user-specified starting values for class 3 in the run where you want to compare class 1 to class 3.
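A sketch of such a reordering run (the numeric start values below are made up; in practice you would copy the ending values from the previous output):

```
ANALYSIS:   TYPE = MIXTURE;
            STARTS = 0;          ! rely only on the user-supplied start values
MODEL:      %c#2%
            [y1*0.8 y2*1.5];     ! old class 3 ending means (placeholders)
            %c#3%
            [y1*2.1 y2*3.4];     ! old class 2 ending means (placeholders)
```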
Hello, I am considering estimating a latent profile analysis using a set of behavior ratings measured on a 5-point Likert scale. An alternative to this would be treating the items as ordinal and estimating a latent class analysis. Another alternative is to consider the items as nominal. Is there any empirical way to determine which parameterization is most appropriate? The BIC from the 3 models is: 289368.688 from the LPA of the ratings treated as continuous indicators; 290569.173 from the LCA of the ratings treated as ordinal/categorical; and 290953.619 from the LCA of the ratings treated as nominal (2-class solution for each model). Thanks for your advice.
I don't think you can make this determination by comparing BICs. I would need to know more about these variables to answer this, but basically if this is an ordered polytomous variable, it is best to treat it that way. If it does not have strong floor or ceiling effects, you may be able to treat it as continuous. I am not sure why you would want to treat it as nominal.
Sandra posted on Thursday, October 20, 2005 - 7:41 am
I’m working on a latent profile analysis using seven scales which measure different life goals. I tried some mixture models where I allowed variables to be correlated within classes and with variances allowed to vary across classes.
My problem is that even in the two-class solution I get a class in which one scale (a_aibz) has a variance of zero. This scale measures relationship goals and already has a very small variance in the empirical data set. Fixing the variance to zero in one class does not solve the problem, because Mplus tells me that the covariance matrix could not be inverted.
Is there anything I can do to avoid this? Shall I drop the scale from the analysis? If not, what is the reason for this problem?
I attach the output of the two class mixture solution with variances set free across the classes:
Your options are to increase the MITERATIONS as the error message suggests, hold the variance of the problem variable equal across classes, or remove the variable from the analysis.
As a rule, if it is necessary to show output to describe a problem, you should send your input, data, output, and license number to email@example.com. We try to reserve Mplus Discussion for shorter posts.
I would like to run an LPA on personality trait information we have collected. The data include both probands and siblings. I would like to examine the LPAs but feel the sibling relations should be modeled. How would I best do this?
See the LCA section of my paper under Recent Papers on our web site:
Muthén, B., Asparouhov, T. & Rebollo, I. (2006). Advances in behavioral genetics modeling using Mplus: Applications of factor mixture modeling to twin data. Forthcoming in the special issue "Advances in statistical models and methods", Twin Research and Human Genetics.
I have downloaded this paper and am trying to recreate these models. However, I am new to LPA, and I am unsure how to account for the presence of two latent class variables (and two groups of individuals) in the model. Is there another resource you might recommend?
I am running an LTA for two time points with 10 Likert-scale items at each time point and have arrived at a 4-class model (2 at each wave). I am attempting to run the model by freeing the variances to be estimated for each class separately. I am unsure if I have specified the correct model commands to request this.
%c1#1% [s3ptp1-s3ptp12*] ;
%c1#2% [s3ptp1-s3ptp12*] ;
%c2#1% [s4ptp1-s4ptp12*] ;
%c2#2% [s4ptp1-s4ptp12*] ;
Further, when I run this I receive the following error message:
THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
WARNING: WHEN ESTIMATING A MODEL WITH MORE THAN TWO CLASSES, IT MAY BE NECESSARY TO INCREASE THE NUMBER OF RANDOM STARTS USING THE STARTS OPTION TO AVOID LOCAL MAXIMA.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
I have two questions concerning LPA: 1) In the LCA & Cluster Analysis discussion, bmuthen posted on Wednesday, February 08, 2006 - 6:29 pm that the LRT can be bootstrapped in Mplus 4. How do I bootstrap the LRT (I assume it is not possible with the MLR estimator)?
2) In the output we got the message: IT MAY BE NECESSARY TO INCREASE THE NUMBER OF RANDOM STARTS USING THE STARTS OPTION TO AVOID LOCAL MAXIMA. We have 5 continuous indicators with a range from 1 to 5; what is a good way to obtain starting values, and are the starting values means in this case? Can you give an example?
2) See the "STARTS" option in the version 4.1 UG on our web site.
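For question 1, the bootstrapped LRT is requested via TECH14; a sketch, assuming Mplus version 4 or later:

```
ANALYSIS:   TYPE = MIXTURE;
            STARTS = 100 10;
OUTPUT:     TECH11 TECH14;   ! TECH14 = parametric bootstrapped LRT
```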
Kelly Hand posted on Tuesday, October 03, 2006 - 6:57 pm
I have run both a latent class analysis and a latent profile analysis using 5 ordinally scaled items (a 5-point scale of agreement, with 3 indicating "mixed feelings") about mothers' attitudes to employment and child care, to create a typology of mothers' employment "preferences". I plan to test this typology with a subsample of qualitative interviews.
I have found that the LPA solution is easier to interpret and is a better solution (although they are both good). But I am concerned that it may not be acceptable to use 5 ordinal items in this manner. Is this ok to do in your opinion?
Unfortunately the survey only used a very limited number of items about this topic so I am unable to include any more items or create a scale.
I have also tried to search for a reference to support this but have had no luck. If you think it is an appropriate approach to take, do you have any suggestions for a reference I could include in my paper?
This boils down to the usual choice of treating ordinal variables as categorical or continuous. I think treating them as continuous, using linear models, is often reasonable unless you have strong floor or ceiling effects. I would however worry if the two approaches gave different interpretations - if they do, I would be more inclined to rely on the categorical version. I would check that I had used a sufficient number of random starts (STARTS=) to make sure you have obtained the correct maximum likelihood solution. I can't think of relevant literature here.
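The two treatments differ only in whether the items appear on the CATEGORICAL list; a sketch with hypothetical item names u1-u5:

```
VARIABLE:   NAMES = u1-u5;
            CATEGORICAL = u1-u5;   ! ordinal (LCA) treatment; omit this line
                                   ! to treat the items as continuous (LPA)
            CLASSES = c(3);
ANALYSIS:   TYPE = MIXTURE;
            STARTS = 200 20;       ! generous random starts, per the advice above
```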
We would like to test four models (4 variables):
1) variances held equal across classes, covariances among latent class indicators fixed to zero;
2) class-dependent variances, but covariance terms constrained to zero;
3) class-dependent variances, with selected covariances held equal across classes;
4) class-dependent variances, with free estimation of selected covariances within class.
Are these the corresponding model-inputs?
Ad 1)
%OVERALL%

Ad 2)
%OVERALL%
%c#1%
y2 y3 y4 y5;
%c#2%
y2 y3 y4 y5;
We have no idea how to write the syntax for models 3 and 4. Could you help with an example for each?
For model 2 we get the message: "All variables are uncorrelated with all other variables within class. Check that this is what is intended." Does the model 2 syntax correspond to what is intended by hypothesis 2?
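For what it's worth, a hedged sketch of what models 3 and 4 might look like, carrying over the variable names y2-y5 from above; which covariances to include, and the equality labels in parentheses, are assumptions to be adapted to the actual hypotheses:

```
Ad 3)
%OVERALL%
%c#1%
y2 y3 y4 y5;       ! class-specific variances
y2 WITH y3 (1);    ! selected covariance, held equal across classes via label (1)
%c#2%
y2 y3 y4 y5;
y2 WITH y3 (1);

Ad 4)
%OVERALL%
%c#1%
y2 y3 y4 y5;
y2 WITH y3;        ! selected covariance, estimated freely in class 1
%c#2%
y2 y3 y4 y5;
y2 WITH y3;        ! and freely (separately) in class 2
```

The only difference between the two is the equality label: with the same label in both class sections the covariance is one parameter; without it, each class gets its own.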
Thank you for your help; I was successful in reordering the classes and thereby maintaining all other parameters. My last question is how to reorder (last class as the largest) a model with this structure:
What happens with the bootstrapped Lo-Mendell-Rubin Likelihood ratio when the last class is not the largest one (for this model and in general)?
Thanks again, this conversation has been most helpful.
I'd like to obtain an LPA but allow correlations/covariances within class to be nonzero. Is this the way to do it? E.g.:
...
CLASSES = c(3);
ANALYSIS:
TYPE = MIXTURE;
MODEL:
%OVERALL%
y1 WITH y2 y3 y4 y5 y6 y7 y8;
y2 WITH y3 y4 y5 y6 y7 y8;
y3 WITH y4 y5 y6 y7 y8;
y4 WITH y5 y6 y7 y8;
y5 WITH y6 y7 y8;
y6 WITH y7 y8;
y7 WITH y8;
Thuy Nguyen posted on Wednesday, February 21, 2007 - 11:12 am
Yes, this will free the covariances within class while holding them equal across class.
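If one instead wanted the covariances to differ across classes, the WITH statements would be repeated inside each class-specific section; a sketch (abbreviated to a few of the y1-y8 covariances for readability):

```
MODEL:
%OVERALL%
%c#1%
y1 WITH y2 y3;   ! repeated per class section -> class-specific covariances
%c#2%
y1 WITH y2 y3;
%c#3%
y1 WITH y2 y3;
```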
anonymous posted on Thursday, March 29, 2007 - 11:41 am
I'm sorry in advance if my question appears naive; I am new to these methods, in a geographic area where few "coaches" exist.
I am trying to allow for conditional dependence (within-class correlations) in a latent profile analysis of seven different variables. I have found at least four different ways of allowing conditional dependence:
(1) Including within-class WITH statements between all of my indicators. (2) Running a model with conditional independence and relying on modification indices to allow for partial conditional independence (including within-class WITH statements between the variables that "could" be correlated according to the modification indices). (3) Doing a factor mixture model without within-class BY statements (fixing the factor loadings to remain equivalent across classes). (4) Doing a factor mixture model with within-class BY statements (allowing for differential factor loadings across classes).
I believe that the main advantage of models 3 and 4 is that they result in fewer parameters being estimated.
However, I believe that the real "essence" of conditional dependence is more clearly captured by models 1 or 2. Am I right?
Are there any other arguments, advantages, or disadvantages of doing it one way or the other?
1. Leads to an unstable model in line with the Everitt-Hand book that we cite under Mplus Examples - not recommended.
2. MI's don't work very well with mixture models, probably due to non-smooth likelihood surface - not recommended
3. Good idea; works well
4. Ok; not always needed beyond 3. Class-varying factor variances can be introduced instead.
anonymous posted on Friday, March 30, 2007 - 3:20 am
Thank you very much for this answer.
It clarifies things a lot.
Could you please expand a bit on your answer to 4 (or suggest a reading on this topic)? I'm not sure that I properly understand why freeing up the within-class factor variance would be equivalent to the model with free within-class BY statements, or why the more complex model is not needed beyond model 3.
Letting factor variances vary across classes is not the same as letting factor loadings vary across classes. However, I have found that a model with class-invariant loadings and class-varying variances often is suitable. I have tried several variations on the factor mixture modeling theme in my articles listed under "Papers" on our web site - see especially articles under the topics General Mixture Modeling and Factor Mixture Analysis.
If my goal is to do an LPA (with two classes) of 3 variables (XX, XY, XZ): after trying the classical model, I can restrict the indicator variances to be equal within class. Then I can try a less restricted model by allowing the variances to vary between classes.
Following on the previous discussion, if I want to try for conditional dependence, I should rely on a factor mixture model letting the factor variances (and maybe the loadings) vary across classes.
My question is how I can combine conditional dependence (factor mixture) with the previous modifications of equal within-class variances (A) and of unequal between-class variances (B). Can I use commands such as these, or is there any additional "twist"?
A:
%OVERALL%
f BY XX XY XZ;
[f@0];
%c#1%
f;
[XX XY XZ];
XX (1); XY (1); XZ (1);
%c#2%
f;
[XX XY XZ];
XX (2); XY (2); XZ (2);

B:
%OVERALL%
f BY XX XY XZ;
[f@0];
%c#1%
f;
[XX XY XZ];
XX XY XZ;
%c#2%
f;
[XX XY XZ];
XX XY XZ;

If these commands are right, would it mean that example 7.27 reflects a traditional LCA with conditional dependence, equal between-class variances, and unequal within-class variances?
I ran a LPA w/6 continuous indicators (values ranging between -2 to +5). Here's the dilemma:
The LMR indicates a 5 class model, and this makes substantive sense.
However, the AIC/BIC/ABIC values continue to decline (never rising), and I've tested this up to an 8-class model. BUT, like a scree test, the differences in IC values between models do decline greatly after the 5-class model.
In addition, the BLRT remains non-significant at every step/model.
No warnings were found, and I used "STARTS = 500 20".
I'm in the process of correlating the variables (which I am not a fan of), but thus far, no resolution.
I'm satisfied with the 5-class model. In addition to substantive sense, I chose it based on 1) LMR being and remaining non-significant after the 5-class model, and 2) the IC values beginning to level off after the 5-class model. And what do I make of BLRT being non-significant at all steps? I plan on reporting BLRT, but indicating that it is a potential limitation of the study? Is this all sufficient?
Your statement that BLRT is non-significant for all classes confuses me. In a k-class run, BLRT gives a p value for a (k-1)-class model being true versus the k-class model. So a non-significant result (p > 0.05) says that the (k-1)-class model is acceptable. Your statement therefore implies that the 1-class model is acceptable as judged by BLRT. Is this what you mean? If so, I would think BLRT is not applied correctly, because it would imply that your variables are uncorrelated.
If BLRT is correctly applied (no warning messages), that could be a sign of having a lot of power due to a large sample size, in which case I would rely on the substantive reasons for choosing number of classes.
Thanks for the quick replies. The sample size was large (2000+). However, there was one warning, "to increase the number of random starts using the starts option to avoid local maxima". I got this warning even after increasing starts (500 20, and 1000 20), but I have read that this warning is typically issued?
So the take-home messages are to rely on substantive reasons, report LMR and IC values for statistical support, and also report the BLRT (being non-significant at each model) while noting that it is sensitive to large sample size (thus power)?
We have Y11 ... Y1T, Y21 ... Y2T, ..., Yn1 ... YnT; Yit is continuous and Yit = f(Xit), where i indexes the unit and T is the Tth time period.
We want to extract the possible grouping using the Y's as indicators.
I believe our panel-mixture analysis will be in the line of latent-profile mixture analysis (with covariates) rather than latent-cluster mixture analysis, since the Y's are continuous. Am I right? However, there is serial correlation, or at least it is likely, and we need to test that.
Q1. Could you kindly suggest any established research on panel-mixture analysis (where rho across the error terms has to be estimated)? Q2. Could we estimate the model using Mplus?
I'm doing a latent profile analysis (7 indicators) with covariates (4). I'm running models with conditional independence and models with conditional dependence (CD). For CD models, I rely on factor mixture models with class-varying intercepts (only).
Are there any problems if I run these analyses with standardized variables (indicators and covariates)?
I would work with the raw data. I don't know why you want to standardize. If it is because the variables have large variances, I would rescale them by dividing them by a constant. A constant is not sample dependent as are the mean and standard deviation used for standardizing.
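Rescaling by a constant can be done in the DEFINE command; a minimal sketch (the variable names y1, y2 and the divisor 100 are made up, to be replaced by your own):

```
DEFINE:
y1 = y1/100;   ! divide by a fixed constant, not by a sample-dependent SD
y2 = y2/100;
```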
In fact, suppose that we have already done the analyses with standardized variables, following the suggestion of a colleague (because it made it easier to compare the latent classes). What kind of problems might it cause?
When you standardize variables, you are analyzing a correlation matrix not a covariance matrix. This is fine if your model is scale free but not if it is not. One example of a model that is not scale free is a model that holds variances equal across classes. If a model is scale free, the same results will be obtained whether a correlation or covariance matrix is analyzed. If I were you, I would rerun the analysis using the raw data.
Thanks once again for this valuable resource and for the workshops you have conducted. I am attempting to estimate an LPA with four continuous indicators which are age variables ranging from (age) 5 to 40. The variables represent how old participants were when they reached each of four developmental milestones. Our analyses are attempts to identify sub-groups of individuals who progress through these milestones at different paces. Not an ideal way to model developmental phenomena, of course, but the best we can do with the cross-sectional data we have. My question is whether or not this seems conceptually and statistically reasonable (assuming the models fit well, etc.) and if you know of any other published data that uses similar (age) indicator variables in LPA? I'm a little concerned with the validity of our approach.
My statistician is helping me with a latent class analysis. We are looking at latent classes in a group of workers with low back pain. The variables we are using to distinguish between the classes are pain, functional status, depression, fear, and some workplace factors. We used age, duration of complaint, and time on the job as predictors of class membership. We know that pain, functional status, depression, and fear probably correlate, so we added these correlations to the model. In the output I see that there is (among others) a significant covariance between pain and functional status within the 1st class (in a 2-class solution): estimate 24.463, SE 5.0079, Est./SE 4.816. How should I interpret this? Are assumptions violated?
We also saw in earlier analyses that a 4 class solution turned out to be the best fit.
The covariance suggests that among the members of class 1, there is a relationship between pain and functional status. If the correlation is negative, then it's a relationship moving in different directions. If the correlation is not significant in class 2, or if it's in a different direction than in class 2, then that's a really neat finding and supports the contention that there is heterogeneity in your sample.
Regarding your 4-class solution question: are you saying that in an earlier analysis without the covariates you found that a 4-class solution fit best, but that after adding the covariates only the 2-class solution fit? Bengt notes that class fit will change when covariates are added, and he has advocated that you should consider the solution obtained when covariates are added.
The correlations are as expected, somewhat different between the classes. In a few cases they are present in one class and not in the other.
We haven't done the 3-, 4-, (and 5-)class analyses yet. (In previous analyses the 6-class solution didn't converge.) We realised we should do the analyses with the covariates added after reading Bengt's opinion on it, and his point makes sense to us. (Starting out with SPSS K-means, it's getting better all the time :-) We expect that again the 4-class solution will be best, since the individual memberships don't change that much, but it gives great info on how the constructs fit together. I expect to find more heterogeneity in the four-class solution. We were a bit worried about our n (approx. 400) when adding all these extras. We might want to look into a subgroup in our next step.
Note that whenever there are at least 2 latent classes, the observed variables will correlate. If in a latent class analysis you choose in addition to correlate variables *within* classes, saying e.g.
%overall% y1 with y2;
then this means that your y1 and y2 variables correlate more than their common influence from the latent class variable can explain - so it is like a residual correlation. Often this comes about due to similar question wording or variables logically tied to each other.
Note that this is not a model violation. Although the standard LCA assumption of "conditional independence" no longer holds, you are using a perfectly legitimate generalized latent class model.
OK thanks very much. This helps a lot. The variables seems to be logically tied together. The LCA shows that some do in certain classes and some don't, which is good information.
A different question. We are now looking into latent classes within a subgroup (from n=441 to n=183). Only those people who hadn't returned to work at the baseline interview are now included in the LCA. Same variables, same observed independent variables, but without the (residual?) correlations in the model. The 4-class solution now doesn't converge (minimum number in one class is 33). I expected the 3-class solution to be optimal because the "low risk" class seemed to overlap with those who had returned to work. And that's an easier variable compared to the 5 we've used to determine classes. Unfortunately we now don't get information on model fit. Is there a way to get around this? Or does it just tell us that the results should be interpreted with caution?
I would not expect a four-class solution to be optimal if you have basically removed one of the classes. I would expect the three-class solution to be better. You will not get any fit statistics if the model does not converge. Is this what you mean?
I also expected the 3 class solution to be the best fit.
Yes, that is what I mean. In the previous analysis (n=441) we could also get fit statistics for a model with 5 classes (optimal fit + 1 class). I was hoping to get them (for the 4-class model) in this analysis (n=183) as well. We changed the setting to 500 iterations, but that doesn't help. Well, it's a sensitivity analysis anyway.
OK, that worked, thanks. When submitting a paper, a reviewer might ask: why 8000 iterations? Any suggestions? By the way: it seems that the SEs decrease.
Another question: one of the people on the team asked me what the main drivers of the class solution were. We now have a description of what the classes look like, and we know that the total model fits better in the 3- (and the 4-)class solution, but which factors predict or drive class membership? "Predict" might be a bit confusing, since we are also using some confounding variables as "predictors". We were thinking of using a multinomial logistic regression to get the estimates. Do you have any other suggestions, perhaps on how we should model this in Mplus?
It is not 8000 iterations. It is 8000 sets of initial random starts and 800 solutions carried out completely. Read in the user's guide under STARTS. You may not need that many. You should not compare standard errors from a local solution to a replicated solution.
The model is estimated with the objective of conditional independence of the latent class indicators within each class. If you want to use covariates to predict latent class membership, you can regress the categorical latent variable on a covariate or set of covariates.
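Regressing the latent class variable on covariates is done in the %OVERALL% part of the model; a sketch (the covariate names age and duration are assumptions standing in for your own):

```
VARIABLE:
CLASSES = c(3);
ANALYSIS:
TYPE = MIXTURE;
MODEL:
%OVERALL%
c ON age duration;   ! multinomial logistic regression of class membership
```

This does within one model what a two-step "save memberships, then run multinomial regression" approach approximates, while accounting for classification uncertainty.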
OK thanks, it's now quite obvious that it would be useful to do the course next year :-). You have been a great help.
Vilma posted on Tuesday, November 27, 2007 - 6:35 am
I was running an LPA with 3 continuous indicators. My sample is 210. According to fit criteria, it seems that there could be 5 profiles, but one of the profiles has only a few people. My choice was 4 profiles (it makes more sense from a theoretical point of view). These profiles differ from each other. The reviewers are giving me a hard time about LPA with a small sample and only 3 indicators. Basically, they said that I cannot get 4 profiles from 3 indicators (it is stretching the data too far). Maybe that is true. But could I check that somehow? Or would it be better to have more indicators?
I think what you are trying to do is possible, although the solution may not be very stable. The classic Fisher iris data had n=150 with 4 continuous indicators and 3 latent classes - see the Everitt & Hand (1981) book. That model did not use the LPA assumption of zero correlations within classes and so is harder to fit. Perhaps the reviewers are thinking of LCA with binary indicators, in which case only 2 classes can be obtained with 3 indicators.
To convince the reviewers (and yourself) you can do two things. You can use the Mplus Tech14 facility to test for the number of latent classes. You can also use the Mplus Monte Carlo facility to simulate data with exactly your parameter values and see how well or poorly the model is recovered.
Having more indicators, however, certainly helps.
Vilma posted on Wednesday, November 28, 2007 - 12:49 am
I have a question related to variable types (continuous, categorical, count) within LCA/LPA. I'm working with 11 variables that are neuropsychological test subscales. These subscale scores are really sums of successes on a number of binary items (e.g., remember name y/n). The subscale variables range in levels from 2 to 37, and some are very skewed. I fit an initial LCA model with binary variables, dichotomising the subscale scores at their median values in this sample. A 4-class model seemed to fit the data well (with interesting results), but the modelling approach was criticised for not using all the information available in the data. I then tried modelling all the variables as count variables but ran into problems with the variables with fewer than 3 levels. A mixed categorical and count variable model ran without errors, but the results are difficult to interpret (the variables are on pretty different scales) and not terribly interesting; plus, I have concerns about model fit (one class with very low probability, and the Lo-Mendell-Rubin and BIC fit results conflict).
In short, I’m more comfortable with the initial binary model. My question, then, is whether I really should be concerned over potential loss of information in the binary classification model – how much value does using the full scales really add? Could my binary model findings be invalid? Apologies if this is obvious; I’m new to latent variable modelling and Mplus.
No obvious decision here. Seems to me that the reduced-information binary approach is fine as long as the 11 subscales are a good summary. You could finesse it by using ordered polytomous representations of the number of successes (e.g. low, medium, high). Or you could stay with binary items, but go fancy by working with the original, total set of binary items used to create all of the 11 subscales. That large original set can be used for (1) LCA, or (2) factor analysis to see if 11 dimensions - or fewer - turn out, and then perhaps do LCA on the factors (either in 2 steps, or better still, by having a mixture for the factor means).
Unfortunately I don't have access to the original item-level questionnaire responses, only the 11 aggregated subscale scores. I am sure there would be an interesting underlying factor structure, as many of the questions could measure multiple domains of cognition. I may aim to look at this in a replication analysis!
Are these results interpretable, that is, do they suggest going with 2 profiles, 3 profiles, or continuing until Tech14 is no longer significant? More to the point, do these divergent results suggest that there is something fundamentally wrong with our data or input specifications?
I have found the proportions of membership in each class under the output for the LPA, but I was wondering how I could find out where each case was placed among the classes. Is there specific syntax for output I could request? Thanks!
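Case-level class assignments can be requested via the SAVEDATA command; a minimal sketch (the file name is made up):

```
SAVEDATA:
FILE IS classes.dat;
SAVE = CPROBABILITIES;   ! writes each case's posterior class probabilities
                         ! plus its most likely class membership
```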
Hi, I am performing an LCA with items measured on a 6-point Likert scale. My number of observations is 70 for one sample and 170 for another. How can I check via Monte Carlo whether the model is appropriate? (You stated in an answer above: "You can also use the Mplus Monte Carlo facility to simulate data with exactly your parameter values and see how well or poorly the model is recovered.")
You can use the parameter values from an LCA as population values in a Monte Carlo study with sample size 70 for the parameter values from the analysis of the sample with 70 observations and sample size 170 for the parameter values from the analysis of the sample with 170 observations. Use mcex7.6.inp as a starting point.
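A stripped-down sketch of the general shape of such a run, for orientation only: the binary items u1-u5, the class proportions, and the threshold values shown are placeholders; in practice you would paste in the estimated values from your real-data run (and use ordered-categorical GENERATE settings for Likert items):

```
MONTECARLO:
NAMES = u1-u5;
GENERATE = u1-u5(1);      ! binary items in this sketch
CATEGORICAL = u1-u5;
GENCLASSES = c(2);
CLASSES = c(2);
NOBSERVATIONS = 70;       ! match the sample being checked (70 or 170)
NREPS = 500;
ANALYSIS:
TYPE = MIXTURE;
MODEL POPULATION:         ! population values = your estimates
%OVERALL%
[c#1*0];
%c#1%
[u1$1-u5$1*-1];
%c#2%
[u1$1-u5$1*1];
MODEL:                    ! analysis model with the same starting values
%OVERALL%
[c#1*0];
%c#1%
[u1$1-u5$1*-1];
%c#2%
[u1$1-u5$1*1];
```

The output then shows coverage and bias for each parameter, indicating how well the model is recovered at that sample size.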
To check power, I ran a Monte Carlo with NOBSERVATIONS = 400 and NREPS = 500. Here are the results for % Sig coeff:
V25 ON
LOY_ACC 0.054
REPURCH 0.106
AUSDEHN 0.251
The interpretation gives me some headache. For LOY_ACC and AUSDEHN: each is a non-significant path and shows low power, so an increase in sample size might help - this is clear to me - is this right? For REPURCH: it is a significant path, but also low power; this is puzzling me.
Shouldn't a significant path also have high power, or is the path significant in only 10.6% of the replications and so nearly an artefact? And what if there is a non-significant path in the SEM but a power of more than 0.8? Would that then also be one of the 20% of cases of not rejecting H0 although it is wrong?
I would be really glad if you could shed some light on that.
The method we proposed is used to assess the power of a single parameter not a set of parameters.
It sounds like you are making a mistake in your setup. Please send your output and license number to firstname.lastname@example.org.
Anne Chan posted on Monday, September 21, 2009 - 4:23 am
Hello! I am doing a study which compares boys and girls in relation to their motivation, parental support and their learning outcomes. I have two questions:
1) I applied LPA to classify students into different motivational groups. When I included Gender and Parent Support as covariates, the 6-class solution was perfect, both in terms of model fit and theoretical meaning. However, if I run the LPA without covariates, no theoretically meaningful solutions are generated. How should I interpret these results?
2) I am planning to save the LPA class memberships (with Gender and Parent Support) of individuals and conduct further analyses of the differences between boys and girls, both within and between classes. However, I am still a bit confused about how to understand having gender as a covariate in an LPA. Is it appropriate for me to use Gender as a covariate in the LPA, particularly when the goal of my study is to compare the two genders? I mean, if gender is included in the LPA, then gender will affect the classification result; is it methodologically inappropriate to use this "biased" classification for the subsequent gender comparison? Or, instead of thinking of the classification as "biased", is it actually more robust to use Gender as a covariate, since it can more accurately reflect the data?
1) Differences in latent classes when using and not using covariates are usually a sign that there are direct effects of the covariates on the outcomes, not only indirect effects via the latent class variable. Try exploring direct effects (you cannot identify all of them). Although that may move you away from your favorite solution, your 2 runs (without and with covariates) may then agree more.
2) My opinion is that if Gender influences class membership you are fine including it in the model - the estimates will be better. The same is true for factor scores in MIMIC models.
However, doing analyses in several steps is not always desirable, particularly not with low entropy. Why not do your "further analysis" as part of this model?
For related topics, see also
Clark, S. & Muthén, B. (2009). Relating latent class analysis results to variables not included in the analysis. Submitted for publication.
under Papers, Latent Class Analysis on our web site.
Anne Chan posted on Thursday, September 24, 2009 - 4:20 am
Thanks a lot for your kind suggestion. As a follow-up to question (1), I will explore the direct effects. May I ask how I can do that? Could you please kindly point me to some examples or references? Thank you very much!
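A direct effect is specified by regressing the indicator itself on the covariate, in addition to the covariate's effect on the class variable; a sketch (the names y1 and x are assumptions, one direct effect shown):

```
MODEL:
%OVERALL%
c ON x;     ! covariate affects class membership (indirect route)
y1 ON x;    ! direct effect of the covariate on indicator y1
```

Direct effects are typically added one at a time, since, as noted above, not all of them are identified simultaneously.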
I figured I could justify a 3-class solution given close to sig. LMR but the means for the 3-class solution are providing almost no differentiation among the indicators for one relationship. I am not expecting much variation in terms of shape but I think at least two levels is more accurate. LPA for the six indicators for that relationship alone provides fairly clear (again LMR only) 2-class solution.
The means for the 4-class solution provide some differentiation among the indicators for that relationship and a more interpretable set, though there is a small group (n = 10). I recognize the small N overall and in one class, and the possibility that the LMR tends to overestimate (Nylund et al., 2007), but I am considering using the 4-class solution based on substantive meaning. Could I please have any insights you may have? Thank you.
The non-significance (p=0.0577) for LMR in the 3-class run says that 2 classes cannot be rejected in favor of 3 classes.
Personally, I tend to often simply listen to what BIC says, in a first step. In your case it suggests to me that because you don't have a minimum BIC you may not be in the right model ball park. Perhaps you need to add a factor to your LPA ("factor mixture analysis") and then you might find a BIC minimum.
What do you mean by "add a factor"? I am basically familiar with how FMA integrates factors and classes, but did you mean something specific other than "try FMA"?
I do question whether a latent variable approach is appropriate here. The dimensions are rather skewed in the positive direction for one relation and mostly bipolar for the other. With a relatively small sample for LCA this is probably why a 2-class solution emerges.
Also, I've run separate LPAs for each relationship to look at different class combinations as across-relationship patterns. With these I get a clear 2-class solution for one relation and a clear 3-class solution for the other, but again there is no lowest BIC, BLRT is not useful (just .000), and LMR is my only solid indicator, with more definitive p-values this time.
So if I continue to not find any lowest BIC, is that evidence that a latent variable approach may not be appropriate, even if the LMRs are suggesting reasonable classes?
I did find that k-means cluster analyses provided an almost identical set group as the 4-class run (means and proportions) but not the 3-class run.
Yes, I meant try FMA. Such as a 2-class, 1-factor FMA where the item intercepts vary across the classes (factor means fixed for identification). So you could try 1-4 classes and see if you find a BIC minimum (where 1 class is a regular 1-factor model).
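A sketch of the 2-class, 1-factor FMA just described (the indicator names y1-y6 are assumptions; everything is stated explicitly rather than relying on defaults):

```
VARIABLE:
CLASSES = c(2);
ANALYSIS:
TYPE = MIXTURE;
STARTS = 500 20;
MODEL:
%OVERALL%
f BY y1-y6;
%c#1%
[f@0];        ! factor mean fixed in each class for identification
[y1-y6];      ! item intercepts vary across classes
%c#2%
[f@0];
[y1-y6];
```

The same setup rerun with CLASSES = c(1) through c(4) gives the BIC series to inspect for a minimum.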
Thanks Linda, What would be the reference so I can find it. By the way, I just realized that Matt Thullen did ask another question in his last post. I just wanted to make sure that my posting did not "erase" this question. Thanks again
I'd like to build my doctoral thesis on Latent profile Analysis, but I don't know whether I have enough statistical power to identify all relevant classes. So I'm looking for any recommendations about sample size in Latent Profile Analysis. Are there any articles discussing that issue? Thank you very much in advance and best greetings from Germany...
Lubke, G. & Muthén, B. (2007). Performance of factor mixture models as a function of model size, covariate effects, and class-specific parameters. Structural Equation Modeling, 14(1), 26–47.
Lubke, G.H. & Muthén, B. (2005). Investigating population heterogeneity with factor mixture models. Psychological Methods, 10, 21-39.
Nylund, K.L., Asparouhov, T., & Muthén, B. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling, 14, 535-569.
I used two sets of four dichotomous items as measures of two factors per set (for a total of four factors) in seven-class factor mixture models, in separate analyses of data from seven national election surveys. (I didn't combine the surveys in a known-group analysis because I believed the computation would be too heavy, and, in any event, I also conducted a separate stacked SEM of the same data.) The factors have no variance, so I assume that I did an LPA? My computer doesn't have enough memory to estimate the variances (using ALGORITHM = INTEGRATION), so I guess I'm stuck with LPA. Three (more) questions: (1) Is the Vermunt-Magidson article still the best reference? In particular, has anyone else used probit factor analysis in a mixture model? (2) How does one interpret a factor with no variance? Would it be correct to say that the factor means plus the (largely invariant) item intercepts determine the class-specific probability distribution of an item associated with the two factors it measures? (3) How does one interpret the variance and covariance of the two continuous variables that I also used (along with some additional categorical variables)? Ideally, I would like to interpret these quantities as measurement error in these variables.
Jason Chen posted on Tuesday, November 02, 2010 - 12:32 pm
I would like to conduct a Latent Profile Analysis to form clusters of students based on 4 variables (x1, x2, x3, and x4). These 4 variables are considered "sources" of another variable (y). There are theoretical arguments that another variable (m1) might moderate the relationship between the sources and y. I would like to use a person-centered approach because these 4 sources do not operate in isolation. However, if I wanted to test whether m1 moderated the relationship between the sources (x1-x4) and y, how would I test that if the sources are clustered within a person?
In regression, I could compute y = x1 + m1 + x1*m1. And if the interaction term was significant, that would be evidence of moderation. But If I'm clustering the 4 sources and exploring how m1 moderates the relationship between these clusters and y, how could that be done?
It sounds like you want the latent class variable (say c) behind the x's to influence y. With a continuous y this implies that the mean of y changes across latent classes.
If you have a binary moderator m1 you can simply use that to form a Knownclass latent class variable (say cg) and let the y means change over both latent class variables (that is the default) - and then use Model Test to see if the y means for the c classes are the same across the cg classes.
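A sketch of that setup (the names m1 and y, the class counts, and the mean labels p1/p2 are assumptions; only one pair of means is labeled for brevity):

```
VARIABLE:
CLASSES = cg(2) c(3);
KNOWNCLASS = cg (m1 = 0  m1 = 1);
ANALYSIS:
TYPE = MIXTURE;
MODEL:
%OVERALL%
c ON cg;                 ! class proportions of c may differ over cg
%cg#1.c#1%
[y] (p1);                ! y mean for c-class 1 in cg group 1
%cg#2.c#1%
[y] (p2);                ! y mean for c-class 1 in cg group 2
MODEL TEST:
0 = p1 - p2;             ! Wald test of equal y means across the cg groups
```

In practice one would label and test all the class-by-group mean pairs, not just one.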
Seven classes and 2 factors is a lot of latent variables. Typically, when factors are added to a latent class model you don't need as many latent classes. Conversely, if you have a lot of latent classes, the factor variances can go to zero. I would use BIC to compare the alternative models, varying the number of classes and factors.
1) You might consider my overview:
Muthén, B. (2008). Latent variable hybrids: Overview of old and new models. In Hancock, G. R., & Samuelsen, K. M. (Eds.), Advances in latent variable mixture models, pp. 1-24. Charlotte, NC: Information Age Publishing, Inc.
which is on our web site under Papers.
2) Typically a model with no intercept (or threshold) invariance across classes has a much better BIC than letting only the factor mean vary. If you don't have factor variance, the factor is not motivated except as a non-parametric device of describing the factor by a mixture - and again it may be due to having too many classes.
3) With LPA there is no within-class covariance between the continuous outcomes. The variance is a within-class variance, but not necessarily measurement error, perhaps just "severity variation".
Jason Chen posted on Wednesday, November 10, 2010 - 9:11 am
Thanks very much for the reply, Bengt. If my moderator (m1) is not binary, I'm assuming that there is no other way to test for this moderation effect other than artificially creating one on my own (e.g., median splits?).
Continuous moderation (m1) of the effect of a latent class variable on a distal y? Can't you think of that as the m1 influence on y varying over the latent classes (at the same time as the latent classes influence y by the y means varying over the classes)? So a c-m1 interaction. That's doable in Mplus.
luke fryer posted on Monday, November 29, 2010 - 6:55 am
Dr. Muthen would you please expand on your comment from " on Saturday, October 17, 2009 - 12:27 pm...":
"Personally, I tend to often simply listen to what BIC says, in a first step. In your case it suggests to me that because you don't have a minimum BIC you may not be in the right model ball park. Perhaps you need to add a factor to your LPA ("factor mixture analysis") and then you might find a BIC minimum."
I am facing a problem similar to the original post--not arriving at a minimum BIC for my analysis--BLRT is also not proving to be useful; entropy is occasionally useful. Would it be worth adding categorical variables (gender, department, etc.) to my LPA in order to create a more decisive model? What other alternatives might I have?
luke fryer posted on Monday, November 29, 2010 - 7:16 am
One more question... At what point does the software's request for more starts--WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS.--start to be an indication of anything other than "time to increase the number of starts". The Mplus Manual gives clear advice up to 500 starts. If the warning persists, does one just continue to increase the number of starts? I have never had an analysis fail to converge, but I consistently get this warning.
Hello, I have a question in response to Robin Segerer's post on Monday, April 26, 2010 - 8:10am about determining sample size for an lpa.
I am submitting a grant for funding for my dissertation work. I will be conducting a latent profile analysis using continuous indicators of children's behavior and using demographic covariates to predict class membership. In addition, I plan to simultaneously estimate the lpa as well as a lgca of longitudinal distal outcomes to examine mean differences across the profiles in intercept and slope parameters of this distal outcome.
I read through the articles that you suggested for Robin but am wondering how to determine sample size needed for estimating the lpa and lgca simultaneously. In addition, are there any examples that you know of where this has been done to determine profile differences in intercept and slope parameters of a distal outcome?
I would greatly appreciate any guidance. Elizabeth Bell
I'm running a cross-sectional LCA with continuous (4-point Likert items) and dichotomous items. In this analysis, when I specify a 4-class solution, the mean of one of the continuous variables is fixed for three of the four classes. Why would this happen? I know that with dichotomous variables, when the logit is very small it is fixed at -15 or +15, is this similar to what is happening with these continuous variables? If so, how is the value it is fixed at chosen? Is this problematic?
Below is the output with one mean fixed for class 2.
Hello, I am doing LPA with N=2000 and 6 continuous indicators. Each indicator is a parcel of three 5-point Likert items (created by taking the mean of the 3 items). In most solutions, certainly in all interpretable solutions, I have modification indices indicating residual within-class correlations among parcels. This would seem to indicate that conditional independence is violated.
My question is whether modification indices are the only way to get a look at conditional independence when doing LPA in Mplus. Clearly if I had categorical items I could use TECH10, but is there anything like that for the continuous indicator case, or is there otherwise another way within Mplus? I have considered freeing within-class bivariate correlations, but it seems from a previous correspondence that this was not recommended in general as a way of modeling conditional dependence (note these models are largely exploratory) -- [see post from anonymous on March 29, 2007 11:41 a.m.]. When I allow residual correlations in %OVERALL% (which I believe constrains the correlations to be equal across classes), the within-class correlations often remain to a large extent in the modification indices. Note I have also attempted FMA but would prefer to stay with "manifest parcels" if possible.
In a related question, is there a recommendable way to create categorical indicators using the parcels (15 levels seems too many)?
Sorry I may have been a bit unclear above -- each parcel consists of taking the mean of three 5-pt Likert items.
Also, when I said at the end that "15 levels seems too many," I was thinking in terms of summing (which I guess would give a maximum of 12 levels - still seems too many), but did not mean to indicate there may not be a recommendable way using the means of the items within parcels or some other way as well.
You can use the estimated LPA model and classify subjects according to it. Then for each class see how correlated the variables are - and for which pairs.
But if the model which allows class-invariant within-class covariances has a much better BIC, or if a one-factor FMM has a much better BIC, then it is questionable to stay with the conditionally independent LPA.
LPA with N=900 and 4 continuous indicators on parenting, to explore profiles of parents on both parenting and differential parenting. As parenting is intrinsically related to children's characteristics, it seems that some child variables (e.g., age) must be incorporated as endogenous covariates. However, the literature on covariates in LPA is quite vague. Here are some options that seem OK to me; I would be pleased to have your advice about the best one: 1. Regressing the latent class variable on the covariates. This is the most commonly used approach. However, the covariates I'd like to incorporate are uniquely related to some indicators but not to others (for example, child's age is related to parenting whereas the age gap between siblings is related to differential parenting). I'm wondering if it wouldn't be better to 'link' the covariates more specifically to their relevant indicators. Other options that I can imagine: 2. Regressing each indicator on its 'relevant' covariates. 3. Regressing both the latent class variable and the indicators on the covariates (the latent on each covariate; the indicators on their specific covariates). 4. Residualizing the indicators by regressing them on their specific covariates beforehand. Also, I'd like to test the role of some predictors (not endogenous) on the latent classes. What is the best way to test models that would include both covariates and predictors? Thanks a lot in advance.
Hi, I am looking at a 5 factor, 4 cluster solution. I requested the cluster membership variable using SAVE = CPROBABILITIES;
I then read it into SPSS and examined the descriptives of each cluster.
First, the number of cases it labeled as each cluster differs substantially. In addition, although I would interpret each cluster based on the estimated parameters given in Mplus, when I select each cluster and examine the means, the interpretation would be much different. For example, clusters that were once classified as "low" on variable X are now classified as "high" on variable X.
Hi everyone, I recently conducted an LPA and identified six profiles of positive functioning in my sample of 19 year olds. I then used MANOVA to compare the profiles on a number of variables measured when they were 17 years old. A journal editor has asked me to consider using "conductional LPA analyses" rather than MANOVA, however I am unfamiliar with this technique. Would anybody know of a paper that would point me in the right direction? Thank you!
Julia Lee posted on Wednesday, September 07, 2011 - 8:15 pm
I am new to latent class analyses. I have been reading about the issue of 'minimum BIC' on the discussion board. I have n = 521. I conducted an LPA with 5 indicators (all continuous variables).
My interpretation of the fit indices below suggest that the 4-class model is the best model. I used VLMR and LMR to help me make the final decision on the number of classes because BLRT was significant for models 2 to 6. VLMR and LMR suggested 4-class model was better than the 3-class model. In addition, the Entropy for the 4-class model seems closer to 1.
One researcher in one of the posts mentioned his concern about the declining BIC/AIC/ABIC. What does minimum BIC mean? My BIC values were declining all the way from model 2 to 6. I did not continue to check model with 7 classes because it didn't make sense to me to continue without a substantive reason to do so. Should I be concerned about my results and consider using FMA? I'm unclear whether I am on the right track or not.
It looks like you are not reading the LMR results correctly. The first instance that you get a high p-value implies that one less class should be chosen. If I am reading your table correctly, you have the p-values 0.000 (for 2 classes), and 0.608 (for 3 classes), which then implies that you should choose 2 classes.
In my experience, when BIC continues to decline with increasing number of classes without hitting a minimum, better models can be found. For instance, an FMA should be explored.
Julia Lee posted on Thursday, September 08, 2011 - 11:53 am
Dr. Muthen, thank you for your feedback regarding my fit indices. I will try FMA. Is the syntax similar to the CFA Mixture Modeling syntax in Example 7.17 of the Mplus version 6 manual? The factor mean in this example for class 1 of c was fixed at 1. The factor mean was fixed for identification purposes, correct?
For ex 7.17, the factor mean is fixed only in the last class. In the first class the factor mean is given a starting value of 1 to show that it is free in this class. Note that @ means fixed and * means free.
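In ex 7.17-style notation, the relevant lines look roughly like this (indicator names and the number of classes are placeholders, not the manual's exact example):

```
MODEL:
  %OVERALL%
  f BY y1-y5;
  %c#1%
  [f*1];     ! * = free parameter; 1 is only a starting value
  %c#2%
  [f@0];     ! @ = fixed at 0 in the last class for identification
```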
Are there any materials which discuss conducting LPA when the sample is restricted to only those with high scores?
Given that some clinical measures have cut-offs, some might argue that only those above the cut should be examined to explore subtypes. However, I am wondering about finding spurious latent classes when sample variance is significantly restricted from using only cases with high scores. Put another way, shouldn't the high score classes naturally emerge when the entire sample is used?
My take from the Bauer and Curran papers is that LPA (and more generally LCA) could result in spurious latent classes if high score only samples result in nonnormal data.
I don't know of any papers on this, but we have had similar concerns in analyzing ADHD symptoms in general population surveys versus treatment samples. It seems that when a treatment sample is used, you get subclasses of ADHD such as hyperactive only, inattentive only, whereas with a population sample some of that detail gets lost due to broader distinctions being made.
I wonder what would happen if you oversampled the high scorers.
Li xiaomin posted on Monday, October 03, 2011 - 9:07 pm
Dear Dr. Muthen, I have a question. Suppose there are 3 data files, named "file1.dat", "file2.dat", and "file3.dat", and 3 input files, "file1/2/3.inp". How can I use Mplus to analyze the 3 data sets automatically and generate the associated output files (file1/2/3.out)?
You cannot do this from within Mplus. You could create a .bat file with the set of inputs that you want, and you will receive a set of outputs. You may also want to check whether MplusAutomation can help you. See the website under Using Mplus Via R.
Li xiaomin posted on Saturday, October 08, 2011 - 8:10 pm
thanks for the suggestions!
Junqing Liu posted on Thursday, October 27, 2011 - 11:56 am
I used the following command to save the class membership based on an LPA into a separate dataset.
SAVEDATA: SAVE=CPROBABILITIES; FILE IS ebppostprobs.dat;
I need to do analyses using the class membership and some other variables that are included in the original dataset but not in the class membership dataset.
How may I merge the two datasets, or possibly directly save the class membership into the original dataset? I am new to Mplus. Thanks a lot!
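One possible approach (a suggestion on my part, not an answer from this thread): include an ID variable via IDVARIABLE so the saved file can be matched back to the original data in other software, and/or carry extra variables into the saved file with the AUXILIARY option. Variable names below are hypothetical:

```
VARIABLE:
  NAMES = id y1-y6 z1 z2;
  USEVARIABLES = y1-y6;
  IDVARIABLE = id;         ! id is written to the saved file for merging
  AUXILIARY = z1 z2;       ! extra variables carried into the saved file
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
SAVEDATA:
  FILE IS ebppostprobs.dat;
  SAVE = CPROBABILITIES;
```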
(I apologize if this has been answered elsewhere, but I can't figure it out from the userguide.)
Is there an easy way to save the class means (and variances) from an LPA? I'm using 6.1 on a mac and need to export the means to plot the solution in a separate program. I know I can save the parameter estimates to an outfile using the ESTIMATES option, but it's not an ideal way to extract just some of the parameter estimates. If there's a faster way to do this, I'd appreciate knowing about it.
I have a question concerning the treatment of missing cases in Latent Profile. I am dealing with cross-sectional data on three same-aged birth cohorts (18-29 years old) on four transitions marking adulthood: moving out of the parental home, starting the first job, getting married, and becoming a parent. For each transition, I have a status variable stating whether a person already experienced the respective transition and, if yes, the precise age. First, I analyzed the timing of the transitions separately using Cox regression because of the large number of censored cases for marriage and first child. Second, I am interested in looking at all four transitions simultaneously to explore different pathways/patterns into adulthood using Latent Profile (or Latent Class) Analysis for each cohort separately in Mplus. I am just concerned about the large number of missing cases because many subjects did not marry or have children yet. Is mixture modeling capable of handling the censored cases, or do I need to address this specifically in the program? Moreover, is it possible to run a Latent Profile Analysis based on the precise age at the transitions, or do I have to run Latent Class Analysis based on categorical status variables?
You can do LCA/LPA with continuous age and/or categorical status - that is, you can mix scale types in Mplus mixture modeling.
Mixture modeling does not handle censored cases as in survival analysis. It seems complicated to come up with a model that both determines when an event happens and then apply LPA/LCA to it, so some simpler approach is needed. For instance, restrict your analysis of marriage, child timings to the older subjects to reduce the amount of missing data.
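Mixing scale types in the LPA/LCA can be sketched like this (names are hypothetical). Note that the continuous age-at-transition variables would simply be missing for subjects who have not yet made a transition, which, as noted above, is not a survival-type treatment of censoring:

```
VARIABLE:
  NAMES = age1-age4 s1-s4;
  USEVARIABLES = age1-age4 s1-s4;
  CATEGORICAL = s1-s4;     ! binary status indicators
  MISSING = ALL (-99);     ! code for transitions not yet experienced
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
```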
I am running an LPA with 6 indicators. I want to see if the profiles differ based on ethnicity and gender. Can I use the KNOWNCLASS command for this? Should I run separate LPA models for each group first to make sure that they have a similar latent profile structure? Thank you!
Yes, you should first run separate group analyses. You can use Knownclass, but it is somewhat simpler to have the 2 variables be covariates. If the covariates influence only the latent class variable ("c ON x" in Mplus language), then you have measurement invariance, that is, the same profiles - but you allow for different class prevalences. If you have some direct effects from the covariates to the LPA indicators, then you don't have measurement invariance. The covariate analysis also shows you the class-specific means of the covariates.
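The covariate approach can be sketched as follows (names and the number of classes are hypothetical; a direct effect breaking measurement invariance would be added as, e.g., y1 ON gender in %OVERALL%):

```
VARIABLE:
  NAMES = y1-y6 gender ethnic;
  USEVARIABLES = y1-y6 gender ethnic;
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  c ON gender ethnic;   ! same profiles across groups; class
                        ! prevalences vary with the covariates
```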
Thank you so much for your response. In the case that the groups have different structure (example: I just ran the LPA for one group and found that a 2-profile model was the best fit, whereas when all data are used, an 8-profile model is the best fit), I would not use KNOWNCLASS or the covariate method, correct? In this instance, I would assume that it would be most appropriate for me to discuss these as separate models from separate subsamples, correct? Thanks so much for your help!
Hello, I am using LPA to examine 7 indicators coming from a variety of self-report and clinical interview data (i.e., scales in different metrics) and had a few questions.
1) Under what circumstances would one override the assumption of conditional independence and allow freely estimated indicator covariances within classes? Should this decision be made primarily based on model fit (e.g., if the conditional dependence model provides lower BIC)?
Based on prior posts it sounds like conditional dependence should be specified if method effects are suspected and not solely because of high correlations between indicators. What are other circumstances when conditional dependence would be an appropriate approach?
2) It seems like the below syntax can be used to specify a model with conditional dependence (3 class model):
MODEL:
%OVERALL%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
However, is it necessary to also specify freely estimated indicator covariances within each class, or would this be redundant coding e.g.,
MODEL:
%OVERALL%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#1%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#2%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
%C#3%
y1 y2 y3;
y1 WITH y2 y3;
y2 WITH y3;
Hi, I have run an LPA model in which 2 profiles emerged. I would like to see if these profiles predict a continuous outcome and if this association is moderated by a continuous variable.
My entropy is only .68, so I don't think a class-analysis strategy would be particularly appropriate here. Is there a way to look at this interaction within the LPA framework (e.g., by specifying it when I specify the 2-profile solution)?
Thank you! -Mindy
Julia Lee posted on Tuesday, February 21, 2012 - 5:59 pm
I had my prospectus defense recently and was asked by my committee to check the data set for nonlinearity. I am conducting LPA and LTA using Mplus to answer my research questions. I read several book chapters and papers related to LPA and LTA prior to my proposal defense but did not come across this issue of checking for nonlinearity. Is checking for nonlinearity an assumption of these two statistical techniques? Thanks.
I would say no. To me, nonlinearity is something that is relevant for the regression of a continuous variable on other continuous variables or with regular Pearson Product-Moment correlations. The LPA model does not consider such regressions because the continuous latent class indicators are related to a categorical (latent) variable. Nor are correlations analyzed or fitted.
1) Note that LPA describes the correlations among the indicators. It does so as soon as you have more than one latent class. So conditional non-independence is a correlation among the indicators that is beyond what is explained by the latent class variable.
I would explore conditional non-independence if I had a priori reasons such as the methods effects that you refer to, or similar question wording.
2) It is not redundant coding but says that you believe the within-class correlations to be different in different classes. I would not recommend within-class WITH statements as a starting point - this is perhaps giving too much flexibility and may result in an unstable model (hard to replicate the best logL).
You can do this in a single analysis. Say that you have latent class variable c influencing continuous outcome y, moderated by continuous predictor x. Moderation is handled by letting y ON x be different in the different c classes. This is so, because moderation is an interaction between c and x.
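A sketch of that specification (hypothetical names, 2 classes): letting both the intercept and the slope of y on x vary across classes creates the c-by-x interaction.

```
VARIABLE:
  NAMES = u1-u6 x y;
  USEVARIABLES = u1-u6 x y;
  CLASSES = c(2);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  y ON x;
  %c#1%
  [y];        ! class-specific intercept
  y ON x;     ! class-specific slope -> c-x interaction
  %c#2%
  [y];
  y ON x;
```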
I really want to use LPA for a person-centered analysis I'm doing, but I'm having trouble getting it to perform as well as cluster analysis. This is frustrating, as I find mplus so much easier to use than programs like Sleipner. I have tried a variety of ways of specifying the LPAs (including fixing and freeing variances across classes and variables). After I decide on the number of classes, I compare the LPA solution with a cluster solution from a recently published paper using the same data. In all cases, across multiple outcome variables, the cluster solution does a better job of explaining variability.
Can you help me figure out what I'm doing wrong? Based on the readings suggested on this site, I feel like the right specification of a latent variable model should be as good (if not better) at producing a useful classification system. I understand the problems inherent in the clustering algorithms, especially in the presence of heterogeneity of variances. Still, why would the CA produce a more precise classification?
If you have any ideas or people I could talk to about this (I'm local), I'd appreciate it. Thanks.
Yes on the Hagenaars & McCutcheon book. I have it on my desk right now (open to Vermunt & Magidson's chapter 3). (You recommended it on this site. It was very helpful).
Regarding the published paper (mine--out this month in Jrnl of Ed Psych) I used CA to describe patterns of motivation at time 1, then tested for differences between clusters in affect and achievement outcomes at a later wave. Like others, I argue that a person-centered approach gives a better picture of what motivation is than traditional variable-centered approaches. However, CA is very time consuming and (based on the readings suggested here) likely to make assumptions about the data that may not be appropriate.
My next step is to use a similar approach to describe developmental changes in patterns of motivation by conducting CA (or LPA) by grade for kids in grades 7-12. I want some evidence that the LPA solution is as trustworthy (and useful) as the CA solution. It seems like it should be at least as good, but I'm having trouble finding evidence of it. Can you direct me to other ways of establishing the utility of an LPA solution (as compared with CA)? Or is that not even a question I need to answer, in your opinion? Thanks for your help, by the way.
For a description of the advantages of LCA/LPA over k-means clustering, see also Magidson, J. and Vermunt, J.K. (2002). Latent class modeling as a probabilistic extension of K-means clustering. Quirk’s Marketing Research Review, March 2002, 20 & 77-80. (pdf)
Thanks for that 2002 ref. It does a great job summarizing reasons for preferring LCA/LPA over clustering.
Bergman, Magnusson, and El-Khouri (2003) describe a few procedures for longitudinal CA (e.g., LICUR), but I agree with you that LPA is preferable. I think a better test may be to use the posterior probabilities instead of the most likely class when computing time 2 means for the LPA solution. In that way I could take advantage of the probability-based classification (the first point in the article you posted). If you can think of any papers taking this approach, I'd be grateful for the direction. Thanks again for the help. If you are ever in Orange County I'll gladly buy you lunch.
Hi Dr. Muthen, is this the paper? Steinley & Brusco (2011). Evaluating mixture modeling for clustering: Recommendations and cautions. Psychological Methods, 16(1), 63-79. It shows that CA can outperform LPA. The full PM issue also includes replies by McLachlan and Vermunt. Things are tricky with clean simulated data. From experience, mixture analysis of real, messy data always involves interacting with the data through error messages, relaxing restrictions, etc., to get at a final model that is never more than the best "approximation" of reality. In the end, I think the decision is practical. I prefer the flexibility of mixture models since they are part of the generic latent variable family. Assumptions can be relaxed and imposed, and fully latent models can be specified (with various degrees of class invariance -- see the 2011 special ORM issue on latent class procedures, which includes illustrations of CFA invariance testing across unobserved subgroups), as can factor mixture models and even cross-group LPA invariance. These can be implemented in mixture models (and result in substantively interesting new parameter estimates), but not in CA. This is especially true of growth mixture models, where the "developmental trends" cannot be clearly taken into account in CA models (see the recent Morin et al. article in SEM, 2011, 18, 4, pp. 613+ on the advantages of this flexibility).
Hello, I have conducted an LPA on 7 indicators in a sample of 1200 individuals and found that the BIC continues to decline up until a 9-class model (BIC increases for the 10- and 11-class models). However, the LMR indicates a 4-class solution (i.e., the first non-significant LMR was found for the 5-class model). It is noteworthy that I am using a clinical sample and the majority of the indicators are positively skewed.
1) Should I be concerned that LPA may not be the appropriate model given that the BIC continues to decline up until a 9-class model but the LMR indicates a 4-class solution? Or is it safe to assume that LPA is appropriate given that the BIC eventually did reach a minimum? I have also conducted a single-factor FMA and found that the BIC declines up until a 6-class solution, but that LMR still indicates a 4-class solution.
2) Is the interpretation of LMR influenced by size of my sample or the fact that my indicators are positively skewed? Many LPA papers I have read seem to have convergence in deciding the number of classes using BIC and LMR, however many of these studies used much smaller samples (e.g., N=200 or 300). I also found one study using a larger sample that rescaled positively skewed indicators into ordered categories. Would you recommend doing something like this?
I have arrived at a 6 class solution for my latent profile analysis, but have noticed that the saved class probabilities differ in the solution with random starting values vs. the solution in which I specify the last class to be the largest (e.g., to interpret tech11 and tech14).
Is this discrepancy expected?
Which class probabilities should I use for secondary analyses?
Make sure you have replicated the best loglikelihood several times in the first analysis. When you specify the largest class to be the last class, be sure you obtain that loglikelihood.
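For example (the numbers of starts below are illustrative only, not recommendations):

```
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 1000 250;       ! initial-stage starts, final-stage optimizations
  STITERATIONS = 20;       ! iterations per initial-stage start
  LRTSTARTS = 2 1 500 125; ! starts for the TECH14 bootstrap draws
```

Checking that the best loglikelihood value is replicated several times in the output, in both runs, guards against comparing two different local solutions.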
Julia Lee posted on Saturday, March 24, 2012 - 5:06 pm
I am conducting:
LPA on cross-sectional data (spring of first grade) and LTA on longitudinal data (fall and spring of first grade). a) Are LPA and LTA robust to floor effects and outliers? Is this an issue, since a mixture distribution is allowed but normality within each latent class is assumed? Is there a way to check for normality within each subgroup, or should I assume in theory that normality was met for each subgroup? Because this is an unselected sample of first graders, some of the variables were positively skewed and there were outliers.
I am trying to include covariates in my latent profile analysis in order to evaluate meaningful between-class differences (e.g., multinomial logistic regressions) on various outcomes. I am noticing that the classes change substantially when regressed onto continuous covariates that are closely related to the latent profile indicators (using one self-report measure of depression as one of the profile indicators; regressing classes onto a different self-report measure of depression). In contrast, the classes do not change in a meaningful way when I regress them onto related categorical covariates (e.g., depression diagnosis).
Is it appropriate to model direct effects between class indicators and closely related covariates in a situation such as this?
It seems like you sometimes recommend modeling direct effects between covariates and class indicators (i.e., if classes are changing substantially after including covariates). However, in other posts you also caution against accepting a mixture solution that changes substantially with the addition of covariates.
You might find the following paper which is available on the website helpful:
Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.
Junqing Liu posted on Tuesday, April 10, 2012 - 10:23 am
I ran the following syntax in Mplus 6 to conduct LPA. All the observed variables are continuous. I kept getting the error message "ERROR in VARIABLE command: CLASSES option not specified. Mixture analysis requires one categorical latent variable." How may I fix the problem? Thanks a lot.
ANALYSIS:
type = mixture twolevel;
Process=8(STARTS);
Model:
%WITHIN%
%OVERALL%
%BETWEEN%
%OVERALL%
C#1; C#2;
C#1 WITH C#2;
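The error message points to the VARIABLE command: a mixture run must declare the categorical latent variable with the CLASSES option, and TYPE = TWOLEVEL also needs a CLUSTER variable. A minimal sketch, assuming a single 3-class latent variable and a hypothetical cluster variable named clus (note the option is spelled PROCESSORS):

```
VARIABLE:
  CLUSTER = clus;          ! hypothetical cluster identifier
  CLASSES = c(3);          ! required for TYPE = MIXTURE
ANALYSIS:
  TYPE = MIXTURE TWOLEVEL;
  PROCESSORS = 8 (STARTS);
MODEL:
  %WITHIN%
  %OVERALL%
  %BETWEEN%
  %OVERALL%
  c#1; c#2;                ! between-level variances of the class logits
  c#1 WITH c#2;
```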
Julia Lee posted on Tuesday, April 10, 2012 - 2:06 pm
I have a question about LPA. Because I have missing data in the covariates, I used multiple imputation. However, Tech 11 and Tech 14 are not available with multiple imputation. Is there some other way I can get the VLMR, LMR, and BLRT p values? If these are not available, does this mean that entropy and looking for a reduction in LL, AIC, BIC, and ABIC would be the only way to decide how many classes best fit the data and compare it to substantive theory? Thanks. I appreciate your response.
Quick question re: LPA that may have already been answered ad infinitum I just have not had a moment to do a thorough search.
In the case of a three-class solution with 8 continuous indicators... how is it that the estimated mean parameter for a given indicator yields a significant z-value in the LPA framework, yet when I use the resultant groups to compare between-group differences on the indicators in the ANOVA framework, the groups do not significantly differ from one another on a subset of the indicators that yielded significant z-values in the LPA framework?
Does this have to do with the local independence assumption? Or is it that the z-value tells one that the estimated mean for that latent class is significantly different from zero (yet may not be significantly different between the classes)?
I assume that when you use the ANOVA framework, you are using most likely class membership. This is not what is used in the LPA where each person is in each class proportionally. Depending on entropy, these can be different.
Yes, I understand the posterior probabilities are used to derive class enumeration. So I guess my question is still why would the derived latent classes (used in an ANOVA framework as a manipulation check) show no significant b/w group differences on a given indicator which has evidenced a significant estimated mean parameter in the LPA itself. Apologies if I am missing the obvious.
Because in the LPA the means are not compared across the most likely class a person is in but the posterior probabilities for all classes are used for each person. Only if classification is perfect will they be the same. What is your entropy?
Your initial message talked about significant z-values for the indicators in an LPA. I assume that you meant significant differences in indicator means across classes? If so, I think you need to send relevant files to be able to answer this.
Thank you. I actually think Linda's answer above is what I am trying to ask. Perhaps if I restate my question more clearly just to be sure.
I have k...8 indicators all continuous. I fit a 3 class model (as well as 2 and 4). 3 seems best from the perspective of all of the fit indicators available.
The profile plot clearly shows that it is the latter 4 indicators that best separate the groups (for the first four, the lines are tightly packed together). It gives the impression of a lightning bolt across the sky.
The estimated means for each of the three classes all have significant z-value parameters for the first four indicators (the ones whose lines are tightly packed in the plot).
I then ran the ANOVA on the derived classes and, sure enough, the three groups did not show between-group differences on the first four indicators but did on the latter four (as the plot would suggest). This got me confused as to the following:
Why would the estimated mean parameter values in the LPA for the first four indicators be significant, yet fail to reveal these differences in the context of the ANOVA (as a manip check). If the sig. value of the est. means in the LPA is a function of the posterior probabilities (rather than the most likely class membership) I follow. If not, I am still conceptually unclear.
So, a significant z-value for an indicator in a given class in the output means that within that class the estimated mean for that indicator is significantly different than zero? In other words, what exactly does the significant mean estimate 'technically' mean as a function of class membership (particularly in the context of my current situation, where the estimated means by class are significant yet the resultant classes themselves are not different [ANOVA] on a subset of the indicators).
I actually was able to reach out to a friend who clarified my question for me. Simply put (and perhaps my question was not clear), the significance value for a mean estimate for a given indicator references whether that mean is significantly different from 0. In the context of an ordinary LPA (no covariates, no grouping variable), is this interpretation correct?
I got it. Apologies for the lack of clarity in my question(s). I was switching from LCA to LPA and got a bit bewildered in the process moving from item response probabilities and thresholds to means and variances.
Katy Roche posted on Monday, April 23, 2012 - 9:27 am
What is the best approach for conducting latent profile analysis with 20 imputed data sets (created in SPSS)? Do I need to create one combined data file from those in order to conduct the LPA?
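(This question is not answered in the thread, so the following is only a sketch.) Mplus can read a set of imputed data files directly with TYPE = IMPUTATION in the DATA command and pools the estimates across them; the LPA itself is specified as usual. File and variable names below are hypothetical:

```
DATA:     FILE = implist.dat;    ! text file listing the 20 imputed
                                 ! data sets, one filename per line
          TYPE = IMPUTATION;
VARIABLE: NAMES = y1-y6;
          CLASSES = c(3);
ANALYSIS: TYPE = MIXTURE;
```

One caution: class labels can switch across imputed data sets, so it is worth checking that the classes line up across imputations (e.g., by giving user-specified starting values).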
I performed a latent profile analysis with 6 continuous variables, N=5000. I examined 1 to 4 classes. For the 2,3 and 4 class solutions there were a number of starting value runs that did not converge. In addition, for the 3 and 4 class solutions I got the following warnings:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-POSITIVE DEFINITE FISHER INFORMATION MATRIX. CHANGE YOUR MODEL AND/OR STARTING VALUES.
THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH. THE CONDITION NUMBER IS -0.136D+00. THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE MCONVERGENCE OR LOGCRITERION OPTIONS.
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THIS IS OFTEN DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. CHANGE YOUR MODEL AND/OR STARTING VALUES. PROBLEM INVOLVING PARAMETER 2. RESULTS ARE PRESENTED FOR THE MLF ESTIMATOR.
I increased the STARTS to 500 50 and the MITERATIONS to 5000, but this did not help. I found that one of the variables is causing these problems, but I do not want to exclude this variable. Do you know how I could solve these problems? Thank you.
Hello, I am conducting a Latent Profile Analysis using a set of 8 behavioral characteristics. Based on the results I have identified 5 classes. Currently, I am interested in including covariates in the model. When I run the model with the covariates, with 5 classes, I end up with a different number of participants in each of the classes compared to my initial analysis. Hence, I thought to fix the class means for each of the variables based on the initial analysis results. How do I set the class means for each class in the syntax? Thank you.
When this happens, it points to the need for direct effects. See the following paper on the website for more information:
Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.
deana desa posted on Friday, June 08, 2012 - 8:12 am
Hello Dr. Muthen,
I have fundamental questions regarding latent class analysis and latent profile analysis, which is confusing me.
Here is the situation. The data that I have measure 6 behaviors. Each behavior has 9 categories and is evaluated (scored) for 12 different cases. Thus, the cases can be considered cross-sectional(?) measures.
Would LCA be a correct analysis to profile the data I have?
Thanks! I appreciate it.
Do you have a recommendation for literature for me to start with this LCA for profile analysis?
The basic distinction between LCA and LPA is the scale of the latent class indicators. In LCA, they are categorical. In LPA, they are continuous. If you treat your latent class indicators as categorical, you have an LCA.
See the Topic 5 course handout and video on the website. There are many references there.
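In input terms, the distinction comes down to one line; a minimal sketch with hypothetical variable names:

```
VARIABLE: NAMES = u1-u6;
          CATEGORICAL = u1-u6;   ! with this line the indicators are
                                 ! categorical and the model is an LCA;
                                 ! omit it and the indicators are treated
                                 ! as continuous, giving an LPA
          CLASSES = c(3);
ANALYSIS: TYPE = MIXTURE;
```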
I am interested in the latent profiles of students (n = 488). I have two questions about LCA:
1) If the variables (Likert-scale questions on epistemic beliefs) I use to generate the latent classes are multi-dimensional (i.e., I ran an EFA with them and three factors were extracted), then should I still use LCA (Mplus ex 7.9)? If not, what model (and Mplus example) should I use?
2) The smallest BIC is when modeling 6 classes (class sizes ranging from 23 to 100). Is the 6-class result trustworthy?
Hello, I have a question concerning Latent Profile Analysis (LPA). J.D. Haltigan touched on the issue in an earlier post.
I have conducted an LPA of 12 different coping methods (12 items), each measured on a 1-5 scale, all fairly normally distributed.
In LCA the output reports which indicators differ between the latent classes (latent class 1 vs. latent class 2 might significantly differ on three items). Is there an equivalent output for LPA? If not, is there an easy way to make this comparison? I suppose a series of equality constraints could be used, but that seems incredibly cumbersome.
A related question: how can I determine which indicators contribute most to discriminating between the latent classes and which are minor? Ideally, I would like to rank the importance/contribution of each item.
I don't know what you mean when you say that LCA output reports which indicators differ between the latent classes.
For both LCA and LPA the interpretation of the classes is most easily obtained by the PLOT command, asking for SERIES and getting the mean/probability profiles over the indicators for each class.
You can, as you say, test differences in means across classes, but that is cumbersome. There is no automatic way that I know of to determine which indicators best discriminate between classes; one has to look at the indicators for which the mean estimates differ the most across classes. Some indicators may be good at discriminating between some classes, and other indicators good for discriminating other classes. It would be hard to get a simple summary of this, I would think. To get at the significance you can use Model Test, but again it is cumbersome to do it for all possible differences.
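To illustrate the Model Test approach for a single comparison (the indicator name and class labels below are hypothetical), one can label the class-specific means and request a Wald test of their equality:

```
MODEL:      %OVERALL%
            %c#1%
            [y1] (m1);     ! mean of y1 in class 1
            %c#2%
            [y1] (m2);     ! mean of y1 in class 2
MODEL TEST: m1 = m2;       ! Wald test of equal means in classes 1 and 2
```

Each additional pairwise comparison needs its own labels and MODEL TEST statement, which is why this becomes cumbersome for many indicators and classes.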
Just as an addendum to Robert's post, the issue that I finally got straight after figuring out how to articulate my question properly is that the significance test of the indicator in the LPA analyses tests whether the indicator is significantly different from zero with respect to a given class. This is usually of little substantive import although I guess one could make the case that if the indicator is not significantly different from zero (for all classes?) then perhaps it could be dropped from the indicator set?
Mplus gives the test of significantly different from zero as a standard and in some cases this test is not of interest at all - this is one such case. The means don't have to be significantly different from zero for the indicators to be useful.
The first one is based on the model where individuals have a posterior probability for class membership in each class. The second is based on the largest class membership for each individual. I would report the first one.
I performed an LPA in a sample of about n=9,000 and got 2 classes. Then, based on a sample of selected high scorers (10%), I performed another LPA and got 3 classes, of which two were similar to the total-sample solution plus one more class that was differently shaped. I'm interested in whether these two similar classes from both models are comparable. I tried to compare proportions via crosstabs, but I'm uncertain what that reveals. Any suggestions?
Thank you for your reply. We have already used the PLOT command to visualize the profiles of both models. Visually, the profiles of the two classes in solution #1 (all participants) look very similar indeed to the first two classes of solution #2 (subgroup of high scorers only). We are just wondering whether there is a test to examine whether this similarity can be statistically supported. For example, could test X tell us that, yes, the two solutions, albeit stemming from different samples, are not significantly different?
Adam Myers posted on Thursday, August 30, 2012 - 11:53 am
In LPA, how important is it that the continuous variables used to estimate the class solution approximate a normal distribution? Is it customary to run the typical diagnostics (histograms, etc.) and correct for non-normality by taking the logs of the variables, etc.? Does doing this sort of thing make an important difference? I haven't been able to find advice on this matter in the literature. Your input would be much appreciated. Thanks in advance.
I would deal with non-normality by using the MLR estimator which is robust to non-normality.
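In the input this is a single ANALYSIS option (MLR is also the default estimator for TYPE = MIXTURE):

```
ANALYSIS: TYPE = MIXTURE;
          ESTIMATOR = MLR;   ! robust standard errors under non-normality
```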
Susan Pe posted on Thursday, September 20, 2012 - 12:10 pm
I am doing a Latent Profile Analysis. Other than using Vuong-Lo-Mendell-Rubin, Lo-Mendell-Rubin adjusted LRT tests, and parametric boostrapped likelihood ratio test, someone recommended that I also check with MANOVA to make sure groups differ as people do with the cluster analysis. Does that make sense for LPA? Thank you.
Thanks as always for your help. I'm doing a Latent Profile Analysis; 4 classes was most appropriate. Next, I added predictors of the classes. However, all the coefficients were the same across classes. I'm not sure what the problem was. My syntax and output follow.
MODEL:%OVERALL% Zdep WITH Zanx Zagg; Zanx WITH Zagg; c#1 ON grade se sef fb fa sex school;
Parameterization using Reference Class 1
C#2 ON          Est.    S.E.  Est./S.E.  p
  GRADE        0.079   0.029    2.760   0.006
  SE          -0.450   0.055   -8.114   0.000
  SEF          0.033   0.060    0.547   0.585
  FB          -0.362   0.044   -8.166   0.000
  FA          -0.303   0.045   -6.787   0.000
  SEX         -0.042   0.051   -0.821   0.412
  SCHOOL      -0.201   0.091   -2.212   0.027
C#3 ON          Est.    S.E.  Est./S.E.  p
  GRADE        0.079   0.029    2.760   0.006
  SE          -0.450   0.055   -8.114   0.000
  SEF          0.033   0.060    0.547   0.585
  FB          -0.362   0.044   -8.166   0.000
  FA          -0.303   0.045   -6.787   0.000
  SEX         -0.042   0.051   -0.821   0.412
  SCHOOL      -0.201   0.091   -2.212   0.027 ......
These coefficients are held equal across the classes as the default. You need to mention the ON statement in the class-specific parts of the MODEL command to relax this equality.
Vinay K. posted on Monday, September 24, 2012 - 7:12 am
Hello Drs. Muthen,
I ran an LPA model, where latent clusters were extracted from two latent variables (say, depression and anxiety), each of which consist of three item scales.
The three-cluster solution was judged the best according to LMR-LRT test and other fit indices as well as meaningfulness of the cluster profiles.
A journal reviewer asked me to test the conditional independence assumption and to report pairwise residuals. So I inserted Tech 10 in the Mplus output, but it gave me the warning "TECH10 option is only available with categorical or count outcomes. Request for TECH10 is ignored."
So it seems that TECH10 cannot be used with continuous variables. What should I do to get pairwise residuals?
I have not used Mplus a lot. I'd appreciate it if you could help me out on this.
I always appreciate your help. I'm running an LPA. I'd like to examine the effects of the classes (4 classes) on one outcome variable (continuous). Is it possible to analyze the classes as a predictor? I got error messages when I used a MODEL command like "sa (continuous outcome variable) ON c(4);". Would you tell me how to specify the syntax if this is possible? Thanks in advance.
If you relax the equalities of the variances across classes, the model is less stable and it may be more difficult to replicate the best loglikelihood. You can look at profiles of the indicators for each class to assess how much within class variability there is and relax the necessary variances.
John G. Orme posted on Saturday, December 29, 2012 - 9:37 am
Suppose that you are doing a latent class analysis with standardized measures that have arbitrary and different scales (e.g., a standardized measure of marital satisfaction with a potential range from 0 to 100, and a measure of marital conflict with a potential range of 0 to 20). Also, suppose that you allow the means and variances of the indicators to vary across classes. Would there be a problem with transforming the raw score to standard scores in this situation? I wonder because it seems like there are advantages to doing this (e.g., it makes it a lot easier to interpret the profile plot because you can interpret differences between classes and other differences as differences in standard deviation units).
Thanks for any advice you can give me about this. My apologies if I’m missing the obvious here!
I have run the following model and am wondering how to interpret the value of [gpa2] for each class. Are these values simply the mean of gpa2 for each class, while holding sex1 and gpa1 at the level of the sample mean?
Yes, gpa2 is a continuous variable, and I understand that [gpa2] is the code to request a mean. However, what is unclear to me is how the variables in the "c on sex1 gpa1" portion of the model affect the estimated means for each of the latent classes. In my situation, are the means for each class estimated while holding gpa1 and sex1 at the sample mean?
I am forming profiles based on four variables (Immersion, Interest, Usefulness, and Relatedness). These items were assessed immediately following (T2) a technology activity that students participated in. I would like to see if certain pre-intervention variables (T1) predict membership into these profiles, and whether these profiles are related to outcomes that I assessed after the intervention was over (T4). In Mplus, I know that there is the AUXILIARY (e) and AUXILIARY (r) functions, which perform this function. Does it make sense to do this given that the variables predicting latent class membership occur at Time 1 (T1) and the correlates of latent class membership occur at Time 4 (T4)? Or is there some other code that I should enter to account for this difference in time?
Hello! I am currently running an LPA with four indicators of class membership. I am interested in including a control variable to directly predict one of these class indicators. Is it possible to simply include an "ON" statement in the overall model command? If so, what are the implications for interpreting output? For example, the indicator that is being predicted by another variable is presented in the output within each class as an intercept rather than a mean. Thank you!
We are using Mplus to run an LPA to see if different profiles of family engagement exist, and to examine the relations between these profiles and demographic characteristics and child outcomes.
When we looked at the results, all but 2 of the auxiliary variables were not in the expected metric. We then looked at the class membership information that was saved, and also found the variables not in the order that was identified in the output.
Can you help us understand why this happened and how this can be resolved? I tried looking at the forum but couldn't seem to find anything about this.
I have a three class model with distal outcomes. Is there any way in mplus to test the effect of class membership on the distal outcomes while controlling for other variables? In particular, I'm interested in knowing whether the effect of class membership is related to a distal outcome, even when controlling for prior levels of the distal outcome.
You can regress the distal outcome on the control variables. The relationship between class membership and the distal outcome is then the varying of the intercept rather than the mean of the distal outcome across classes.
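A sketch of this setup with hypothetical names, where u is the distal outcome and x1 and x2 are the controls; the slopes are held equal across classes while the intercept of u varies by class:

```
MODEL: %OVERALL%
       u ON x1 x2;   ! control-variable slopes
       %c#1%
       [u];          ! class-specific intercept of the distal outcome
       %c#2%
       [u];
       %c#3%
       [u];
```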
Dear Sir/Madam: I noticed that in LPA the means and variances for the latent classes differ from the means/variances that would result if one computed them solely based on the most likely class a person is in. As you mentioned in response to an earlier posting this is due to the fact that "the posterior probabilities for all classes are used for each person". Now I am wondering which values to report in a paper. Wouldn't it be easier for the purposes of replication to solely focus on the means/variances as implied through the most likely class instead of the posterior probabilities for all classes? Especially since we would like to provide a Bayesian classification function for assigning new cases to the classes? It would be most appreciated if you pointed us into the right direction. Thank you and kind regards, Andreas
I would not use most likely class membership. I would use the model-estimated values. And for prediction, I would use the SVALUES option of the OUTPUT command to obtain an input file that includes the ending values as starting values. I would change the asterisks to @ symbols, which fixes the parameter values, and use that input as a prediction mechanism.
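In sketch form (the parameter values shown are placeholders, not real estimates):

```
! Run 1: the original LPA, requesting ending values as starting values
OUTPUT: SVALUES;

! Run 2: paste the generated MODEL statements into a new input,
! changing each * to @ so the parameters are fixed, e.g.
MODEL:  %OVERALL%
        [c#1@-0.42];
        %c#1%
        [y1@2.35];   ! fixed class-1 mean of y1
        y1@0.51;     ! fixed class-1 variance of y1
```

Running the fixed-parameter input on new data then yields posterior class probabilities for the new cases without re-estimating the model.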
Ting Dai posted on Thursday, April 25, 2013 - 7:56 am
Dear Drs. Muthen,
I have 2 measures, each with 16 items (continuous variables), and there are 3 latent factors for each measure.
If I want to see the classification of individuals with these 32 items, what LPA model should I use?
I thought about a regular LPA with all 32 items (ex7.9), but because there are 2 measures I think perhaps I should do a LPA model with two latent class variables (ex7.14)?
A general but related question is: If the observed indicators are known to be multidimensional (i.e., loaded on multiple factors), should LPA/LCA be used to do classification at all?
You can do many different model variations for this, either letting latent class variables influence the factors or the items directly. In the former case, you can specify one latent class variable for each set of three factors for the two measures.
You have models of this kind shown in UG examples 17, 26, 27.
I am trying to decide whether to use LPA or a cluster analysis with my data, but am having trouble finding resources that may help me decide. Is it, in simplified terms, that a profile analysis is used more when you are looking at several variables that are somehow related (e.g., scales on one measure like the MMPI), whereas in a cluster analysis you can use several different measures and kinds of measures?
My research question involves looking at several risk and protective factors (individual, family, environmental) which I hypothesize will create several distinct classes that differ on their potential for "resilience" (e.g., high on risk factors, few protective factors in one group; low on risk factors, high on protective factors in another). I do not believe that my factors are necessarily related as latent constructs, so am not sure if LPA is the right approach.
Finally, I will have two data points and am interested in looking at how the classes predict to outcomes.
Thank you! Is there a limit to the number of measures that one can use for LPA? And should the measures used to predict your classes be related?
I ask because I am using several individual level variables (self-esteem, IQ), family level variables (parental monitoring, quality of home life), peer level (friendships), and environmental (neighborhood) to predict my classes. Does this limit the usefulness of LPA since I do not expect these variables to necessarily "hang together" as latent constructs?
I have conducted a Latent Profile Analysis and I am considering the possibility of rejecting a model due to conditional probabilities of 1 or 0, as suggested by some researchers. Is this an appropriate way to deal with this problem? Thank you very much.
You should not reject a model due to probabilities of 1 or 0. This can help define the classes.
Reem Saeed posted on Monday, October 14, 2013 - 8:21 am
New to Mplus and LCA, I am trying to come up with latent classes of socioeconomic status in my country. The groups I have (i.e., IDs or areas) number 118. I've done the analysis when the indicators are proportions calculated from the total population for each area, and have also tried it using simple counts, e.g., the number of people who are illiterate as an indicator.
Variable:  Names are ..... (all variables in data);
           usevariables = ..... (variables I have chosen to use, includes the ID or area);
           classes = c(2);
Analysis:  Type = mixture;
           starts = 500 100;
           stiterations = 50;
output: WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA ...
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.185D-16. PROBLEM INVOLVING PARAMETER 61.
ONE OR MORE PARAMETERS WERE FIXED TO AVOID SINGULARITY OF THE INFORMATION MATRIX. THE SINGULARITY IS MOST LIKELY BECAUSE THE MODEL IS NOT IDENTIFIED, OR BECAUSE OF EMPTY CELLS IN THE JOINT
In Chapter 7 of the user manual it is stated that, "In contrast to factor analysis, however, LCA provides classification of individuals." Does this mean that, when I save my SPSS data into a .dat file, I should flip the data so that individuals become the columns and variables become the rows?
Mplus does not have an option for this. You can try changing the data to have variables as rows and observations as columns. Using that for a CFA is Q factor analysis. I'm not sure if having more columns than rows will create an estimation problem.
I have conducted an LPA with a 10-item scale. A reviewer would like us to assess the local independence assumption of our LPA and assess the bivariate residuals. As Tech10 is not available for continuous data, is there a way for me to get this information? If there is not, how should I best assess this assumption? Thanks for your help.
You can request RESIDUAL. Although no test is provided, this might give you an idea for which within-class covariances you want to let free. You can also look at Modindices to guide you in this way. TECH12 is a further possibility. And if you have a high entropy (> 0.8, say), you can divide subjects into classes using most likely class and compute within-class correlations.
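For the last suggestion, most likely class membership and the posterior probabilities can be saved with the SAVEDATA command and the within-class correlations then computed in other software (the file name is hypothetical):

```
SAVEDATA: FILE = cprobs.dat;
          SAVE = CPROBABILITIES;   ! writes the posterior probabilities and
                                   ! most likely class for each person
```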
Thanks very much. That's very helpful. I have requested the RESIDUAL output. Is there a rule of thumb that can be used to determine whether a within-class covariance is too high and/or how many covariances can be freed before the model is invalid?
Dear Linda, I was told that, in order to provide support that the best-fitting model for LCA/LPA is "good enough" to justify the use of MANOVA, I should use the 3-step procedure available in Version 7 of Mplus (Asparouhov & Muthen, 2013), which allows researchers to examine the relation between the latent profile variable and other variables of interest independently while still incorporating the classification uncertainty associated with the latent profile models. As I was searching for what this meant, I came across Web Note 15. However, I am having difficulties running this analysis because 1) the variables used to categorize individuals into latent classes are not categorical, they are continuous, and 2) I was unsure what was meant by auxiliary variables: would that be the "outcome" or "dependent" variable of interest? I think that this advice might be for CFAs or for categorical grouping variables, not necessarily for the data I was using. Could you possibly advise? Thank you
Brianna H posted on Friday, November 15, 2013 - 9:11 am
Hello Drs. Muthen-- In the output of a latent profile analysis with four continuous indicators (all ratio-scored), I received the following warning message for the input instructions: "All variables are uncorrelated with all other variables within class. Check that this is what is intended."
(1) The observed indicators are, in fact, correlated with each other. Does this message refer to the default in Mplus that covariances among latent class indicators are fixed at zero? Could you please advise about what this message means and possible ways to proceed? (2) Is there a citation that you recommend for the use of ratio-scoring in latent profile analyses or SEM generally? Thank you.
(1) It means that within class the variables are specified to be uncorrelated. You can make them correlated by using WITH. The fact that the variables are correlated in the sample is captured by the variables being influenced by the same latent class variable, so a within-class correlation parameter is not necessarily needed.
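In sketch form, with hypothetical indicator names: a WITH statement in %OVERALL% frees a within-class covariance (held equal across classes by default), and repeating it in a class-specific section lets it differ in that class:

```
MODEL: %OVERALL%
       y1 WITH y2;   ! within-class covariance, equal across classes
       %c#1%
       y1 WITH y2;   ! allows a different covariance in class 1
```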
Thank you so much for your prompt reply, Dr. Muthen! I was able to get this to run, but am now struggling with a MANOVA as the 3rd step. I entered all the DVs as the auxiliary variables and it seemed alright, but then I could not find the actual across-group comparison. Please find my code below. Thank you!
VARIABLE: NAMES ARE ID Q911A1 RQ911A2 Q911A3 Q911A4 Q946A1 Q946A2 Q946A3 Q946A4 ZQ911A1 ZQ911A3 ZQ911A4 ZQ946A1 ZQ946A2 ZQ946A3 ZQ946A4;
USEVAR ARE Q911A1 RQ911A2 Q911A3 Q911A4 Q946A1 Q946A2 Q946A3 Q946A4;
IDVARIABLE IS ID;
MISSING ARE BLANK;
CLASSES = c(3);
AUXILIARY = ZQ911A1 ZQ911A3 ZQ911A4 ZQ946A1 ZQ946A2 ZQ946A3 ZQ946A4 (R3STEP);
ANALYSIS: TYPE = MIXTURE;
STARTS = 200 10;
MODEL: %OVERALL%
Q911A1 (1); Q911A3 (2); Q911A4 (3); Q946A1 (4); Q946A2 (5); Q946A3 (6); Q946A4 (7);
Brianna H posted on Sunday, November 17, 2013 - 2:23 pm
Thank you for your replies.
Tan Bee Li posted on Sunday, January 05, 2014 - 9:49 am
1. Will there be an issue if the covariates and the indicators consist of both discrete and continuous data?
2. Distal outcomes are similar to the indicators but are identified based on time. Does MPlus require specific research designs such as repeated-measures to make this analysis?
3. Within-class correlations: Since 5 of my indicators are derived from the same construct, they are correlated (these are facets within a particular construct). Moreover, the executive function skills that make up my 6th through 9th indicators have been found to be associated with some of these five indicators, and may be skills that contribute to their development, and vice versa. Would it then be redundant for me to use LPA? Is there a hierarchical version of LPA, as HLA is for LCA?
4. I have 1 questionnaire (likert) that measures 5 facets that makeup a construct. I am hoping to use the composite score for facet A as indicator 1, composite score for facet b as indicator 2 etc. Is that allowable?
5. What is the minimum sample size Mplus requires for meaningful analysis? Or how is it determined by the no. of indicators?
Based on my description, if LPA is not suitable, could you recommend a better model?
1. No. But remember that covariates should not be put on the CATEGORICAL= list.
5. No general rule, but you want more observations than parameters generally.
I don't want to make a general recommendation because it would require a much deeper understanding of what you do.
Tan Bee Li posted on Tuesday, January 07, 2014 - 10:50 pm
Thanks for your response.
What would be the hierarchical version of LPA?
Also, the EF skills (indicator 6 to 9) have also been proposed to be outcomes instead of predictors (outcome of the 5 facets; predictor 1-5). Is there a meaningful way for me to examine if indicators 6 to 9 are better as predictors vs. distal outcomes?
By hierarchical version for LPA I assume you mean two-level LPA, in which case you want to look at
Henry, K. & Muthén, B. (2010). Multilevel latent class analysis: An application of adolescent smoking typologies with individual and contextual predictors. Structural Equation Modeling, 17, 193-215. Click here to view figures and syntax for all models.
Carrere posted on Wednesday, March 05, 2014 - 9:45 am
I am running an LPA and would like to free the variances across classes. It would be great if you could let me know what code has to be added.
Here is my current program:
VARIABLE: NAMES ARE id icog scog ihea shea ieng seng;
          USEVARIABLES ARE icog scog ihea shea ieng seng;
          CLASSES = group(3);
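Using the variable names from the program above, one way to do this is to mention the variances in each class-specific section of the MODEL command, which relaxes the default across-class equality (a sketch; an ANALYSIS: TYPE = MIXTURE; line is also needed):

```
ANALYSIS: TYPE = MIXTURE;
MODEL:    %OVERALL%
          %group#1%
          icog-seng;   ! frees the six variances in class 1
          %group#2%
          icog-seng;
          %group#3%
          icog-seng;
```

As noted earlier in the thread, freeing the variances makes the model less stable, so the best loglikelihood may be harder to replicate.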
I'm running a LPA model with a four-class solution, and would like to reorder the classes so that the largest class is last. To do this, I'm taking the SVALUES from the first output file and then reordering the class labels so the largest class is last. I'm setting STARTS = 0 and using the OPTSEED option. This process worked fine for reordering the 2-class and 3-class solution, but it isn't working for the 4-class solution.
The problem that I'm having is that this changes the entire model. It doesn't reproduce the first model's H0 likelihood value, changes the entropy and other fit statistics, and has very different class assignment values.
Is there anything you can suggest to rectify this?
That worked to rearrange the classes--now the largest class is last. But, it is still not reproducing the original H0 likelihood value and the original class counts.
FYI--I ran the original model twice, once with Starts = 1000 200; and again with Starts = 2000 500; to ensure that the issue wasn't one of a local maxima. In both cases, I got the same H0 value and it was reproduced dozens of times.
I have a question about interpreting the output of LPA.
I have three continuous indicator variables which were used to estimate a 3-class solution.
Are the class-specific means of the indicator variables interpreted as the actual mean for that class? Or, are the class-specific means of the indicator variables interpreted as the 'mean difference' in values for that indicator between that class and the reference class.
For example, if Indicator1 (i.e., Y1) has an estimated mean of -0.831 (p < .001) for class 1. Is this interpreted as the actual estimated mean for Y1 for class 1 is -0.831, and that this value is statistically significantly different from zero?
Or is this interpreted as, the mean of Y1 for class 1 is -0.831 units less than the mean of Y1 for class-3, and this mean difference is statistically significantly different from zero?
Brianna H posted on Wednesday, April 09, 2014 - 4:58 pm
Hello -- I have been working on a latent profile analysis following the instructions of Asparouhov and Muthen (2012) for using TECH 11 and TECH 14 to determine the optimal number of classes (link below).
In my 5-class solution TECH 14 output, I receive the warning message "WARNING: THE BEST LOGLIKELIHOOD VALUE FOR THE 4 CLASS MODEL FOR THE GENERATED DATA WAS NOT REPLICATED IN 5 OUT OF 5 BOOTSTRAP DRAWS[...]"
Although I have increased the number of random starts in LRTSTARTS (to 0 0 3600 720), I still receive this warning message. Asparouhov and Muthen (2012) state that this warning message should go away if the LRTSTARTS values are increased.
I am wondering if this issue might be occurring because the 5-class solution has two classes with less than 5% of the sample in the class. Could the model be unstable because of the low # of cases in each class?
Although the 4-class solution is easier to interpret (and has only 1 class with <5% of the sample), most of the model fit statistics (e.g. Entropy, BLRT, LL, AIC, BIC, adjusted BIC, and Tech 11 output) favor the 5-class solution.
If you have followed the suggestions of web note 14 and still have problems I would simply go with BIC. I would not use entropy to choose the number of classes. Entropy is a description of the usefulness of the latent class model, not a measure of how well it fits the data.
It is the number of means and variances of the latent class indicators times the number of classes, plus the mean logits of the categorical latent variable; there are (number of classes - 1) mean logits. For example, with 4 indicators, 3 classes, and class-varying variances, that is (4 means + 4 variances) x 3 classes + 2 logits = 26 parameters.
stella posted on Wednesday, April 30, 2014 - 2:34 pm
I'm interested in running LPA using a 17 item continuous scale. I'm wondering if it's appropriate to use LPA with items instead of scales/measures (i.e., create latent profiles based on the items from this scale rather than multiple scales)? Thank you in advance!
In reference to your response to Brianna on Nov 15th, 2013, you wrote that using the with statement would allow for variables to be correlated within class. However, I have included a with statement and continue to get that warning. Am I doing something wrong? My input is pasted below.
Hello, I am using latent profile analysis to identify the number of latent classes depending on students’ answers on eight scales. As I theoretically expect a solution with four classes, I compare the fit of five models from a two- to a six-class solution. I do this for each of the three time points I’ve got data for. For my second time point, Mplus has a problem with the three-class solution:
“THE ESTIMATED COVARIANCE MATRIX FOR THE Y VARIABLES IN CLASS 1 COULD NOT BE INVERTED. PROBLEM INVOLVING VARIABLE DAVE_OP. COMPUTATION COULD NOT BE COMPLETED IN ITERATION 22. CHANGE YOUR MODEL AND/OR STARTING VALUES. THIS MAY BE DUE TO A ZERO ESTIMATED VARIANCE, THAT IS, NO WITHIN-CLASS VARIATION FOR THE VARIABLE. THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES. THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.”
I use STARTS = 5000 100, and there is no problem with other numbers of classes at this time point or with the three-class solution at the other two time points. Do you have an idea how to solve this problem? Does this finding mean that a three-class solution does not fit the data? Thanks in advance!
Dear Drs. Muthen, I have conducted latent profile analyses with 2 age groups (children and adults). Among children I get a unique profile that does not emerge in the adult group. I have received feedback that this unique profile may not be due to age but could be due to SES differences between the age groups. (Although my child profiles do not differ based on SES.) The suggestion is to residualize the outcomes after controlling for SES and rerun the analyses. I am wondering how to approach this, as I haven't located anything through a search of the Mplus board. Further, does this appear to be a solid solution in your view? Many thanks.
Residualizing sounds complicated in this setting since you can let SES influence both the latent class variable and its indicators. Why not simply do separate analyses for different SES categories?
Chrissy posted on Tuesday, September 09, 2014 - 6:43 am
I have a question in relation to Latent Profile Analysis (LPA), multi-collinearity and compositional natured data. I am trying to identify weekly patterns of physical behaviour using % of daily time spent in sedentary behaviour, light activity and moderate-to-vigorous activity across 7 days using LPA. All variables (21 indicator variables) are expressed as % of daily time spent in activity and thus together equal 100% for each day. I was wondering whether multi-collinearity and the compositional nature of the data are an issue in LPA? Thank you in advance.
Hello, I have searched for this information but not found it, so if this is an overlap I apologize. I am interested in using LPA to examine a number of emotion variables. However, I am not sure if all those variables are relevant. Thus, in the spirit of Raftery and Dean (2006; https://www.stat.washington.edu/raftery/Research/PDF/dean2006.pdf) and Dean and Raftery (2010), I am hoping to use variable selection techniques to determine which of the emotions should be included in the clustering procedure. However, I am not sure how to do this in Mplus. Has syntax been developed/published for this procedure? Dean and Raftery note this can be done during the clustering procedure itself; is this possible in Mplus? Thanks!
I don't think Mplus can do this. But in the upcoming version 7.3 we have added a simple descriptive device to see which latent class indicators are particularly useful for distinguishing among the classes. It is called univariate entropy.
Eric posted on Saturday, September 13, 2014 - 12:03 am
Yellowdog posted on Friday, September 19, 2014 - 3:31 am
I have derived 4 latent classes that I would like to use as an outcome in a mediated path model. Furthermore, I would like to compare direct and indirect effects between males and females using the GROUPING option. Can I use classes that were derived from the total sample (with sex used as a covariate for class estimation) for a MSA, or do I have to estimate classes separately for both groups? Furthermore, is it recommended to use a two-step approach, or better all in one (LPA & mediation analyses)?
Are you saying that in an X->M->Y mediation model your Y is a nominal variable based on 4 latent classes? A nominal Y requires special mediation formulas, but can be done.
I would first do an invariance study of gender for the latent class part of the model. Gender can affect c only or also the c indicators directly. In the latter case you don't have measurement invariance. If you have measurement invariance you can use classes derived from the total sample.
Yellowdog posted on Monday, September 22, 2014 - 1:38 am
Thank you for your reply.
1. Does "a special mediation formula" mean creating NEW parameters (indirect effects = products of single paths) using MODEL CONSTRAINT? 2. Could you please give an example of how to test for measurement invariance of c and its indicators.
I am conducting an LPA using 8 continuous variables and have arrived at a 5 class solution. My goal is to identify which class subsequent participants are most likely to belong to based on his or her own responses to these measures.
Is there a way to calculate likelihood for new participants to belong to the previously identified latent classes? If there are multiple solutions, is there a way to do this without re-running all participants each time?
You can do this by fixing all model parameters and use data for say just one new subject, running this to get the estimated posterior probabilities for a subject for all classes. We discuss examples of this in our Topic 6 handout and video.
You do this using SVALUES in your first run, then changing * to @ and running your second run with Starts=0 to get the posterior probabilities by asking for Save=cprob in the Savedata command.
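Putting those two steps together, the pair of runs might look like the following sketch. The indicator names y1-y8, the 5-class setting, the file names, and the fixed parameter values shown are all placeholders, not output from any actual analysis:

```
! Run 1: estimate the LPA on the original sample and
! request the final estimates in starting-value form
VARIABLE:  NAMES = y1-y8;  CLASSES = c(5);
ANALYSIS:  TYPE = MIXTURE;
OUTPUT:    SVALUES;

! Run 2 (a separate input file): paste the SVALUES statements into
! MODEL, change every * to @ so all parameters are fixed, and
! score the new subject's data file
DATA:      FILE = newsubject.dat;
VARIABLE:  NAMES = y1-y8;  CLASSES = c(5);
ANALYSIS:  TYPE = MIXTURE;  STARTS = 0;
MODEL:     %OVERALL%
           %c#1%
           [y1@0.52 y2@1.40];   ! etc. -- hypothetical fixed values
SAVEDATA:  FILE = newprobs.dat;  SAVE = CPROBABILITIES;
```

The saved file from run 2 then contains the posterior class probabilities for the new subject under the previously estimated model.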
Sam Courtney posted on Tuesday, September 30, 2014 - 10:44 am
Thank you for getting back to me so quickly. I will go through the examples and videos, but this sounds like a very promising solution. Thank you for the help!
In your older postings I read that you recommend using raw data in an LPA. I tried raw data as well as standardized data, and it seems to make a difference because the fit changes. Do you know of any reference that explains the consequences of using (un)standardized data in LPA?
I do not only have different information criteria but also different p-values in the Vuong-Lo-Mendell-Rubin LRT and in the Lo-Mendell-Rubin adj LRT. With a change from standardized data to raw data, the p-value decreases. So with standardized data the tests indicate a 2-class-solution whereas with raw data they rather indicate 3 classes.
Sorry, I can't answer your question clearly. In both cases the ICs decrease with every class added. According to this I should probably adopt a model with 6 classes or more, which seems neither theoretically plausible nor parsimonious (I would expect 3 or 4 classes). I tried to find out, whether every drop in (B)IC, as small as it may be, is necessarily meaningful or whether there are rules of thumb/formulas, which indicate a negligible drop. Unfortunately, I am not yet satisfied with what I found. Could you perhaps recommend further reading?
A follow-up question to your answer: Would you generally prefer the BIC over the above mentioned LRTs? Or are there certain conditions under which the LRTs are not trustworthy?
When BIC keeps on decreasing with increasing number of classes there is often another kind of model that is more suited to the data. It could be something minor like residual covariances.
I usually use BIC together with looking at how different the solutions are (in terms of mean profiles) for k and k+1 class models that have close BICs. Sometimes the k+1th class is just a minor variation on one of the k classes.
Thank you for the advice. I will have a look at the different solutions in the way you suggest. Concerning your first point: which criterion do you use in order to decide whether an additional class is necessary or whether some residual covariances should be permitted in some classes?
S Elaine posted on Thursday, October 16, 2014 - 9:35 pm
I am conducting LPA to examine profiles of socioemotional functioning among children in my study.
SAVEDATA: FILE IS sociotest.dat; SAVE = CPROBABILITIES;
My output file includes:

WARNING in VARIABLE command
Note that only the first 8 characters of variable names are used in the output. Shorten variable names to avoid any confusion.

*** ERROR
The number of observations is 0. Check your data and format statement.
Data file: /Users/shelbyelainemcdonald/Desktop/diss/socio.dat

ERROR
Invalid symbol in data file: "Pnumb" at record #: 1, field #: 1
If I am conducting an LPA with covariates in the same step ("c on covariate" rather than the 3-step approach), does this increase the probability that the covariate will be related to latent classes, thus increasing type I errors?
Dear Dr. Muthen, I am running an LPA with four variables to determine the number of profiles. I get this error: "One or more variables have a variance greater than the maximum allowed of 1000000. Check your data and format statement or rescale the variable(s) using the DEFINE command." Do you have any suggestions? Thank you!
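The error message itself points at the fix: rescale the offending variable with the DEFINE command before the analysis. A minimal sketch, where the variable name y1 and the divisor are placeholders to adapt to your own data:

```
DEFINE:  y1 = y1/1000;   ! dividing by 1000 shrinks the variance
                         ! by a factor of 1,000,000
```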
Dear Dr. Muthen, I am carrying out a Latent Profile Analysis using 10 dependent variables: 6 continuous, 1 categorical, and 3 count (and 2 covariates). I am using archival data, and two of my continuous variables are actually subscale means, with scores ranging from 0 to 4 (each has approximately 25-30% zeros, but are comprised mainly of non-integers; e.g., 2.83). Although I believe these two variables are technically count variables, I decided to treat them as continuous, as count variables cannot have non-integer values. When run this way, a 3-class solution is generated that is meaningful and consistent with theory.
I do have one concern--the distributions of these two variables (both measuring mental health symptoms) are positively skewed (as I would have anticipated them to be in my population). I am using the MLR estimator, which I understand is robust to non-normality. However, when discussing my results with a colleague, I was told that given the degree of skewness, treating them as count variables might be more appropriate. So I tried re-coding these two variables into integers (using the CUT option) to be able to treat them as count variables. However, when run this way a four-class solution fits best, and my classes are no longer consistent with theory and are more difficult to interpret.
Do you have any advice as to the most statistically valid approach here? Thanks, Leigh
Treating them as continuous was my first instinct too, but I want to be sure I am not simply choosing the easiest solution to interpret.
I have one more question, just to make sure I am on the right track. The variables I am having trouble with are measures of anxiety and depression (equal to the mean number of symptoms present for each disorder, which is likely why my colleague suggested they be treated as count variables). If I treat these variables as continuous, is it possible that the tail end of the skewed distribution is driving the formation of one of the classes? I only ask because when I treat them as count variables, the mental health variables do little to distinguish the classes, but when used as continuous variables, one class has much higher mean scores on these mental health variables relative to the other 2 classes.
Hello again Drs. Muthen, As per your earlier suggestion, I have tried treating my skewed continuous mental health variables as censored (from below, zero-inflated). My results appear to be similar to when I treat them as continuous-normal, in terms of class differences/optimal number of classes.
However, I am confused as to how to interpret my output. When run as censored variables, "one or more logit scale parameters approached and were set at the extreme values. Extreme values are -15.000 and 15.000."
1. What do the means represent in the output for censored variables? Are they log-odds, as they would be with zero-inflated count variables?
2. I am confused as to how I would interpret negative means, given that one can't have a score less than zero on the anxiety/depression measures I am using. For example, how would I interpret this for class 1?
Means
Anx#1    -15.000
Anx       -0.168
Thank you for any clarification you can provide. Or if you know of any relevant readings that might help, that would be great too.
1. When you do censored-inflated you have a binary latent variable indicating if the subject is in the zero class or not (see our UG or zero-inflated Poisson literature). A logit of -15 that your output shows indicates zero probability of being in the zero class, which means that you don't need the inflation part but can specify the variable simply as censored.
2. Censored modeling assumes an underlying continuous, unlimited latent response variable and the mean refers to the mean of that variable so it can be less than the lower censoring point. See the censored-normal literature.
A good book for both topics is by Scott Long (Sage "White" series).
Thank you for your help thus far and for your quick replies! Treating my variables as censored seems to have done the trick. But I do have another follow-up question: I read in the UG that in order to model the covariance between two censored variables, you need to use special modeling (e.g., use a latent variable that influences both variables). Can you elaborate a bit on how this might be done, or perhaps direct me to an example in the UG?
Use a factor that influences both variables, where y1 and y2 are the censored variables. The covariance value is found in the factor loading for y2.
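A minimal sketch of this factor approach, assuming two variables y1 and y2 censored from below (the names are placeholders; in a mixture model the statements would go in %OVERALL%):

```
VARIABLE:  CENSORED = y1 (b) y2 (b);
MODEL:     f BY y1@1      ! loading for y1 fixed at 1
                y2;       ! free loading: with the factor variance
                          ! fixed at 1, this estimates cov(y1,y2)
           f@1;           ! fix the factor variance at 1
```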
HwaYoung Lee posted on Thursday, January 22, 2015 - 11:48 am
Dear Dr. Muthen, I ran several LPAs using four indicators, one covariate, and one distal outcome. The output for the 2-class model showed only one best loglikelihood value even though I increased the random start values and number of optimizations (STARTS = 20000 8000;). When I used OPTSEED to do the LMR and BLRT tests (2 classes vs. 1 class), LMR was .06, but BLRT was significant (<.00001).

Then I ran a 3-class model. For the 3-class model, LMR values varied: when I used several OPTSEED numbers with different best loglikelihood values, the p-value for LMR (2 classes vs. 3 classes) was significant for one best loglikelihood value but not significant for two others. AIC, BIC, and aBIC as well as BLRT consistently decreased as the number of latent classes increased. I ran a 6-class model; AIC, BIC, and aBIC values still decreased, but BLRT was not trustworthy due to local maxima even though LRTBOOTSTRAP = 200 was used.

My questions are: 1) If the output provided only one best loglikelihood value for the 2-class model, is there a problem with my code or model? 2) The results of LMR and BLRT were not consistent, and AIC, BIC, and aBIC consistently decreased as the number of latent classes increased. How can I decide which model is optimal? 3) Is there any way to evaluate whether this population is homogeneous or not? (The LMR comparing the 1-class vs. 2-class model was not significant.) Any help would be appreciated.
1) With only one best logL the model is typically not very stable; such situations should be avoided. Modify the model.
2)If BIC keeps decreasing when increasing the number of classes that may indicate that the model should be modified. Perhaps you need to add some WITH statements to allow for within-class correlations among your indicators. A k-class model with some WITH statement may have a much better BIC than a k+1-class model without any WITH statements.
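Such WITH statements can be sketched like this. The indicator names y1 and y2 and the 3-class setting are placeholders:

```
VARIABLE:  CLASSES = c(3);
ANALYSIS:  TYPE = MIXTURE;
MODEL:     %OVERALL%
           y1 WITH y2;   ! within-class covariance, held equal across classes
           %c#1%
           y1 WITH y2;   ! mentioning it in a class lets it differ there
```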
I checked the correlation coefficients among indicators such as vitamin supplement use, number of drinks, and number of cigarettes before running any analyses. These correlation coefficients were very low. For example, the coefficients between one indicator and the others were .020, -.042, .012, and .029, with a sample size above 4,000. Can I use LPA even though the correlation coefficients among indicators are low?
Yes, you need non-zero correlations among your indicators for there to be more than 1 latent class.
Perhaps you have strong floor effects for your outcomes, causing small sizes of regular Pearson product-moment correlations.
HwaYoung Lee posted on Wednesday, January 28, 2015 - 7:12 am
Thank you for your advice. Well, I tried adding WITH statements to the K-1 class model and compared the K-1 class model (with WITH statements) to the K class model. However, the differences in BIC values (as well as AIC and aBIC) remained large across numbers of latent classes, so it didn't work even though I added WITH statements to the K-1 class model. By the way, the variables' ranges are large; for example: 1) 0 to 277; 2) 1 to 130; 3) 1 to 140; 4) 1 to 34. Does this feature affect the number of classes? (The sample size is above 4,000.) As I said above, the BIC (as well as BLRT, AIC, etc.) keeps decreasing (I even ran a 20-class model). Any advice would be appreciated.
HwaYoung Lee posted on Wednesday, January 28, 2015 - 11:47 am
One more question. BIC and the other fit indices indicate that a model with a larger number of latent classes (say, a 9-class model) is better, but only a few people belong to a couple of those classes. Can I just collapse some classes into one? That would mean choosing a smaller number of latent classes (say, a 4-class model). Is that OK?
Perhaps you want to investigate a Factor Mixture Model (see the UG for examples) to solve the problem of no BIC minimum.
You can also explore how the interpretation of the classes changes from say K=4 to K=9. Perhaps the classes found for K=4 are still there for K=9 and perhaps the extra 5 classes are not of substantive interest.
Tessa posted on Thursday, February 05, 2015 - 11:57 am
Dear Dr. Muthen and Dr. Muthen,
I am working on a Latent Profile Analysis making use of 4 continuous indicators that were multiply imputed (n = 40 data sets) using NORM prior to reading them into MPlus.
I would like to compare the fit between models with differing numbers of classes; however, when using the TECH11 command to request model fit statistics, I receive the following error statement: TECH11 option is not available with DATA IMPUTATION or TYPE=IMPUTATION in the DATA command. Request for TECH11 is ignored.
Could you please advise on whether there is an alternate command to request model fit statistics (LMRT, BLRT) in this case? I have specified Type is Imputation under the DATA command and Type = Mixture under the VARIABLE command. Thank-you!
If you think in terms of estimating the number of classes as a parameter you can estimate the number of classes for each imputed set and then combine the information as for other parameters. Hopefully they all point to the same number of classes. If not use the mode for the number of classes.
Carey posted on Saturday, February 07, 2015 - 12:53 pm
I have performed an LPA and ended up with 4 classes as the best solution. I am now interested in using the classes as IVs in a moderation analysis with a continuous moderator and a continuous dependent variable. What is the best way to do this?
I am a new Mplus user and would like to use the Mixture add-on (latent profile analysis) to model heterogeneity in one continuous variable. All the examples I have found have typically been multivariate, with multiple class indicators.
Question: do you see any caveats or objections to using LPA with one class indicator, and if so, would you recommend another analysis for my purpose within Mplus?
This is possible. With only one indicator, it can be difficult to know if you are modeling the non-normality.
Katrin Mägi posted on Thursday, February 26, 2015 - 5:47 am
Dear Dr. Muthen, I have run a five-class LPA model with distal outcomes (using the manual 3-step approach). I'm interested in knowing whether class membership is related to a distal outcome even when controlling for prior levels of the distal outcome. To do this I've regressed the distal outcome on the control variables. As I understand it, the relationship between class membership and the distal outcome should then show up as variation in the intercept, rather than the mean, of the distal outcome across classes. What puzzles me is that the diff test results for the intercepts across classes (using MODEL CONSTRAINT) differ depending on whether I standardize my control variables (with DEFINE: STANDARDIZE) or not. I am unsure whether I should use standardized or unstandardized control variable scores, and how one or the other approach affects the interpretation of the differences in the intercepts of my distal outcomes across classes. Thank you!
S Elaine posted on Monday, March 09, 2015 - 12:59 pm
I'm using 6 indicators of children's socioemotional functioning in an LPA. Relationships among the majority of the indicators are, as expected, low to moderate. However, there are 3 pairs with the following r values: .690** .714** .652**
I'm under the impression that correlations among the indicators are expected for LPA, and did not use WITH statements for my LPA. However, I see comments in articles that mislead me, such as this: "We note that (a) vulnerability appraisals, (b) depression, (c) injuries at incident, (d) physical health functioning, and (e) social relations—both positive and negative—were primarily selected for inclusion in the LPA analysis based for the substantive and theoretical reasons discussed early. In addition and prior to the LPA analysis, we examined the bivariate correlations among these variables as well as between each of these variables and the IPV exposure variables. As anticipated, these analyses showed statistically significant relationships among the variables. Nonetheless, no statistical relationship was so strong (i.e., .7 and higher) to indicate that we had measured similar constructs with these various measures." Nurius & Macy (2010)
Should I have included WITH statements given that I have pairs of indicators that are strongly correlated?
S Elaine posted on Monday, March 09, 2015 - 1:00 pm
As a follow up to my previous post, the resulting model was characterized as follows:
Fit stats for the 3-profile model I selected are: Log-likelihood (-6291.148); AIC (12634.297); BIC (12729.803); Adjusted BIC (12647.352); Entropy (.92); LMRT|BLRT p-value (03|0); No. of classes with n < 5% of study sample (0)
The lowest Average Latent Class Probability was .95
An LPA with at least 2 classes implies that the items are correlated; they are all influenced by the same latent class variable. If some items correlate more that could either signal the need for one more class or the need for WITH statements among some of the items in the 2-class model.
S Elaine posted on Monday, March 09, 2015 - 6:48 pm
Thank you. I should have stated that the correlations I listed were the coefficients for the sample as a whole. Am I correct in interpreting you to mean that if some items correlate more within a class, then WITH statements among some of the items are needed? If so, I do not have this issue for correlations among the items within each of the three classes; they are zero or very weak.
Danli Li posted on Thursday, March 12, 2015 - 6:45 am
I was wondering if it is possible to carry out LPA on a sample size of less than 100, with 14 latent class indicators that are expected to form 3 or 4 classes.
Hello, We have a 3-class solution for an anxious-depression construct (low, moderate, and high). We used the high class as the reference to run our logistic model to determine the association with ethnic groups. The reviewer for a paper would like us to use the low class as the reference category.
This is part of the syntax that we use (excluding USEVARIABLES):

CLUSTER = psu_id;
STRAT = strat;
WEIGHT = weight_final_norm_overall;
MISSING ARE all (999);
IDVARIABLE IS ID;
CLASSES = C(3);
ANALYSIS: TYPE = COMPLEX MIXTURE;
STARTS = 500 50;
K-1STARTS = 20 5;
MODEL: %OVERALL%
C#1 C#2 ON mex_do mex_ca mex_cu mex_pu mex_sa mex_oth
income_c4 AGE LANG_PREF GENDERNUM EDUCATION_C3;
OUTPUT: SAMPSTAT TECH11;
Where C1=low C2=moderate C3=high
I wonder if we need to change it to C#3 C#2 ON, and whether that will give C1 (low) as the reference category?
You don't show your USEV list so I don't know what your latent class indicators are. Let's call the latent class indicators y1-y2. To change the class order you want to use the SVALUES option of the Output command to get statements with final estimates in the output and then use those statements to give starting values in a new run with STARTS = 0, where you have changed the class order. For instance, say that your first run has SVALUES
class 1: [y1*5 y2*10]
class 2: [y1*1 y2*5]
If you want class 1 last, your starting values for the second run has
class 1: [y1*1 y2*5]
class 2: [y1*5 y2*10]
Make sure that the two runs have the same loglikelihood value. If not, you have to give starting values for more parameters.
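With the hypothetical y1/y2 values from the example above, the second run would look roughly like this:

```
ANALYSIS:  TYPE = MIXTURE;  STARTS = 0;
MODEL:     %OVERALL%
           %c#1%
           [y1*1 y2*5];    ! starting values taken from the old class 2
           %c#2%
           [y1*5 y2*10];   ! starting values taken from the old class 1
OUTPUT:    SVALUES;
```

Because STARTS = 0 uses these starting values directly, the classes come out in the new order, and the loglikelihood should match the first run exactly.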
I have a couple of questions about running a latent profile analysis:
(1) Is it possible to do the DU3step procedure when using multiply imputed data?
(2) Is it possible to include control variables when running the DU3step procedure? That is, can I control for earlier levels of a behavior when exploring whether there are differences in the distal outcomes across the different classes?
(1) Is it possible to use the manual DU3step approach if I am using ALGORITHM = INTEGRATION, INTEGRATION = MONTECARLO to run the latent profile analyses with covariates that have missing data? (rather than impute the data)
(2) I realized that when I run the LPA with imputed data TECH11 AND TECH14 are not available, are there recommendations for choosing the model with the best number of classes in these situations?
(2) I would run/use TECH11/TECH14/BIC for each imputed data set separately and use the number of classes obtained most frequently across the imputed data sets. If the amount of imputed data is not substantial, I doubt there will be any differences in class enumeration across the imputed data sets.
Hello - new Mplus user. I have performed a latent profile analysis with a set of 8 continuous variables for about 370 observations, resulting in a 9-class best-fit solution. Everything appears to be working great; however, I am trying to get to the bottom of local independence for our particular case. For a similar question above, Bengt recommended calling for RESIDUAL, MODINDICES, and maybe TECH12, as well as looking at in-class correlations if entropy was >.8, which ours is.
Taken together, I am unclear on what, if anything, may be wrong, and how best to assess any issues. Regarding the last method you mention in particular: if I were to create correlation matrices for the 8 variables within each of the 9 classes separately, the model assumptions are such that within-class correlations should be minimal or non-existent, correct? What if we find some correlations here that are moderate or strong? Is this method the best indicator of local dependence? Or would relying on RESIDUAL and MODINDICES output be better? I am unclear on the best way to test for local dependence in the LPA case without TECH10.
Q3-Q4. That's a methods research question that I don't think is resolved.
You can also try Factor Mixture Modeling, where you introduce a factor and try different number of classes. We have UG examples of that. Large factor loadings can indicate which pairs of variables are in need of a WITH statement (instead of the factor).
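A bare-bones factor mixture sketch along these lines, where the variable names y1-y8, the single factor, and the class count are placeholders (the UG mixture examples show fuller specifications):

```
VARIABLE:  NAMES = y1-y8;  CLASSES = c(3);
ANALYSIS:  TYPE = MIXTURE;
MODEL:     %OVERALL%
           f BY y1-y8;   ! within-class factor absorbs local dependence
                         ! among the indicators
```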
Hello Dr. Muthen, I ran an LPA using six indicators (physical activity, individual smoking, second-hand smoke, vitamin use, restaurant meals, and alcohol use). When adding one more class, the fit indices kept decreasing (never resolved). One issue is that the two-class model had only one best log L value; increasing the starting values didn't help to resolve this. Among the indicators, individual smoking and second-hand smoke were correlated (r = .5), so a residual covariance between the two indicators was added to the model. But the model with the residual covariance still had only one best log L. (I don't want to use a Factor Mixture model, because the indicators, except for the two smoking variables, were not related; the correlation coefficients were small.) However, when adding one more class (e.g., the three-class model), a couple of best log L values were replicated.

So here are my questions: 1) Do I just ignore the two-class model and go to the three-class model, or four? 2) When I compared the model without the residual covariance and the model with it, class membership was substantially different (1st model: c1 = 2267, c2 = 801; 2nd model: c1 = 3033, c2 = 35). Also, entropy substantially improved (.858 to .999). But adding one more class (the c-class model) had better fit than the c-1 model with the residual covariance. Do I choose the model with the residual covariance? Any suggestions would be greatly appreciated.
Hi, I have data from a questionnaire where approximately 7000 women were asked how they would describe themselves (body shapes) at different ages. Each question had 9 pictures of body shapes varying from very thin to very thick, and they were asked to choose the one they thought resembled themselves at different ages (8 yrs, menarche, 30 y, and "now"). So there are 4 variables with 9 categories and some missing (at random). I did a simple LCA model:
VARIABLE: NAMES ARE id2 wom8y womarche wom30y wom45y womow wompm;
USEV = wom8y womarche wom30y womow;
IDVARIABLE IS id2;
CLASSES = c (6);
CATEGORICAL = wom8y womarche wom30y womow;
MISSING ARE ALL (-9999);
ANALYSIS: TYPE = MIXTURE;
ALGORITHM = INTEGRATION;
STARTS = 1000 100;
SAVEDATA: FILE IS bodywomenLCA.dat;
SAVE = CPROB;

I did find a good fit for 6 classes, but I want to take into account the age effect, so that wom8y comes before womarche and womow is always the last "observation". I tried adding:

Model: %OVERALL%
i s | wom8y@1 womarche@2 wom30y@3 womow@4;

But I get the following error message: "This analysis is only available with the Mixture or Combination Add-On." Am I using a wrong approach here?
Thank you. I found that there were some issues with the program I had on my laptop. But when I ran the syntax on my stationary computer I got the following error:

*** ERROR in MODEL command
The categorical variables in the growth model do not have the same number of categories. Use the CATEGORICAL option to allow the number of categories to differ for maximum likelihood estimation. Problem with: I S

One of my ordinal variables had categories scoring from 1-8, and the others 1-9, so I added the following command: "CATEGORICAL = wom8y(1-8)| womarche(1-9)| wom30y(1-9)| womow(1-9);". But I still get the error saying that the categorical variables in the growth model do not have the same number of categories.
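The error message refers to a CATEGORICAL option setting for maximum likelihood that, as I read the UG, is written with an asterisk after the variable list rather than with per-variable category ranges. A sketch with the variables above (worth checking against your UG version):

```
CATEGORICAL = wom8y womarche wom30y womow (*);
```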
I just performed my first latent profile analysis with 5 indicators. I have 2 questions.
1)My estimated within-class means are outside the range (i.e., >5 on a 5-point scale). What could this indicate?
2) Ultimately I would like to see whether my latent profile variable (c) predicts a distal outcome (y) above and beyond the latent class indicators (u1-u5), because the indicators are typically associated with y. Can I simply perform the appropriate 3-step approach and add y ON u1-u5 as well, or does it require something else?
1) Please send output and license number to support.
2) A 1-step model is not identified when both the latent class variable and all its indicators predict a distal, so I would not trust a 3-step approach that attempts this. The indicators would be associated with the distal if the latent class variable predicts the distal (they have a common cause), so that is not an argument for including both types of predictors of the distal.
Tony Bonadio posted on Wednesday, September 16, 2015 - 4:01 pm
Hello Drs. Muthen,
We are running a latent profile analysis using the 8 syndrome subscales from the Child Behavior Checklist (CBCL; parent report measure) and the 8 syndrome subscales from the Youth Self Report (YSR; parallel youth report) as indicators. We are interested in incorporating multiple informants to identify distinct subgroups of youth and explore patterns of reporter discrepancy. We feel that using all 16 continuous indicators provides information regarding patterns of symptom severity that could be lost by using standardized difference scores. However, my concern is that we are violating the assumption of local independence, as parent and youth are both reporting on the same individual. Although these reports are not highly correlated, theoretically they are not independent of each other.
So my questions are: 1) Do I need to be concerned about this violation?
2) If so, would allowing the parameters between corresponding indicators (e.g., CBCL Withdrawn Depressed and YSR Withdrawn/Depressed) to correlate WITHIN class account for the shared variance attributed to the multiple reporters for each child?
Yes, you should be concerned (a bit). If you can include the 16 variables for both parent and youth, a 32-variable analysis would account for the dependent observations. This is called a wide approach.
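As a sketch of how the second question above (within-class correlations between corresponding indicators) might be specified, assuming hypothetical subscale names cbcl1-cbcl8 and ysr1-ysr8 for the parent- and youth-reported indicators; the PWITH statement pairs each CBCL subscale with its YSR counterpart so they can covary within class:

```
VARIABLE:
  NAMES = cbcl1-cbcl8 ysr1-ysr8;   ! hypothetical subscale names
  CLASSES = c(3);
ANALYSIS:
  TYPE = MIXTURE;
MODEL:
  %OVERALL%
  ! let each CBCL subscale covary with its YSR counterpart
  ! within class, absorbing same-child reporter dependence
  cbcl1-cbcl8 PWITH ysr1-ysr8;
```

Placing the PWITH statement in %OVERALL% holds these covariances equal across classes; moving it into class-specific statements would let them differ by class.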
Respected Prof. Muthen, I am running a simple LPA with one continuous variable and two latent classes. I tried two models, one with the variance of the variable held equal across classes (the default setting) and the other with the variance relaxed to be unequal. The Tech 11 outputs for the two models are completely different. Web Note 14 is greatly useful in deciding the number of classes; however, the variables are categorical in that web note. Please advise on how to decide whether the variance should be set equal or unequal, and on the number of classes. I am just running a simple test model before getting to my project, which has 3 continuous variables.
--- Equal variance ---
LO-MENDELL-RUBIN ADJUSTED LRT TEST
Value: 28.594, P-Value: 0.0000
--- Unequal variance ---
LO-MENDELL-RUBIN ADJUSTED LRT TEST
Value: 29.511, P-Value: 0.5927
Thanks a lot. BIC for equal variance is 61.667; for unequal variance, 66.894. So I could potentially choose the equal one? Could you advise on interpretability please, as it is subjective? Should I be checking whether the p-value of the variance in the unequal model is statistically significant? Right now it is nonsignificant. In sum, given the lower BIC and the nonsignificant result, shall I go for the equal-variance (default) model?
The class counts and proportions are almost equal:
--- Equal variance ---
1: 101 (0.50249)
2: 100 (0.49751)
--- Unequal variance ---
1: 91 (0.45274)
2: 110 (0.54726)
---> May I also ask why the default setting in Mplus is equal variances, given that this is a restricted version of the more general unequal-variance model?
Hi, I have obtained a three-profile solution using 6 continuous indicators. I now want to examine how different continuous, ordinal, and binary covariates are related to profile membership. Theoretically, some of these covariates are more likely to be predictors and others more likely to be outcomes. However, all indicators and covariates have been measured at the same time.
For continuous outcomes, I was planning to use Auxiliary DCON. For binary predictors, R3STEP. However, for continuous and ordinal predictors, I'm not sure. Is it appropriate to use R3STEP? Or should I treat them as outcomes instead and use DCON or DCAT?
Sorry if this might be obvious, but I'm relatively new to this type of analysis.
The AUXILIARY option works differently for covariates versus distal outcomes. For covariates, the coefficients are partial regression coefficients. For distal outcomes, each one is done independently of the others. For covariates, you should use R3STEP. For a continuous distal, BCH is recommended. For categorical distals, DCAT is recommended.
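As a sketch, the three recommendations above can sit in a single AUXILIARY statement; x1, y1, and y2 are hypothetical variable names standing in for a covariate, a continuous distal, and a categorical distal:

```
VARIABLE:
  AUXILIARY = x1 (R3STEP)   ! covariate predicting class membership
              y1 (BCH)      ! continuous distal outcome
              y2 (DCAT);    ! categorical distal outcome
```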
I have three more sub-questions: a) Is it ok to use ordinal or continuous predictors (covariates) with R3STEP (auxiliary)? b) For an ordinal distal, can I use DCAT? c) The variables that I consider to be "outcomes" have been measured at the same time as the profile indicator variables, so is it ok to consider them "distal outcomes" (the study is cross-sectional)?
Thank you so much for your clarifications. This is so helpful!
Hi, I'm still having trouble deciding if I should treat one of the covariates as an outcome or as a predictor of latent class membership.
Is this just a theoretical matter? Basically, I want to know whether the variable is associated with latent class membership. I feel like considering the variable as a predictor or as an outcome will give a similar answer (although with different statistics) to this question, won't it?
Thanks again! Your answers are very helpful.
Jon Heron posted on Wednesday, October 21, 2015 - 8:51 am
Your approach should ideally reflect your thinking about causality.
It's true that in some cases - e.g. a logistic model with a single binary covariate - you can swap them round with no effect; however, more generally this is not the case.
Not only does your chosen approach potentially lead to a different interpretation, you may also be making different assumptions. Zuzana Bakk discusses this in her paper on the LTB method (which turns distal outcomes into predictors of class membership).
ywang posted on Thursday, November 12, 2015 - 8:25 am
I have a quick question about class counts in the LCA outputs. The output provides two sets of class counts: one is FINAL CLASS COUNTS AND PROPORTIONS and the other is class counts based on MOST LIKELY LATENT CLASS MEMBERSHIP. I read the previous posts and it was recommended to report the former. However, when I checked the output file with SAVE = CPROB, it gave me the class counts based on the latter. Please clarify. Thanks.
Laura Healy posted on Thursday, December 03, 2015 - 6:50 am
I have used LPA within some of my research to create profiles of goal motivation in student athletes. I have also used the AUXILIARY function to analyse between-profile differences in a range of outcomes. Having received some feedback from a peer reviewer, I was wondering if it is possible to generate effect sizes for these analyses?
I think there is a paper by Vermunt and Magidson where they compare k-means clustering and LCA/LPA and they might have mentioned this in that context. It's a chapter in the Applied Latent Class Analysis book by Hagenaars and McCutcheon.
I have run a latent profile analysis and I'm examining the profiles' associations with categorical covariates via the R3STEP command. However, there are missing values on the covariates (under 5% for all covariates, except 10% for one of them). What is the best way of dealing with these missing values?
There is really no good way to deal with that using R3STEP. Multiple imputation creates limited options in the next step. If you are just interested in one variable at a time, you can use DCAT instead.
I'm running an LPA with 4 observed continuous variables. Some of the models with the best fit indices (BIC, LMR, etc.) had low proportions of final-stage starts reaching convergence and/or replicating the best loglikelihood. Should I re-run these models using the final start values obtained via SVALUES?
I have conducted a latent profile analysis with the default Mplus settings (free mean estimation, but variances held equal across classes). I have learned that allowing free variance estimation could also be pertinent. What is your point of view on this, and in which contexts would it be most pertinent?
Also, concerning model selection (number of profiles), what would be your advice when the BIC, CAIC, and AIC suggest four profiles, but the VLMR and ALMR suggest three? Also, entropy is highest for three profiles (0.90 vs. 0.85 for four profiles).
Q1. I don't think you want to freely estimate the variances to be different across classes - that can cause problems of tiny classes. I think you want to first analyze with equal variances across classes. Then check the variation in each class for those classified into it and see if any of the classes needs a free variance for any of the variables.
Thanks for the answers. Here is a follow-up on these questions.
Q1. How would you proceed to check such variation within each class (what to look for in the Mplus output)?
Q2. From some of my readings, I thought it was appropriate to stop adding profiles as soon as the VLMR and/or ALMR says so? And what about entropy? Shouldn't we favour the solution with the best entropy?
I just want to make sure that I understand the rationale. Thanks in advance for your support.
Q1. Classify people by most likely class and then check.
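One way to carry out that check, as a sketch: save the posterior probabilities and most likely class membership, then compute per-class standard deviations for each indicator in any statistics package (the file name here is hypothetical):

```
SAVEDATA:
  FILE = classified.dat;
  SAVE = CPROB;   ! appends class probabilities and most likely class
```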
Q2. If the indices disagree, I would go with BIC. I would not choose a model based on entropy - it's like R2 in SEM: A model can have a bad R2 but a good fit and a model can have a good R2 but bad fit.
I lowered the degree of random start perturbation using STSCALE as suggested and the model did not converge. I've tried using the SVALUES and increasing the number of starts to 500/100 (also no convergence). Is it appropriate to now conclude that this model has failed, even though in my initial runs it had the best fit on multiple indices (with only 10% of starts converging)? As this is my first time doing LPA, I'm not sure if the results I'm getting are due to 'user error'.
Thank you very much for the time you devote to answering our questions.
Concerning my question 1 above (whether or not I should allow free variances across classes), I have produced a classification with the default setting (equal variances across classes), then classified people by most likely class and looked at the standard deviation within each class for each indicator. How large a difference in standard deviation (or variance) between classes should warrant freeing the variance across classes? Should I use a statistical test (e.g., Levene's) to determine this?
Another question: I want to predict membership in the classes (profiles) from a list of several binary variables (sex, etc.) plus one continuous variable (age). Before including all these variables at the same time in the R3STEP command (multivariate), is there a way I can test them one by one in order to do a first clean-up, retaining only the few most likely to be pertinent for R3STEP? (My sample size is relatively limited, so I want to limit the number of variables.)
Last question: Can the command AUXILIARY (e) be used with a binary auxiliary variable?
Thanks for everything. If these questions are too specific and you prefer to refer me to relevant literature, I would totally understand.
Q2: In my question above, I was specifically talking about testing predictors of profile membership. If I understand correctly, it's ok to use BCH or DCAT in this context, even if I conceptualize the variables as predictors and not outcomes? Is this the only way to test the variables one by one, or should I instead use R3STEP and re-run the analysis each time with a different variable as predictor?
Another question: I want to produce "equality of means" tests characterizing the profiles on the very same variables used for deriving the profiles (the indicators). In this case, is it ok to use R3STEP? I have been advised to use AUXILIARY (e) in this context because these variables are neither predictors nor outcomes.
With one variable, viewing it as a predictor or distal outcome is the same thing statistically - conditional on the latent classes you assume uncorrelatedness between this variable and the latent class indicators. With several variables used as covariates in R3STEP you don't assume uncorrelatedness among the covariates (conditional on the classes).
R3STEP does not provide equality of those means. Use BCH (or DCAT for categorical). Read web note 21.
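For example, a single BCH line requests the equality-of-means tests across classes for each listed variable (ind1-ind6 are hypothetical names for the six profile indicators):

```
VARIABLE:
  AUXILIARY = ind1-ind6 (BCH);
```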
Sara N. posted on Tuesday, February 23, 2016 - 1:47 pm
Hi, I have conducted an LPA with three indicators and the best-fitting model is a two-class model. Using the three-step approach, I then regressed my classes on a continuous predictor and moderator as well as their interaction term. However, the percentage of cases in each class changes in the third step because some of the cases are dropped from the analysis. I tried to include the variances in the overall model, but I get an error message. Is there any possible way to keep the cases? Below is the Mplus code for step 3. Thank you!
USEVAR = class X Z Product;
MISSING = all (999.000, 9999.000);
CLASSES = c(2);
NOMINAL = class;
DEFINE: Product = X*Z;
ANALYSIS: TYPE = MIXTURE;
STARTS = 0;
PROCESSORS = 4 (STARTS);
MODEL:
%OVERALL%
c ON Product (b3)
     X (b1)
     Z (b2);
I'm doing a latent profile analysis and have extracted 3 profiles using 4 continuous indicators. I'd like to look at a distal, binary outcome of the profiles.
(a) Can DCAT handle sample weights? (b) If I am using the SUBPOPULATION command, do I need to do the manual steps by first saving class probabilities and then following the example in Appendix A of Web Note 15? (c) If the automatic method can handle the SUBPOPULATION command, do I instead follow the automatic syntax of Appendix A?
Thanks so much!
Taylor BC posted on Wednesday, March 02, 2016 - 7:49 am
I am conducting an LPA with 11 continuous indicators and 2 dichotomous covariates. I have been comparing models using random starts and things seem to be going smoothly: no warnings aside from variables not being correlated within classes, and the fit indices are helping me choose a solution. After reading through all of these threads, however, I am wondering if I need to try running the models with specified start values. I have basically picked arbitrary values (i.e., 0 and 100) because it is not clear to me how to choose start values, and I am getting different results from my original models with random starts. How do I know whether I need to worry about specifying start values?
Yes, just label the means in each class and then use Model Test.
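A sketch of the labeling idea, assuming two classes and three continuous indicators with hypothetical names y1-y3; MODEL TEST then gives a joint Wald test of mean equality across the classes:

```
MODEL:
  %c#1%
  [y1-y3] (m11-m13);   ! label class-1 means
  %c#2%
  [y1-y3] (m21-m23);   ! label class-2 means
MODEL TEST:
  m11 = m21;
  m12 = m22;
  m13 = m23;
```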
anonymous Z posted on Tuesday, April 05, 2016 - 12:37 pm
I want to examine the pattern of occurrence of different drug use (drug A, drug B, and drug C) across four time points. The distributions of all these drugs include a preponderance of zeros. I am thinking of doing a joint trajectory latent class analysis, where class membership is decided by the change trajectories of drug A, drug B, and drug C. I know how to do a joint trajectory LCA if the response variables are normally distributed. But how should I go about a joint trajectory LCA when my response variables include a lot of zeros?
You can use categorical, count, or two-part modeling. All are described in the UG.
anonymous Z posted on Wednesday, April 06, 2016 - 8:52 am
Thanks for your response. I have experience with two-part modeling and LPA separately, but I have never integrated them into one analysis. I wonder what the syntax should look like. I cannot find a relevant example in the User's Guide. Could you give one?
I assume you would want an LCA for the binary part of your two-part approach. So you would have two latent class variables that you can relate to each other using WITH. I have no such example but it seems possible to do.
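A sketch of that idea, assuming a hypothetical class variable cu for the binary part and cy for the continuous part; the LOGLINEAR parameterization is what permits a WITH association between two categorical latent variables:

```
VARIABLE:
  CLASSES = cu (2) cy (3);   ! hypothetical class variables for the two parts
ANALYSIS:
  TYPE = MIXTURE;
  PARAMETERIZATION = LOGLINEAR;
MODEL:
  %OVERALL%
  cu WITH cy;   ! association between the two latent class variables
```

Under the default parameterization one would instead regress one class variable on the other (e.g., cy ON cu).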
Hi, I'm doing an LPA with a sample of 31 pre-school teachers. I need to know how many items or variables should be included in the analysis (5, 6, or 7?) given the size of the sample, or where I can find this information, because the articles I have read identify criteria for choosing the sample size or the number of latent classes, but not the number of items. Thank you very much.
They cover a variety of topics beyond SEM. There may be another better general discussion forum but I don't know what it would be.
Sara N. posted on Friday, April 22, 2016 - 1:57 pm
Dear Dr. Muthen, I would like to ask a follow-up question regarding your response to my previous post (posted on Tuesday, February 23). From your response, I conclude that including variances in the overall model to keep the cases, which is an option when running SEM and path models, is not available for LCA/LPA models. Is that correct? Thanks! Sara
I ran 3 latent variable models to identify profiles for three types of parental discipline. For each selected profile solution, entropy was between .997 to 1 and profile probabilities ranged from .996 to 1.
With such high entropy and probabilities, I created profile categorical variables. I then wanted to identify groups of parents across the profiles associated with discipline1, 2 and 3. It was suggested that a cross tabulation would work because of the certainty of the profiles. I then created group categorical variables and used them to predict a distal outcome in the same wave using binary logistic regression.
A peer raised concerns about my methods. I am confused as to what to do:
1) Was it a mistake to treat my profiles as certain, even given such high entropy and probabilities?
2) Do I need to use Lanza's method even if my distal outcome is in the same wave and I use chi-squares to follow up on my logistic regression in order to understand the relationship?
3) Was the cross tabulation a mistake? It was important to determine whether profiles existed within each discipline because I could not find any other study that demonstrated variance in how people used strategies within a discipline type. I then wanted to show how parents combined the 3 profiles of discipline 1, 2, and 3.
Greetings, I have a question about my study. After running the LPA, an extension of the LPA model is to include covariates to predict latent class membership. Theoretically speaking, covariates should be included in the LCA; otherwise, the model may be misspecified, leading to distorted parameter estimates (Muthen, 2004).
As is well known, applications of LCA and LPA usually assume measurement invariance: covariates influence only the latent class variable and influence the observed variables only indirectly via the latent variable. But in practice this assumption is often violated and causes problems.
I want to ask: before proceeding to the LPA, I analyzed the population with multiple-group ESEM and successfully tested factor loading invariance and intercept invariance. Do I still need to include covariates in the LPA model to predict class membership? Please give me some advice. Thank you!
It is good that you have established measurement invariance using multi-group analysis. But that is for the mixture of classes, not for each class. So there is still a threat that you may have some covariates that not only influence the latent class variable but also the latent class indicators directly, in which case leaving out the covariates when determining the number of classes can lead you astray.
Hi, I just received a comment from a reviewer who wants to know about the collinearity of my independent variables in a 3-step LPA-D model. I assume the reviewer is talking about my indicators (n=5). To your knowledge, is there a limit beyond which correlation between the indicators can affect the model?
I have read a lot on this concern but could not find a clear answer. Do you know a reference that can help me figure out whether the correlation between my indicators is problematic? To be more precise, I would appreciate references on correlation between indicators in LPA.
For info, I have a sample of 315 and 30 parameters estimated for a 3-profile model. The entropy is .97.
I am not sure the reviewer is referring to the indicators - they are not independent variables. Perhaps you have covariates in the model? You may want to ask on SEMNET about how to spot collinearity.
Dennis Li posted on Thursday, July 28, 2016 - 1:21 am
Hello, please forgive my beginner-level question, but I am trying to understand how Mplus handles missing data. I am running an LPA with 10 indicators, each representing age at a certain developmental milestone (similar to Michael Marshal's post on June 18, 2007). Not all milestones may have occurred for all individuals, and non-occurrence is theoretically as important as occurrence; however, non-occurrence is coded as missing. How might I get the LPA to consider such structural missingness as part of the patterning, or does it already do that? Would it be better to recode structural zeros with an actual value (e.g., 0 or 100) to differentiate them from true missing values?
Perhaps a two-part approach is suitable where the occurrence/non-occurrence is also modeled, not only the age. See the paper on our website under Factor Mixture:
Kim, Y.K. & Muthén, B. (2009). Two-part factor mixture modeling: Application to an aggressive behavior measurement instrument. Structural Equation Modeling, 16, 602-624.
Dennis Li posted on Friday, July 29, 2016 - 5:41 pm
Thank you, Dr. Muthen, for that reference. I am in the process of applying it to my research question to see if it fits my needs.
In the meantime, I read a few times on this message board that modification indices perform strangely for mixture models. My entropy for k>2 is currently above .8, but I am tempted to put in some correlated indicators based on MIs to improve those numbers, with some theoretical justification as well. First, do you think the MI values are trustworthy to follow, and second, would it be better to specify equivalent correlations across classes through the %overall% model or allow correlations to vary by class?
I am running an LPA clustering on three continuous variables. The model runs fine with 1 cluster. When I run two or more clusters, I get the following error:
THE ESTIMATED COVARIANCE MATRIX FOR THE Y VARIABLES IN CLASS 2 COULD NOT BE INVERTED. PROBLEM INVOLVING VARIABLE T1MGCP. COMPUTATION COULD NOT BE COMPLETED IN ITERATION 11. CHANGE YOUR MODEL AND/OR STARTING VALUES. THIS MAY BE DUE TO A ZERO ESTIMATED VARIANCE, THAT IS, NO WITHIN-CLASS VARIATION FOR THE VARIABLE. THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
I'm following up on a conversation with Tihomir on March 3, where it was indicated that a binary outcome used in a BCH analysis should be interpreted as a mean, which is the same as the probability of being one. How does one write up the estimates presented in 'Model Results'? If I take the intercept coefficient and, let's say, it's .10, can I talk about this as I would in a linear probability model? Should I say a 10% probability of the outcome being 1, or should I say a 10% prevalence? Thanks!
The BCH part of the output talks in terms of means - which as you say can be expressed as either a 10% probability or prevalence. But the regular Model results output is in terms of thresholds (the negative of an intercept).
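The threshold-to-probability conversion described here can be sketched numerically; this assumes the standard logistic link that Mplus uses for binary outcomes by default, where the threshold is the negative of the logistic intercept:

```python
import math

def threshold_to_probability(tau):
    """Convert an Mplus threshold for a binary outcome into P(u = 1).

    The threshold tau is the negative of the logistic intercept,
    so logit P(u = 1) = -tau and P(u = 1) = 1 / (1 + exp(tau)).
    """
    return 1.0 / (1.0 + math.exp(tau))
```

For instance, a threshold of 0 corresponds to a 50% probability (or prevalence), and a large positive threshold to a rare outcome.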
Thanks, Bengt. My follow-up question is this: I'm following example 3.1 in Web Note 21 and focusing on my results from the 2nd step. Tihomir suggested using BCH when the Y is binary; thus, in the example from the web note, how would you interpret the coefficient on X? Would it still be a mean or prevalence? Thanks very much!
You will find examples of both types of analyses in the User's Guide.
Jenny Gu posted on Wednesday, October 26, 2016 - 4:38 am
Hi Dr. Muthen,
I've run a latent profile analysis and used the 'standardize' option under the 'define' command to obtain standardised estimates for each indicator under each profile. I originally noted down the STDYX estimates but noticed that these values differ from the values shown in the plot, which seem to correspond to the default estimates under Model Results (e.g., for one profile, the STDYX estimate for one indicator is 1.06 but it looks like 0.6 on the plot). Is there any way to create a standardised plot of STDYX values? Or am I approaching this incorrectly, and the 'default' model results estimates (which correspond to the plot) are the standardised values I need (rather than the STDYX values)?
"used the 'standardize' option under the 'define' command to obtain standardised estimates"
This option does not do that.
The standardize option in the Define command is not related to the standardized option in the Output command. In Define you standardize the observed variables by their sample variances. In Output we standardize parameter estimates by estimated variances.
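A side-by-side sketch of the two options; y1-y5 are hypothetical variable names:

```
DEFINE:
  STANDARDIZE y1-y5;   ! rescales the observed variables to z-scores
                       ! before the model is estimated
OUTPUT:
  STANDARDIZED;        ! standardizes the parameter estimates (STDYX etc.)
                       ! after estimation
```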