Anonymous posted on Wednesday, April 23, 2003 - 1:24 pm
I am planning to analyze a structural equation model that contains both latent variables and observed variables in the structural part. My question is how to estimate the measurement model. Do I include the variables that will be used as observed variables (not as indicators of a latent construct) by putting them on the USEVARIABLES list, or leave them out when estimating the measurement model?
All observed variables that are part of the analysis, whether they are used as factor indicators or not, should be included on the USEVARIABLES list. I think this is what you meant. If you are developing the measurement model as a first step, then only the observed variables that are part of the measurement model should be on the USEVARIABLES list for this step.
I am trying to establish structural invariance across groups in a multigroup CFA. I have already tested the equality of the factor loadings on the different factors. How do I re-run it to test whether the structural paths are equal across groups? When I try to specify this in my second MODEL command, it returns the same results as if nothing were freed. Is there a different command for testing the relationships between the latent variables across groups?
TITLE: Multigroup CFA
DATA: FILE IS 'P:\MultigroupCFA10112005.dat';
VARIABLE: NAMES ARE threat1 threat2 threat3 threat4 threat5 threat6
    slap1 slap2 slap3 slap4 slap5 slap6
    beat1 beat2 beat3 beat4 beat5 beat6
    knife1 knife2 shoot1 shoot2 sample site sex race age;
  USEVARIABLES ARE threat1 - shoot2 sex;
  MISSING ARE ALL (-9);
  GROUPING IS sex (0=female 1=male);
MODEL: f1 BY threat6 slap6 beat6;
  f2 BY threat5 slap5 beat5;
  f3 BY threat2 threat3 slap2 slap3 beat2 beat3;
  f4 BY threat1 threat4 slap1 slap4 beat1 beat4;
  f5 BY knife1 knife2 shoot1 shoot2;
  f1 WITH f2 f3 f4 f5;
MODEL male: f1 WITH f2;
OUTPUT: STAND MOD;
Equalities are specified by placing the same number in parentheses following the parameters that are to be held equal. So for example:
MODEL: f1 WITH f2 (1);
would hold the covariance of f1 and f2 equal across all groups. See Chapter 16 of the Mplus User's Guide for a description of the special language for equalities, and see Chapter 13 of the User's Guide for a description of special issues for multiple group models.
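Applied to the model posted above, a minimal sketch (defaults may vary by Mplus version, so check against the current User's Guide) might look like:

```
MODEL:      f1 WITH f2 (1);   ! label (1) in the overall MODEL holds
                              ! this covariance equal across groups
MODEL male: f1 WITH f2 (1);   ! repeating the same label in the
                              ! group-specific MODEL keeps the
                              ! constraint; mentioning the parameter
                              ! here WITHOUT the label would free it
```

The key point is that equality is driven by the shared label, not by which MODEL command the statement appears in.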
Anonymous posted on Tuesday, December 20, 2005 - 10:54 am
If there are numerous variables to be held equal across the groups, would the coding look like this:
MODEL: f1 WITH f2 f3 f4 (1);
Does this hold the covariances equal merely ACROSS groups, or are they also set to be equal to each other WITHIN groups, i.e., cov(f1,f2) = cov(f1,f3) = cov(f1,f4)?
thanks for your help - this discussion board is saving my sanity!
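A sketch based on standard Mplus equality-label behavior (not a reply from this thread): a single label at the end of a statement applies to every parameter on that line, so it holds them equal to each other as well as across groups. To impose equality only across groups, give each covariance its own label:

```
MODEL: f1 WITH f2 (1);   ! cov(f1,f2) equal across groups
       f1 WITH f3 (2);   ! cov(f1,f3) equal across groups
       f1 WITH f4 (3);   ! cov(f1,f4) equal across groups
                         ! distinct labels, so the three covariances
                         ! are NOT constrained equal to each other
```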
I have recently created a 45-item instrument. EFA results and theory lead me to believe that it is best represented by a 3-factor model. A sample of 750 was used for the EFA and a different sample is being used for the CFA. Could you give me your advice and/or suggest an article that will help me become more familiar with how, and to what extent, to use modification indices to improve model fit while keeping theory in consideration? I have your Mplus book and Kline's SEM book, but am looking for something a little simpler and more specific to this question, as I intend to use the latent constructs within structural modeling in the near future.
Jenny L. posted on Tuesday, May 07, 2013 - 2:54 pm
I have a scale (with only 5 items, which are expected to load on the same factor) used at 2 time points. I tested factorial invariance using the Example 5.26 setup. Model fit was fine based on CFI (.971) and TLI (.955), but RMSEA was .102 (90% CI: .074, .132), and the p-value of the chi-square test was .0000.
I was wondering if there are indices in the output I can refer to in order to further improve the model.
Modification indices can be used to see where model fit can be improved. See the MODINDICES option of the OUTPUT command.
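For example, the option can be requested with an illustrative cutoff of 3.84 (the 5% chi-square critical value with one degree of freedom), so that only indices above that value are printed:

```
OUTPUT: MODINDICES (3.84);   ! print modification indices >= 3.84;
                             ! use MODINDICES (ALL) to print all of them
```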
Jenny L. posted on Tuesday, May 07, 2013 - 7:32 pm
Thank you, Prof. Muthen. I'd like to follow up on the question above. My understanding is that modification indices only suggest paths that should be added. Are there things I can look at in the output to determine whether a particular item should be removed from the factor analysis to improve factorial invariance across time?
You could do an EFA at each time point to see if the items behave as expected.
Jenny L. posted on Tuesday, May 07, 2013 - 10:02 pm
Thank you, Prof. Muthen!
dvl posted on Thursday, January 30, 2014 - 6:33 am
I have a number of questions:
(1) Suppose I have a structural equation model with two scales, "work-to-family conflict" (WTF) and "family-to-work conflict" (FTW), as my latent concepts, and two exogenous variables, gender and age. Should I already include the exogenous variables gender and age in the measurement part of the model? Or should I just study the correlation between the two latent concepts WTF and FTW and the internal consistency of the scales in the measurement part, and only bring the covariates gender and age into the model once the structural part of the SEM begins? Summarized, my question is: should I include the covariates gender and age in the measurement model or not?
(2) I learned in SAS that we fix the variances of the factors to 1 when fitting a measurement model (confirmatory factor analysis) in order to solve the scale-setting problem. How is the scale-setting problem solved in Mplus when doing a confirmatory factor analysis? The same way as in SAS?
(3) In the structural part, I saw that the first factor loading is fixed to 1. I would rather fix the largest factor loading to 1; how can I change this default?
You will want to study Topic 1 on our website (handout and video) which discusses the questions you raise. For (1) you want to look for "MIMIC" modeling. For (2) - (3) Mplus fixes the first loading as the default but you can change that to fix any loading or the factor variance (see User's Guide on how to do that).
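As a sketch with placeholder names (f1, y1-y3): freeing the first loading with * and fixing the factor variance with @1 reproduces the SAS-style scaling, and fixing a different loading works the same way:

```
MODEL: f1 BY y1* y2 y3;   ! * frees the first loading
                          ! (the default fixes it at 1)
       f1@1;              ! fix the factor variance to 1 instead

! Alternatively, fix a chosen loading rather than the first one:
! f1 BY y1* y2@1 y3;
```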
dvl posted on Thursday, January 30, 2014 - 2:14 pm
Thanks for answering!
1. One more question: in fact I do not understand why I should use a MIMIC model. As I have read, a MIMIC model studies the influence of gender on WTF; should I not just study the correlation between gender and WTF in the measurement part of my model?
2. Another question: in the second message on this forum it is mentioned that "if you are developing the measurement model as a first step, then only the observed variables that are part of the measurement model should be included on the USEVARIABLES list". What is meant by "observed variables that are part of the measurement model"? Are these only the factor indicators of WTF and FTW in my example, and not gender and age?
1. You use the MIMIC model to look for direct effects between the covariates and the factor indicators. Significant direct effects represent differential item functioning. Please see the Topic 1 course handout and video on the website where measurement invariance and population heterogeneity are discussed for the MIMIC model and multiple group analysis.
2. Yes, this means just the factor indicators.
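A minimal MIMIC sketch for this example (the indicator names w1-w5 and g1-g5 are placeholders, and the direct effect shown is hypothetical): the factors are regressed on the covariates, and a candidate direct effect from a covariate to an indicator tests for differential item functioning:

```
MODEL: wtf BY w1-w5;          ! work-to-family conflict
       ftw BY g1-g5;          ! family-to-work conflict
       wtf ftw ON gender age; ! structural (MIMIC) part
       w3 ON gender;          ! hypothetical direct effect; if
                              ! significant, item w3 shows DIF
                              ! with respect to gender
```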
Tom Bailey posted on Thursday, July 17, 2014 - 5:26 pm
Dear Dr Muthen
I was hoping you might be so kind as to offer me some guidance into the construction of a measurement model in Mplus.
I have a structural model that features one exogenous latent factor with four indicators, one exogenous observed variable (dichotomous), and one endogenous observed variable (also dichotomous).
If I were to fit a measurement model beforehand, is it unnecessary to also include the observed variables, or should they be included to see how the model fits as a whole (as below)?
VARIABLE: NAMES ARE Gender CARINT Q9_a Q9_b Q9_c Q9_d;
CATEGORICAL = Gender CARINT Q9_a Q9_b Q9_c Q9_d;
MODEL: FINREW BY Q9_a Q9_b Q9_c Q9_d;
Gender WITH CARINT;
Gender WITH FINREW;
CARINT WITH FINREW;
Lucija posted on Tuesday, March 21, 2017 - 6:05 am
Dear Dr. Muthen,
I am testing measurement invariance of a construct across three countries. Modification indices suggest that I should add an error covariance in one of the groups. When I add it to the model, it is estimated, but the estimate is nearly zero and non-significant. Surprisingly, the modification indices still suggest adding it.
Could you please tell me what can cause this and how to solve it?
I have two latent variables and three time points. From previous models, we know that each latent variable has five indicators. Nevertheless, the initial fit is not satisfactory. Thus, after reading some handouts, I introduced correlated errors over time, and the output showed a better fit. However, I still need to improve the measurement model fit. As suggested in several comments on the statmodel website, I looked at the modification indices and noticed cross-loadings. How can I take this into account in Mplus, without changing the BY syntax (based on previous evidence I should not modify the definition of the involved latent variables), in order to improve model fit?
I have read that not including the covariances between latent variables will result in biased cross-loadings. In this situation, should I specify free covariances between the latent variables, or should I fix them to a specific value?
Is there an alternative way to improve model fit without introducing cross-loadings?