On page 2 of the Mplus Language Addendum 7.1 it is said that the alignment optimization method consists of three steps:
1. Analysis of a configural model with the same number of factors and same pattern of zero factor loadings in all groups.
2. Alignment optimization of the measurement parameters (factor loadings and intercepts/thresholds) according to a simplicity criterion that favors few non-invariant measurement parameters.
3. Adjustment of the factor means and variances in line with the optimal alignment.
My question is: why does a model with ALIGNMENT = FREE (CONFIGURAL); give the same estimates as ALIGNMENT = FREE;? I was thinking you need to be able to compare the configural and the scalar models, but their estimates are the same.
Tait Medina posted on Tuesday, February 25, 2014 - 6:19 am
Is it possible to test the equality of regression coefficients across groups under the alignment method? For example, I am interested in regressing the factor on age and testing whether the coefficient for age is different across groups?
The alignment method currently doesn't handle covariates. But you can divide people into age groups and then you get a*b groups in the alignment run, where a is the number of age groups and b is the number of original groups.
Tait Medina posted on Tuesday, February 25, 2014 - 3:11 pm
Thank you for your reply.
I have a question about the output from Alignment=Fixed. How are the estimates given under MODEL RESULTS connected to the numbers given under “Item Parameters In The Alignment Optimization Metric”? I have read Web Note 18, but am having a hard time mapping the output in these two sections onto the equations.
Also, the R2 measures given in Table 7 in Web Note 18 need to be calculated by hand using Eq 13 and 14. Correct?
The “Item Parameters In The Alignment Optimization Metric” section contains the alignment results in the metric in which the alignment optimization is performed, i.e., after all indicator variables are standardized and also under constraint (10) from web note 18, and with the factor mean fixed to 0 in the corresponding group. These parameters are then rescaled back to the original metric of the variables, and also to a factor variance fixed to 1 in the corresponding group if that is the requested parameterization. There is a way to get these parameters as your final model estimates: you will need to standardize your variables using the STANDARDIZE option of the DEFINE command, as well as the ANALYSIS option METRIC = PRODUCT;
R2 is computed with the upcoming Mplus 7.2 but you can compute it by hand as well.
Thank you for your response. I have another question about the alignment approach.
I have noticed that when I use ALIGNMENT=FREE I receive a warning that I should switch to ALIGNMENT=FIXED and a reference group (or baseline group) is suggested. How is the suggestion for a baseline group determined? I have played around with using different baseline groups trying to get a feel for this new approach and have noticed that the choice of group impacts the results under the APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS section. Could you provide a bit more insight into this? Thank you.
> I have noticed that when I use ALIGNMENT=FREE I receive a warning that I should switch to ALIGNMENT=FIXED and a reference group (or baseline group) is suggested. How is the suggestion for a baseline group determined?
It is the group with the smallest absolute factor mean value. Presumably fixing that parameter to 0 would lead to the smallest misspecification.
> I have played around with using different baseline groups trying to get a feel for this new approach and have noticed that the choice of group impacts the results under the APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS section. Could you provide a bit more insight into this?
Intercept for CHILD

 Group  Group  Value  Value  Difference     SE  P-value
     2      1  3.378  3.385      -0.006  0.027    0.817
     3      1  3.501  3.385       0.117  0.039    0.003
     3      2  3.501  3.378       0.123  0.042    0.003

Approximate Measurement Invariance Holds For Groups: 1 2 3
Weighted Average Value Across Invariant Groups: 3.430
Invariant Group Values, Difference to Average and Significance

 Group  Value  Difference     SE  P-value
     1  3.385      -0.046  0.019    0.019
     2  3.378      -0.052  0.022    0.017
     3  3.501       0.071  0.023    0.002
The process is explained in Section 4 of http://statmodel.com/examples/webnotes/webnote18.pdf but, to summarize: invariance is not determined by the pairwise comparisons, but rather like this: compare group 3 against the average of groups 1, 2, and 3. Also, due to multiple testing, we use the smaller p-value of 0.001 as the cutoff value.
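The test against the invariant-group average can be sketched numerically using the intercept output above (group values 3.385, 3.378, 3.501; differences to the weighted average 3.430). This is a minimal illustration assuming a simple two-sided z-test of each difference against 0; the actual SEs come from the model estimation, and the displayed p-values are rounded, so a hand computation matches only approximately:

```python
from math import erf, sqrt

def two_sided_p(diff, se):
    """Two-sided p-value for a z-test of diff against 0."""
    z = abs(diff) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# (difference to average, SE) for each group, from the output above.
groups = {1: (-0.046, 0.019), 2: (-0.052, 0.022), 3: (0.071, 0.023)}

for g, (diff, se) in groups.items():
    p = two_sided_p(diff, se)
    # A group is flagged as non-invariant only if p < 0.001
    # (a conservative cutoff chosen because of multiple testing).
    print(g, round(p, 3), "invariant" if p >= 0.001 else "non-invariant")
```

All three p-values stay above the 0.001 cutoff, which is why approximate invariance is reported to hold for groups 1, 2, and 3 even though two of the pairwise p-values are 0.003.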
Tait Medina posted on Tuesday, April 15, 2014 - 11:58 am
Is the pairwise comparison portion of the output related to the "first step" of the algorithm used to determine a starting set of invariant groups that is described in Section 4? "We conduct a pairwise test for each pair of groups and we "connect" two groups if the p-value obtained by the pairwise comparison test is bigger than 0.01." (pg. 15).
Finally, when dichotomous outcome variables are used, how are scale factors/residual variances handled? Are they fixed to 1 in all groups?
The answer to the second question is also yes - we use the theta parameterization, where all residual variances are fixed to 1 during the configural model estimation. After that, the alignment is done without any consideration of the residual variances, i.e., the alignment is for the intercepts and loadings only and it does not use residual variances in the computations.
I also have to correct my message from Feb 26. To get the "Item Parameters In The Alignment Optimization Metric" as your final parameter estimates, you have to use a linear scale transformation for each indicator variable Y like this: DEFINE: Y = (Y - a)/b; where a and b are obtained from the configural model estimates as follows:
a = average Y intercept across the groups
b = average Y loading across the groups
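As an illustration of this transformation (the configural estimates below are made up), rescaling Y by the average intercept a and average loading b maps each group's parameters into a metric where the intercepts average to 0 and the loadings average to 1:

```python
# Hypothetical configural estimates for one indicator Y in 3 groups.
intercepts = [3.385, 3.378, 3.501]
loadings = [0.585, 0.588, 0.610]

# a = average intercept, b = average loading across groups.
a = sum(intercepts) / len(intercepts)
b = sum(loadings) / len(loadings)

# Defining Y' = (Y - a) / b rewrites the measurement equation
# Y = nu + lambda * f + e  as  Y' = (nu - a)/b + (lambda/b) * f + e/b,
# so each group's parameters transform as:
new_intercepts = [(nu - a) / b for nu in intercepts]
new_loadings = [lam / b for lam in loadings]

# By construction the transformed intercepts average to 0 and the
# transformed loadings average to 1.
print(sum(new_intercepts) / len(new_intercepts),
      sum(new_loadings) / len(new_loadings))
```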
Tait Medina posted on Friday, April 18, 2014 - 12:51 pm
Thank you, that makes sense.
I have a follow-up question about Eq. 9 in Webnote 18. I am trying to make sure I understand Eq. 9 by plugging in the estimates taken from the output (using ML, Alignment=Fixed) using my own data. For now, I am using 2 groups. The loading for item 1 is .585 in group 1 and .588 in group 2. Taking the difference of these loadings gives me -.003. Scaling this by the CLF (using the small number .0001) gives me f(x)= .103. The Contribution to the Fit Function for this item, given in the output under Loadings, is -.316. I am not sure how to arrive at that number. The sample size for group 1 is 698 and for group 2 it is 949. The sqrt(N1*N2) is therefore 813.881. Weighting f(x) by 813.881 gives me 83.512. What am I misunderstanding about Eq. 9?
The loss function that is reported in the output has a negative sign - see footnote 2 on page 10. You are also using 0.01, not 0.0001. Also, the weight is standardized: it is scaled so that the total weight equals the total number of cross-group comparisons, NG*(NG-1)/2, which is 1 in your case. So the actual weight that we use for the TECH8 output is w = ((NG-1)*NG/2)*w0/sum(w0), where w0 = sqrt(N1*N2) and NG is the number of groups. The weight standardization of course doesn't affect the optimization, since it is a constant multiple. It is done so that all weights are 1 when the groups are of equal sizes. In your case the weight is 1 because there is just one cross-group comparison. Thus the loss function for that loading is -sqrt(sqrt(0.003^2 + 0.01)) = -.316
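The arithmetic in this reply can be reproduced in a few lines. A sketch of the component loss function with epsilon = 0.01 and the standardized weight for the two-group case described above:

```python
from math import sqrt

def clf(x, eps=0.01):
    """Component loss function used in the alignment: sqrt(sqrt(x^2 + eps))."""
    return sqrt(sqrt(x * x + eps))

# Loadings for item 1 in groups 1 and 2, and the group sample sizes.
lam1, lam2 = 0.585, 0.588
N1, N2 = 698, 949

# Raw weight w0 and standardized weight: with NG = 2 groups there is a
# single cross-group comparison, so the standardized weight equals 1.
NG = 2
w0 = sqrt(N1 * N2)
w = (NG * (NG - 1) / 2) * w0 / w0  # = 1

# The reported "Contribution to the Fit Function" carries a negative sign.
contribution = -w * clf(lam1 - lam2)
print(round(contribution, 3))  # -0.316
```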
Dr. Asparouhov, thank you so much for taking the time to address my questions. It has been tremendously helpful!
I have a general question about the Alignment approach. In many applications of multiple group factor analysis when the outcome variables are continuous, you will see a sequence of progressively more restrictive invariance tests performed, and a distinction made between metric invariance (invariance of factor loadings) and scalar invariance (invariance of factor loadings and item intercepts). Only once metric invariance is found to be a tenable hypothesis is the hypothesis of scalar invariance considered. In some ways the focus on testing for metric invariance first and then scalar invariance seems unnecessary to me when the goal is to compare factor means across groups. I wonder if this is perhaps a bit of a historical artifact stemming from the fact that EFA was based on correlation matrices, and then CFA expanded this to covariance matrices, and then Joreskog (1971) expanded CFA to multiple groups, and then Sorbom (1974) expanded multiple group CFA to include a mean structure. I am wondering if the Alignment approach makes this hierarchical distinction between metric and scalar invariance?
With alignment you can compute factor means and compare them across groups even when full loading invariance is not fulfilled. As long as loading invariance and intercept invariance are violated only to a minor extent, the factor means will be estimated well; see the simulation studies in Section 5 of http://statmodel.com/examples/webnotes/webnote18.pdf
I’m testing measurement invariance of a scale in two groups because I need to compare means between them. For this, based on web note 18 and the article by van de Schoot et al. (2013), I decided to use the alignment = fixed (bsem) approach. But I have some difficulties understanding the output, more specifically on the following issues:
1. I got this message: USE THE FBITERATIONS OPTION TO INCREASE THE NUMBER OF ITERATIONS BY A FACTOR OF AT LEAST TWO TO CHECK CONVERGENCE AND THAT THE PSR VALUE DOES NOT INCREASE. I added FBITERATIONS = 1000, 5000, and 20000 to the model (in three different runs) and I always get the same message. Does this mean that the model doesn't converge and that I can't continue with the invariance analysis?
2. The results from the alignment output indicate that the intercepts of 3 items (out of 14) are non-invariant between the 2 groups. How do I know the model has a good fit? Can I say that the measure is approximately invariant between the 2 groups? How can I calculate the factor scores to compare factor means between the 2 groups?
1. That is an automatic message that always comes out and does not reflect on the quality of your run. You should check that the PSR is 1 for the different FBITER runs and that the results are approximately the same.
2. When a minority of the measurement parameters are non-invariant the factor means and variances for the different groups are typically trustworthy. As our website handout for the May UCONN M3 workshop shows you can do a Monte Carlo study to check that the factor means and variances are dependable. No need to compute factor scores to compare the factor means.
Joana posted on Wednesday, July 09, 2014 - 2:05 pm
Thanks very much for your help!
Before the means comparison I tried to study the quality of the alignment results and ran a Monte Carlo simulation according to the following papers: "IRT studies of many groups: the alignment method" (version 2 - July 2014) and "New Methods for the Study of Measurement Invariance with Many Groups" (October 2013), and I have one more question:
- After running the model I get the following warning message: "All variables are uncorrelated with all other variables within class" and I can't figure out what I did wrong on input specification... Is this the reason why I don't get the correlations results to evaluate the quality of alignment results?
Thanks again for your help.
Best regards Joana
Here is an excerpt of the input:
Montecarlo:
NAMES ARE mhc1-mhc14;
ngroups = 2;
nobservations = 2(2000);
Nreps = 50;
Tait Medina posted on Sunday, December 14, 2014 - 10:37 am
I have noticed that when I have few groups (<10), ALIGNMENT=FREE tends not to work and I have to move to ALIGNMENT=FIXED. However, when I have more groups (15 or more), ALIGNMENT=FREE does tend to work. Are there any characteristics of the data that you would say tend to support ALIGNMENT=FREE? Have you seen this in regard to increases in group number?
For ALIGNMENT=FREE to work well you need a certain level of non-invariance. The more non-invariance there is the better ALIGNMENT=FREE will be compared to ALIGNMENT=FIXED. The more groups you have the more likely it is that enough non-invariance will be accumulated to warrant ALIGNMENT=FREE.
With 7.3 and the ordinal alignment method: I have two CFA factors specified, and in one of them there are a few (necessary) items with 3 categories; the rest have 4 categories. I notice that Mplus treats these 3-category items as 4-category items (with the last category empty), judging by the "Univariate proportions and counts" and the non-identification errors. Is this a bug/not yet implemented? Or is it impossible for this method? Any legitimate work-arounds? Collapsing all items is problematic here... Cheers
Hello-- A few questions about ordered categorical alignment.
In the SEM paper you state that the parameters are, by default, reported in a standardized metric in the MODEL RESULTS part of the output. Is it then appropriate to interpret thresholds from a binary or ordered categorical alignment in z-score units, and the loadings as standardized loadings?
Related: above, Dr. Asparouhov said the theta parameterization was used for specifying the configural model, but not when minimizing the loss function with respect to the intercepts and loadings. How does the theta parameterization in the earlier part of the estimation impact the results reported in the MODEL RESULTS section?
I have a dataset with 835 participants and 45 continuous variables. Participants came from 7 different studies, which range from 40 to 346 participants. To examine invariance (in factor loadings, intercepts, etc) across the 7 samples, I am using multiple group factor analysis with alignment=fixed. My model terminates normally, however, I get the following warning messages:
WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH. AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE. THE CONDITION NUMBER IS -0.158D+02. THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES OR BY USING THE MLF ESTIMATOR.
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.315D-17. PROBLEM INVOLVING PARAMETER 117.
NOTE THAT THE NUMBER OF PARAMETERS IS GREATER THAN THE SAMPLE SIZE.
An earlier reply stated that the first warning message may be disregarded. May the other 2 warning messages be disregarded as well?
Hello Dr. Asparohouv, Thank you for your response about the parameterization of estimates in the model results section on 3/3. I have been thinking more about this and have a follow up question I hope you can answer.
I am doing a simulation study on the polytomous alignment for my dissertation. For my model population, I am specifying starting values that use results from a real, single-group data analysis. I want to make sure that there is not a mismatch between the metric of the results from the real data analysis and what I input for the alignment simulation. To ensure this, should I complete the real data analysis using PARAMETERIZATION=THETA, then input those results in my model population for simulating in the alignment framework? I am just trying to simulate data that are characteristic of the real factor analysis I did with one group. Previously I completed a single-group polytomous factor model with the default, DELTA. Does this create a mismatch? In other words, when I input the results from the real single-group polytomous factor model using DELTA into model population using the alignment, is it interpreting those as THETA-parameterized starting values?
I apologize for my long question, I hope this is clear.
It is a mismatch. You should analyze the real data using PARAMETERIZATION=THETA and use those values in model population. Alternatively you can redo the real data analysis using the ML estimator (which is based on the theta parameterization as well).
Rafael - the fit of the alignment model is the same as the fit of the configural model. You can see that they have the same log-likelihood value. There is no penalty for the level of invariance that the alignment provides - it is kind of the maximum invariance you can get for "free"/with no penalty in fit.
Mircea Comsa posted on Thursday, September 24, 2015 - 11:41 pm
Hi, can I use alignment in conjunction with a bifactor model? According to your paper (2014, MG factor analysis alignment) it is not possible yet. Is there an alternative? Thank you.
DC posted on Wednesday, October 07, 2015 - 1:14 pm
In running a model with the alignment method, I am getting the following message: "THE CHI-SQUARE TEST CANNOT BE COMPUTED BECAUSE THE FREQUENCY TABLE FOR THE LATENT CLASS INDICATOR MODEL PART IS TOO LARGE."
Could you please explain what this means?
Does this affect the interpretation of the results?
Is there a way to fix the issue and obtain the chi-square test?
The frequency table of the joint distribution of the categorical variables is too large, and the Pearson chi-square (https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test) cannot be computed. When the table is so large that it goes over the computational limits, the chi-square asymptotics break down anyway, so I would not worry about it. With 30 binary variables, for example, the joint distribution of the binary variables has over a billion cells, so the chi-square would have over a billion degrees of freedom. This is not the chi-square that the WLSMV estimator computes.
There is no problem with the reported results or their interpretation.
Alvin posted on Sunday, October 11, 2015 - 3:54 pm
Hi Bengt, as mentioned previously, it is not yet possible to include covariates using the alignment method. As I understand it, it is possible to use factor scores derived from the analysis for further analysis, but does factor score indeterminacy impact this type of analysis? And if so, how? Thanks
I am hoping for some guidance in determining the quality of my results from an 11-group (11 countries) model (n=3000). I have a self-report measure with 6 binary indicators. I am struggling to integrate the results.
You can compute the R2 by hand using the results in TECH8. Most likely the issue is due to empty cells in certain groups, very small variation in the factor mean and variance across groups, or a very unbalanced group design. If you still don't see why this happens, send the example to firstname.lastname@example.org.
seulki jang posted on Friday, February 12, 2016 - 8:02 am
Hi Dr. Muthen and Dr. Asparouhov,
Hope you are having a good day. I have a question about the alignment optimization variance calculation. In the alignment optimization fit table, there are three indices (fit function contribution, R-squared, and variance). In the output, I see fit function contribution values and R-squared values, but not variance values. How do we calculate the variances for each item's factor loading and intercept? Thank you!
On the posting dated May 20, 2015 - 5:07 pm, Dr. Asparouhov responded to a question regarding exact p-values. I would like to know how the standard errors are calculated for these pairwise comparisons.
I am trying out the alignment method and am a little uncertain how to write the code. Could you make the code for the 26-country study published in Struct Eq Modeling available? That may be sufficient to figure out what I do wrong.
I have run the alignment method on a set of 7 ordinal indicators (each having 4 categories) for 1 latent variable for 35 countries. When I look at the alignment output, I see, for example, that it is reported that approximate measurement invariance holds for the first item threshold for 34 countries, whereas approximate measurement invariance in this threshold does not hold for only one country. Upon inspection of the reported R-squared measurement invariance index, I find a value of only .019 for this threshold. I am trying to understand how the latter value can be so extremely low, given the other result. Any suggestion?
Alvin Tay posted on Friday, January 06, 2017 - 4:01 am
The residuals of the ordinal items in my 2-factor alignment model, based on a sample of 8,000+ across 8 districts, seem a bit odd. I understand that standardized residuals are at best indicative of model misspecification, but in the case of an alignment analysis, what is the best approach to compare the fit of different models? Thanks, Alvin
Alignment does not have an effect on fit. The fit of the alignment is the same as that of the configural model. Any model fit issue should be addressed prior to alignment by running a configural model and verifying that 2 factors are enough in each group.
Substantial model misfit can be addressed using an additional factor or by switching to Bayes where residual correlations for categorical variables can be included in the model.
Jian-Bin Li posted on Monday, March 06, 2017 - 10:57 am
As stated in the seminal paper on alignment analysis (Asparouhov & Muthen, 2014), the analysis starts from the configural model. However, I am not sure whether I need to test and report the model fit of each group, as well as of the configural model, before using the alignment analysis. If the model fit of the configural model is not good, can I still use the alignment to compare the factor means across groups? Thank you.
Jian-Bin Li posted on Thursday, March 23, 2017 - 7:15 am
Thank you Dr Muthen.
I have couple of follow-up questions:
(1) can version 7.3 deal with categorical data in the alignment analysis? I tried to run the model and there was no warning message.
(2) when I tried to run the model (i.e., a comparison of 21 items rated on a 5-point Likert scale across two countries), I treated the 21 items as categorical. However, the output stated that the estimator is MLR instead of WLSMV. Why is that?
(3) when reporting the results, do I need to report the (non)invariance of all thresholds of each item as follows?

LAY1$1  1 2
LAY1$2  1 2
LAY1$3  1 2
LAY1$4  1 2
LAY2$1  (1) (2)
LAY2$2  1 2
LAY2$3  1 2
LAY2$4  1 2
(4) my case concerns only two countries. That means if the loading or threshold shows non-invariance for one country, then it is also noninvariant for the other country. In this case, how to calculate the percentage of non-invariance? Take the 8 thresholds listed above as an example, do I calculate the percentage as 1/16 or 2/16?
I am sorry for my long questions and hope I have made my points clear. Thank you very much in advance.
Thank you Prof. Muthen. I have two follow-up questions:
(1) I tried to compare the latent factor mean across 2 countries on v7.31(mac). The latent factor has 21 indicators(items) rated on a 5-point Likert scale. I treated all the items as categorical. The output showed the estimator is MLR instead of WLSMV which is used for categorical variable. I am wondering if I missed something in the syntax?
(2) Since the variables were treated categorical, each item has 4 thresholds. Does it mean that I need to report the (non-)invariance for all the 84 (4*21) thresholds?
Thank you. Now I am clear. Just one last question: I notice that a limit of 25% non-invariance is a rough rule of thumb. I am wondering what happens if the amount of non-invariance exceeds 25%? Is there any solution or guideline that helps address this issue, or should the results simply be abandoned? Thank you.
I have a question regarding the FIXED alignment method.
I have run the alignment method on a set of 5 indicators (each having 6 categories) for 1 latent variable for 33 countries.
The initial FREE alignment model provides the warning that the model may be poorly identified and suggests switching to the FIXED method with group 17 as the baseline group. But neither this group nor any other group has a mean close to 0 (as recommended in Asparouhov & Muthén, 2014). Group 17 in fact has a latent mean of -0.764.
My question is: under these circumstances, does the FIXED method provide trustworthy parameter estimates?
Furthermore in the paper from Marsh et al. (2017) it is stated that "For the present purposes we used the FIXED option available in the Mplus CFA-MI.AL model, in which the latent factor mean and variance of one arbitrarily selected group (in this case the first group, Australia) were fixed to 0 and 1, respectively". However, as far as I understood, the choice of group selection is not arbitrary, but depends on the size of the latent mean which is closest to zero...
Could you provide a bit more insight into this? Thank you very much in advance.
The recommended group is the one with mean closest to 0 by absolute value. I don't have any reason to doubt the conclusion of the estimation, i.e., that the FIXED method is better than the FREE method in overall terms of bias and standard error considerations.
The FREE method's poor identifiability could be due to there not being enough noninvariance.
I am trying out a measurement invariance analysis for 20 groups with four continuous indicators.
As one might expect, scalar invariance cannot be established. So I switched to the alignment method using ML.
My more general question is: (approximately) how many parameters should be invariant in an alignment analysis to consider the resulting factor means trustworthy? I am a bit worried that although the alignment approach is straightforward to apply, a large share of non-invariant parameters (say, 40%) certainly does not help to reach robust conclusions.
I was under the impression that the alignment test for approximate measurement invariance is largely insensitive to sample size. But I'm not sure this is correct.
Tests of "significance" are generally sensitive to sample size, including the nested Chi-square test. Will this also apply to the alignment test for approximate measurement invariance?
As a practical example, I have now estimated a multi-group factor model across 15 narrowly defined age groups. The factor model has three categorical indicators (so I cannot use estimation=Bayes), and a sample size of more than 150,000. The alignment test in Mplus reports substantial non-invariance.
I also test the model with local structural equation modelling (LSEM), which allows us to inspect how factor loadings and thresholds/intercepts vary across a continuous variable. I then plot the results in R. The plots from LSEM testing for invariance across age as a continuous variable suggest only a moderate degree of non-invariance.
I have found no detailed discussion of the issue of sample size and power in an alignment analysis. Could you please clarify?
Thank you for the clarification concerning sample size sensitivity! Very helpful.
Can Bayesian estimation be used *for alignment* with categorical indicators?
I thought not:
"It [alignment] is available when all variables are continuous or binary with the ML, MLR, MLF, and BAYES estimators and when all variables are ordered categorical (ordinal) with the ML, MLR, and MLF estimators."
Hi, I'm doing an invariance analysis for 15 samples on different items. At the end of the output, I find this warning: "Mplus diagrams are currently not available for Mixture analysis. No diagram output was produced."
1. I would like to know how I can get the diagrams, or whether that is not possible.
2. My study is based on invariance analysis of tests that follow a matrix design. If I want to perform the analysis with all the items of all the booklets at the same time, how should I treat the values omitted on the items, since that missingness is a characteristic of the matrix design of these tests?
I am working with Professor Bruno Zumbo of the UBC. We are conducting research with a complex database (65,000 students, 15 countries, 6 booklets), we have a matrix design of items, sample weights, senatorial weights and replicated weights.
We would like to know
* is it possible to use the alignment method with the sample weights?
Emily Haroz posted on Thursday, December 21, 2017 - 1:16 pm
I tried to post this before, but I am struggling a bit with a problem. We did a multi-group alignment analysis to identify DIF across data from 5 different countries. I then saved the plausible values across 10 iterations. This all worked fine. However, when I look at the range of plausible values they range from -2 to +2. This is not interpretable as our scale for summary scores on our measures ranges from 0-3. Is there any way to restrict the range of plausible values? Or some other way to generate DIF-adjusted factor scores?
Typically the observed measure Y is related to the factor through a measurement equation: Y = nu + lambda*factor + error. If the factor ranges from -2 to 2, the predicted observed measure will range from nu - 2*lambda to nu + 2*lambda. That should match your observed values of 0 to 3.
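A quick numeric illustration of this point (the intercept and loading values here are made up, not taken from the poster's model): with nu = 1.5 and lambda = 0.75, a factor range of [-2, 2] maps exactly onto an observed 0-3 scale:

```python
# Hypothetical measurement parameters for one item.
nu, lam = 1.5, 0.75

# Predicted item value for a given factor score (ignoring the error term).
def predicted(factor):
    return nu + lam * factor

# A factor range of [-2, 2] maps to the observed range
# [nu - 2*lam, nu + 2*lam].
low, high = predicted(-2), predicted(2)
print(low, high)  # 0.0 3.0
```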
Hi, I am confused about my results. I have several results showing an R2 value close to zero, when I understand that I should interpret it in the following way: a value close to 1.00 implies a high degree of invariance, whereas a value close to 0.00 suggests a low degree of invariance.
Let's see the example from my data:

Threshold IT1_2$1
  Weighted Average Value Across Invariant Groups: 0.057
  R-square/Explained variance/Invariance index: 0.008

Loadings for IT1_2
  Weighted Average Value Across Invariant Groups: 0.919
  R-square/Explained variance/Invariance index: 0.203

APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS
Intercepts/Thresholds
  IT1_2$1  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Loadings for F
  IT1_2  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
The R2 values do not correspond exactly to the noninvariance output. Did I do something wrong?
It is true that R2 can be close to zero even for invariant items, although that is somewhat unusual. The best way to understand this is to compute the R2 by hand; see formulas (13) and (14) in https://www.statmodel.com/download/webnotes/webnote18.pdf This can happen, for example, if the power was not sufficient to establish the non-invariance (such as with a small sample size, many missing values for that item, or an unusually large SE due to empty cells in bivariate tables). Or it can happen if the average aligned loading is close to 0.
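A sketch of the idea behind the hand computation, with made-up numbers (the exact definitions and weighting are in formulas (13) and (14) of web note 18; this is only an illustration of the structure, not the Mplus implementation). The R2 measures how much of the cross-group variation in an aligned parameter is explained by cross-group variation in the factor means, so it can be near 0 even when the parameter's absolute differences are small enough to be declared invariant:

```python
# Hypothetical aligned intercepts for one item across 4 groups, the
# weighted-average intercept and loading, and aligned factor means.
nu_g = [0.05, 0.07, 0.04, 0.06]   # aligned intercepts per group
nu, lam = 0.055, 0.9              # average intercept and loading
alpha = [0.0, 0.3, -0.2, 0.1]     # aligned factor means per group

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Residual after removing the part predicted by the factor means
# (the measurement relation implies nu_g is approximately nu + lam*alpha_g).
resid = [nu_g[i] - nu - lam * alpha[i] for i in range(len(nu_g))]

# R2 = 1 - Var(residual) / Var(aligned parameter), truncated at 0:
# here the intercepts barely vary while the factor means do, so the
# ratio exceeds 1 and the reported index is essentially 0.
r2 = max(0.0, 1 - var(resid) / var(nu_g))
print(round(r2, 3))  # 0.0
```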
Marsh et al.'s (2017) alignment-within-CFA (AwC) approach involves using the output from the alignment method as starting values in subsequent models. If there is good evidence for metric invariance, is there any sense or value in estimating a more parsimonious model by using both the alignment method output as starting values and also constraining item intercepts to equivalence across groups?
The main issue in implementing AwC successfully is fixing the 2*m parameters that produce an identified model. It is not the starting values - those help the estimation but are generally optional. Appendix 3, for example, shows the most straightforward approach of fixing the mean and the loading of the first indicator for each factor in every group to the alignment estimate. Once you have this settled, you can add additional constraints to obtain a more parsimonious model - for example, if the alignment solution indicates that the intercept or the loading of an indicator is invariant, you can add that constraint across groups in the AwC model.
Bo Zhang posted on Monday, April 30, 2018 - 8:23 am
I am currently analyzing a big five personality dataset that covered 7 countries in total. I'm interested in country mean comparison. I first fitted a 5-factor CFA model. As expected, model fit was really bad. So I switched to ESEM, which improved the model fit a lot. Therefore, I further tested measurement invariance of the ESEM model across countries. However, MI was not supported even though the configural model fitted really well. I was thinking about using Alignment method. However, it seems that the current version of Alignment method can only be applied to CFA models, not ESEM. I am curious whether there is a way to use alignment method within ESEM.
When performing an alignment analysis on a correlated-factor model, does the factor correlation also get an aligned value for each group? If not, is the factor correlation treated as in a normal CFA model after the loadings/intercepts are aligned? Or is there another procedure for handling the factor correlation in the alignment?
Hi, thank you for your kind and prompt reply. I have one more question.
When factor means are constrained, I thought that ALIGNMENT = FREE has one more estimated parameter (one less degree of freedom) than ALIGNMENT = FIXED. I wonder how this one-parameter difference is handled in estimation and identification in the alignment analysis.
Also, the factor means are determined according to the simplicity function. I could not clearly understand whether this condition (the simplicity function) is sufficient under ALIGNMENT = FIXED.
1) The factor correlation is not altered by the alignment.
2) Both the FREE and FIXED alignments have the same degrees of freedom and fit as the configural model. The free alignment uses a different alignment function, which includes the means in all groups, but that does not affect the fit or the degrees of freedom.
I saw your answer. ( Tihomir Asparouhov posted on Monday, April 21, 2014 - 8:38 am)
" Also the weight is standardized: scaled so the total weight is equal to the total number of cross group comparisons NG*(NG-1)/2 which is 1 in your case. So the actual weight that we use for the tech8 output is w=((NG-1)*NG/2)*w0/sum(w0) where w0=sqrt(N1*N2)"
Can I ask about "sum(w0)"? For example, in a 3-group case with sample sizes 25 (N1), 36 (N2), and 49 (N3), I understood sum(w0) = sqrt(N1*N2) + sqrt(N1*N3) + sqrt(N2*N3) = 5*6 + 5*7 + 6*7. Is this right?
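For reference, that arithmetic can be checked in a few lines, and it does come out as sqrt(N1*N2) + sqrt(N1*N3) + sqrt(N2*N3) = 30 + 35 + 42 = 107. The sketch also shows the standardization: the scaled weights sum to the number of cross-group comparisons, NG*(NG-1)/2 = 3:

```python
from math import sqrt

# Group sample sizes for the 3-group example.
N = {1: 25, 2: 36, 3: 49}
NG = len(N)

# Raw weights w0 for each of the NG*(NG-1)/2 cross-group comparisons.
pairs = [(1, 2), (1, 3), (2, 3)]
w0 = {(g, h): sqrt(N[g] * N[h]) for g, h in pairs}

total_w0 = sum(w0.values())        # 30 + 35 + 42 = 107
n_comparisons = NG * (NG - 1) / 2  # 3

# Standardized weights: w = (NG*(NG-1)/2) * w0 / sum(w0), summing to 3.
w = {p: n_comparisons * w0[p] / total_w0 for p in pairs}
print(total_w0, round(sum(w.values()), 6))  # 107.0 3.0
```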
" Tihomir Asparouhov posted on Wednesday, February 26, 2014 - 5:16 pm
The “Item Parameters In The Alignment Optimization Metric” section contains the alignment results in the metric in which the alignment optimization is performed, i.e., after all indicator variables are standardized and also under constraint (10) from web note 18, ..."
In the optimization metric, the factor variances multiply to 1. For example, the factor variances in the optimization metric for my 3-group analysis are .744, .86, and 1.563, whose product is approximately 1. But the product of the optimization-metric loadings across groups is close to 1, not exactly 1 (in some cases). I analyzed data with equal sample sizes, 4 items, and 3 groups. For example, for item 6 the loadings are .854, 1.065, and 1.02, with product 0.9277. I understood that the aligned loadings of an individual item are standardized so that their product across groups is 1, but it seems this is applied to the factor variances and not to the loadings. I would be grateful if you could give me the specific method or standard used for standardizing the loadings in the optimization metric.
Yes. We usually report the coverage probability (which in simulations should be about 95%). This is the standard way of reporting simulations. Technically, the type I error would be 1 minus the coverage probability, so near 5%.