Multigroup alignment method
 emmanuel bofah posted on Monday, August 12, 2013 - 9:48 am
On page 2 of the Mplus Language Addendum 7.1 it is said that the alignment optimization method consists of three steps:
1. Analysis of a configural model with the same number of factors and same pattern of zero factor loadings in all groups.
2. Alignment optimization of the measurement parameters, factor loadings and intercepts/thresholds according to a
simplicity criterion that favors few non-invariant measurement parameters.
3. Adjustment of the factor means and variances in line with the optimal alignment.
My question is: why does a model with ALIGNMENT = FREE (CONFIGURAL);
give the same estimates as ALIGNMENT = FREE;? I was thinking you need to be able to compare the configural and the scalar models, but their estimates are the same.
 Linda K. Muthen posted on Monday, August 12, 2013 - 9:51 am
The default is CONFIGURAL so the two specifications you show are the same.
 emmanuel bofah posted on Monday, August 12, 2013 - 10:49 am
Why is it not possible to specify:
ALIGNMENT = FREE (METRIC);
ALIGNMENT = FREE (SCALAR);
How can I specify a scalar model with the alignment?
 Linda K. Muthen posted on Monday, August 12, 2013 - 11:34 am
The alignment method avoids using a metric or scalar model. The definition of the alignment method is that it is based on the configural model.
 Peter Halpin posted on Friday, February 14, 2014 - 9:40 am
Hello,

Is alignment implemented for ordered categorical data?
 Bengt O. Muthen posted on Friday, February 14, 2014 - 11:16 am
No, not yet. Only binary.
 Tait Medina posted on Tuesday, February 25, 2014 - 6:19 am
Is it possible to test the equality of regression coefficients across groups under the alignment method? For example, I am interested in regressing the factor on age and testing whether the coefficient for age is different across groups.

Thank you!
 Bengt O. Muthen posted on Tuesday, February 25, 2014 - 12:15 pm
The alignment method currently doesn't handle covariates. But you can divide people into age groups and then you get a*b groups in the alignment run, where a is the number of age groups and b is the number of original groups.
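For example, with a = 3 age groups and b = 5 original groups, the alignment would run over 15 combined groups. A sketch of that setup (the combined variable agebygrp, coded 1-15, is hypothetical):

VARIABLE: CLASSES = c(15);
KNOWNCLASS = c(agebygrp = 1-15);
ANALYSIS: TYPE = MIXTURE;
ESTIMATOR = ML;
ALIGNMENT = FIXED(1);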
 Tait Medina posted on Tuesday, February 25, 2014 - 3:11 pm
Thank you for your reply.

I have a question about the output from Alignment=Fixed. How are the estimates given under MODEL RESULTS connected to the numbers given under “Item Parameters In The Alignment Optimization Metric”? I have read Web Note 18, but am having a hard time mapping the output in these two sections onto the equations.

Also, the R2 measures given in Table 7 in Web Note 18 need to be calculated by hand using Eq 13 and 14. Correct?

Thank you.
 Tihomir Asparouhov posted on Wednesday, February 26, 2014 - 5:16 pm
The “Item Parameters In The Alignment Optimization Metric” section contains the alignment results in the metric in which the alignment optimization is performed, i.e., after all indicator variables are standardized, under constraint (10) from Web Note 18, and with the factor mean fixed to 0 in the corresponding group. These parameters are then rescaled back to the original metric of the variables, and also to a factor variance fixed to 1 in the corresponding group if that is the requested parameterization. There is a way to get these parameters as your final model estimates: standardize your variables using the STANDARDIZE option of the DEFINE command and use the ANALYSIS option METRIC=PRODUCT;
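A minimal sketch of that setup (variable and factor names are hypothetical; the VARIABLE/KNOWNCLASS specification is omitted):

DEFINE: STANDARDIZE y1-y5;
ANALYSIS: TYPE = MIXTURE;
ESTIMATOR = ML;
ALIGNMENT = FIXED;
METRIC = PRODUCT;
MODEL: %OVERALL%
f BY y1-y5;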

R2 will be computed by the upcoming Mplus 7.2, but you can compute it by hand as well.
 Tait Medina posted on Monday, April 07, 2014 - 1:21 pm
Thank you for your response. I have another question about the alignment approach.

I have noticed that when I use ALIGNMENT=FREE I receive a warning that I should switch to ALIGNMENT=FIXED and a reference group (or baseline group) is suggested. How is the suggestion for a baseline group determined? I have played around with using different baseline groups trying to get a feel for this new approach and have noticed that the choice of group impacts the results under the APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS section. Could you provide a bit more insight into this? Thank you.
 Tihomir Asparouhov posted on Monday, April 07, 2014 - 2:50 pm
> I have noticed that when I use ALIGNMENT=FREE I receive a warning that I should switch to ALIGNMENT=FIXED and a reference group (or baseline group) is suggested. How is the suggestion for a baseline group determined?

It is the group with the smallest absolute factor mean value. Presumably fixing that parameter to 0 would lead to the smallest misspecification.


> I have played around with using different baseline groups trying to get a feel for this new approach and have noticed that the choice of group impacts the results under the APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS section. Could you provide a bit more insight into this?

This is explained in Section 5.3 in
http://statmodel.com/examples/webnotes/webnote18.pdf
Fixing the factor mean to 0 in one group can lead to biased results if that mean is not 0.

You can also try using TOLERANCE=0.01. This option seems to yield more robust results and will be the Mplus default in the upcoming Mplus 7.2.
 Tait Medina posted on Tuesday, April 15, 2014 - 9:24 am
I am a little confused by this output (below). The p-values seem to suggest that the item intercept for CHILD is noninvariant in group 3 as compared to groups 1 and 2.

APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS

Intercepts
NEIGHB 1 2 3
FRIEND 1 2 (3)
SOCIAL 1 2 (3)
WORK 1 2 3
MARRY 1 2 3
CHILD 1 2 3

Intercept for CHILD
Group Group Value Value Difference SE P-value
2 1 3.378 3.385 -0.006 0.027 0.817
3 1 3.501 3.385 0.117 0.039 0.003
3 2 3.501 3.378 0.123 0.042 0.003
Approximate Measurement Invariance Holds For Groups:
1 2 3
Weighted Average Value Across Invariant Groups: 3.430

Invariant Group Values, Difference to Average and Significance
Group Value Difference SE P-value
1 3.385 -0.046 0.019 0.019
2 3.378 -0.052 0.022 0.017
3 3.501 0.071 0.023 0.002
 Tihomir Asparouhov posted on Tuesday, April 15, 2014 - 10:05 am
The process is explained in Section 4
http://statmodel.com/examples/webnotes/webnote18.pdf
but to summarize: the invariance is not determined by pairwise comparison but rather like this: compare group 3 against the average of groups 1, 2, and 3. Also, due to multiple testing, we use a smaller p-value, 0.001, as the cutoff value.
 Tait Medina posted on Tuesday, April 15, 2014 - 11:58 am
Is the pairwise comparison portion of the output related to the "first step" of the algorithm used to determine a starting set of invariant groups that is described in Section 4? "We conduct a pairwise test for each pair of groups and we "connect" two groups if the p-value obtained by the pairwise comparison test is bigger than 0.01." (pg. 15).

Finally, when dichotomous outcome variables are used, how are scale factors/residual variances handled? Are they fixed to 1 in all groups?

Thank you.
 Tihomir Asparouhov posted on Tuesday, April 15, 2014 - 3:06 pm
Yes on the first question.

The second question: also yes. We use the theta parameterization, where all residual variances are fixed to 1 during the configural model estimation. After that, the alignment is done without any consideration of the residual variances, i.e., the alignment is for the intercepts and loadings only and does not use residual variances in the computations.

I have to also correct my message from Feb 26. To get the
"Item Parameters In The Alignment Optimization Metric"
as your final parameter estimates you have to use a linear scale transformation for each indicator variable Y like this
DEFINE: Y = (Y - a)/b;
where a and b are obtained from the configural model estimates as follows

a=average Y intercept across the groups
b=average Y loading across the groups
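For instance (hypothetical numbers): if the configural intercepts of y1 average 2.5 across groups and its configural loadings average 0.8, the transformation would be

DEFINE: y1 = (y1 - 2.5)/0.8;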
 Tait Medina posted on Friday, April 18, 2014 - 12:51 pm
Thank you, that makes sense.

I have a follow-up question about Eq. 9 in Webnote 18. I am trying to make sure I understand Eq. 9 by plugging in the estimates taken from the output (using ML, Alignment=Fixed) using my own data. For now, I am using 2 groups. The loading for item 1 is .585 in group 1 and .588 in group 2. Taking the difference of these loadings gives me -.003. Scaling this by the CLF (using the small number .0001) gives me f(x)= .103. The Contribution to the Fit Function for this item, given in the output under Loadings, is -.316. I am not sure how to arrive at that number. The sample size for group 1 is 698 and for group 2 it is 949. The sqrt(N1*N2) is therefore 813.881. Weighting f(x) by 813.881 gives me 83.512. What am I misunderstanding about Eq. 9?
 Tihomir Asparouhov posted on Monday, April 21, 2014 - 8:38 am
The loss function that is reported in the output has a negative sign. See footnote 2 on page 10. Also, the value 0.01 is used, not 0.0001. Also, the weight is standardized: scaled so the total weight is equal to the total number of cross-group comparisons NG*(NG-1)/2, which is 1 in your case. So the actual weight that we use for the tech8 output is
w=((NG-1)*NG/2)*w0/sum(w0)
where
w0=sqrt(N1*N2)
and NG is the number of groups.
The weight standardization of course doesn't affect the optimization since it is a constant multiple. It is done so that all weights are 1 when the groups are of equal sizes. In your case the weight is 1 because there is just one cross-group comparison. Thus the loss function for that loading is
-sqrt(sqrt(0.003^2 + 0.01)) = -0.316
 Tait Medina posted on Monday, April 28, 2014 - 6:54 am
Dr. Asparouhov, thank you so much for taking the time to address my questions. It has been tremendously helpful!

I have a general question about the Alignment approach. In many applications of multiple group factor analysis with continuous outcome variables, you will see a sequence of progressively more restrictive invariance tests performed, with a distinction made between metric invariance (invariance of factor loadings) and scalar invariance (invariance of factor loadings and item intercepts). Only once metric invariance is found to be a tenable hypothesis is the hypothesis of scalar invariance considered. In some ways the focus on testing for metric invariance first and then scalar invariance seems unnecessary to me when the goal is to compare factor means across groups. I wonder if this is perhaps a bit of a historical artifact stemming from the fact that EFA was based on correlation matrices, and then CFA expanded this to covariance matrices, and then Joreskog (1971) expanded CFA to multiple groups, and then Sorbom (1974) expanded multiple group CFA to include a mean structure. I am wondering if the Alignment approach makes this hierarchical distinction between metric and scalar invariance?
 Tihomir Asparouhov posted on Monday, April 28, 2014 - 7:39 pm
With alignment you can compute factor means and compare them across groups even when full loading invariance is not fulfilled. As long as loading invariance and intercept invariance are violated only to a minor extent, the factor means will be estimated well; see the simulation studies in Section 5
http://statmodel.com/examples/webnotes/webnote18.pdf
 Bengt O. Muthen posted on Tuesday, April 29, 2014 - 6:12 am
I agree that testing metric first is a bit of a historical artifact. Alignment does not make the distinction between metric and scalar invariance.
 Jessica Kay Flake posted on Wednesday, June 04, 2014 - 9:04 am
I know this has been asked before, but given there is a recent, new version out, I wanted to confirm. Can ordered categorical data be handled by the alignment in version 7.2?
 Tihomir Asparouhov posted on Wednesday, June 04, 2014 - 6:50 pm
Not yet. You can use BSEM with the Diff priors as an alternative.
 Joana posted on Thursday, July 03, 2014 - 7:33 am
Hello,

I’m testing measurement invariance of a scale in two groups because I need to compare means between them. For this, based on Webnote 18 and on the article by van de Schoot et al. (2013), I decided to use the alignment = fixed (bsem) approach.
But I have some difficulties to understand the output. More specifically on the following issues:

1. I got this message: USE THE FBITERATIONS OPTION TO INCREASE THE NUMBER OF ITERATIONS BY A FACTOR OF AT LEAST TWO TO CHECK CONVERGENCE AND THAT THE PSR VALUE DOES NOT INCREASE. I added FBITERATIONS = 1000, 5000, and 20000 to the model (in 3 different runs) and I always get the same message. Does this mean that the model doesn't converge and I can't continue with the invariance analysis?

2. The results from the alignment output indicate that the intercepts of 3 items (out of 14) are noninvariant between the 2 groups. How do I know the model has a good fit? Can I say that the measure is approximately invariant between the 2 groups? How can I calculate the factor scores to compare factor means between the 2 groups?

Thank you for help.

Best Regards

Joana Carvalho
 Bengt O. Muthen posted on Thursday, July 03, 2014 - 4:12 pm
1. That is an automatic message that always comes out and does not reflect on the quality of your run. You should check that the PSR is 1 for the different FBITERATIONS runs and that the results are approximately the same.

2. When a minority of the measurement parameters are non-invariant the factor means and variances for the different groups are typically trustworthy. As our website handout for the May UCONN M3 workshop shows you can do a Monte Carlo study to check that the factor means and variances are dependable. No need to compute factor scores to compare the factor means.
 Joana posted on Wednesday, July 09, 2014 - 2:05 pm
Thanks very much for your help!

Before the mean comparison, I tried to study the quality of the alignment results and ran a Monte Carlo simulation according to the following papers: IRT studies of many groups: the alignment method (version 2, July 2014) and New Methods for the Study of Measurement Invariance with Many Groups (October 2013), and I have one more doubt:

- After running the model I get the following warning message: "All variables are uncorrelated with all other variables within class" and I can't figure out what I did wrong on input specification... Is this the reason why I don't get the correlations results to evaluate the quality of alignment results?

Thanks again for your help.

Best regards
Joana


Here is an excerpt of the input:

Montecarlo: NAMES ARE mhc1-mhc14;
ngroups=2;
nobservations=2(2000);
Nreps=50;

ANALYSIS:
type= mixture;
ESTIMATOR=ml;
alignment=fixed(1);
processors=8;

Model population:

%OVERALL%

f1 BY mhc1-mhc3*1;
f2 BY mhc4-mhc9*1;
f3 BY mhc10-mhc14*1;

%g#1%

f1 BY mhc1*0.67816 ;
f1 BY mhc2*0.67552 ;
 Bengt O. Muthen posted on Wednesday, July 09, 2014 - 3:58 pm
Perhaps you are not specifying factor variances.
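For instance, the group-specific parts of MODEL POPULATION also need the factor variances, means, and covariances; a hypothetical sketch for the first group:

%g#1%
f1-f3*1;        ! factor variances
[f1-f3*0];      ! factor means
f1 WITH f2*0.3; ! factor covariances
f1 WITH f3*0.3;
f2 WITH f3*0.3;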
 Julia Higdon posted on Saturday, August 30, 2014 - 8:55 pm
I have used the alignment optimization method and want to use the results in a subsequent SEM analysis.

Is it possible in an alignment optimization analysis to save the factor scores and then use those factor scores in a subsequent analysis? As in:

Analysis:
Type = mixture;
estimator = bayes;
alignment = fixed(1);
thin=50;
fbiterations = 5000;

model: ...

savedata:
file is fscores.txt;
save = fscores(10);

Thank you
 Bengt O. Muthen posted on Sunday, August 31, 2014 - 10:58 am
Yes, this is possible.
 Tait Medina posted on Sunday, December 14, 2014 - 10:37 am
I have noticed that when I have few groups (<10), ALIGNMENT=FREE tends not to work and I have to move to ALIGNMENT=FIXED. However, when I have more groups (15 or more), ALIGNMENT=FREE does tend to work. Are there any characteristics of the data that you would say tend to support ALIGNMENT=FREE? Have you seen this in regard to increases in group number?
 Tihomir Asparouhov posted on Thursday, December 18, 2014 - 10:19 am
For ALIGNMENT=FREE to work well you need a certain level of non-invariance. The more non-invariance there is the better ALIGNMENT=FREE will be compared to ALIGNMENT=FIXED. The more groups you have the more likely it is that enough non-invariance will be accumulated to warrant ALIGNMENT=FREE.
 Stephus Daus posted on Friday, January 09, 2015 - 4:16 am
With 7.3 and the ordinal alignment method: I have two CFA factors specified, and in one of them there are a few (necessary) items with 3 categories; the rest have 4 categories. I notice that Mplus considers these 3-category items as 4-category items (with the last category empty), judging by the "Univariate proportions and counts" and the non-identification errors. Is this a bug/not yet implemented? Or is it not yet possible for this method? Any legitimate work-arounds? Collapsing categories across all items is problematic here...
Cheers
 Linda K. Muthen posted on Friday, January 09, 2015 - 9:32 am
Please send the input, data, and output to support@statmodel.com.
 Tait Medina posted on Tuesday, February 10, 2015 - 8:30 am
Do you have any current recommendations for how to test if the factor variances estimated using the alignment method are significantly different across any two groups?

Thank you.
 Tihomir Asparouhov posted on Tuesday, February 10, 2015 - 12:18 pm
You can use MODEL TEST or MODEL CONSTRAINT to look at the difference between the variances.
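For example, labeling the factor variances in two of the known classes and testing their difference (class and label names are hypothetical):

MODEL: %OVERALL%
f BY y1-y5;
%g#1%
f (v1);
%g#2%
f (v2);
MODEL TEST: 0 = v1 - v2;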
 Jessica Kay Flake posted on Tuesday, March 03, 2015 - 4:34 pm
Hello--
A few questions about ordered categorical alignment.

In the SEM paper you state the parameters are, by default, reported in a standardized metric in the MODEL RESULTS part of the output. Then, is it appropriate to interpret thresholds from a binary or ordered categorical alignment as z-score units and the loadings as standardized loadings?

Related: above, Dr. Asparouhov said the theta parameterization was used for specifying the configural model, but not when minimizing the loss function with respect to the intercepts and loadings. How does the theta parameterization in the earlier part of the estimation impact the results reported in the MODEL RESULTS section?
 Tihomir Asparouhov posted on Wednesday, March 04, 2015 - 10:00 am
The standardized results can be obtained using OUTPUT:STANDARDIZED command.

The theta parametrization is the only one available at this time with the ordered categorical variables. The MODEL RESULTS are reported in the theta parametrization.
 Natalia Dmitrieva posted on Wednesday, March 25, 2015 - 4:35 pm
I have a dataset with 835 participants and 45 continuous variables. Participants came from 7 different studies, which range from 40 to 346 participants. To examine invariance (in factor loadings, intercepts, etc) across the 7 samples, I am using multiple group factor analysis with alignment=fixed. My model terminates normally, however, I get the following warning messages:

WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE
OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH.
AN ADJUSTMENT TO THE ESTIMATION OF THE INFORMATION MATRIX HAS BEEN MADE.
THE CONDITION NUMBER IS -0.158D+02.
THE PROBLEM MAY ALSO BE RESOLVED BY DECREASING THE VALUE OF THE
MCONVERGENCE OR LOGCRITERION OPTIONS OR BY CHANGING THE STARTING VALUES
OR BY USING THE MLF ESTIMATOR.

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE
TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE
FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING
VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE
CONDITION NUMBER IS -0.315D-17. PROBLEM INVOLVING PARAMETER 117.

NOTE THAT THE NUMBER OF PARAMETERS IS GREATER THAN THE SAMPLE SIZE.

An earlier reply stated that the first warning message may be disregarded. May the other 2 warning messages be disregarded as well?
 Bengt O. Muthen posted on Wednesday, March 25, 2015 - 4:44 pm
We need to see the full output - please send to support@statmodel.com along with your license number.
 Jessica Kay Flake posted on Friday, March 27, 2015 - 10:20 am
Hello Dr. Asparohouv,
Thank you for your response about the parameterization of estimates in the model results section on 3/3. I have been thinking more about this and have a follow up question I hope you can answer.

I am doing a simulation study on the polytomous alignment for my dissertation. For my MODEL POPULATION, I am specifying values taken from a real, single-group data analysis. I want to make sure that there is no mismatch between the metric of the results from the real data analysis and what I input for the alignment simulation. To ensure this, should I complete the real data analysis using PARAMETERIZATION=THETA and then input those results in my MODEL POPULATION for simulating in the alignment framework? I am just trying to simulate data that are characteristic of the real factor analysis I did with one group. Previously I completed a single-group polytomous factor model with the default, DELTA. Does this create a mismatch? In other words, when I input the results from the real single-group polytomous factor model estimated with DELTA into MODEL POPULATION for the alignment, are they interpreted as theta-parameterized values?

I apologize for my long question, I hope this is clear.
 Tihomir Asparouhov posted on Friday, March 27, 2015 - 4:45 pm
It is a mismatch. You should analyze the real data using PARAMETERIZATION=THETA and use those values in model population. Alternatively you can redo the real data analysis using the ML estimator (which is based on the theta parameterization as well).
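For example, the single-group categorical run could be redone along these lines (a sketch):

ANALYSIS: ESTIMATOR = WLSMV;
PARAMETERIZATION = THETA;
! or simply ESTIMATOR = ML; which is also theta-based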
 Jessica Kay Flake posted on Saturday, March 28, 2015 - 12:03 pm
Dr. Asparouhov,
Thank you for getting back to me so quickly. Very helpful to have confirmation on this issue, will rerun!
 Johannes Bauer posted on Wednesday, May 20, 2015 - 12:46 pm
I have two questions on the "FACTOR MEAN COMPARISON" part of the output. This output reports results using a 5% significance level.

1) I am wondering whether there is a way to apply a correction for multiple testing. Can e.g. the exact p-values be requested to apply Bonferroni correction?

2) Since the output is labeled "Groups With Significantly Smaller Factor Mean", is the applied test one-tailed or two-tailed?

Many thanks
Johannes
 Tihomir Asparouhov posted on Wednesday, May 20, 2015 - 5:07 pm
1) You can use code like this to get the exact p-value

model:
%OVERALL%
f by y1-y5;
%c#2%
[f] (m2);
%c#3%
[f] (m3);

model constraint:
new(md); md = m2 - m3;

The p-value reported for md is the exact value.

2) It is two-tailed.
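With k such factor-mean comparisons, the exact p-values from MODEL CONSTRAINT can then be compared against a Bonferroni-adjusted threshold, e.g., 0.05/k.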
 Rafael Goldszmidt posted on Tuesday, May 26, 2015 - 2:41 pm
I am running a multigroup alignment analysis. Is it possible to obtain fit indexes (RMSEA, etc) for the final model?
 Tihomir Asparouhov posted on Tuesday, May 26, 2015 - 3:13 pm
You can get that using model=configural command. This is a sample input.

VARIABLE: NAMES=y1-y5;
ANALYSIS: MODEL = CONFIGURAL;
MODEL: f by y1-y5;
 Rafael Goldszmidt posted on Tuesday, June 02, 2015 - 5:09 am
Dear Prof Asparouhov,

Thanks for your fast reply! With this model, I get the fit indexes for the model with no equality constraints on loadings or intercepts.

The alignment model, on the other hand establishes some level of invariance. Would it be possible to get the fit indexes for the model considering the constraints defined by the alignment model?
 Tihomir Asparouhov posted on Tuesday, June 02, 2015 - 8:23 am
Rafael - the fit of the alignment model is the same as the fit of the configural model. You can see that they have the same log-likelihood value. There is no penalty for the level of invariance that the alignment provides - it is kind of the maximum invariance you can get for "free"/with no penalty in fit.
 Mircea Comsa posted on Thursday, September 24, 2015 - 11:41 pm
Hi,
Can I use alignment in conjunction with a bifactor model? According to your paper (2014, MG factor analysis alignment) it is not possible yet. Is there an alternative? Thank you.
 Tihomir Asparouhov posted on Friday, September 25, 2015 - 9:03 am
It is not available yet.
 DC posted on Wednesday, October 07, 2015 - 1:14 pm
In running a model with the alignment method, I am getting the following message:
"THE CHI-SQUARE TEST CANNOT BE COMPUTED BECAUSE THE FREQUENCY TABLE FOR THE
LATENT CLASS INDICATOR MODEL PART IS TOO LARGE."

Could you please explain what this means?

Does this affect the interpretation of the results?

Is there a way to fix the issue and obtain the chi-square test?
 Tihomir Asparouhov posted on Wednesday, October 07, 2015 - 5:53 pm
The frequency table of the joint distribution of the categorical variables is too large and the Pearson chi-square
(https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)
can not be computed. When the table is so large that it goes over the computational limits, the chi-square asymptotics break down anyway, so I would not worry about it. With 30 binary variables, for example, the joint distribution of the binary variables has 2^30 cells, i.e., over a billion, so the chi-square would have over a billion degrees of freedom. This is not the chi-square that the WLSMV estimator computes.

There is no problem with the reported results or their interpretation.
 Alvin  posted on Sunday, October 11, 2015 - 3:54 pm
Hi Bengt, as mentioned previously, it is not yet possible to include covariates using the alignment method. As I understand it, it is possible to use factor scores derived from the analysis for further analysis, but does factor score indeterminacy impact this type of analysis? And if so, how? Thanks
 Tihomir Asparouhov posted on Monday, October 12, 2015 - 8:01 pm
There is a paper by Herbert Marsh that I would recommend. Please contact him regarding the AwC (alignment with covariates) paper.

If you use the factor scores approach you should use the plausible values. See Section 4.2

https://www.statmodel.com/download/Plausible.pdf
 Alvin  posted on Monday, October 12, 2015 - 8:35 pm
Many thanks, Tihomir. Can you send me the link to the paper please? Best, Alvin
 Tihomir Asparouhov posted on Wednesday, October 14, 2015 - 3:32 pm
I don't think it is online yet. It is under review.
 Grace Icenogle posted on Monday, December 07, 2015 - 2:35 pm
Hello Drs. Muthen and Asparouhov,

I am hoping for some guidance in determining the quality of my results from an 11-group (11 countries) model (n=3000). I have a self-report measure with 6 binary indicators. I am struggling to integrate the results.

Under APPROXIMATE MEASUREMENT INVARIANCE, less than 25% of parameters are non-invariant:
Intercepts/Thresholds
I1$1 (1) 3 5 6 7 8 9 12 13 16 17
I2$1 1 3 5 6 (7) (8) (9) 12 13 16 17
I3$1 (1) 3 5 6 7 8 9 12 13 16 17
I4$1 (1) 3 5 6 7 8 9 12 (13) 16 17
I5$1 1 3 5 6 7 (8) 9 12 13 16 17
I6$1 1 3 5 6 7 8 9 12 13 16 17
Loadings for IMP
I1 thru I6 indicate invariance for all groups.

However, the R-square values seem low compared to the example in Asparouhov & Muthen (2014).

Intercepts:
Fit R-Sq
I1 -31.66 0.08
I2 -35.41 0.18
I3 -31.20 0.00
I4 -43.43 0.07
I5 -38.88 0.27
I6 -49.92 0.00
Loadings:
Fit R-Sq
I1 -32.75 0.10
I2 -36.84 0.25
I3 -35.97 0.00
I4 -32.18 0.42
I5 -30.61 0.09
I6 -46.05 0.16

"Fit" = fit function contribution.

Could you assist in interpreting these findings (which seem contradictory)?

Thank you so much for your time.

Kindly,
Grace
 Tihomir Asparouhov posted on Monday, December 07, 2015 - 4:16 pm
You can compute the R2 by hand using the results in TECH8. Most likely the issue is due to empty cells in certain groups, very small variation in the factor mean and variance across groups, or a very unbalanced group design. If you still don't see why this happens, send the example to support@statmodel.com.
 seulki jang posted on Friday, February 12, 2016 - 8:02 am
Hi Dr. Muthen and Dr. Asparouhov,

Hope you are having a good day. I have a question about the alignment optimization variance calculation. In the alignment optimization fit table, there are three indices (fit function contribution, R-squared, and variance). In the output, I see fit function contribution values and R-squared values, but not variance values. How do we calculate the variance for each item's factor loading and intercept?
Thank you!

Best regards,
Seulki
 Tihomir Asparouhov posted on Friday, February 12, 2016 - 11:11 pm
It is the sample variance for the aligned parameter across the groups. It is not in the output yet but can be computed manually.
 Marc Wigley posted on Thursday, April 14, 2016 - 12:11 pm
Hi
Running multiple group alignment for 27 countries I get the error
'The number of variable/value pairs do not match the number of classes
for class variable C'.

I've specified
classes = c(27);
knownclass =c(country);

Is that correct?

Many thanks
 Bengt O. Muthen posted on Thursday, April 14, 2016 - 6:16 pm
Please send input, output, and data to Support along with your license number.
 aaron k Haslam posted on Tuesday, June 28, 2016 - 6:31 am
Regarding the posting dated May 20, 2015 - 5:07 pm, in which Dr. Asparouhov responded to a question regarding exact p-values: I would like to know how the standard errors are calculated for these pairwise comparisons.

Thank you,

Aaron
 Linda K. Muthen posted on Tuesday, June 28, 2016 - 1:46 pm
The standard errors are calculated using the Delta method.
 Jan-Benedict Steenkamp posted on Monday, August 01, 2016 - 2:18 pm
I am trying out the alignment method and am a little uncertain how to write the code. Could you make the code for the 26-country study published in Struct Eq Modeling available? That may be sufficient to figure out what I do wrong.

JB Steenkamp
 Bengt O. Muthen posted on Monday, August 01, 2016 - 4:22 pm
Will send them to you.
 John Gelissen posted on Thursday, December 22, 2016 - 3:40 am
I have run the alignment method on a set of 7 ordinal indicators (each having 4 categories) for 1 latent variable for 35 countries. When I look at the alignment output, I see, for example, that it is reported that approximate measurement invariance holds for the first item threshold for 34 countries, whereas approximate measurement invariance in this threshold does not hold for only one country. Upon inspection of the reported R-squared measurement invariance index, I find a value of only .019 for this threshold. I am trying to understand how the latter value can be so extremely low, given the other result. Any suggestion?

thanks, John
 Tihomir Asparouhov posted on Friday, December 23, 2016 - 9:55 am
There could be several different reasons.

1. The one threshold that is non-invariant is large (due to non-occurrence of a particular category in one group) and that accounts for the majority of the variability in the threshold.

2. The factor mean variability is small

3. The loading is small

4. It can also be a combination of the above and large standard errors that lead to not being able to establish significant non-invariance

It is quite straightforward to compute this by hand and figure out exactly why this happens. Computing the variances across groups:

R2 = Var(loading * factor mean) / Var(threshold - loading * factor mean)
 Alvin Tay posted on Friday, January 06, 2017 - 4:01 am
The residuals of the ordinal items in my 2-factor alignment model, based on a sample of 8000+ across 8 districts, seem a bit odd. I understand that standardized residuals are at best indicative of model misspecification, but in the case of an alignment analysis, what is the best approach to compare the fit of different models? Thanks, Alvin
 Tihomir Asparouhov posted on Friday, January 06, 2017 - 5:39 pm
Alignment does not have an effect on fit. The fit of the alignment is the same as that of the configural model. Any model fit issue should be addressed prior to alignment by running a configural model and verifying that 2 factors are enough in each group.

Substantial model misfit can be addressed using an additional factor or by switching to Bayes where residual correlations for categorical variables can be included in the model.
 Jian-Bin Li posted on Monday, March 06, 2017 - 10:57 am
Hello,

As stated in the seminal paper about alignment analysis (Asparouhov & Muthen, 2014), this analysis starts from the configural model. However, I am not sure whether I need to test and report the model fit of each group as well as of the configural model before using alignment analysis. If the configural model does not fit well, can I still use the alignment to compare the factor means across groups? Thank you.
 Bengt O. Muthen posted on Monday, March 06, 2017 - 5:51 pm
Q1. That would be good.

Q2. No; the configural model needs to fit ok.
 Jian-Bin Li posted on Thursday, March 23, 2017 - 7:15 am
Thank you Dr Muthen.

I have couple of follow-up questions:

(1) can version 7.3 deal with categorical data in the alignment analysis? I tried to run the model and there was no warning message.

(2) when I tried to run the model (i.e., a comparison on 21 items rated on a 5-point Likert scale across two countries), I treated the 21 items as categorical data. However the output stated that the estimator is MLR instead of WLSMV. Why's that?

(3) when reporting the results, do I need to report the (non)invariance of all thresholds of each item as follows?
LAY1$1 1 2
LAY1$2 1 2
LAY1$3 1 2
LAY1$4 1 2
LAY2$1 (1) (2)
LAY2$2 1 2
LAY2$3 1 2
LAY2$4 1 2

(4) my case concerns only two countries. That means if the loading or threshold shows non-invariance for one country, then it is also noninvariant for the other country. In this case, how to calculate the percentage of non-invariance? Take the 8 thresholds listed above as an example, do I calculate the percentage as 1/16 or 2/16?

I am sorry for my long questions and hope I have made my points clearly. Thank you very much in advance.
 Jian-Bin Li posted on Friday, March 24, 2017 - 6:30 am
Thank you Prof. Muthen. I have two follow-up questions:

(1) I tried to compare the latent factor mean across 2 countries in v7.31 (Mac). The latent factor has 21 indicators (items) rated on a 5-point Likert scale. I treated all the items as categorical. The output showed that the estimator is MLR instead of WLSMV, which is used for categorical variables. I am wondering if I missed something in the syntax?

(2) Since the variables were treated categorical, each item has 4 thresholds. Does it mean that I need to report the (non-)invariance for all the 84 (4*21) thresholds?

Thank you.
 Tihomir Asparouhov posted on Monday, March 27, 2017 - 10:52 pm
(1) Yes
(2) see bottom of page 612 in the user's guide
(3) yes
(4) 2/16

(1) MLR is available for categorical
(2) yes
 Jian-Bin Li posted on Friday, March 31, 2017 - 4:58 am
Thank you. Now I am clear. Just one last question: I notice that a limit of 25% non-invariance is a rough rule of thumb. What if the amount of non-invariance exceeds 25%? Is there any solution or guideline that helps address this issue, or should the results simply be abandoned? Thank you.
 Bengt O. Muthen posted on Saturday, April 01, 2017 - 4:39 pm
Alignment is probably better than alternatives also in that case. But you could do a Monte Carlo simulation study to find out.
 Philipp Sischka posted on Thursday, June 01, 2017 - 5:58 am
Hello,

I have a question regarding the FIXED alignment method.

I have run the alignment method on a set of 5 indicators (each having 6 categories) for 1 latent variable for 33 countries.

The initial FREE alignment model provide the warning that the model may be poorly identified and suggests to switch to the FIXED method with group 17 as baseline group.
But neither this group nor any other group has a mean close to 0 (as recommended in Asparouhov & Muthén, 2014). Group 17 in fact has a latent mean of -0.764.

My question is: under these circumstances, does the FIXED method provide trustworthy parameter estimates?

Furthermore in the paper from Marsh et al. (2017) it is stated that
"For the present purposes we used the FIXED option available in the Mplus CFA-MI.AL model, in which the latent factor mean and variance of one arbitrarily selected group (in this case the first group, Australia) were fixed to 0 and 1, respectively".
However, as far as I understood, the choice of group selection is not arbitrary, but depends on the size of the latent mean which is closest to zero...

Could you provide a bit more insight into this? Thank you very much in advance.
 Tihomir Asparouhov posted on Thursday, June 01, 2017 - 6:29 pm
The recommended group is the one with mean closest to 0 by absolute value. I don't have any reason to doubt the conclusion of the estimation, i.e., that the FIXED method is better than the FREE method in overall terms of bias and standard error considerations.

The FREE method's poor identifiability could be due to not enough noninvariance.
 Tino Nsenene posted on Wednesday, June 07, 2017 - 9:41 pm
Hello,

I am trying out a measurement invariance analysis for 20 groups with four continuous indicators.

As one might expect, scalar invariance cannot be established. So I switched to the alignment method using ML.

My more general question is: (approximately) how many parameters should be invariant in alignment analyses to consider the resulting factor means trustworthy? I am a bit worried that although the alignment approach works in a straightforward way, a large share of non-invariant parameters (say, 40%) certainly does not help to reach robust conclusions?

Many thanks for your response
Tino
 Bengt O. Muthen posted on Saturday, June 10, 2017 - 12:10 pm
The assumption of the alignment method is that a majority of the parameters should be invariant and a minority of the
parameters should be non-invariant.
 Christopher Bratt posted on Sunday, October 08, 2017 - 1:15 am
Hi,

I was under the impression that the alignment test for approximate measurement invariance is largely insensitive to sample size. But I'm not sure this is correct.

Tests of "significance" are generally sensitive to sample size, including the nested Chi-square test. Will this also apply to the alignment test for approximate measurement invariance?

As a practical example, I have now estimated a multi-group factor model across 15 narrowly defined age groups. The factor model has three categorical indicators (so I cannot use ESTIMATOR=BAYES) and a sample size of more than 150,000. The alignment test in Mplus reports substantial non-invariance.

I also test the model with local structural equation modelling (LSEM), which allows us to inspect how factor loadings and thresholds/intercepts vary across a continuous variable. I then plot the results in R. The plots from LSEM testing for invariance across age as a continuous variable suggest only a moderate degree of non-invariance.

I found no detailed discussion of the issue of sample size and power in an alignment analysis. Could you please clarify?

Best,
Christopher Bratt
 Bengt O. Muthen posted on Monday, October 09, 2017 - 12:28 pm
Alignment uses chi-square testing of the configural model and is therefore sensitive to sample size, although less so than the metric or scalar models.

I assume you have more than 3 indicators so the configural model is testable. Bayes can handle categorical indicators.
 Christopher Bratt posted on Monday, October 09, 2017 - 2:13 pm
Thank you for the clarification concerning sample size sensitivity! Very helpful.

Can Bayesian estimation be used *for alignment* with categorical indicators?

I thought not:

"It [alignment] is available when all variables are continuous or binary with the ML, MLR, MLF, and BAYES estimators and when all variables are ordered categorical (ordinal) with the ML, MLR, and MLF estimators."

Were you referring to ordinary CFA with Bayes?
 Christopher Bratt posted on Monday, October 09, 2017 - 2:17 pm
PS. I only have three indicators in the model. So the question was about approximate measurement invariance for these three indicators.
 Bengt O. Muthen posted on Monday, October 09, 2017 - 4:17 pm
You are right that Bayes alignment can't handle ordinal variables, but it can handle binary ones.

With only 3 indicators I would think the chi-square is zero because each group's model is just-identified. Alignment does not impose restrictions across groups in its first configural step.
 Pamela Woitschach posted on Thursday, November 30, 2017 - 4:31 pm
Hi, I'm doing an invariance analysis for 15 samples on different items. At the end of the output, I find this warning:
Mplus diagrams are currently not available for Mixture analysis.
No diagram output was produced.
1 - I would like to know how I can get the diagrams, or whether that is not possible.
2 - My study is based on the analysis of invariance in tests that follow a matrix design. If I want to perform the analysis with all the items of all the booklets at the same time, how should I treat the omitted values in the items, given that such omissions are a characteristic of the matrix design of these tests?
 Bengt O. Muthen posted on Friday, December 01, 2017 - 3:00 pm
1 - Not possible.

2 - Use multiple-group analysis where each group corresponds to each unique test form that is used.
 Pamela Woitschach posted on Friday, December 15, 2017 - 5:21 pm
I am working with Professor Bruno Zumbo of the UBC. We are conducting research with a complex database (65,000 students, 15 countries, 6 booklets), we have a matrix design of items, sample weights, senatorial weights and replicated weights.

We would like to know

* is it possible to use the alignment method with the sample weights?
 Bengt O. Muthen posted on Friday, December 15, 2017 - 5:25 pm
Yes, you can use for example:

Variable:
stratification = stratum7;
weight = housewgt;
cluster = school10;

Analysis:
type = complex mixture;
 Emily Haroz posted on Thursday, December 21, 2017 - 1:16 pm
Hi there,

I tried to post this before, but I am struggling a bit with a problem. We did a multi-group alignment analysis to identify DIF across data from 5 different countries. I then saved the plausible values across 10 imputations. This all worked fine. However, when I look at the range of the plausible values, they range from -2 to +2. This is not interpretable, as the scale for summary scores on our measure ranges from 0-3. Is there any way to restrict the range of plausible values? Or some other way to generate DIF-adjusted factor scores?

Thanks so much!
 Tihomir Asparouhov posted on Thursday, December 21, 2017 - 3:28 pm
Typically the observed measure Y is related to the factor through a measurement equation:

Y = nu + lambda*factor + error

If the factor ranges from -2 to 2, the predicted observed measure will range from nu - 2*lambda to nu + 2*lambda. That should match your observed values of 0 to 3.
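For instance, with hypothetical values nu = 1.5 and lambda = 0.75: nu - 2*lambda = 0 and nu + 2*lambda = 3, reproducing a 0-3 scale.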
 Pamela Woitschach posted on Monday, February 05, 2018 - 5:27 pm
Hi, I am confused about my results. I have several results that show a value of R2 close to zero, when I understand that I should interpret it in the following way: a value close to 1.00 implies a high degree of invariance, whereas a value close to 0.0 suggests a low degree of invariance.

Categorical= IT1_1-IT1_30;
USEVARIABLES = IT1_1-IT1_30;
STRATIFICATION = country1;
WEIGHT= swgc;
CLASSES= c1(16);
CLUSTER= ID_cluster;
KNOWNCLASS = c1(country1= 1-16);
ANALYSIS: TYPE = COMPLEX MIXTURE;
ESTIMATOR = MLR;
PROCESSORS = 2;
ALIGNMENT = FREE;
ALGORITHM=INTEGRATION;
MODEL: %OVERALL%
f BY IT1_1-IT1_30;
OUTPUT: TECH1 TECH8 ALIGN;
PLOT: TYPE = PLOT2;

Let's see the example from my data:
Threshold IT1_2$1
Weighted Average Value Across Invariant Groups: 0.057
R-square/Explained variance/Invariance index: 0.008
Loadings for IT1_2
Weighted Average Value Across Invariant Groups: 0.919
R-square/Explained variance/Invariance index: 0.203
APPROXIMATE MEASUREMENT INVARIANCE (NONINVARIANCE) FOR GROUPS
Intercepts/Thresholds
IT1_2$1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Loadings for F
IT1_2 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

The value of R2 does not correspond exactly with the noninvariance output. Did I do something wrong?

regards
 Tihomir Asparouhov posted on Tuesday, February 06, 2018 - 8:22 pm
It is true that R2 can be close to zero even for invariant items, even though that is somewhat unusual. The best way to understand this is to compute the R2 by hand; see formulas (13) and (14) in
https://www.statmodel.com/download/webnotes/webnote18.pdf
This can happen, for example, if the power was not sufficient to establish the non-invariance (such as a small sample size, many missing values for that item, or an unusually large SE due to empty cells in bivariate tables). Or it can happen if the average aligned loading is close to 0.
 Joe Wasserman posted on Monday, March 05, 2018 - 3:26 pm
Marsh et al.'s (2017) alignment-within-CFA (AwC) approach involves using the output from the alignment method as starting values in subsequent models. If there is good evidence for metric invariance, is there any sense or value in estimating a more parsimonious model by using both the alignment method output as starting values and also constraining item intercepts to equivalence across groups?
 Tihomir Asparouhov posted on Tuesday, March 06, 2018 - 4:48 pm
Appendix 2
https://www.statmodel.com/download/Marsh%20et%20al%202016%20Alignment%20pre-pub%20version%2010AUG2016.pdf
gives the summary for the AwC method. As point 1 states - if you already have scalar invariance you do not need alignment or AwC.

The main issue in implementing AwC successfully is fixing the 2*m parameters that produce an identified model. It is not the starting values - those will help the estimation but are generally optional. Appendix 3, for example, shows the most straightforward approach of fixing the mean and the loading of the first indicator for each factor in every group to the alignment estimate. Once you have this settled, you can add additional constraints to obtain a more parsimonious model - for example, if the alignment solution indicates that the intercept or the loading of an indicator is invariant, you can add that constraint across groups in the AwC model.
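A minimal two-group sketch of that identification strategy, assuming groups G1 and G2 defined with the GROUPING option; all numeric values are hypothetical alignment estimates:

MODEL:
f BY y1@0.82 y2-y5;  ! G1: first loading fixed to its aligned value
[y1@1.41];           ! G1: first intercept fixed to its aligned value
[f]; f;              ! factor mean and variance estimated freely
MODEL G2:
f BY y1@0.79 y2-y5;  ! restating the loadings frees them across groups
[y1@1.35]; [y2-y5];  ! same for the intercepts
[f]; f;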
 Joe Wasserman posted on Tuesday, March 06, 2018 - 4:58 pm
Thank you for the very helpful clarification!
 Bo Zhang posted on Monday, April 30, 2018 - 8:23 am
Hi there,

I am currently analyzing a Big Five personality dataset that covers 7 countries in total. I'm interested in country mean comparisons. I first fitted a 5-factor CFA model. As expected, model fit was really bad. So I switched to ESEM, which improved the model fit a lot. I then tested measurement invariance of the ESEM model across countries. However, MI was not supported even though the configural model fitted really well. I was thinking about using the alignment method. However, it seems that the current version of the alignment method can only be applied to CFA models, not ESEM. I am curious whether there is a way to use the alignment method within ESEM.

Thank you so much!

Best,
Bo
 Tihomir Asparouhov posted on Monday, April 30, 2018 - 4:49 pm
I would recommend this approach

https://www.statmodel.com/download/Marsh%20et%20al%202016%20Alignment%20pre-pub%20version%2010AUG2016.pdf

Another very relevant article for you would be this (somewhat more ESEM based approach)

Marsh, H. W., Nagengast, B., & Morin, A. J. S. (2013). Measurement invariance of big-five factors over the life span: ESEM tests of gender, age, plasticity, maturity, and la dolce vita effects.

and possibly this one

https://www.statmodel.com/download/Marsh%202010-PA-A%20New.pdf

I think you will need to employ either of these techniques EwC or AwC or possibly both of them.
 Han, Chang Hoon posted on Wednesday, September 12, 2018 - 9:41 pm
When performing an alignment analysis on a correlated-factor model, does the factor correlation also get an aligned value for each group?
If not, is the factor correlation treated as in a normal CFA model after the loadings/intercepts are aligned?
Or is there another procedure for handling the factor correlation in alignment?
 Han, Chang Hoon posted on Thursday, September 13, 2018 - 12:12 am
Hi, thank you for your kind and prompt reply.
I have one more question.

When factor means are constrained, I thought that ALIGNMENT = FREE has one more free factor mean (a difference of 1 df) than ALIGNMENT = FIXED.
I wonder how this 1-df difference in ALIGNMENT = FREE is handled in estimation and identification when analyzing alignment.

Also, factor means are determined according to the simplicity function.
I could not understand clearly whether this condition (the simplicity function) is sufficient in ALIGNMENT = FIXED.
 Han, Chang Hoon posted on Thursday, September 13, 2018 - 5:28 am
Sorry, I corrected the last sentence of my question.

I could not understand clearly whether this condition (the simplicity function) is sufficient in ALIGNMENT = *FREE*.
 Tihomir Asparouhov posted on Sunday, September 16, 2018 - 2:15 pm
1) The factor correlation is not altered by the alignment.

2) Both FREE and FIXED alignment have the same degrees of freedom and fit as the configural model. The free alignment uses a different alignment function, which includes the means in all groups, but that does not affect the fit or the degrees of freedom.

See Section "Simulation Study 3: Comparing FIXED and FREE Alignment" for comparison of the free and fixed methods and their advantages and disadvantages.
https://www.statmodel.com/examples/webnote18.pdf
 Han, Chang Hoon posted on Sunday, September 16, 2018 - 3:03 pm
Thankyou for your helpful answer.
I'll check what you attached.
 Han, Chang Hoon posted on Wednesday, October 10, 2018 - 3:10 pm
Hi there,

Can I get information about the alignment optimization process (or history) and the loss function value?
 Tihomir Asparouhov posted on Wednesday, October 10, 2018 - 4:25 pm
Yes - you can get that with

output: tech8 tech5 align;
 Han, Chang Hoon posted on Sunday, November 25, 2018 - 9:02 pm
Hello, I have a question about the weight function.

I saw your answer. (
Tihomir Asparouhov posted on Monday, April 21, 2014 - 8:38 am)

" Also the weight is standardized: scaled so the total weight is equal to the total number of cross group comparisons NG*(NG-1)/2 which is 1 in your case. So the actual weight that we use for the tech8 output is
w=((NG-1)*NG/2)*w0/sum(w0)
where
w0=sqrt(N1*N2)"

Can I know "sum(w0)"?
For example, in a 3-group case with sample sizes 25 (N1), 36 (N2), 49 (N3),

I understood
sum(w0) = sqrt(N1*N2) + sqrt(N1*N3) + sqrt(N2*N3) = 5*6 + 5*7 + 6*7 = 107
Is this right?
 Tihomir Asparouhov posted on Monday, November 26, 2018 - 10:34 am
Yes
 Han, Chang Hoon posted on Thursday, November 29, 2018 - 8:38 am
Hi, there.

Can I know how the
"Item Parameters In The Alignment Optimization Metric" (TECH8) output
is calculated, especially the intercept part?

The loading part can be roughly guessed, but I do not understand the intercept part at all.

I got below values.
(4 item, 2 group)

Intercepts: Variables (Rows) by Groups (Columns)
-0.447 -0.212
-0.330 -0.472
-0.505 -0.217
-0.342 -0.385

Factor Means
0.000 0.725
Factor Variances
0.923 1.084

These are configural model result.
(factor mean 0, variance 1)

first group intercept
Estimate
Intercepts
S2 13.588
S6 10.749
S11 9.597
S15 10.578

second group.
Intercepts
S2 17.801
S6 12.464
S11 11.005
S15 11.943
 Tihomir Asparouhov posted on Thursday, November 29, 2018 - 3:16 pm
See formulas (2) and (3)
http://www.statmodel.com/examples/webnotes/webnote18_3.pdf
 Han, Chang Hoon posted on Monday, December 03, 2018 - 4:59 am
I have another question about the "optimization metric".

I referred to your answer:

"
Tihomir Asparouhov posted on Wednesday, February 26, 2014 - 5:16 pm

The “Item Parameters In The Alignment Optimization Metric” section contains the alignment results in the metric in which the alignment optimization is performed, i.e., after all indicator variables are standardized and also under constraint (10) from web note 18, ~~~"

In the optimization metric, the factor variances multiply to 1 across groups.

Factor variances in the optimization metric (3-group example):
.744, .86, 1.563

But the product of the optimization-metric loadings across groups was close to 1, not exactly 1 (in some cases).
I analyzed data with equal sample sizes,
4 items, 3 groups.
For example,

item 6
.854, 1.065, 1.02 -> product 0.9277

I understood that the aligned loadings of an individual item are standardized so that their product across groups is 1; this seems to hold for the factor variances but not for the loadings.

I would be grateful if you could give me the specific method or standard used for standardizing the loadings in the optimization metric.
 Tihomir Asparouhov posted on Tuesday, December 04, 2018 - 4:45 pm
What is printed in tech 8 is formulas (2) and (3) where Lambda_0 and Nu_0 come from model M0 and alpha_g and psi_g come from the alignment optimization that minimizes formula (4).
 Tihomir Asparouhov posted on Tuesday, December 04, 2018 - 4:47 pm
Alpha_g and psi_g are also reported at the bottom of that output.
 WEN Congcong posted on Tuesday, April 02, 2019 - 10:44 pm
Dear professors,

Hello! I am now performing a Monte Carlo simulation study about alignment in 3-factor models. I need to present the results of the alignment.

My question is: is it necessary to present the type 1 error rates of the factor mean parameters? How do we get type 1 error rates with Mplus? Is there a command that produces these results?

Thank you very much!
 Tihomir Asparouhov posted on Thursday, April 04, 2019 - 3:41 pm
You can see how we presented Monte Carlo simulations here
http://statmodel.com/examples/webnote18.pdf

All the scripts for doing these Monte Carlo simulations are available here
http://statmodel.com/examples/Web18.zip
 WEN Congcong posted on Thursday, April 04, 2019 - 8:42 pm
Thanks for your recommendations.

But my reviewer wanted me to present the type 1 error rates of the factor mean parameters. Is the type 1 error rate equal to 1 minus the coverage rate?

Thank you!
 Tihomir Asparouhov posted on Friday, April 05, 2019 - 9:51 am
Yes. We usually report the coverage probability (which in simulations should be about 95%). This is the standard way of reporting simulations. Technically, the type 1 error would be 1 minus the coverage probability, so near 5%.
 Youngshin Ju posted on Monday, June 10, 2019 - 8:23 am
Dear Mplus Team,

Hello,
I'm trying the alignment method using Bayes estimation and I want to get the 'Fit Function Contribution' values. However, I can't find these results in my output. I used OUTPUT: TECH1 TECH8 ALIGN;
Are these not available for Bayesian alignment?

Thank you so much in advance.
 Tihomir Asparouhov posted on Monday, June 10, 2019 - 4:50 pm
They are not printed currently. Note, however, that alignment is performed at every MCMC iteration. Thus it is not a single value but an entire distribution, as it changes at every iteration. For comparative purposes, one possibility is to switch to ML for that information. If the Bayes and the ML solutions are very different (usually they are not), one can fix the ML parameters (the intercepts and loadings) to the Bayes estimates.
 Pamela Woitschach posted on Tuesday, August 13, 2019 - 2:06 pm
In the output that lists the group mean differences, are the means calculated from all the parameters (invariant and non-invariant), or are they calculated using only the invariant parameters?
 Tihomir Asparouhov posted on Wednesday, August 14, 2019 - 8:53 am
Only the invariant parameters, weighted by the group sizes. That mean is reported as
Weighted Average Value Across Invariant Groups:
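For instance (hypothetical numbers): if groups 1 and 2 are invariant for an intercept with values 3.385 and 3.378 and group sizes 698 and 949, the weighted average is (698*3.385 + 949*3.378)/(698 + 949) ≈ 3.381.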
 Mina Mayr posted on Thursday, November 21, 2019 - 11:36 am
Dear Mplus team,

Which measurement models can be implemented using the alignment approach? Is it possible to specify a higher-order model or a bifactor model using alignment in the latest Mplus version?

Thanks
- Mina
 Tihomir Asparouhov posted on Thursday, November 21, 2019 - 6:49 pm
You can use the AwC approach to do higher-order factor models.

https://researchbank.acu.edu.au/cgi/viewcontent.cgi?article=10039&context=fhs_pub

I am afraid the bi-factor model is out of reach at this time.
 Lim Jie Xin posted on Wednesday, April 15, 2020 - 10:04 pm
Dear Mplus developers,

I am also interested to know more about the post-alignment invariance analysis algorithm. Other than the brief description of the algorithm in Section 4 of Webnote 18, is there more detailed documentation of the algorithm available for reference?
 Tihomir Asparouhov posted on Thursday, April 16, 2020 - 10:27 am
There is no more detailed version but that section really provides all the details on the algorithm.
 Martin Kanovsky posted on Friday, April 17, 2020 - 7:36 am
Dear Professor Asparouhov,
does the alignment invariance method or the new AwC method allow for multidimensional estimation? Or should unidimensional models be fitted separately?
 Tihomir Asparouhov posted on Friday, April 17, 2020 - 10:45 am
Our current situation is that you can have multiple factors in an alignment model, but they can't have cross-loadings. Anything that doesn't fit this pattern can be done in two steps using AwC.
 Martin Kanovsky posted on Tuesday, June 16, 2020 - 7:49 am
Dear colleagues,
I was pretty sure that the configural model should have acceptable fit before proceeding with the alignment method. However, some authors do not agree. For example, do you think that a fit of CFI = 0.869; RMSEA = 0.072, 90% CI = 0.069, 0.075; SRMR = 0.80 is OK? (A recently published paper reported these values.) Or are there more relaxed criteria of fit for the multigroup alignment method?
 Tihomir Asparouhov posted on Tuesday, June 16, 2020 - 8:25 am
Not at all. An SRMR of 0.80 is not OK by any standard.
http://www.statmodel.com/download/SRMR2.pdf
 Martin Kanovsky posted on Tuesday, June 16, 2020 - 8:41 am
CFI is not OK either. However, to be fair, they probably meant 0.08 and mistyped it as 0.80. Anyway, I have my doubts that such a configural model is OK for the alignment method:
http://psicothema.com/pdf/4433.pdf
p. 545
 Tihomir Asparouhov posted on Tuesday, June 16, 2020 - 10:08 am
The cost of adding some residual correlations to the model to improve fit is minimal, and this could recover the CFI as well as improve the alignment results. All of the criteria are designed to account for multiple groups, and in some cases we even print them for each group separately (e.g., chi-square). I would say that having many groups is not a reason to accept worse criteria.
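For instance, a hypothetical pair of items flagged by modification indices could be given a residual covariance in the configural model like this:

MODEL: %OVERALL%
  f BY y1-y8;
  y3 WITH y4;   ! residual correlation added to improve fit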
 Martin Kanovsky posted on Wednesday, June 17, 2020 - 7:09 am
Dear Professor Asparouhov,
is the alignment method already implemented for ordered categorical data (the WLSMV estimator), or only for binary data?
 Tihomir Asparouhov posted on Wednesday, June 17, 2020 - 10:59 am
Only these estimators are available for alignment with ordered categorical data: ML, MLF, MLR.
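A minimal sketch of such a run (hypothetical names; u1-u5 are ordered polytomous items):

VARIABLE: NAMES = u1-u5 g;
  CATEGORICAL = u1-u5;
  CLASSES = c(2);
  KNOWNCLASS = c(g = 1 g = 2);
ANALYSIS: TYPE = MIXTURE;
  ESTIMATOR = MLR;           ! ML, MLF, or MLR
  ALGORITHM = INTEGRATION;   ! ML with categorical indicators requires numerical integration
  ALIGNMENT = FREE;
MODEL: %OVERALL%
  f BY u1-u5;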
 Yu Hui Zhang posted on Thursday, June 18, 2020 - 10:30 pm
Dear Prof. Asparouhov,

I used the alignment optimization procedure (Asparouhov & Muthén, 2014; Muthén & Asparouhov, 2018) in an invariance study with dichotomous items. Configural invariance was established, and the results in terms of R-squared and the benchmark of less than 25% noninvariant parameters suggested that mean comparisons were feasible.

However, when I ran the simulations I kept getting an error of

THE POPULATION COVARIANCE MATRIX THAT YOU GAVE
AS INPUT IS NOT POSITIVE DEFINITE AS IT SHOULD BE.

I checked ten of the generated data sets and found that two of the dichotomous items had all-zero responses in two groups. The model ran fine for those two groups with the original data set.

Would you have any suggestions on how to proceed? Thanks!
 Tihomir Asparouhov posted on Saturday, June 20, 2020 - 3:01 pm
I would recommend ignoring those data sets. This is what we do in similar situations, and we justify it as follows: the problematic data sets are clearly qualitatively different from the original data.
 Yu Hui Zhang posted on Tuesday, June 23, 2020 - 9:59 am
Much thanks. Would you have any advice on how one would then proceed with the Monte Carlo investigation to check the stability of the factor means across groups and to provide additional support for the quality of the alignment solution using Mplus?

The simulation run stopped at the 160th replication--I asked for 500. Would one remove the two groups, or the two items? Or would one just use R-squared and the percentage of noninvariant parameters as the benchmarks? Thanks.
 Tihomir Asparouhov posted on Tuesday, June 23, 2020 - 4:17 pm
You can generate the data separately and then analyze them as in User's Guide example 12.6, step 2 (excluding the problematic data sets).

The easiest way to proceed, however, is to adjust your model population parameters a little so that you don't get the extreme cases of constant outcomes within a group, and/or to increase the group sizes.

You might also be able to use the SEED option of the MONTECARLO command to generate different data sets.

I usually use 100 replications, as that is usually sufficient to get good-quality Monte Carlo results.
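A rough sketch of the generate-then-analyze approach (all names and population values below are hypothetical):

! Step 1: generate and save the data sets (UG example 12.6, step 1 style)
MONTECARLO: NAMES = u1-u5;
  GENERATE = u1-u5 (1);   ! dichotomous items
  CATEGORICAL = u1-u5;
  NGROUPS = 2;
  NOBSERVATIONS = 2(500);
  NREPS = 100;
  SEED = 45335;
  REPSAVE = ALL;
  SAVE = rep*.dat;
MODEL POPULATION:
  f BY u1-u5*0.8;
  f@1;
  [u1$1-u5$1*0.5];

! Step 2: analyze the saved sets with
!   DATA: FILE = replist.dat; TYPE = MONTECARLO;
! after deleting the problematic replications from the list file.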
 aiza khan posted on Thursday, June 25, 2020 - 12:43 am
Dear Dr. Muthen,

How does one create a cluster variable in multilevel SEM? What are the criteria?

Thanking you!
Regards
Aiza
 aiza khan posted on Thursday, June 25, 2020 - 12:50 am
Dear Dr. Muthen,

I have two follow-up questions to my previous post.
First, how does one control for demographic variables in multilevel SEM?

Second, how does one aggregate demographic variables (e.g., age, gender, education) that have different populations at the two levels (level 1 and level 2) of a multilevel study, in order to make the samples comparable at both levels?
Thanking you!
Regards
Aiza
 Bengt O. Muthen posted on Thursday, June 25, 2020 - 8:05 am
I suggest that you study our Short Course Topics on multilevel analysis. See also our User's Guide examples in Chapter 9. The UG is on our website.
 Maria-Therese Friehs posted on Tuesday, July 07, 2020 - 12:33 am
Dear all,

I am planning to use the alignment optimization procedure on a data set comparing the latent means of 13 different targets. However, I do not have the "classical" between-person data structure in which each target is evaluated by a separate sample, but rather a within-person comparison structure in which a single sample rated all 13 targets using repeated measures. I assume that using the alignment procedure on these data would bias my results by treating dependent data as independent; is that right? And if so, is there any way to counter this problem, e.g., by using the alignment procedure in a multilevel framework in Mplus, or with some other strategy?

Thanks a lot for your reply and best

Maria
 Tihomir Asparouhov posted on Tuesday, July 07, 2020 - 9:50 am
The alignment method is available only in the multiple-group setting, but how the groups are formed is not important; i.e., they can be repeated measurements coming from the same individual.
 Maria-Therese Friehs posted on Tuesday, July 07, 2020 - 11:57 pm
Thank you very much for your quick and helpful reply.
 Yu Hui Zhang posted on Saturday, August 08, 2020 - 8:16 pm
Thanks, Dr. Asparouhov!

Regarding the handling of sparse dichotomous data in alignment (some items have all-zero responses in the simulated data sets), you recommended:

The easiest way to proceed, however, is to adjust your model population parameters a little so that you don't get the extreme cases of constant outcomes within a group, and/or to increase the group sizes.

Would these be appropriate ways to adjust:

changing

1. physical BY p_slap*42.75313;

to

physical BY p_slap*4.75313;

and

2. [ p_attack$1*9.00195 ];

to

[ p_attack$1*5.00195 ];

Would one focus on the factor loadings or the thresholds? Are there any guidelines on the magnitude of the adjustment?

Thank you!
 Tihomir Asparouhov posted on Monday, August 10, 2020 - 4:52 pm
It depends a little on how big the groups are. If the group size is 100, I would probably not recommend using anything higher than
physical BY p_slap*2;
[p_attack$1*5];
With larger groups you will be able to work with larger values.

You generally want to aim for at least 10% in the smaller category, but again, with larger groups you can go even lower, such as 5%.

Items with much less than 5% in the smaller category are most likely not very informative and probably don't contribute much to the model.
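As a rough way to check candidate population values (assuming a probit link and a standard normal factor; Mplus ML actually uses a logit link by default, which behaves similarly after rescaling), the implied proportion in the '1' category for an item with loading \lambda and threshold \tau is

P(u = 1) = \Phi\left(\frac{-\tau}{\sqrt{1 + \lambda^2}}\right),

which lets you verify that the smaller category will not be too sparse before generating data.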
 Yu Hui Zhang posted on Wednesday, August 12, 2020 - 12:38 am
Much thanks for the general guidelines, Dr. Asparouhov.
 Amanda Lemmon posted on Monday, August 17, 2020 - 2:10 pm
Hi -

I am working with a two-group CFA model with continuous indicators. I employed the alignment method (fixed) to estimate the mean difference between the two groups. I get somewhat different results, depending on which group I use as a reference. Why is that?

Also, I tried freeing the intercepts in the scalar MI model (those intercepts that modification indices suggested differed between groups; metric invariance already held). The resulting mean difference is somewhat different from the alignment estimate. Which one is more trustworthy?
 Tihomir Asparouhov posted on Monday, August 17, 2020 - 3:43 pm
1. I would suggest that you look at the scalar MI model and compare the results when you switch the reference group there (i.e., with alignment taken out). It is the same reason.

2. The models are nested, so you can use a likelihood ratio test to decide which model is more reliable. Alignment is meant to automate the modification-indices process, but it is not the same model, as it retains the fit of the configural model.
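In symbols, this is the usual likelihood ratio test of the nested scalar model against the configural model (whose fit the alignment solution shares):

-2\left[\log L_{\text{scalar}} - \log L_{\text{configural}}\right] \sim \chi^2_{df},

with df equal to the difference in the number of free parameters; with MLR, the scaling correction for chi-square difference testing applies.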
 Amanda Lemmon posted on Tuesday, August 18, 2020 - 9:45 am
Thank you! As for point one, I followed your suggestion and checked the scalar MI model -- I didn't realize that results would differ depending on the choice of the reference group in this model. I also noticed that the unstandardized mean with one reference group equals the negative of the standardized mean with the other reference group. Why does this happen? I have been trying to Google it and look through textbooks, with no success so far. Or maybe you know where it's explained? I am guessing this has something to do with the different variances in the two groups, but I am not sure how exactly the variances affect the mean estimates here.
 Tihomir Asparouhov posted on Tuesday, August 18, 2020 - 10:30 am
Changing the reference group in scalar MI is just a reparameterization of the model.

If the first group is the reference group, the indicator means are Nu in the first group and Nu + Lambda*Alpha in the second group.

If you switch the reference group, the new intercept is estimated to be Nu1 = Nu + Lambda*Alpha, and the new factor mean is Alpha1 = -Alpha, so the mean in the first group is

Nu1 + Lambda*Alpha1 = Nu + Lambda*Alpha - Lambda*Alpha = Nu.

So it is the same model; i.e., both parameterizations give the same estimates for the indicator means.

You can find some more general help with SEM here

https://listserv.ua.edu/archives/semnet.html
 Tihomir Asparouhov posted on Tuesday, August 18, 2020 - 10:34 am
This document also goes into more details

https://www.statmodel.com/download/RefGroup.pdf
 Tihomir Asparouhov posted on Tuesday, August 18, 2020 - 10:38 am
Actually, the PDF link is more accurate than my writing above, since it takes into account the change in the variance as well.
 Amanda Lemmon posted on Tuesday, August 18, 2020 - 2:48 pm
Tihomir, thank you very much! This was very helpful. So essentially the two results should be close to each other, and the discrepancies are not practically meaningful.

I also wanted to ask about the units of the estimated mean and variance. Are they in units of the SD of the reference group? A pooled SD? Or something else?
 Tihomir Asparouhov posted on Tuesday, August 18, 2020 - 4:34 pm
These are units of the SD of the ref group.
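In symbols, a sketch of this scaling: with ALIGNMENT = FIXED the reference group's factor mean \alpha_{\text{ref}} is fixed at 0, and a group mean expressed in reference-group SD units is

d_g = \frac{\alpha_g - \alpha_{\text{ref}}}{\sqrt{\psi_{\text{ref}}}},

where \psi_{\text{ref}} is the reference group's factor variance.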
 Amanda Lemmon posted on Wednesday, August 19, 2020 - 8:41 am
Thank you!
 Youngshin Ju posted on Sunday, August 30, 2020 - 4:41 pm
Hello,

I read the paper M&A (2018), Recent Methods for the Study of Measurement Invariance With Many Groups: Alignment and Random Effects.

I have a question about Table 4: I don't understand the 'Variance' column. Where in the alignment analysis output can I find the information in the 'Variance' column, or should I calculate these values by hand?

Thanks a lot in advance!
 Tihomir Asparouhov posted on Monday, August 31, 2020 - 5:22 pm
Yes, you can compute it by hand. See
http://statmodel.com/download/Alignment%20R-square.pdf
 aiza khan posted on Tuesday, September 08, 2020 - 6:09 am
Dear Dr. Muthen,

I am running a two-level CFA with random slopes, and I am a bit new to Mplus. Usually we check fit indices such as CFI, TLI, and SRMR when evaluating a model, but when I run a two-level CFA with random slopes I do not find these indices; the output only shows statistics such as p-values, S.E.s, and means. Please advise which statistics one should use to evaluate the model when running a two-level CFA with random slopes.

thanks
 Bengt O. Muthen posted on Wednesday, September 09, 2020 - 3:26 pm
With random slopes, you don't analyze covariance matrices any more but raw data. Therefore, the usual model fit statistics are not available. You can compare models using BIC.
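For reference, the BIC that Mplus prints is the standard

\mathrm{BIC} = -2\log L + p\ln N,

where p is the number of free parameters and N the sample size; when two models are fit to the same data, the one with the smaller BIC is preferred.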
 aiza khan posted on Thursday, September 10, 2020 - 12:04 am
Thank you so much.

Regards
 aiza khan posted on Monday, September 21, 2020 - 12:36 pm
Dear Dr. Muthen,

Hope you are doing fine.
I want to ask how we can compare BIC values when conducting a multilevel CFA with random slopes in Mplus. The model is nested, with one IV and one DV, both continuous, and the output shows the AIC, BIC, and sample-size-adjusted BIC of one integrated model. How can we assess the goodness of the model? Generally we can easily compare the BICs of two or three models; my question is how to assess comparative fit when evaluating a nested model.
Thanks
 aiza khan posted on Monday, September 21, 2020 - 2:33 pm
Dear Dr. Muthen,
My follow-up question is:
how should the BIC, in comparison to other statistics such as the AIC, be used to evaluate the goodness of fit of the model when conducting a multilevel CFA with random slopes, given one nested model with one IV and one continuous DV?

Thanks
 Bengt O. Muthen posted on Monday, September 21, 2020 - 5:02 pm
Goodness of fit is not easily assessed with random slopes. BIC is just a measure of relative fit - how two models compare - not their absolute fit to the data.
 aiza khan posted on Monday, September 21, 2020 - 5:15 pm
Thank you so much for the response.
My next question is how we can assess the relative fit of the model when comparing a baseline model and an alternative model in a multilevel CFA with random slopes in Mplus.


Thanks
 Lian van Vemde posted on Tuesday, September 22, 2020 - 8:24 am
Hello,

I recently ran the alignment method to test for measurement invariance in a paper I am working on.

However, I was wondering whether it is possible to also specify my full regression model, including interactions, in the same syntax as the alignment method.

This full regression is first performed for only one of the two groups; afterwards, the final model for this group is specified for the comparison group.

Thanks for your help in advance!
 Bengt O. Muthen posted on Tuesday, September 22, 2020 - 10:02 am
Answer to the posting by Khan:

You ask about comparing a baseline model and an alternative model in a multilevel CFA with random slopes in Mplus. There is no baseline model that corresponds to the random slope model, and therefore there is no overall test of model fit for random slope models. This is because random slope models do not rely only on mean and covariance structure information but on the raw data.
 Tihomir Asparouhov posted on Tuesday, September 22, 2020 - 10:07 am
Answer to Lian:

You can use the extended alignment method described here
https://core.ac.uk/download/pdf/212756893.pdf
 aiza khan posted on Tuesday, September 22, 2020 - 11:37 pm
Thank you very much. I got the point.
 Lian van Vemde posted on Friday, September 25, 2020 - 6:50 am
Thank you! After reading this paper I have a question though.
So if I understand correctly, I first run the alignment model and then copy the starting values it gives me into a new syntax, where I use these values to specify my latent factors.

However, if I do this (and specify the rest of the model), I get an error stating that the standard errors of my parameters could not be computed. This seems to be caused by a problem with a WITH statement that followed from the alignment factor model results. Is this solvable?

Also, I do not completely see how the alignment method is extended with the regression. Can I just add my normal regression code underneath the specification of the factor model, and do this for both groups? That is, follow the BY statements with ON statements using the factors just created, as well as some other variables in the data set that are not factors, such as age?

Thank you so much in advance for the answers!
 Lian van Vemde posted on Friday, September 25, 2020 - 7:29 am
I also have another question regarding the alignment method and a simulation I did with this method (not sure whether this is the right thread, but here I go).

I ran a simulation with different sample sizes matching my actual sample size to check whether the alignment method performs well given my sample size and type of invariance.

When I look at the table "Correlations and Mean Square Error of Population and Estimate Values for all Simulations" in the output, the correlations are very good (mostly above .98) for all factors.

However, for one of the factors, the mean square error for both the mean and the variance is around 1.5 (the factor is scaled from 0-4). Given that the mean square error should be as close to zero as possible, this seems rather large to me. When I apply a transformation to this factor (it is somewhat skewed in one group), the mean square error of the variance in the simulation decreases to 0.40, but for the mean it is still 1.10.

So I am wondering how to interpret these results. Should I conclude that the estimated results for this factor do not correspond well to the actual values? Or is 1.10 not that high for a mean square error in a simulation study?

Thanks for your help in advance!
 Tihomir Asparouhov posted on Friday, September 25, 2020 - 5:31 pm
You have to fix some of the parameters; see Appendix 3:

! For identification purposes, the first item per factor is constrained to its estimated values from the alignment solution, and factor variances and means are free
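A minimal two-group sketch of that identification strategy (the @ values below are hypothetical stand-ins for the aligned estimates, and the group labels depend on your GROUPING statement):

MODEL:
  f BY y1@0.84 y2 y3 y4;   ! first loading fixed to its aligned estimate
  [y1@0.31];               ! first intercept fixed to its aligned estimate
  f; [f];                  ! factor variance and mean free
MODEL g2:
  f BY y1@0.79 y2 y3 y4;   ! group 2 aligned values
  [y1@0.45];
  [y2 y3 y4];              ! remaining intercepts free across groups
  f; [f];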


The MSE is primarily a function of sample size. If the MSE doesn't go to 0 when you increase the sample size, send your example to support@statmodel.com
 Lian van Vemde posted on Friday, October 02, 2020 - 7:41 am
Yes, I did that. However, I then get an error message saying that my model can't be identified when I add my regression model.

So that is where I get stuck in running the extended model, and I can't seem to fix it.

I will check the MSE with a larger sample size and otherwise send my example to the listed email address.

Thank you for your answers!
 Tihomir Asparouhov posted on Friday, October 02, 2020 - 4:14 pm
Do you allow the regression parameter to vary across groups? Maybe the covariate is constant in one of the groups. The first step is again to make sure you have an identified model without the regression; then add the regression with equal coefficients across the groups, and then with unequal coefficients.

The best way to identify the coefficients involved in the non-identification is to use
estimator=mlf; condition=0;
and look for parameters with huge standard errors.
 Amanda Lemmon posted on Thursday, October 08, 2020 - 10:48 pm
Hi -

I wanted to ask about the interpretation of the non-invariant parameters that the alignment method marks with (). Can they be interpreted together (that is, can each of them be interpreted)? Or are they like modification indices, which can't really be interpreted together because they change after a single modification to the model?

Relatedly, if more than 25% of the parameters are non-invariant, I understand that I can't trust the mean difference results. But are the non-invariant parameters marked with () still interpretable? That is, can I still draw conclusions about which parameters are non-invariant, or can those not be trusted either?

I also wanted to ask about the average invariance index. Are there any rules of thumb for interpreting it as small/medium/large?
 Lian van Vemde posted on Friday, October 09, 2020 - 8:22 am
Well, technically I want to run my model for one group first, and then, when I have the most optimal model for that group, see whether it also holds for the other group. So I am going to try to specify my full model for one group first.

If that does not work, I will resort to saving my factor loadings and intercepts and using these in a new analysis.

Then, regarding the simulation: I've run it with sample sizes up to 1000 per group, but the MSE values barely go down (by maybe 0.02). So I want to send this to the email address given above. I am, however, not sure what to include in my email. I do not have a license number, as Mplus was provided to me by my university on my personal computer.
 Tihomir Asparouhov posted on Friday, October 09, 2020 - 3:21 pm
Answer to Amanda:

Yes, the parameters can be interpreted even if they are not invariant - they are not like modification indices; they are actual model parameters.

If you have more than 25% non-invariant parameters, the best thing to do is a simulation study based on the alignment results you got, to see how well the parameters are recovered across many simulated data sets.

Small/medium/large invariance index: probably anything above .80 is large, between .40 and .80 would be medium, and below .40 would be small.
 Tihomir Asparouhov posted on Friday, October 09, 2020 - 3:43 pm
Answer to Lian:

There are two possible causes to consider in the simulation study.

You can try different values of the TOLERANCE option; compare, for example,
TOLERANCE=0.01 and
TOLERANCE=0.0001

The second issue is that the generating parameters may not represent the best alignment. Generally, a simple strategy is to identify the invariant and the non-invariant parameters and make sure the invariant parameters use the same value in the model generation. The fewer non-invariant parameters you have, the easier it will be for the alignment to recover all the parameters.

An MSE of 0.02 looks small, so you might need an even bigger sample to make it smaller.
 Lian van Vemde posted on Sunday, October 11, 2020 - 10:20 am
Thank you for the answers.

I've tried the TOLERANCE option, and it does not seem to have much effect (the MSE drops from 1.27 to 1.25, for example) for any of the sample sizes in my simulation study.

I do not think it is the second issue, as according to the alignment method I have no non-invariant items in my model.

And what I meant by the MSE of 0.02 is that the value drops by about 0.02 between two sample sizes, for example from 250 to 500; the same holds for an increase in N from 500 to 1000.
 Amanda Lemmon posted on Sunday, October 11, 2020 - 9:30 pm
Tihomir, thank you!

I have been trying to figure out how to run a Monte Carlo study for the alignment method. I used Example 12.7 from the User's Guide, but I ran into the following error message:

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE.

Here is my code:

MONTECARLO: NAMES = V_1 V_2 V_3 V_4;
NGROUPS = 4;
NOBSERVATIONS = 4(200);
NREPS = 50;
SEED = 45335;
POPULATION = DIF_estimates.dat;
COVERAGE = DIF_estimates.dat;

ANALYSIS: ESTIMATOR = MLR;

ALIGNMENT = FIXED (1);

MODEL POPULATION:

F1 BY V_1* V_2 V_3 V_4; F1;
[V_1 V_2 V_3 V_4];

MODEL:

F1 BY V_1* V_2 V_3 V_4; F1;
[V_1 V_2 V_3 V_4];


The file DIF_estimates.dat was saved from the actual alignment analysis.

What am I doing wrong?

On a different note, is it possible to specify the number of observations in each group?

Thank you very much.
 Tihomir Asparouhov posted on Monday, October 12, 2020 - 1:14 pm
You should use the setups that are featured here:
http://statmodel.com/examples/Web18.zip

For the group sizes, take a look at page 861 of the User's Guide. For example:
NOBSERVATIONS = 500 1000;
NOBSERVATIONS = 2(1000) 38(500);
 Tihomir Asparouhov posted on Monday, October 12, 2020 - 1:20 pm
Answer to Lian:

I would suggest that you start with the examples featured here
http://statmodel.com/examples/Web18.zip
and then try to figure out how your example differs. Maybe the reference group is not the same in the generating and the estimated model. An MSE of 1.27 shouldn't really happen. If you can't figure it out, send your example to support@statmodel.com
 Lian van Vemde posted on Tuesday, October 13, 2020 - 10:41 am
Dear Tihomir,

Thank you for all your answers and time so far.

Regarding the examples in the file you sent me: my example differs from these in that I use the starting values extracted from my alignment results as input for the simulation (see also this tutorial listed on your website: https://maksimrudnev.com/2019/05/01/alignment-tutorial/#more-1768).
In both my alignment run and my simulations I used the FIXED alignment option with the same reference group.

I will send my example to the email address.
 Amanda Lemmon posted on Tuesday, October 13, 2020 - 5:34 pm
Tihomir, thank you! I looked at (some of) the documents you shared, and I didn't see code that incorporates estimates saved from the actual analysis. Does that mean that using the saved files with estimates cannot be done? Is the only option to type these estimates as starting values in the MODEL POPULATION and MODEL statements?

Thanks again!
 Tihomir Asparouhov posted on Wednesday, October 14, 2020 - 9:44 pm
Using the saved files with estimates should work. Alternatively, you can use
OUTPUT:SVALUES;
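A sketch of that workflow: request the starting values in the real-data alignment run,

OUTPUT: SVALUES ALIGN;

and the output will then contain a MODEL COMMAND WITH FINAL ESTIMATES USED AS STARTING VALUES section, which can be pasted (with its * starting values serving as population values) into both the MODEL POPULATION and MODEL commands of the Monte Carlo input.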
 Amanda Lemmon posted on Thursday, October 15, 2020 - 5:28 pm
Thank you! I found a mistake in my syntax (the one that tries to use an external file with saved parameters as population parameters/starting values). The new error message says that there is too much data in the file. I think that's because the file also includes error variances and covariances (the latter are all zeros).

So I decided to use the MODEL COMMAND WITH FINAL ESTIMATES USED AS STARTING VALUES produced by the OUTPUT: SVALUES; option that you recommended. This is very handy to just copy and paste as the population model and model in the Monte Carlo syntax! (1) I am assuming that I don't need the error variances there, so I guess I can delete that part?
(2) Another question is about the %OVERALL% part. SVALUES doesn't provide values for that… what should I use for the overall parameters? Furthermore, the %OVERALL% part does give me group means, but I think I don't need those in the Monte Carlo syntax?
 Amanda Lemmon posted on Thursday, October 15, 2020 - 5:28 pm
I also have a few questions about the interpretation of the output:

(3) There seem to be different values for the group means reported in different parts of the output. One set is reported in the model results (the same values appear in the factor mean comparison at the 5% significance level). A different set appears under Categorical Latent Variables (Means), right at the end of the model results and before the approximate measurement invariance (non-invariance) section. And yet another set is in the TECH8 output under Factor Means. Why are these sets of group factor means different, and which should I use? A similar question applies to the group variances - one set in the model results and another in TECH8.

(4) What are the formulas for R^2 for ordinal variables? I am guessing that formulas 13 and 14 from Web Note 18 should be expanded to include multiple thresholds, but I am not sure how…

(5) In TECH8, the loadings and intercepts/thresholds are given in the alignment optimization metric. Are the values in this metric essentially standardized values? Should they be interpreted, or should I interpret only the original values (from the model results section)?

Thank you very much!
 Bengt O. Muthen posted on Friday, October 16, 2020 - 3:07 pm
Send your output to Support along with your license number.
 Amanda Lemmon posted on Thursday, October 22, 2020 - 12:48 pm
I am running an alignment analysis with ordinal variables (declared as categorical). The analysis of my real data ran fine, but the Monte Carlo analysis using the results from the real data produced several error messages (for many replications):

ONE OR MORE PARAMETERS WERE FIXED TO AVOID SINGULARITY OF THE INFORMATION MATRIX.

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX.

WARNING: THE MODEL ESTIMATION HAS REACHED A SADDLE POINT OR A POINT WHERE THE OBSERVED AND THE EXPECTED INFORMATION MATRICES DO NOT MATCH.

The issue seems to be the sample size, as increasing it resolved the errors. I am wondering whether these messages at the actual sample sizes mean that all of the results should be discarded (including the real-data analysis, even though it produced no error), or whether I should discard only the Monte Carlo results. Or maybe preserving the real-data sample size in the Monte Carlo step is not critical, and I should just increase the sample size and interpret the results based on the increased sample size?

Thank you!
 Bengt O. Muthen posted on Friday, October 23, 2020 - 4:00 pm
We need to see your full output - send to Support along with your license number.