T-Test
Mplus Discussion > Structural Equation Modeling
 ben kübler posted on Tuesday, May 06, 2008 - 9:57 am
Hi,

is M+ using a one-dimensional t-test or a two-dimensional one?
 Linda K. Muthen posted on Tuesday, May 06, 2008 - 12:32 pm
If you are talking about the ratio of the parameter estimate to the standard error of the parameter estimate in column three of the results, this is a univariate z-test.
 ben kübler posted on Wednesday, May 07, 2008 - 4:23 am
Yes, I mean the third column of the results from the structural model (SEM) with the command
"Factor ON Factor", which shows the significance of the path.
In LISREL a t-test is used, so we thought the same test is used in M+?
 Linda K. Muthen posted on Wednesday, May 07, 2008 - 6:51 am
They call it a t-test in LISREL. It is really an approximate z-test just like in Mplus.
 ben kübler posted on Tuesday, May 13, 2008 - 7:11 am
Thank you for this information.
So do I have to use a t-value or a z-value table?
 Linda K. Muthen posted on Tuesday, May 13, 2008 - 7:38 am
You would use a z-test table.
 Michael Strambler posted on Tuesday, August 24, 2010 - 7:16 am
I am interested in conducting a simple paired t-test (comparing time1 to time2 on the same variable) while using FIML to handle the missing data. However, I have not been able to find documentation on how to achieve this in Mplus. Is this possible in Mplus? Thank you.
 Linda K. Muthen posted on Tuesday, August 24, 2010 - 8:22 am
You can do this using MODEL TEST.

MODEL:
[y1] (p1);
[y2] (p2);

MODEL TEST:
0 = p1 - p2;
 Michael Strambler posted on Tuesday, August 24, 2010 - 1:09 pm
Thank you very much. I just ran the data as you described. Can you provide some guidance on the values in the output that I should be using to determine whether the pairs are significantly different? Thanks again.
 Linda K. Muthen posted on Tuesday, August 24, 2010 - 2:51 pm
In the results where fit statistics are given, you will find the results under Wald Test of Parameter Constraints. The null hypothesis is that the parameters are equal. If your p-value is greater than .05, you cannot reject this hypothesis.
 Jonathan posted on Thursday, November 07, 2013 - 12:44 pm
Hi there:

I know that I can use MODEL TEST to run a Wald test, which is an omnibus test of whether coefficients differ from each other. So I could run--

MODEL TEST:
0 = a1-a2;
0 = a2-a3;
0 = a3-a4;

To see if any of the relationships I specified are different (please correct me on this if I am wrong).

I would like to test to see if *each* relationship is different from each other. So I would like to run something similar to this, but to get separate output for each contrast that I specify. So on the output, I would hopefully have one test statistic for the a1-a2 contrast; another for a2-a3 contrast; etc.

Is this possible in MPlus? Thank you for your time.
 Linda K. Muthen posted on Thursday, November 07, 2013 - 1:16 pm
The test you show above tests all the effects jointly. You would need to run them one at a time if you want separate tests. Alternatively, you could create new parameters in MODEL CONSTRAINT for each difference, and you will get a z-test for each one.
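The MODEL CONSTRAINT alternative could look like the following sketch. It assumes the labels a1-a4 have already been attached to the relevant parameters in the MODEL command; the NEW names d12, d23, and d34 are arbitrary placeholders:

```
MODEL CONSTRAINT:
NEW (d12 d23 d34);
d12 = a1 - a2;
d23 = a2 - a3;
d34 = a3 - a4;
```

Each NEW parameter then appears in the output with its own estimate, standard error, and z-test, giving a separate test for each contrast.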
 Jon Stange posted on Saturday, December 19, 2015 - 8:32 pm
I have repeated-measures data across four time points, and I would like to conduct pairwise comparisons using FIML for missing data. I have been following the instructions above (from 24-Aug-2010) for the pairwise comparisons. However, it seems to only be using FIML to estimate missing data based on the maximum number of data points available for either of the variables in the pairwise comparison, rather than for the maximum number of data points in the whole data set (e.g., across times 1-4). So whereas I have 200 people in the full data set (who have completed time 1 measures), for any given pairwise comparison, even when using FIML for the pairwise comparison I wind up with perhaps 150 people in the comparison (e.g., if I’m using times 2 and 3 which have fewer cases with valid data). Since there are different cases missing at each time point, this means that the different pairwise comparisons I’m running (e.g., T1-2, T2-3, T3-4) aren’t actually on the same individuals. Is there a way to use FIML to estimate missing data for all 200 participants (i.e., using all available data for that variable across times 1-4, or for all variables in the data set)?

Thank you very much for your help.
 Bengt O. Muthen posted on Sunday, December 20, 2015 - 5:14 pm
You should not have only the two variables on your USEV list, but all variables.

If this doesn't help, please send output to Support along with your license number.
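Putting this together, a sketch of an input that keeps all cases in the FIML estimation while testing, say, the time-2 versus time-3 means might look like the following (variable names and the missing-value flag are placeholders to adapt to your data):

```
VARIABLE:
USEVARIABLES ARE y1 y2 y3 y4;
MISSING ARE ALL (-999);   ! assumed missing-data flag

MODEL:
[y1] (p1);
[y2] (p2);
[y3] (p3);
[y4] (p4);

MODEL TEST:
0 = p2 - p3;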
 Jon Stange posted on Monday, January 11, 2016 - 1:29 pm
Thank you very much. I am just following up as I am planning to report these paired-samples tests in a research paper, and I would like to know what this type of statistical test is technically called, so that I can adequately describe it in the Analysis section.

My understanding is that it is essentially a paired-samples t-test (since I am comparing one measure at two time points), but that it uses FIML. Since (per Linda Muthen's above comment on 8-24-2010) the p-value I report is under Wald test of parameter constraints (the null hypothesis being that the parameters are equal), I am assuming that this means that this is a structural equation model of some sort. If you have any advice about how to briefly describe this analysis for repeated measures tests, it would be helpful.

Relatedly, if you happen to be aware of any papers that have used this method for repeated-measures tests, I would appreciate your directing me to them for reference.
 Bengt O. Muthen posted on Wednesday, January 13, 2016 - 12:24 pm
You need to analyze all outcomes in a single analysis to get the full benefit from using FIML.

I would not stress the analogy with a paired t-test, but simply say that you are testing various detailed aspects of your estimated model. I don't know about papers but I don't think reviewers used to SEM will have objections.
 Jon Stange posted on Wednesday, January 13, 2016 - 1:12 pm
Thank you. So to clarify possible wording to explain the model:

The syntax below tests a structural equation model, and the "estimated model" tests whether the difference between the two variables differs significantly from zero, while the other variables on the USEVARIABLES list allow FIML to draw on all available data points. Is that correct?

USEVARIABLES ARE
y1 y2 z1 z2 z3;

MODEL:
[y1] (p1);
[y2] (p2);

MODEL TEST:
0 = p1 - p2;
 Bengt O. Muthen posted on Wednesday, January 13, 2016 - 6:25 pm
Correct. I would change

and the estimated model tests whether...

to

and based on the estimated model one can test whether...
 Anne Black posted on Monday, February 27, 2017 - 11:01 am
Hello,
Regarding the comparisons above, how would one differentiate comparison of repeated measures from comparison of means from independent groups?
 Bengt O. Muthen posted on Monday, February 27, 2017 - 3:08 pm
For means from independent groups, the mean estimates are not correlated. Mplus takes that into account in MODEL TEST because the p1 and p2 estimates have zero estimated covariance. In other words, the MODEL TEST statement is the same.
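For independent groups, the same test could be written with the GROUPING option, labeling the mean of the outcome separately in each group. This is only a sketch; the variable name y, the grouping variable g, and the group labels are placeholders:

```
VARIABLE:
USEVARIABLES ARE y;
GROUPING IS g (1 = group1 2 = group2);

MODEL:
[y] (p1);         ! mean of y, overridden below for group2

MODEL group2:
[y] (p2);         ! group-specific label frees the group-2 mean

MODEL TEST:
0 = p1 - p2;
```

The group-specific MODEL command gives the second group its own label, so p1 and p2 are the two group means and MODEL TEST compares them just as in the paired case.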
 EJ Horberg posted on Sunday, March 25, 2018 - 6:57 pm
Hello,

I would like to do the same analysis as above (testing whether two means in my dataset are significantly different from each other), but with the addition of a covariate.

What is the syntax for controlling for a third variable while testing the difference between two means?

Thank you.
 Bengt O. Muthen posted on Monday, March 26, 2018 - 11:06 am
With a covariate, you can directly test that the intercepts are equal or not. But to test that the means are equal you have to express the mean of the DV using model parameters and the mean of the covariate, like

ymean = int + b*xmean;
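Put into MODEL CONSTRAINT, that expression could look like the following sketch for two outcomes and one covariate x. The labels and variable names are assumptions; mentioning the mean of x brings x into the model so that its estimated mean (mx) can be used:

```
MODEL:
y1 ON x (b1);
y2 ON x (b2);
[y1] (i1);
[y2] (i2);
[x] (mx);

MODEL CONSTRAINT:
NEW (m1 m2 diff);
m1 = i1 + b1*mx;
m2 = i2 + b2*mx;
diff = m1 - m2;
```

The NEW parameter diff is the difference between the model-implied means of y1 and y2, and the output gives it its own estimate, standard error, and z-test.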
 EJH posted on Tuesday, March 27, 2018 - 8:16 pm
Thank you for your help.
I wrote the commands below, hoping to test the difference between VarA and VarB while controlling for our covariate of Gender. The output showed that the Wald test was significant (p = .004). May I interpret that to mean that, when controlling for Gender, there is a significant difference between VarA and VarB?


model:
VarA VarB on Gender;
[VarA] (p1);
[VarB] (p2);

model test:
0 = p1 - p2;
 Bengt O. Muthen posted on Wednesday, March 28, 2018 - 11:52 am
Yes.