If you are talking about the ratio of the parameter estimate to the standard error of the parameter estimate in column three of the results, this is a univariate z-test.
ben kübler posted on Wednesday, May 07, 2008 - 4:23 am
Yes, I mean the third column of the results from the structural model ("SEM") with the command "Factor ON Factor", which shows the significance of the path. In LISREL a t-test is used, so we thought Mplus uses the same test?
I am interested in conducting a simple paired t-test (comparing time1 to time2 on the same variable) while using FIML to handle the missing data. However, I have not been able to find documentation on how to achieve this in Mplus. Is this possible in Mplus? Thank you.
Thank you very much. I just ran the data as you described. Can you provide some guidance on the values in the output that I should be using to determine whether the pairs are significantly different? Thanks again.
In the results where fit statistics are given, you will find the results under Wald Test of Parameter Constraints. The null hypothesis is that the parameters are equal. If your p-value is greater than .05, you cannot reject this hypothesis.
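The earlier instructions being referred to are not reproduced in this thread, but the description above can be sketched as a minimal Mplus input. This is a hedged sketch, not the original posted syntax; the variable names t1 and t2 and the missing-data code -999 are assumptions:

```
DATA:     FILE = mydata.dat;
VARIABLE: NAMES = t1 t2;
          USEVARIABLES = t1 t2;
          MISSING = ALL (-999);
ANALYSIS: ESTIMATOR = ML;   ! ML with missing data gives FIML under MAR
MODEL:
  [t1] (p1);                ! mean of the variable at time 1
  [t2] (p2);                ! mean of the variable at time 2
  t1 WITH t2;               ! let the repeated measures covary
MODEL TEST:
  0 = p1 - p2;              ! Wald test: are the two means equal?
```

The Wald Test of Parameter Constraints in the output then tests the null hypothesis that the two means are equal, which is the FIML analogue of a paired t-test.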
Jonathan posted on Thursday, November 07, 2013 - 12:44 pm
I know that I can use MODEL TEST to run a Wald test, which is an omnibus test of whether coefficients differ from each other. So I could run--
MODEL TEST:
0 = a1 - a2;
0 = a2 - a3;
0 = a3 - a4;
To see if any of the relationships I specified are different (please correct me on this if I am wrong).
I would like to test to see if *each* relationship is different from each other. So I would like to run something similar to this, but to get separate output for each contrast that I specify. So on the output, I would hopefully have one test statistic for the a1-a2 contrast; another for a2-a3 contrast; etc.
Is this possible in MPlus? Thank you for your time.
The test you show above tests all effects together. You would need to run them one at a time if you want separate tests. Alternatively, you could create new parameters in MODEL CONSTRAINT for each difference, and you will get a z-test for each one.
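The MODEL CONSTRAINT alternative described above can be sketched as follows. This is only an illustration; the outcome names y1-y3 are hypothetical, and a1-a3 are assumed to label their means as in the poster's question:

```
MODEL:
  [y1] (a1);
  [y2] (a2);
  [y3] (a3);
MODEL CONSTRAINT:
  NEW (d12 d23);
  d12 = a1 - a2;   ! each NEW parameter gets its own estimate,
  d23 = a2 - a3;   ! SE, and z-test in the output
```

Unlike MODEL TEST, which pools the constraints into one omnibus Wald statistic, each NEW parameter here is reported separately, giving one test per contrast.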
Jon Stange posted on Saturday, December 19, 2015 - 8:32 pm
I have repeated-measures data across four time points, and I would like to conduct pairwise comparisons using FIML for missing data. I have been following the instructions above (from 24-Aug-2010) for the pairwise comparisons. However, it seems to only be using FIML to estimate missing data based on the maximum number of data points available for either of the variables in the pairwise comparison, rather than for the maximum number of data points in the whole data set (e.g., across times 1-4). So whereas I have 200 people in the full data set (who have completed time 1 measures), for any given pairwise comparison, even when using FIML for the pairwise comparison I wind up with perhaps 150 people in the comparison (e.g., if I’m using times 2 and 3 which have fewer cases with valid data). Since there are different cases missing at each time point, this means that the different pairwise comparisons I’m running (e.g., T1-2, T2-3, T3-4) aren’t actually on the same individuals. Is there a way to use FIML to estimate missing data for all 200 participants (i.e., using all available data for that variable across times 1-4, or for all variables in the data set)?
You should not have only the two variables on your USEV list, but all variables.
If this doesn't help, please send output to Support along with your license number.
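As a hedged sketch of the advice above, with all four waves kept on the USEVARIABLES list so FIML draws on every case's available data even when only one pair of means is tested (variable names t1-t4 and the missing-data code are assumptions):

```
VARIABLE:
  USEVARIABLES = t1 t2 t3 t4;   ! all waves, not just the tested pair
  MISSING = ALL (-999);
MODEL:
  [t1-t4] (p1-p4);              ! label all four means
  t1-t4 WITH t1-t4;             ! unrestricted covariances among waves
MODEL TEST:
  0 = p2 - p3;                  ! pairwise test for times 2 vs 3
```

Because every wave is in the model, all 200 cases contribute whatever data they have, and the different pairwise comparisons are estimated on the same sample.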
Jon Stange posted on Monday, January 11, 2016 - 1:29 pm
Thank you very much. I am just following up as I am planning to report these paired-samples tests in a research paper, and I would like to know what technically this type of statistical test is called, so that I can adequately describe it in the Analysis section.
My understanding is that it is essentially a paired-samples t-test (since I am comparing one measure at two time points), but that it uses FIML. Since (per Linda Muthen's above comment on 8-24-2010) the p-value I report is under Wald test of parameter constraints (the null hypothesis being that the parameters are equal), I am assuming that this means that this is a structural equation model of some sort. If you have any advice about how to briefly describe this analysis for repeated measures tests, it would be helpful.
Relatedly, if you happen to be aware of any papers that have used this method for repeated-measures tests, I would appreciate your directing me to them for reference.
You need to analyze all outcomes in a single analysis to get the full benefit from using FIML.
I would not stress the analogy with a paired t-test, but simply say that you are testing various detailed aspects of your estimated model. I don't know about papers but I don't think reviewers used to SEM will have objections.
Jon Stange posted on Wednesday, January 13, 2016 - 1:12 pm
Thank you. So to clarify possible wording to explain the model:
The syntax below specifies a structural equation model, and the "estimated model" tests whether the difference between the two variables differs significantly from zero, while the other variables on the USEVARIABLES list allow FIML to draw on all available data points. Is that correct?
For means from independent groups, the mean estimates are not correlated. Mplus takes this into account in MODEL TEST because the estimated covariance between the p1 and p2 estimates is zero. In other words, the MODEL TEST statement is the same.
EJ Horberg posted on Sunday, March 25, 2018 - 6:57 pm
I would like to do the same analysis as above (testing whether two means in my dataset are significantly different from each other), but with the addition of a covariate.
What is the syntax for controlling for a third variable while testing the difference between two means?
With a covariate, you can directly test whether the intercepts are equal. But to test that the means are equal, you have to express the mean of each DV using the model parameters and the mean of the covariate.
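The approach just described can be sketched as follows. This is a hedged illustration, not syntax from the thread; it reuses the poster's names VarA, VarB, and Gender, and the labels b1, b2, i1, i2, and mx are hypothetical:

```
MODEL:
  VarA ON Gender (b1);
  VarB ON Gender (b2);
  [VarA] (i1);
  [VarB] (i2);
  [Gender] (mx);      ! mentioning its mean brings the covariate
                      ! into the model
MODEL CONSTRAINT:
  NEW (m1 m2 diff);
  m1 = i1 + b1*mx;    ! model-implied mean of VarA
  m2 = i2 + b2*mx;    ! model-implied mean of VarB
  diff = m1 - m2;     ! z-test for the adjusted mean difference
```

The NEW parameter diff gives a direct test of whether the two means differ after adjusting for the covariate; testing 0 = i1 - i2 in MODEL TEST would instead compare the intercepts.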
Thank you for your help. I wrote the commands below, hoping to test the difference between VarA and VarB while controlling for our covariate of Gender. The output showed that the Wald test was significant (p = .004). May I interpret that to mean that, when controlling for Gender, there is a significant difference between VarA and VarB?
model:
VarA VarB on Gender;
[VarA] (p1);
[VarB] (p2);