F-test vs t-test: what is the real difference?























As I've learnt, a t-test is used to compare two populations' means, whereas an F-test (ANOVA) is used to compare the variances of two or more populations.



In the end, are these doing the same thing?



My background is in biology, with no strong math/stats training. I ask because whenever I use ANOVA (comparing more than two groups) followed by a post hoc Tukey test and do not observe significant differences, my supervisor asks me to run multiple t-tests instead. Is this an acceptable way of doing statistics?



I see that many publications in biology do not follow the statistics taught in textbooks.






























  • Not necessarily. To use the standard (pooled) t-test you need an assumption of equal variances, so you may use an F-test to determine whether the equal-variance assumption holds. In that case the F-test is a precursor to the t-test.
    – gd1035
    4 hours ago















statistics normal-distribution statistical-inference variance






edited 4 hours ago





















asked 4 hours ago by Oncidium

2 Answers
A t-test is a univariate hypothesis test applied when the standard deviation is unknown and the sample size is small. The t-statistic follows a Student t-distribution under the null hypothesis. You use this test to compare the means of two populations. As @gd1035 mentioned, the pooled t-test assumes equal variances, which you could first check using an F-test.



The F-test, on the other hand, is a statistical test of the equality of the variances of two normal populations. The F-statistic follows an F-distribution under the null hypothesis. You use this test to compare two population variances.
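As a minimal sketch of running the two tests back to back (hypothetical data; `scipy` assumed available; the variance-ratio F-test and the pooled t-test, not any specific library's "F-test" helper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, size=20)  # hypothetical group A
b = rng.normal(11.0, 2.0, size=20)  # hypothetical group B

# F-test for equal variances: ratio of sample variances,
# compared to an F(n1 - 1, n2 - 1) distribution (two-sided p-value)
F = np.var(a, ddof=1) / np.var(b, ddof=1)
dfn, dfd = len(a) - 1, len(b) - 1
p_var = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))

# If the variances look equal, the pooled two-sample t-test compares the means
t_stat, p_mean = stats.ttest_ind(a, b, equal_var=True)
print(f"F = {F:.3f}, p(var) = {p_var:.3f}; t = {t_stat:.3f}, p(mean) = {p_mean:.3f}")
```

Note that the variance-ratio F-test is quite sensitive to non-normality, which is one reason a Welch t-test (below) is often preferred over this two-step procedure.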






– bob, answered 2 hours ago









































The appropriateness of the statistical test depends on the research hypothesis. If, as you suggest in your question, the research hypothesis is that there is a difference in means between at least two groups when there are strictly more than two groups to be compared, then the $F$-test arising from ANOVA is an appropriate test under additional assumptions, because the null hypothesis would be $$H_0 : \mu_1 = \mu_2 = \ldots = \mu_k$$ where $k > 2$ is the number of groups and $\mu_i$ is the true mean of group $i$. A level-$\alpha$ test would control the Type I error for the alternative (research) hypothesis. But the result of such a test would not formally tell you which groups differ from each other in a pairwise sense; hence the need for the Tukey post hoc test, or you could use pairwise $t$-tests with a multiplicity correction.



As an illustration of the importance of the research hypothesis, if you have a control group against which different treatments are compared, you could use Dunnett's test instead of ANOVA, since the only comparisons of interest are treatments against control, not treatments against each other.



The central issue underlying the subsequent identification of statistically significant pairwise differences after an omnibus test is that of multiple comparisons: even with as few as $4$ groups, you would have $\binom{4}{2} = 6$ pairwise comparisons, and the Type I error rate would be inflated without a multiplicity correction such as the Bonferroni adjustment.
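The inflation described above is easy to compute. A sketch, under the (optimistic) assumption that the six tests are independent:

```python
from math import comb

k = 4                        # number of groups
m = comb(k, 2)               # 6 pairwise comparisons
alpha = 0.05
fwer = 1 - (1 - alpha) ** m  # family-wise error rate if the tests were independent
alpha_bonf = alpha / m       # Bonferroni-adjusted per-comparison level
print(m, round(fwer, 3), round(alpha_bonf, 4))  # 6 0.265 0.0083
```

So six uncorrected tests at the nominal $0.05$ level carry roughly a $26\%$ chance of at least one false positive; Bonferroni restores the family-wise level by testing each pair at $\alpha/m$.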



Simply doing pairwise tests before (or instead of) an ANOVA would, in my opinion, be ill-advised from the perspective of statistical rigor, although, as I have implied, it is not the most serious methodological flaw. It may be useful for exploratory purposes, but adjustment for multiple comparisons is absolutely necessary in order to make inferential claims that can withstand scrutiny.



One final note: a "$t$ test" does not require an assumption of equal variances; the Welch $t$-test (using the Satterthwaite approximation to the degrees of freedom) is one way to address unequal group variances, and its test statistic is compared to a Student $t$ distribution, so I would consider it a $t$-test.
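A sketch of the workflow this answer outlines (omnibus ANOVA first, then pairwise Welch tests with a Bonferroni correction), using hypothetical data and assuming `scipy` is available:

```python
from itertools import combinations
from math import comb

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {"A": rng.normal(0.0, 1.0, size=15),   # hypothetical data
          "B": rng.normal(0.0, 1.0, size=15),
          "C": rng.normal(1.0, 1.0, size=15)}

# Omnibus one-way ANOVA: F-test of H0 that all group means are equal
F, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.3f}, p = {p_omnibus:.4f}")

# Pairwise Welch t-tests (no equal-variance assumption), Bonferroni-corrected
m = comb(len(groups), 2)
for g1, g2 in combinations(groups, 2):
    _, p_raw = stats.ttest_ind(groups[g1], groups[g2], equal_var=False)
    print(f"{g1} vs {g2}: adjusted p = {min(1.0, m * p_raw):.4f}")
```

`statsmodels.stats.multicomp.pairwise_tukeyhsd` would give the Tukey HSD version of the post hoc step, if that package is available.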






– heropup, answered 2 hours ago




















