• Tolstoshev@lemmy.world · 4 months ago

    P<0.05 means that even when there’s no real effect, about one in 20 studies will cross the significance threshold just by chance. So if you have 20 researchers studying the same (non-existent) effect, the 19 who get non-significant results go in the trash unpublished, and the one who gets a “result” sees the light of day.

    That’s why publishing negative results is important, but it’s rarely done because nobody gets credit for a failed experiment. It’s also why it’s important to wait for replication. One swallow does not make a summer, no matter how much breathless science reporting happens whenever someone announces a positive result from a novel study.

    TL;DR - math is hard
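    The file-drawer effect above is easy to simulate. A minimal sketch (the function name and setup are mine, not from any particular study): under a true null hypothesis, p-values are uniformly distributed on [0, 1], so we can draw them directly instead of faking 20 whole experiments.

```python
import random

def publication_bias_demo(n_studies=20, alpha=0.05, seed=0):
    """Simulate n_studies studies of a non-existent effect.

    Under a true null hypothesis, p-values are uniform on [0, 1],
    so we draw them directly rather than simulating raw data.
    """
    rng = random.Random(seed)
    pvals = [rng.random() for _ in range(n_studies)]
    # Only the "significant" results make it past the file drawer.
    published = [p for p in pvals if p < alpha]
    return pvals, published

pvals, published = publication_bias_demo()
# On average alpha * n_studies = 1 of the 20 null studies clears
# p < 0.05 and "sees the light of day"; the rest go in the trash.
```

    Run it with a few different seeds and the published list hovers around one entry — exactly the one-in-twenty fluke the comment describes.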

    • notapantsday@feddit.de · 4 months ago

      Also, check out this one weird trick to get positive results almost every time: just use 20 different end points!
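      To put numbers on the trick — a back-of-the-envelope sketch, assuming the 20 endpoints are independent — the chance that at least one of them comes up “significant” under an all-true-null scenario is 1 − 0.95²⁰ ≈ 64%:

```python
def familywise_error_rate(alpha=0.05, n_endpoints=20):
    # Chance that at least one of n independent endpoints crosses
    # the significance threshold when every null hypothesis is true.
    return 1 - (1 - alpha) ** n_endpoints

def bonferroni_alpha(alpha=0.05, n_endpoints=20):
    # The classic fix: test each endpoint at alpha / n instead,
    # which caps the familywise error rate at roughly alpha.
    return alpha / n_endpoints

print(round(familywise_error_rate(), 2))  # 0.64 -- a "positive" ~2 times in 3
print(bonferroni_alpha())                 # 0.0025 per endpoint
```

      So with 20 endpoints you get a headline result nearly two times out of three even when nothing is there — unless the analysis corrects for multiple comparisons.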

      • Tolstoshev@lemmy.world · 4 months ago (edited)

        P<0.05 means that if there were no real effect, a result at least this extreme would show up less than 5% of the time, or 1 in 20. It’s the most common threshold for being considered significant, but you’ll also see p<0.01 or smaller when the data show the results would arise by chance less often than 1 in 20, like 1 in 100. The smaller the p value the better, but reaching it generally means larger data sets, which cost more out of your experiment budget to recruit subjects, buy equipment, and pay salaries. Gotta make those grant budgets stretch, so researchers go with 1 in 20 since it’s the accepted standard.
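        Roughly how much bigger do the data sets get? A back-of-the-envelope sketch using the standard normal-approximation sample-size formula (the helper name and the example effect size are mine): tightening the threshold from 0.05 to 0.01 at the same power inflates the per-group n by nearly half.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison:
    n ~= ((z_{1-alpha/2} + z_power) / d)^2, where d is the
    standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return math.ceil(((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.5))              # 32 subjects per group at p<0.05
print(n_per_group(0.5, alpha=0.01))  # 47 per group at p<0.01
```

        For a medium effect (d = 0.5), that’s roughly 32 subjects per group at the 1-in-20 standard versus 47 at 1-in-100 — each extra subject costing recruitment, equipment time, and salary.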

    • Grunt4019@lemm.ee · 4 months ago

      I feel like this points to flaws in how studies are published and the incentives around publication, more than to the scientific method itself.