A biologist was shocked to find his name mentioned several times in a scientific paper that cites papers which simply don’t exist.

  • krayj@lemmy.world · 1 year ago

    Brandolini’s law, aka the “bullshit asymmetry principle”: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

    Unfortunately, with the advent of large language models like ChatGPT, the quantity of bullshit being produced is accelerating and is already outpacing the ability to refute it.

    • calabast@lemm.ee · 1 year ago

      I’m curious to see if AI tech can actually help fight some of the bullshit out there someday. I agree that current AI only makes it easier to produce bullshit, but I think that with some advances it could be used to parse a long-winded batch of bullshit and summarize it, maybe with bullet points about how the source material is wrong. If they can make an AI as confident as ChatGPT, but without as much of the “makes stuff up left and right” tendency, it could be useful.
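
      A minimal sketch of what that could look like, assuming an OpenAI-style chat API (the model name and prompt wording are my own placeholders, not any real product):

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def critique(long_winded_text: str) -> str:
          """Summarize a wall of text and flag claims that look unsupported.
          Prompt and model choice are illustrative placeholders."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system",
                   "content": "Summarize the text as bullet points, then list "
                              "any claims that look unsupported or contradictory."},
                  {"role": "user", "content": long_winded_text},
              ],
          )
          return response.choices[0].message.content
      ```

      Of course, this only automates the parsing and summarizing; the hard part is exactly the “makes stuff up left and right” problem, since the critic model can hallucinate its own refutations.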

      THEN we just have to worry about who owns the AI that parses and summarizes the info we take in, and what kind of biases they’ve baked into the tech…

      • NeoNachtwaechter@lemmy.world · 1 year ago

        > I’m curious to see if AI tech can actually help fight some of the bullshit out there someday.

        It is one of the most difficult problems on earth: deciding what is a lie and what is the truth.

        And then think about the fine line involved in detecting irony, half-irony, and other forms of humorous non-truth.

      • Barack_Embalmer@lemmy.world · 1 year ago

        I have high hopes for concepts like Toolformer where the model has to learn to use external APIs and resources like Wikipedia or Wolfram to get answers, rather than relying on the inscrutable and garbled soup of knowledge absorbed from the text training corpus directly. Systems plugged into knowledge graphs could have the best of both worlds - able to generate well-written novel text outputs AND the added rigor of “classical AI” style interpretability.
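
        A toy sketch of the mechanism (everything here is an illustrative stand-in; Toolformer itself learns to emit these calls during training rather than having them hard-coded):

        ```python
        import re

        # Toy tool "registry". The model emits calls like [WIKI("query")]
        # inside its text; this stub stands in for a real Wikipedia or
        # Wolfram API.
        def wiki_stub(query: str) -> str:
            articles = {"Brandolini's law": "an adage about the asymmetry of refuting bullshit"}
            return articles.get(query, "[no article found]")

        TOOLS = {"WIKI": wiki_stub}

        # Matches emitted tool calls of the form [NAME("argument")].
        CALL_PATTERN = re.compile(r'\[(\w+)\("(.*?)"\)\]')

        def splice_tool_results(model_output: str) -> str:
            """Replace each emitted call with the tool's answer, so the tokens
            that follow are conditioned on an external source rather than on
            whatever the model memorized from its training corpus."""
            def run(m):
                tool = TOOLS.get(m.group(1))
                return tool(m.group(2)) if tool else m.group(0)
            return CALL_PATTERN.sub(run, model_output)

        print(splice_tool_results('Brandolini\'s law is [WIKI("Brandolini\'s law")].'))
        # -> Brandolini's law is an adage about the asymmetry of refuting bullshit.
        ```

        The appeal is that the answer text comes from an external resource that can be inspected and cited, instead of from the model’s opaque internal weights.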

      • gjghkk@lemmy.dbzer0.com · 1 year ago

        > I’m curious to see if AI tech can actually help fight some of the bullshit out there

        Those same AIs are the best ones for producing fake scientific papers. It’s a cat-and-mouse game again: those who can best detect bullshit can also produce the best bullshit.