• littleblue✨@lemmy.world · +33 / −4 · 10 months ago

    Oh, FFS. Here we go. “I did my research” == “I plunked letters into a box until the computer used words I don’t know. Science, bitch!”

    We’re so fucked.

    • MxM111@kbin.social · +6 / −32 · 10 months ago

      If your claim is that what is posted is incorrect, then state so and provide reasons. Otherwise, I do not understand why you consider using tools such as ChatGPT4 to be something bad. Do you use search engines? They are tools too. Do you use Wikipedia? That is a tool too. Or do you spit sarcasm at anyone who does not produce original research?

      • partial_accumen@lemmy.world · +19 / −3 · 10 months ago

        If your claim is that what is posted is incorrect, then state so and provide reasons.

        “That which can be asserted without evidence, can be dismissed without evidence.” – Christopher Hitchens.

        • MxM111@kbin.social · +2 / −24 · 10 months ago

          You do not consider ChatGPT4 as any sort of evidence? Go ahead and ask it questions in medicine or biology and keep a tally of how many answers are right and how many are wrong.

          I do admit that there is a low probability that it is wrong, but simply dismissing it as no evidence at all is intentional dishonesty.

          • HikingVet@lemmy.ca · +9 · 10 months ago

            ChatGPT4 is a fucking toy that regurgitates random shit it finds on the Internet. The only evidence it provides is the lack of understanding its user has.

          • partial_accumen@lemmy.world · +8 · 10 months ago

            I do admit that there is a low probability that it is wrong

            By your own admission it can get things wrong, yet you’re arguing it should be trusted at face value.

            but simply dismissing it as no evidence at all is intentional dishonesty.

            The whole point of citing a source is so that you can confirm the veracity of how the source came to its conclusion. You have no idea why the LLM gave you the answer it did. You don’t know how credible its input data was. Hopefully those involved in these discussions, on both sides, are searching for truth. The critical examination of the data and of the origin of that data is the bedrock of that. Simply pasting raw LLM output doesn’t allow any of that to occur.

            LLMs and other AI/ML tools can have a place in these discussions as tools you use for yourself, and then you can search for supporting sources to back up the LLM’s claim. However, that’s work you have to do. It’s not my job when you’re the one trying to convince me of your LLM’s conclusion.

            Dishonesty is passing off raw LLM output as researched fact. It’s also lazy.

            • MxM111@kbin.social · +1 / −3 · 10 months ago

              I am arguing that it should be given relatively high credence, not “trusted at face value”. Same as with Wikipedia, by the way. As an indication that something is likely true. On Internet forums, that is much higher credence than most people supply. I am not writing a scientific paper here; I am discussing a topic with you. Would you rather I stated facts without any sources at all?

              For this discussion, if you have a different opinion with better argumentation and sources, please present it, and I will change my view. That is what discussion on a discussion board is supposed to be.

              And you can absolutely confirm the veracity (or not) of ChatGPT4 itself. You can ask the question yourself. You can collect statistics on how likely it is to give correct answers to similar questions, or find already published data about this topic. Based on that, you can calculate the probability that the statement is true. And it is much higher than 50%.

              In short, don’t attack the messenger, attack the message.
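
              A minimal sketch, in Python, of the tally-and-estimate idea above, assuming you have hand-graded a sample of answers yourself; the counts are placeholders, not real measurements:

                  # Hypothetical hand-graded tally of ChatGPT4 answers to similar questions.
                  from math import sqrt

                  graded = {"correct": 46, "wrong": 4}   # placeholder counts, not real data

                  n = graded["correct"] + graded["wrong"]
                  p_hat = graded["correct"] / n          # observed accuracy on the sample

                  # Rough 95% interval via the normal approximation to the binomial.
                  margin = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
                  print(f"estimated accuracy: {p_hat:.2f} ± {margin:.2f} over {n} questions")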

              • partial_accumen@lemmy.world · +1 · 10 months ago

                I am arguing that it should be given relatively high credence, not “trusted at face value”. Same as with Wikipedia, by the way.

                I know you are, and I disagree. Your example of Wikipedia is a great differentiator.

                The reason that Wikipedia is generally a good source is that it too cites its sources. If a Wikipedia entry makes a claim, I can see where that data came from, or, if it’s not cited, I know the claim is suspect and not to be trusted. ChatGPT has none of that.

                I am discussing a topic with you. Would you rather I stated facts without any sources at all?

                From my perspective, not citing any source is exactly what you’re doing. ChatGPT isn’t a trusted or challengeable source.

                And you can absolutely confirm the veracity (or not) of ChatGPT4 itself. You can ask the question yourself. You can collect statistics on how likely it is to give correct answers to similar questions, or find already published data about this topic.

                If you want ChatGPT involved, that’s your job. Why is it you can’t use ChatGPT to find the real source which backs its claim?

                Based on that, you can calculate the probability that the statement is true. And it is much higher than 50%.

                "much higher that 50% is way way too low a bar to be considered a factual source.

                In short, don’t attack the messenger, attack the message.

                I can’t attack the message; it’s not backed by any sources I could question. My only option is to trust it absolutely, which is absurd.

          • Flying Squid@lemmy.world · +3 · 10 months ago

            You do not consider ChatGPT4 as any sort of evidence?

            When it comes to science? No. ChatGPT does not write peer-reviewed journal articles.