Personally I’ve seen this behavior a few times in real life, often with worrying implications. Generously, I’d like to believe these people use extruded text as a place to start thinking from, but in practice it seems to me that they tend to use extruded text as a thought-terminating behavior.

IRL, I find it kind of insulting, especially if I’m talking to people who should know better or if they hand me extruded stuff instead of work they were supposed to do.

Online it’s just sort of harmless reply-guy stuff usually.

Many people simply straight-up believe LLMs to be genie-like figures, as they are advertised and written about in the “tech” rags. That bums me out, sort of in the same way really uncritical religiosity bums me out.

HBU?

  • BlackRoseAmongThorns@slrpnk.net · 6 points · 8 hours ago

It’s absolutely insulting and infuriating and i want to grab them and slap them more than a couple times.

    I’m first year into university, studying software engineering, and sometimes i like doing homework with friends, because calculus and linear algebra are hard on my brain and i specifically went to uni to understand the hard parts.

Not once, not twice, have i asked a friend for help with something, only for them to just open the dumbass chatbot, ask it how to solve the question, and believe the answer like it’s Moses coming down with the commandments. Then they give me the same explanation, full of orgasmic enthusiasm, until i go “applying that theorem in the second step is invalid” or “this contradicts an earlier conclusion”. Then they shut their fucking brains off, tell monsieur shitbot his mistake, and again explain to me like I’m a child (I’d say mansplaining because honest to god it looked and sounded the same, but I’m also a man, so…), word for word, the bot output.

This doesn’t stop at 3 or 4 times, i could only wish. At some point i got curious and burnt an hour like that with the same guy, on the same question, on the same prompt streak.

Like, after the 7th time they still don’t understand that they’re in more trouble than me, and they still talk like they have a PhD.

    So I’ll sum up:

    • they turn off their brain
    • they make bot think for them
    • they believe bot like gospel
    • they swear bot knows best
    • they’re shown it does not know shit
    • “just one more prompt bro i swear bro please one more time bro i swear bro Claude knows how to solve calculus bro it has LaTeX bro so it probably knows bro please bro one more prompt bro-”
  • Strider@lemmy.world · 3 points · 8 hours ago

A friend of mine works in tech, is very well aware of what ‘AI’ is, and is a big fan. He runs his own bots and stuff for personal use and thinks he has the situation under control.

While he is relying more and more on the ‘benefits’.

My fear is that he won’t be aware of how the LLM-interpreted output might change him; it’s kind of a deal-with-the-devil situation.

    I hope I am wrong.

  • Rizo@sh.itjust.works · 8 points · 11 hours ago

My boss uses it to convert 3 lines of text into multiple paragraphs of PR text for our newsletter, and was excited about this… 2 or 3 weeks later he told me how cool it is that he can take a multi-paragraph newsletter from other companies and summarize it into 3 sentences… Let us burn through our energy grid for marketing… (slow clap)

  • Arthur Besse@lemmy.ml · 1 point · 8 hours ago

    i’ve had friends and colleagues I respect, who I really thought would know better, do this.

    to say it bums me out would be a massive understatement :(

  • HollowNaught@lemmy.world · 7 points · 14 hours ago

More than a few of my work colleagues will search up something and then blindly trust the AI summary

    It’s infuriating

  • Ecco the dolphin@lemmy.ml · 27 points · 2 days ago

    It happened to me on Lemmy here

Far too many people defended it. I could have asked an AI myself, but I preferred a human, which is the point of this whole Lemmy thing

  • Akasazh@feddit.nl · 7 points · 1 day ago

I had a friend ask me what time the Tour de France would pass through Clermont-Ferrand, on a day when the stage was in Normandy. Because AI had told them so, as part of their ‘things to do in Clermont-Ferrand on that day’ query.

The Tour had started in Clermont-Ferrand in 2023, but not even on that date, which kind of puzzled me.

  • Aqarius@lemmy.world · 8 points · 1 day ago

Absolutely. People will call you a bot, then vomit out an argument ChatGPT gave them without even reading it.

  • ThisIsNotHim@sopuli.xyz · 17 points · 1 day ago

    Slightly different, but I’ve had people insist on slop.

A higher-up at work asked the difference between i.e., e.g., and ex. I answered; they weren’t satisfied and made their assistant ask the large language model. The assistant read the reply out loud and it was near verbatim to what I had just told them. Ugh

    This is not the only time this has happened

      • ThisIsNotHim@sopuli.xyz · 5 points · 12 hours ago

        I.e. is used to restate for clarification. It doesn’t really relate to the other two, and should not be used when multiple examples are listed or could be listed.

        E.g. and ex. are both used to start a list of examples. They’re largely equivalent, but should not be mixed. If your organization has a style guide consult that to check which to use. If it doesn’t, check the document and/or similar documents to see if one is already in use, and continue to use that. If no prior use of either is found, e.g. is more common.

        • deaddigger@sh.itjust.works · 3 points · 12 hours ago

          Thanks

So i.e. would be like “the most useful object in the galaxy, i.e. a towel”

And e.g. would be like “companies, e.g. Meta, Viatris, Ehrmann, Edeka”. Right?

          • ThisIsNotHim@sopuli.xyz · 3 points · 12 hours ago

            Exactly. If you’ve got a head for remembering Latin, i.e. is id est, so you can try swapping “that is” into the sentence to see if it sounds right.

            E.g. is exempli gratia so you can try swapping “for example” in for the same trick.

            If you forget, avoiding the abbreviations is fine in most contexts. That said, I’d be surprised if mixing them up makes any given sentence less clear.

  • Ffs, I had one of those at work.

One day, we bought a new water sampler. The thing is pretty complex and requires a licensed technician from the manufacturer to come and commission it.

Since I was overseeing the installation, and later I would be the person responsible for connecting it to our industrial network, I had quite a few questions about the device, some of them very specific.

I swear the guy couldn’t give me even the most basic answers about the device without asking ChatGPT. And at a certain point, I had to answer one question myself by reading the manual (which I downloaded on the go, because the guy didn’t have a paper copy) because ChatGPT couldn’t give him an answer. This guy was someone hired by the company making the water sampler as an “expert”, mind you.

    • flandish@lemmy.world · 20 points · 2 days ago

assuming you were in meatspace with this person, I am curious, did they like… open GPT mid-convo with you to ask it? Or say “brb”?

      • Since I was inspecting the device (it’s a somewhat big object, similar to a fridge), I didn’t realize at first, because I wasn’t looking at him. I noticed the ChatGPT thing when, at a certain question, I was standing next to him and he shamelessly, phone in hand, typed my question into ChatGPT. That was when he couldn’t give me the answer and I had to look for the product manual on the internet.

        Funniest thing was when I asked something I couldn’t find in the manual and he told me, and I quote, “if you manage to find out, let me know the answer!”. Like, dude? You are the product expert? I should be the one saying that to you, not the other way around!

  • flandish@lemmy.world · 36 points · 2 days ago

    I respond in ways like we did when Wikipedia was new: “Show me a source.” … “No GPT is not a source. Ask it for its sources. Then send me the link.” … “No, Wikipedia is not a source, find the link used in that statement and send me its link.”

    If you make people at least have to acknowledge that sources are a thing you’ll find the issues go away. (Because none of these assholes will talk to you anymore anyway. ;) )

      • BlameTheAntifa@lemmy.world · 10 points · 1 day ago

        Tracing and verifying sources is standard academic writing procedure. While you definitely can’t trust anything an LLM spits out, you can use them to track down certain types of sources more quickly than search engines. On the other hand, I feel that’s more of an indictment of the late-stage enshittification of search engines, not some special strength of LLMs. If you have to use one, don’t trust it, demand supporting links and references, and verify absolutely everything.

      • flandish@lemmy.world · 9 points · 2 days ago

Yep, 100% aware. That’s one of my points: showing it’s fake. Sometimes enlightening to some folks.

        • BroBot9000@lemmy.world · 8 points · edited · 2 days ago

I’ll still ask the person shoving AI slop in my face for a source or artist link, just to shame these pathetic attempts to pass along slop and misinformation.

          Edit for clarity

          • Ulrich@feddit.org · 2 up / 1 down · 2 days ago

            You can ask it for whatever you want, it will not provide sources.

            • BroBot9000@lemmy.world · 6 points · edited · 2 days ago

Ask the person shoving AI slop in my face for their source.

Not going to ask a racist pile of linear algebra for a fake source.

  • BroBot9000@lemmy.world · 32 points · edited · 2 days ago

There are a lot of uneducated people out there without the ability to critically evaluate new information they receive. So to them any new information is true, and no further context is sought because they’re lazy too.

    • DrDystopia@lemy.lol · 6 points · 2 days ago

Anybody, at any level, can fall into that trap unless externally evaluated. And if they never get a reality check, they just keep going perpetually. Why not? It’s worked up until now…

  • galoisghost@aussie.zone · 21 points · 2 days ago

The worst thing is when you see the AI summary repeated word for word on content-farm sites that appear in the result list. You know that just reinforces the AI summary’s validity to some users.

  • Ulrich@feddit.org · 9 points · 2 days ago

    Absolutely. All the time.

Also had a guy that I do a little bit of work with ask me to use it. I told him no haha

  • lapes@lemmy.zip · 15 points · 2 days ago

    I work in customer support and it’s very annoying when someone pastes generic GPT advice on how I should fix their issue. That stuff is usually irrelevant or straight up incorrect.