• corroded@lemmy.world · 2 months ago

    The problem isn’t the rise of “AI” so much as how we’re using it.

    If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that’s a great use of the technology. We don’t need an LLM to produce a useless summary of what it thinks is a question when all I want is a page of search results.
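
    A toy sketch of that production-line idea, with every name and number made up for illustration (this is not any real API, just plain Python): fit a simple curve to logged (parameter, efficiency) pairs and suggest the parameter at its peak.

```python
# Toy sketch: suggest a machine parameter from logged metrics.
# All data and names here are hypothetical.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]               # sums of x^0..x^4
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]  # sums of y*x^0..y*x^2
    m = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    rhs = [t[2], t[1], t[0]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):          # Cramer's rule: swap each column for the RHS
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = rhs[r]
        coeffs.append(det3(mc) / d)
    return coeffs  # a, b, c

# Logged (belt_speed, efficiency) pairs from a hypothetical line.
history = [(1.0, 0.60), (2.0, 0.82), (3.0, 0.90), (4.0, 0.84), (5.0, 0.62)]
a, b, c = fit_quadratic([p for p, _ in history], [e for _, e in history])
best_speed = -b / (2 * a)  # vertex of the fitted parabola
```

    In practice this would be a real regression or Bayesian-optimization loop, but the shape of the idea is the same: learn from the metrics, propose a setting.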

    • FiniteBanjo@lemmy.today · 2 months ago

      That’s fucking bullshit. The people developing it and shipping it as a product have been very clear and upfront about their uses, and none of it is ethical.

  • ZeroHora@lemmy.ml · 2 months ago

    That’s not the entire picture. We are destroying our planet to generate bad art, fake titties, and search a little bit faster, but with the same chance of being entirely wrong as just googling it.

  • givesomefucks@lemmy.world · 2 months ago

    There’s literally no point.

    Like, humans aren’t really the “smartest” animals. We’re just the best at language and tool use. Other animals routinely demolish us in everything else measured on an IQ test.

    Pigeons get a bad rap for being stupid, but their brains are just different from ours. Their image and pattern recognition is so insane that they can recognize that words they’ve never seen aren’t gibberish, just by letter structure.

    We weren’t even trying to get them to do it. The researchers were just introducing new words and expected the pigeons to have to learn them, but the birds could already tell, despite never having seen those words before.

    Why the hell are we jumping straight to human consciousness as a goal when we don’t even know what human consciousness is? It’s like going straight at the final boss the very first time you pick up Elden Ring. Maybe you’ll eventually beat it. But why wouldn’t you just start from the beginning and work your way up as the game gets harder?

    We should at least start with pigeons: build an artificial pigeon and work our way up.

    Like, that old Reddit repost about pigeon-guided bombs: that wasn’t a Hail Mary, it was incredibly effective.

  • MudMan@fedia.io · 2 months ago

    I mean, it also made the first image of a black hole, so there’s that part.

    I’d also flag that you shouldn’t use one of these to do basic sums. But in fairness, the corporate shills are so desperate to find a sellable application that they’ve been pushing that sort of use super hard, so on that one I blame them.

  • alienanimals@lemmy.world · 2 months ago

    This is a strawman argument. AI is a tool. Like any tool, it’s used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI’s climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).

    AI has many positive use cases, yet they are completely ignored by people who lack logic and rationality.

    AI is helping physicists speed up experiments into supernovae to better understand the universe.

    AI is helping doctors expedite cancer screenings.

    AI is powering robots that can do the dishes.

    AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.

    • reddithalation@sopuli.xyz · 2 months ago

      But those are the cool, interesting, research-related AIs, not the venture-capital hype LLMs that will gift us AGI any day now with just a bit more training data/compute.

    • Floey@lemm.ee · 2 months ago

      Obviously by AI they mean stuff like ChatGPT. An energy intensive toy where the goal is to get it into the hands of as many paying customers as possible. And you’re doing free PR for them by associating it with useful small scale research projects. I don’t think most researchers will want to associate their projects with AI now that the term has been poisoned, though they might have to because many bigwigs have been sucked into the hype. The term AI has basically existed nebulously since the beginning of computing, so whether we call one thing or another AI is basically personal taste. Companies like OpenAI have successfully attached their product to the term and have created the strongest association, so ultimately if you say AI in a contemporary context a lot of people are hearing GPT-like.

      • AccountMaker@slrpnk.net · 2 months ago

        Yeah, but it doesn’t really help that this is a community called “Fuck AI”, made as “A place for all those who loathe machine-learning…”. It’s like saying “I loathe Dijkstra’s algorithm”. The term machine learning has been used since at least the ’50s, and it involves a lot of elegant mathematics, all of which essentially tries to optimize various functions in various ways. And yet, at least in the places I’m exposed to, people constantly present any instance of machine learning as useless, morally wrong, theft, or ineffective compared to “traditional methods”, to the point where I feel uneasy telling people that I’m doing research in that area, since there’s so much hate towards the entire field, not just LLMs. It might be because of them, sure, but in my experience the popular hatred of AI is not limited to ChatGPT, corporations, and the like.
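
        That “optimizing various functions” framing fits in a few lines. A minimal, purely illustrative sketch (toy data, no particular library): gradient descent on a one-parameter least-squares problem.

```python
# "Machine learning as function optimization" in miniature:
# fit y = w * x to toy data by gradient descent on squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w, lr = 0.0, 0.05
for _ in range(200):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
# w settles near the least-squares optimum sum(x*y) / sum(x*x)
```

        Everything from linear regression to LLM training is a scaled-up version of this loop: a loss function, its gradient, and repeated small updates.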

    • UltraHamster64@lemmy.world (OP) · 2 months ago

      Yes, AI is a tool. And the person in the screenshot is criticizing generative GPT-like and Midjourney-like AI, which has a massive impact on the climate and almost no useful results.

      In your examples, as far as I can see, they always train their own model (supernova research, illegal fishing) or heavily customize one and use it in close conjunction with people (cancer screenings).

      And so I think we’re talking about two different things, so I want to clarify:

      AI as in a neural-network algorithm that can digest massive amounts of data and give meaningful results: absolutely useful, and I think the more time passes (and the more grifters move on to other fields), the more actually useful niches and cases will be solved with neural nets.

      But AI as in we-gonna-shove-this-bot-down-your-throat GPT-like bots, trained on all the data from the entire internet (mostly Reddit), that struggle with basic questions, hallucinate glue on pizza, generate six-fingered hands, and are close to useless in any use case: absolutely abysmal, and not worth ruining our climate for.

    • CompostMaterial@lemmy.world · 2 months ago

      If all fossil fuel power plants were converted to nuclear, then tech power consumption wouldn’t even matter. Again, it was the oil industry that railroaded nuclear power as unsafe.