• brucethemoose@lemmy.world · 13 days ago

    Yeah? Well, what if they got very similar results with traditional image-processing filters? Would that still be unethical?
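    For instance, a purely algorithmic upscale-and-sharpen pass involves no ML and no training data at all. A minimal sketch with Pillow (the filenames are hypothetical):

    ```python
    # Classic, non-ML image processing: bicubic upscale plus unsharp mask.
    from PIL import Image, ImageFilter

    img = Image.open("photo.png")  # hypothetical input file
    up = img.resize((img.width * 2, img.height * 2),
                    Image.Resampling.BICUBIC)  # plain interpolation
    sharp = up.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
    sharp.save("photo_2x.png")
    ```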

      • superniceperson@sh.itjust.works · 13 days ago

      The effect isn’t the important part.

      If I smash a thousand orphan skulls against a house and wet it, it’ll have the same effect as a decent limewash. But people might have a problem with the sourcing of the orphan skulls.

      It doesn’t matter if you’re just a wittle guwy who collects the dust from the big corporate orphan-skull crusher and adds a few skulls of your own, or if you are the big corporate skull crusher yourself. Both are bad people, even though they produce the same result as a painter who sources normal limewash made from limestone.

        • brucethemoose@lemmy.world · 13 days ago

        Even if all the data involved is explicitly public domain?

        What if it’s not public data at all? Like the artificial collections of pixels used to train some early upscaling models?

        That’s what I was getting at: some upscaling models are really old, run under the hood of standard production tools, and are completely legally licensed. Where do you draw the line between ‘bad’ and ‘good’ AI?
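        To make the “artificial pixels” point concrete, here’s a hedged sketch of how upscaler training pairs can be produced with zero scraped images: render a procedural pattern, downsample it, and use the (low-res, high-res) pair as training data. Everything here is illustrative, not any particular model’s pipeline.

        ```python
        # Fully synthetic training pair: the "image" is a procedural
        # gradient-plus-noise pattern, so no real photo is involved.
        import numpy as np
        from PIL import Image

        rng = np.random.default_rng(0)

        def synthetic_pair(size=64, scale=2):
            xs = np.linspace(0.0, 1.0, size)
            pattern = np.outer(xs, xs) * 255 + rng.normal(0, 10, (size, size))
            hr = Image.fromarray(pattern.clip(0, 255).astype(np.uint8))
            lr = hr.resize((size // scale,) * 2, Image.Resampling.BICUBIC)
            return lr, hr  # (input, target) for a hypothetical training loop

        lr, hr = synthetic_pair()
        ```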

        Also, I don’t get the analogy. I’m contributing nothing to big, enshittified models by doing hobbyist work; if anything, hobbyist output poisons them by making the public data they crawl ‘inbred.’

          • untakenusername@sh.itjust.works · 11 days ago

            Depends on what you’re producing. Running Llama 3.1 locally on a Raspberry Pi doesn’t have any meaningful impact on the climate.
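            For what it’s worth, a minimal sketch of that kind of setup, assuming a quantized GGUF build of Llama 3.1 8B and the llama-cpp-python bindings (the model path is hypothetical):

            ```python
            # Local-only inference: nothing leaves the device, CPU-only.
            from llama_cpp import Llama

            llm = Llama(
                model_path="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",  # hypothetical
                n_ctx=2048,    # modest context to fit in a Pi's RAM
                n_threads=4,   # one thread per Pi core
            )
            out = llm("Q: Name three uses for a Raspberry Pi.\nA:", max_tokens=64)
            print(out["choices"][0]["text"])
            ```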

          • hedgehog@ttrpg.network · 13 days ago

            The energy consumption of a single AI exchange is roughly on par with that of a single Google search back in 2009. Source. Was using Google search in 2009 unethical?
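            Back-of-envelope, taking Google’s widely cited 2009 figure of roughly 0.3 Wh per search and the parity claim at face value:

            ```python
            # Rough scale check for a hypothetical heavy user.
            WH_PER_QUERY = 0.3     # Google's ~2009 per-search estimate, in Wh
            queries_per_day = 100  # assumed heavy usage

            daily_wh = WH_PER_QUERY * queries_per_day
            print(f"{daily_wh:.0f} Wh/day")  # 30 Wh: ~2 minutes of a 1 kW microwave
            ```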

          • brucethemoose@lemmy.world · 13 days ago

            Total nonsense. ESRGAN was trained on potatoes, and tons of research models are. I finetune models on my desktop for nickels’ worth of electricity; it never touches a cloud datacenter.
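            Rough numbers, all assumed for illustration (a consumer GPU drawing ~300 W for a two-hour run at a typical residential rate):

            ```python
            # Back-of-envelope electricity cost for a hobbyist finetune.
            gpu_watts = 300      # assumed sustained draw of a consumer GPU
            hours = 2.0          # assumed length of a small finetuning run
            usd_per_kwh = 0.15   # assumed residential electricity rate

            cost = gpu_watts / 1000 * hours * usd_per_kwh
            print(f"${cost:.2f}")  # ~$0.09, literally nickels
            ```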

            At the high end, if you look past bullshitters like Altman, models are dirt cheap to run and getting cheaper. If BitNet takes off (and a 2B model was released just days ago), inference energy consumption will be basically free and on-device, like video encoding/decoding is now.
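            The reason BitNet-style models are so cheap, in a toy numpy sketch: with weights constrained to {-1, 0, +1}, the core matrix product needs only additions and subtractions. Real kernels also pack the ternary weights far more tightly; this just shows the arithmetic.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            W = rng.integers(-1, 2, size=(4, 8))  # ternary weights, as in BitNet b1.58
            x = rng.standard_normal(8)            # activations

            y_ref = W @ x  # ordinary dense product, for comparison

            # Multiplication-free version: add where w == +1, subtract where w == -1.
            y = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])
            assert np.allclose(y, y_ref)
            ```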

            Again, I emphasize: it’s corporate bullshit that’s giving everything a bad name.

          • brucethemoose@lemmy.world · 12 days ago

            I’m trying to make the distinction between local models and corporate AI.

            I think what people really hate is enshittification. They hate the shitty capitalism of unethical, inefficient, crappy, hype- and buzzword-laden AI that’s shoved down everyone’s throats. They hate how giant companies are stealing from everyone, with no repercussions, to prop up their toxic systems, and I do too. It doesn’t have to be that way, but it will be if the “fuck AI” attitude on that website becomes the prevalent one.