• Tartas1995@discuss.tchncs.de · ↑9 · 2 hours ago

    So many in these comments are like “what about the ethically sourced data ones?”

    Which ones? Name one.

    None of the big ones are. Wtf is “ethically sourced”? E.g. eBay wants to collect data for AI shit. My mom has an account, and she could opt out of them using her data, but when I told her about it, she said she didn’t understand, and she moved on. She just didn’t understand what the fuck they are doing and why she should care. But I guess it is “ethically” sourced, since they kinda asked by making it opt-out.

    That surely is very ethical and you cannot criticize it for that… As we all know, a 50yo adult fucking a 14yo would also be totally cool as long as the 14yo doesn’t say no. Right? That is how our moral compass works. /S

    Fucking disgusting. All of you tech bros complain about people not getting AI or tech in general, and then you talk about ethically sourced data. I spit on you.

    I love IT, I work in it and I live it, but I have morals and you could too

  • HappyTimeHarry@lemm.ee · ↑24 · 4 hours ago

    It seems we’ve come full circle with “copying is not theft”… I have to admit I’m really not against the technology in general, but the people who are currently in control of it are all just the absolute worst people who are the least deserving of control over such a thing.

    Is it hypocritical to think there should be rules for corporations that don’t apply to real people? Like why is it the other way around, and I can go to jail or get a fine for sharing the wrong files, but some company does it and they just say it’s for the “common good” and they “couldn’t make money if they had to follow the laws” and they get a fucking pass?

    • haverholm@kbin.earth · ↑3 · 1 hour ago

      Honestly, I didn’t intend to block a dozen AI Bros today, but this has been like shooting fish in a barrel.

  • maxxadrenaline@lemmy.world · ↑2 ↓1 · edited · 3 hours ago

    The general rule was that it had to be 25 percent different. This is why AI can’t directly copy an image. You may remember some horrendous boundary-pushing of art in the 2000s, like that artist who straight up blew up celebrity and media influencers’ Instagram posts and sold massive photos of them without giving the influencer/celebrity a cent. Avril Lavigne’s ex published her song lyric notes and won the case against her. Copyright has always been awful.

    The Marvin Gaye estate is notorious for bluffing that his IP is stolen, but music can legally sample 6 seconds of any song or sound without permission. Robin Thicke’s song was completely different, and when that family is hard up they go after another obscure artist. Don’t be swayed. If it’s original, it’s original. Not like Selena’s “Fotos y Recuerdos” and “Back on the Chain Gang” by the Pretenders, that one was blatant, and all she did was change the lyrics back in the 80s.

    Copyright changes, but you are protected just like the big guys. Don’t be afraid to create, you’ll be missing out on experience. Copy, don’t worry about originality, just make art. Trust me, I couldn’t paint more than a stroke for years out of fear of being a copycat, infringing, unoriginal. Just copy and copy until you have your own style. I promise it will come. It’s impossible for two people to play the Moonlight Sonata exactly the same. I was friends with an Oxford music professor; he can tell who anyone is by the way they play a piano. The nuances are always going to show. You’re too original, you’re not a robot. Even 3D printers never print the same piece the same way twice because of environmental factors.

    All you need to know is to change your art 25 percent from the original, even if it is color choice, and anything you publish online is automatically protected in American courts. It doesn’t matter if you copy AI; if it’s 25 percent different, it’s yours. Also, I’ll remind you that AI legally cannot duplicate images to infringe on copyright. That’s why all images look slightly off. The nuances are set with parameters partially to keep it legal. If courts find it is a copy beyond artistic expression, then in come the hammers and bats to the AI server stacks. Serious.

  • yyprum@lemmy.dbzer0.com · ↑6 ↓11 · 3 hours ago

    I guess it is inevitable that self-centred, ego-stroking bubble communities appear on platforms such as Lemmy, where reasoned polite discussion is discouraged and opposing opinions are drowned out.

    Well, I’ll just leave this comment here in the hope someone reads it and realises how bad these communities actually are. There’s a lot to hate about AI (especially the companies dedicated to selling it), but not all of it is bad. As with any technology, it’s about how you use it, and this kind of community is all about cancelling everything and never advancing or improving.

    • Tartas1995@discuss.tchncs.de · ↑4 ↓2 · 2 hours ago

      There is utility in AI, e.g. in medical stuff like detecting cancer.

      Sadly, the most funded AI stuff is LLMs and image generation AIs. These are bad.

      And a lot of AI stuff has major issues with racism.

      “But AI has potential!!!” Yeah, but it isn’t there yet, and it actively harms people. “But it could…” But it isn’t. Hitler could have fought against discrimination, but sadly he chose the Holocaust and war. The potential for good is irrelevant to the reality of bad. People hate the reality of it, not the pipe dreams of it.

    • Tibi@discuss.tchncs.de · ↑3 ↓1 · 2 hours ago

      Well, the environmental costs are pretty high… One request to ChatGPT 3.5 was estimated to be around 100 times more expensive than a Google search request. Plus training.

      This doesn’t change no matter the use case. Same with the copyright issues.
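The multiplier above can be sanity-checked with a back-of-envelope calculation. The per-query figures below are assumptions, not measurements: ~0.3 Wh per Google search (a figure Google itself published in 2009) and roughly 3 Wh per ChatGPT-style request (a widely repeated early estimate). Under those numbers the ratio comes out nearer 10× than 100×, though published estimates vary a lot:

```python
# Back-of-envelope energy comparison (all figures are rough public
# estimates, not measurements).
google_search_wh = 0.3   # Wh per search, Google's 2009 blog figure
llm_query_wh = 3.0       # Wh per LLM request, early community estimate

ratio = llm_query_wh / google_search_wh
print(f"One LLM request ~= {ratio:.0f}x one Google search")

# Even a 10x gap adds up at scale: assume a billion queries per day.
daily_queries = 1e9
extra_mwh_per_day = daily_queries * (llm_query_wh - google_search_wh) / 1e6
print(f"Extra energy at 1B queries/day: {extra_mwh_per_day:.0f} MWh")
```

Whether the true multiplier is 10× or 100× depends heavily on model size, hardware, and batching, which is why estimates disagree.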

    • techclothes@lemmy.world · ↑2 · 3 hours ago

      I don’t know the guy, but I do know the site, which is really nice for seeing how other games implemented UI. Makes complete sense for AI assholes to want all that data.

  • AnimalsDream@slrpnk.net · ↑7 ↓42 · 7 hours ago

    The more I see dishonest, blindly reactionary rhetoric from anti-AI people - especially when that rhetoric is identical to classic RIAA brainrot - the more I warm up to (some) AI.

    • uienia@lemmy.world · ↑1 · 1 hour ago

      It is in fact the opposite of reactionary to not uncritically embrace your energy-guzzling, disinformation-spreading, profit-driven “AI”.

      As much as you don’t care about language, it actually means something, and you should take some time to look inwards; you will notice who the reactionary in this scenario is.

  • drkt@scribe.disroot.org · ↑9 ↓37 · 7 hours ago

    Oh boy, here we go, downvotes again.

    regardless of the model you’re using, the tech itself was developed and fine-tuned on stolen artwork with the sole purpose of replacing the artists who made it

    That’s not how that works. You can train a model on licensed or open data, and they didn’t make it to spite you. Even if a large group of grifters did, those aren’t the ones developing it.

    If you’re going to hate something at least base it on reality and try to avoid being so black-and-white about it.

    • sixty@sh.itjust.works · ↑1 · 1 hour ago

      You CAN train a model on licensed or open data. But we all know they didn’t keep it to just that.

    • Tartas1995@discuss.tchncs.de · ↑2 ↓1 · 2 hours ago

      Name one that is “ethically” sourced.

      And “open data” is a funny thing to say. Why is it open? Could it be open because the people who made it didn’t expect it to be abused for AI? When a porn star posted a nude picture online in 2010, do you think they thought of the idea that someone would use it to create deepfakes of random women? Please be honest. And yes, a picture might not actually be “open data”, but it highlights the flaw in your reasoning. People don’t think about what could be done with their stuff in the future as much as they should, and they certainly can’t predict the future.

      Now ask yourself that same question for any profession. Please be honest and tell us: is that “open data” not just another way to abuse the good intentions of others?

    • pretzelz@lemmy.world · ↑13 ↓2 · 6 hours ago

      I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.

      I’m not sure I agree with that argument. It’s like saying that if you invented a cure for cancer that relied on morally bankrupt means you shouldn’t use that cure. I’d say that there should be a legal process involved against the person who did the illegal acts but once you have discovered something it stands on its own two feet. Perhaps there should be some kind of reparations however given to the people who were abused in that process.

  • blinx615@lemmy.ml · ↑8 ↓86 · 11 hours ago

    Rejecting the inevitable is dumb. You don’t have to like it but don’t let that hold you back on ethical grounds. Acknowledge, inform, prepare.

    • Tartas1995@discuss.tchncs.de · ↑4 ↓1 · 2 hours ago

      AI isn’t magic. It isn’t inevitable.

      Make it illegal and the funding will dry up and it will mostly die. At the very least, it wouldn’t threaten the livelihood of millions of people after stealing their labor.

      Am I promoting a ban? No. AI has its use cases. But is the current LLM and image generation AI bs good? No. Should it be banned? Probably.

        • uienia@lemmy.world · ↑1 · 1 hour ago

          That is such a disingenuous argument. “Making murder illegal? People will just kill each other anyway, so why bother?”

    • Croquette@sh.itjust.works · ↑41 ↓3 · 10 hours ago

      You probably create AI slop and present it proudly to people.

      AI should replace dumb monotonous shit, not creative arts.

      • blinx615@lemmy.ml · ↑5 ↓42 · edited · 9 hours ago

        I couldn’t care less about AI art. I use AI in my work every day in dev. The coworkers who are not embracing it are falling behind.

        Edit: I keep my AI use and discoveries private, nobody needs to know how long (or little) it took me.

          • ArtificialHoldings@lemmy.world · ↑1 · 1 hour ago

            The objections to AI image gens, training sets containing stolen data, etc. all apply to LLMs that provide coding help. AI web crawlers search through git repositories compiling massive training sets of code, to train LLMs.

        • msage@programming.dev · ↑3 · 4 hours ago

          Then most likely you will start falling behind… perhaps in two years, as it won’t be as noticeable quickly, but there will be an effect in the long term.

        • Tartas1995@discuss.tchncs.de · ↑1 ↓1 · 2 hours ago

          “I am fine with stolen labor because it wasn’t mine. My coworkers are falling behind because they have ethics and don’t suck corporate cock, but instead understand the value in humanity and life itself.”

        • corsicanguppy@lemmy.ca · ↑2 · 7 hours ago

          I use GPT to prototype out some Ansible code. I feel AI slop is just fine for that, and I can keep my brain freer of YAML and Ansible, which saves me from alcoholism and therapy later.

  • Scubus@sh.itjust.works · ↑10 ↓73 · edited · 2 hours ago

    Tools have always been used to replace humans. Is anyone using a calculator a shitty person? What about storing my milk in the fridge instead of getting it from the milkman?

    I don’t have an issue with the argument, but unless they’re claiming that any tool which replaced human jobs was unethical, their argument is not self-consistent and thus lacks any merit.

    Edit: notice how no one has tried to argue against this.

    People have begun discussing it, although I suppose it was an unfair expectation to have this discussion here. Regardless, after I originally edited this, you guys did have tons of discussions with me. I do appreciate it, and it seems that most of us support the same things. It kinda just seems like an issue with framing and looking at things in the now vs the mid-term future.

    • redwattlebird@lemmings.world · ↑4 · edited · 3 hours ago

      The issue isn’t automation itself. The issue is the theft, the fact that art cannot be automated, and the use of it to further enshittification.

      First, the models are based on theft of OUR data and then sold back to us for profit.

      Secondly, most AI art is utter crap and doesn’t contribute anything to human society. It’s shallow slop.

      Thirdly, having it literally everywhere while also being completely energy inefficient is absolutely dumb. Why are we building nuclear reactors and coal plants to replace what humans can do for cheap??

      Edit: further, the sole purpose of AI is to hoard wealth for a small number of people. Calculators, hammers etc. do not have this function and do not require lots of energy to use.

      • Scubus@sh.itjust.works · ↑1 · edited · 2 hours ago

        I’ve responded to a lot of that elsewhere, but in short: I agree theft is bad. Capitalism is also bad. Neither of those is inherent to AI or LLMs, though, although theft is definitely the easy way. Art can be automated; nature does it all the time. We can’t do it to a high degree now, I will concede.

        Quality is of course low; it’s new. The progress in the last year has been astounding, and it will continue to improve. Soon this will no longer be a valid argument.

        I agree, modern AI is horribly inefficient. It’s a prototype, but it’s also a hardware issue. Soon there will be much more efficient designs, and I suspect a rather significant alteration to the architecture of the network that may allow for massively improved efficiency. Disclaimer: I am not professionally in the field, but this topic in particular is right up multiple fields of study I have been following for a while.

        Edit: somehow missed your edit when writing. To some extent, every tool of productivity exists to exploit the worker. A calculator serves this function as much as anything else. By allowing you to perform calculations more quickly, your productivity massively increases in certain fields, sometimes in excess of thousands of times. Do you see thousands of times the profits of your job prior to the advent of calculators, excluding inflation? Unlikely. Or the equivalent pay of the same number of “calculators” required for your work? Equally unlikely. It’s inherent to capitalism.

    • SexDwarf@lemmy.world · ↑6 ↓1 · 5 hours ago

      Would you replace a loved one (a child, spouse, parent etc.) with an artificial “tool”? Would it matter to you that they’re not real, even when you couldn’t tell the difference? And if your answer is yes, you had no trouble replacing a loved one with an artificial copy, then our views/morals are fundamentally so different that I can’t see us ever agreeing.

      It’s like trying to convince me that having sex with animals is awesome and great and they like it too, and I’m just: no thanks, that’s gross and wrong, please never talk to me again. I know I don’t necessarily have the strongest logic in the AI (and especially “AI art”) discussion, but that’s how I feel.

      • Scubus@sh.itjust.works · ↑2 ↓3 · edited · 4 hours ago

        That’s a lot of different questions in a lot of different contexts. If my parent decided to upload their consciousness near the end of their life into a mech suit covered in latex (basically) that was physically indistinguishable from a human (or even not, who am I to judge), and the process of uploading a consciousness was well understood and practiced, then yes, I would respect their decision. If you wouldn’t, you either have difficulty placing yourself in hypothetical situations designed to test the limits of societal norms, or you abjectly do not care about the autonomy of your parent.

        A child? I have no issue adopting. If they happen to be an artificial human, I don’t see why that should preclude them from being allowed to have parents.

        A spouse? I’m not going to create one to my liking. But if we lived in a world with AI creating other AI that are all sentient, some of which presumably choose to take a physical form in an aforementioned mech, why shouldn’t I date them? Your immediate response is sex, but let’s ignore that. Is an asexual relationship with a sentient robot ok? What about a friendship with said robot? Are you even allowed to treat a sentient robot as a human? What’s the distinction? I’m not attempting a slippery slope; I genuinely would like to hear where your distinction between what is and isn’t acceptable lies. Because I think this miscommunication either stems from a misunderstanding about the possible sentience of AI in the future, or from the lack of perspective of what it might be like from their perspective.

        Edit: just for the record, I don’t downvote comments like yours, but someone did, so I had to upvote you.

        • petrol_sniff_king@lemmy.blahaj.zone · ↑3 ↓1 · 4 hours ago

          Are you even allowed to treat a sentient robot as a human?

          Oh, boy, this one’s really hard. I’ll give it my best shot, though. Phoo. Okay, here goes.

          Yes.

          Ohhhh fuck. Oh god. Oh please. Scubus, how did I do? Did I win?

          Now please argue to me that chatgpt is sentient.

          • Scubus@sh.itjust.works · ↑1 ↓1 · 3 hours ago

            Ah, sorry. I misunderstood your argument. No, I would never replace a loved one with a “tool”. But replacing loved ones with tools was never something I was arguing for. ChatGPT is a very crude prototype of the type of AI I am referring to. And he didn’t say ChatGPT, he said “degenerative AI” but also stated “AI art”.

            The entire argument is centered around those who use or make AI art being “shitty people”, no exceptions. But that falls apart when you analyze it at all. There are ethical ways to do the entire process.

    • rolling@lemmy.world · ↑2 ↓1 · 3 hours ago

      This may come as a shock to you, but nobody was working as a refrigerator. Refrigerators didn’t replace the milkman, the stores did. Which was fine at first, since those stores were supposed to buy the milk from the milkman and just make it more readily accessible. Then human greed took over: the stores and big name brands started to fuck over the milkman, and conspired with other big name stores to artificially increase the price of milk while blaming covid and inflation, and now some, although few, people are trying to buy it back from the milkman if they can afford/access it.

      Those tools that did replace humans did not steal human work and effort in order to train themselves. Those tools did not try to replace human creativity with some soulless steaming pile of slop.

      You see, I believe open source, ethically trained AI models can exist and they can accomplish some amazing things, especially when it comes to making things accessible to people with disabilities, for example. But Edd Coates is specifically talking about art, design and generative AI. So maybe don’t come to a community called “Fuck AI”, change the original argument, and then expect people to argue against you in good faith.

      • ArtificialHoldings@lemmy.world · ↑1 · edited · 1 hour ago

        The “milkman” is a delivery person who works for milk producers. The company that produces milk still exists, the role of the milkman was just made unnecessary due to advances in commercial refrigeration - milk did not have to be delivered fresh, it could be stored and then bought on-demand.

        https://en.wikipedia.org/wiki/Milk_delivery

        “Human greed” didn’t take over to fuck over the milkman, they just didn’t need a delivery person any more because milk could be stored on site safely between shipments.

      • Scubus@sh.itjust.works · ↑1 · 3 hours ago

        Tons of people do! I browse /all and don’t want to block /fuck_ai, because a ton of you do have great discussions with me. I’m not brigading, I have never once sought out this community, but I’ve always tried to be respectful and I haven’t gotten banned. So I’d say all is well.

        As far as the crappy stuff, that really seems like just another extension of consumerism. Modern art has irked people for a while because some of it is absurdly simplistic. But if people are willing to buy into it, that’s on them. LLMs have very limited use cases, and ethically sourcing your data is critically necessary for both ethical and legal reasons. But the world needs to be prepared for the onset of the next generation of AI. It’s not going to be sentient quite yet, but general intelligence isn’t too far away. Soon one AI will be able to outperform humans on most daily tasks as well as some more specialized tasks. LLMs seemingly took the world by surprise, but if you’ve been following the tech, the progression has been somewhat obvious. And it is continuing to progress.

        Honestly, the biggest concern I have with modern AI, outside of how it’s being implemented, is that it is environmentally very bad, but I’m hoping that the growth of the AI bubble will lead to more specialised, energy-efficient designs. I don’t remember what the paper was, but they were using AI to generatively design more efficient chips, and it was showing promising results. On a couple of the designs they weren’t entirely sure how they functioned (they have several strong theories, but they’re not certain; not trying to misrepresent this), but when they fucked with them, they stopped behaving as predicted/expected (relative to them being fucked with; of course a broken circuit isn’t going to function correctly).

      • Scubus@sh.itjust.works · ↑2 ↓6 · 5 hours ago

        “If you facilitate AI art, you are a shitty person”

      There are ethical means to build models using consensually gathered data. He says those artists are shitty.

      • BrainInABox@lemmy.ml · ↑2 ↓9 · 7 hours ago

        Didn’t they? Did they get consent from the mathematicians to use their work?

            • rolling@lemmy.world · ↑4 ↓1 · 3 hours ago

              I think the fact that AI sucks ass at even the most basic math proves that the difference between discovery and creation is, indeed, not arbitrary.

              Unless you are the kind of person to use AI to do math, then yeah I can see how it can look that way.

              • BrainInABox@lemmy.ml · ↑2 ↓2 · 3 hours ago

                I think the fact that AI sucks ass at even the most basic math proves that the difference between discovery and creation is, indeed, not arbitrary.

                I don’t follow your reasoning at all.

    • petrol_sniff_king@lemmy.blahaj.zone · ↑36 ↓2 · edited · 4 hours ago

      Yes, I also think the kitchen knife and the atom bomb are flatly equivalent. Consistency, people!

      Edit: 🤓 erm, notice how no one has tried to argue against this

          • ober@lemmy.dbzer0.com · ↑14 · 9 hours ago

            I like how he made an edit to say no one is arguing his point and the only response he got is arguing his point and then he replies to that with no argument.

            • petrol_sniff_king@lemmy.blahaj.zone · ↑6 · 8 hours ago

              They virtually always do this. People are, very often, not actually motivated by logic and reason; logic and reason are a costume they don to appear more authoritative.

            • Scubus@sh.itjust.works · ↑2 ↓4 · 5 hours ago

              He made a completely irrelevant observation. There was no argument. He didn’t try to refute anything I said; he tried to belittle the argument. No response was necessary. If anyone else has responded, I haven’t had a chance to look.

  • brucethemoose@lemmy.world · ↑39 ↓83 · edited · 13 hours ago

    And this is where I split with Lemmy.

    There’s a very fragile, fleeting war between shitty, tech bro hyped (but bankrolled) corporate AI and locally runnable, openly licensed, practical tool models without nearly as much funding. Guess which one doesn’t care about breaking the law because everything is proprietary?

    The “I don’t care how ethical you claim to be, fuck off” attitude is going to get us stuck with the former. It’s the same argument as Lemmy vs Reddit, compared to a “fuck anything like reddit, just stop using it” attitude.


    What if it was just some modder trying a niche model/finetune to restore an old game, for free?

    That’s a rhetorical question, as I’ve been there: a few years ago, I used ESRGAN finetunes to help restore a game and (separately) a TV series. I used some open databases for data. The community loved it. I suggested an update in that same community (who apparently had no idea their beloved “remaster” involved oldschool “AI”), and got banned for the mere suggestion.


    So yeah, I understand AI hate, oh do I. Keep shitting on Altman and AI bros. But anyone (like this guy) who wants to bury open weights AI: you are digging your own graves.

    • forrgott@lemm.ee · ↑50 ↓12 · 13 hours ago

      Oh, so you deserve to use other people’s data for free, but Musk doesn’t? Fuck off with that one, buddy.

        • kibiz0r@midwest.social · ↑13 ↓7 · 10 hours ago

          Except gen AI didn’t exist when those people decided on their license. And besides which, it’s very difficult to specify “free to use, except in ways that undermine free access” in a license.

      • brucethemoose@lemmy.world · ↑9 ↓7 · edited · 11 hours ago

        Musk does too, if it’s openly licensed.

        Big difference is:

        • X’s data crawlers don’t give a shit because all their work is closed source. And they have lawyers to just smash anyone that complains.

        • X intends to resell and make money off others’ work. My intent is free, transformative work I don’t make a penny off of, which is legally protected.

        That’s another thing that worries me. All this is heading in a direction that will outlaw stuff like fanfics, game mods, fan art, anything “transformative” of an original work and used noncommercially, as pretty much any digital tool can be classified as “AI” in court.

    • haverholm@kbin.earth · ↑23 ↓6 · 13 hours ago

      What if it was just some modder trying a niche model/finetune to restore an old game, for free?

      • brucethemoose@lemmy.world · ↑8 ↓20 · edited · 13 hours ago

        Yeah? Well, what if they got very similar results with traditional image processing filters? Still unethical?
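For context on what “traditional image processing filters” means here: unlike ESRGAN-style learned upscalers, a classical upscaler is just a fixed mathematical rule with no training data behind it. A minimal nearest-neighbor sketch (illustrative only; production tools typically use bicubic or Lanczos kernels, and a real image library would be used instead of nested lists):

```python
def upscale_nearest(img, factor):
    """Nearest-neighbor upscaling: each output pixel copies the closest
    input pixel. A fixed rule; no training data is involved."""
    h, w = len(img), len(img[0])
    return [
        [img[y // factor][x // factor] for x in range(w * factor)]
        for y in range(h * factor)
    ]

# A 2x2 grayscale "image" upscaled 2x into a 4x4 one.
small = [[0, 255],
         [255, 0]]
big = upscale_nearest(small, 2)
for row in big:
    print(row)
```

The output is the same checkerboard with each pixel doubled in both dimensions, which is the whole point of the contrast being drawn: the filter encodes no one’s artwork, only arithmetic.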

        • superniceperson@sh.itjust.works · ↑23 ↓6 · 12 hours ago

          The effect isn’t the important part.

          If I smash a thousand orphan skulls against a house and wet it, it’ll have the same effect as a decent limewash. But people might have a problem with the sourcing of the orphan skulls.

          It doesn’t matter if you’we just a wittle guwy that collects the dust from the big corporate orphan skull crusher and just add a few skulls of your own, or you are the big corporate skull crusher. Both are bad people despite producing the same result as a painter that sources normal limewash made out of limestone.

          • brucethemoose@lemmy.world · ↑5 ↓13 · edited · 11 hours ago

            Even if all the involved data is explicitly public domain?

            What if it’s not public data at all? Like artificial collections of pixels used to train some early upscaling models?

            That’s what I was getting at: some upscaling models are really old, used in standard production tools under the hood, and completely legally licensed. Where do you draw the line between ‘bad’ and ‘good’ AI?

            Also, I don’t get the analogy. I’m contributing nothing to big, enshittified models by doing hobbyist work; if anything it poisons them by making public data “inbred” if they want to crawl whatever gets posted.

              • hedgehog@ttrpg.network · ↑2 ↓6 · 9 hours ago

                The energy consumption of a single AI exchange is roughly on par with a single Google search back in 2009. Source. Was using Google search in 2009 unethical?

              • brucethemoose@lemmy.world · ↑5 ↓14 · edited · 11 hours ago

                Total nonsense. ESRGAN was trained on potatoes; tons of research models are. I finetune models on my desktop for nickels of electricity; it never touches a cloud datacenter.

                At the high end, if you look past bullshitters like Altman, models are dirt cheap to run and getting cheaper. If Bitnet takes off (and a 2B model was just released days ago), inference energy consumption will be basically free and on-device, like video encoding/decoding is now.

                Again, I emphasize: it’s corporate bullshit giving everything a bad name.