Apparently, stealing other people’s work to create a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • Pup Biru@aussie.zone · 10 months ago

    you know how the neurons in our brain work, right?

    because if not, well, it’s pretty similar… unless you say there’s a soul (in which case we can’t really have a conversation based on fact alone), we’re just big ol’ probability machines with tuned weights based on past experiences too
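(The “probability machines with tuned weights” idea above can be sketched as a single artificial neuron: a weighted sum of inputs squashed through a nonlinearity. This is a minimal illustrative example; the function and numbers are made up and not taken from any real model.)

```python
import math

# minimal sketch of one artificial "neuron": the weights are what
# training tunes, loosely analogous to experience tuning synaptic strengths
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: squash into (0, 1)

# illustrative values only
activation = neuron([1.0, 0.5], [0.8, -0.3], 0.1)
print(activation)  # a probability-like value between 0 and 1
```

Stacking many such units and adjusting the weights from examples is, at a very coarse level, what both artificial networks and (per this commenter’s analogy) brains do.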

    • Phanatik@kbin.social · 10 months ago

      You are spitting out basic points and attempting to draw similarities because our brains are capable of something similar. The difference between what you’ve said and what LLMs do is that we have experiences from which we can glean a variety of information. An LLM sees text, and all it’s designed to do is say “x is more likely to follow y than z”. If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language, because that’s all it has seen.
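(The “x is more likely to follow y” behaviour can be sketched as a toy bigram model: count which word follows which in the training text, then always predict the most frequent successor. This is a deliberately crude illustration, with a made-up training sentence; real LLMs use neural networks over tokens, not raw word counts.)

```python
from collections import Counter, defaultdict

# count, for each word, how often each other word follows it
def train(text):
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

# predict the successor seen most often after `word`
def predict_next(follows, word):
    return follows[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # -> "cat" (seen twice after "the")
```

Feed such a model nonsense and, as the comment says, it can only echo that nonsense back: its predictions are entirely determined by the frequencies in its training text.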

      You’ll read this and think “that’s what humans do too, right?” Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points regarding this, but I’ll state them here as well. An LLM will tell you information, but it has no cognition of what it’s telling you. It has no idea whether it’s right or wrong; its job is to convince you that it’s right, because that’s the success state. If you tell it it’s wrong, that’s a failure state. The more you speak with it, the more failure states it accumulates and the more likely it is to cut off communication, because it’s not reaching a success; it’s not giving you what you want. The longer the conversation goes on, the crazier LLMs get as well, because it’s too much to process at once, holding all those contexts in memory while trying to predict the next token. Our brains do this easily and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.

      • Pup Biru@aussie.zone · 10 months ago (edited)

        but that’s just a matter of complexity, not fundamental difference. the way our brains work and the way an artificial neural network works aren’t that different; it’s just that our brains are many orders of magnitude bigger

        there’s no particular reason why we can’t feed artificial neural networks an enormous amount of … let’s say tangentially related experiential information … as well, but in order to be efficient and make them specialise in the things we want, we only feed them information that’s directly related to the specialty we want them to perform

        there’s some… “pre training” or “pre-existing state” that exists with humans too that comes from genetics, but i’d argue that’s as relevant to the actual task of learning, comprehension, and creating as a BIOS is to running an operating system (that is, a necessary precondition to ensure the correct functioning of our body with our brain, but not actually what you’d call the main function)

        i’m also not claiming that an LLM is intelligent (or rather, i’d prefer the term self-aware, because “intelligent” is pretty nebulous); just that the structure it has isn’t that much different from our brains, only at a scale so much smaller and so much more generic that you can’t expect it to perform as well as a human - you wouldn’t expect to cut out 99% of a human’s brain and have them continue to function at the same level either

        i guess the core of what i’m getting at is that the self-awareness that humans have is definitely not present in an LLM; however, i don’t think that self-awareness is necessarily a prerequisite for most things that we call creativity. i think it’s entirely possible for an artificial neural net that’s fundamentally the same technology we use today to ingest the same data that a human would from birth, and to have very similar outcomes… given that belief (and i’m very aware that it certainly is just a belief - we aren’t close to understanding our brains, but i don’t fundamentally think there’s anything other than neurons firing that results in the human condition), just because you simplify and specialise the input data doesn’t mean that the process is different. you could argue that it’s lesser, for sure, but to rule out that it can create a legitimately new work is definitely premature

    • ParsnipWitch@feddit.de · 10 months ago

      “Soul” is the word we use for something we don’t scientifically understand yet. Unless you have discovered how human brains work, in which case I congratulate you on your Nobel Prize.

      You can abstract a complex concept so much that it becomes wrong. And abstracting how the brain works down to “it’s a probability machine” is definitely a wrong description. Especially when you want to use it as an argument for similarity to other probability machines.

      • Pup Biru@aussie.zone · 10 months ago (edited)

        “Soul” is the word we use for something we don’t scientifically understand yet

        that’s far from definitive. another definition is

        A part of humans regarded as immaterial, immortal, separable from the body at death

        but since we aren’t arguing semantics, it doesn’t really matter exactly, other than the fact that it’s important to remember that just because you have an experience, belief, or view doesn’t make it the only truth

        of course i didn’t categorically discover how the human brain works in its entirety; however, i’m sure most scientists would agree that the method by which the brain performs its functions is neurons firing. if you disagree with that statement, the burden of proof is on you. the part we don’t understand is how it all connects up - the emergent behaviour. we understand the basics; that’s not in question, yet you seem to be questioning it

        You can abstract a complex concept so much it becomes wrong

        it’s not abstracted; it’s simplified… if what you’re saying were true, then simplifying complex organisms down to a petri dish for research would be “abstracted” so much it “becomes wrong”, which is categorically untrue… it’s an incomplete picture, but that doesn’t make it either wrong or abstract

        *edit: sorry, it was another comment where i specifically said belief; the comment you replied to didn’t state that, however most of this still applies regardless

        i laid out an “a leads to b leads to c” argument and stated that it’s simply a belief; however, it’s a belief that’s based in logic and simplified concepts. if you want to disagree, that’s fine, but don’t act like you have some “evidence” or “proof” to back up your claims… all we’re talking about here is belief, because we simply don’t know - neither you nor i

        and given that all of this is based on belief rather than proof, the only thing that matters is what we as individuals believe about the input and output data (because the bit in the middle has no definitive proof either way)

        if a human consumes media and writes something and it looks different, that’s not a violation

        if a machine consumes media and writes something and it looks different, you’re arguing that is a violation

        the only difference here is your belief that a human brain somehow has something “more” than a probabilistic model going on… but again, that’s far from certain