In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be changed to allow generative AI systems to scrape the internet.

  • FaceDeer@kbin.social
    Ah, this old paper again. When it first came out it got raked over the coals pretty thoroughly. The authors used an older, poorly trained version of Stable Diffusion, one trained on only 160 million images, and identified 350,000 images from the training set that had many duplicates and could therefore potentially be overfitted. They then generated 175 million images using tags commonly associated with those duplicated images.

    After all that, they found 109 images in the output that looked like fuzzy versions of the input images. This is hardly a triumph of plagiarism.
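
    For the curious, the duplicate test itself is easy to sketch. Here’s a rough, hypothetical version (mine, not the paper’s actual code, and the threshold is arbitrary): downscale both images and check whether they’re nearly pixel-for-pixel correlated, which is about all a “fuzzy version of the input” amounts to.

    ```python
    # Hypothetical near-duplicate check: a generated image counts as "memorized"
    # if its downscaled pixels correlate strongly with a training image's.
    from PIL import Image
    import numpy as np

    def signature(path: str, size: int = 32) -> np.ndarray:
        """Small z-normalized grayscale thumbnail, so fuzzy copies still match."""
        img = Image.open(path).convert("L").resize((size, size))
        arr = np.asarray(img, dtype=np.float32) / 255.0
        return (arr - arr.mean()) / (arr.std() + 1e-8)

    def looks_memorized(generated: str, training: str, threshold: float = 0.95) -> bool:
        """Mean product of two z-normalized thumbnails is their correlation."""
        a, b = signature(generated), signature(training)
        return float((a * b).mean()) > threshold
    ```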

    As for the watermark, look closely at it. The AI clearly just replicated the idea of a Getty-like watermark; it’s barely legible. What else would you expect when you train an AI on millions of images that contain a common feature, though? It’s like any other common object: the model thinks photographs often just naturally have a grey rectangle with those white squiggles in it, and so it tries putting one in there when it generates photographs.

    These are extreme stretches, and AI opponents dredge them up every time. Training techniques have been refined over time to reduce overfitting (what’s the point in spending enormous amounts of GPU power to produce a badly artefacted copy of an image you already have?), so it’s little wonder there aren’t any newer, better papers showing problems like these.
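
    Deduplication is the obvious mitigation, and it’s simple to sketch too. Here’s a rough, hypothetical example (mine, not anything Stability AI actually runs) of collapsing the heavily duplicated training images the attack relied on:

    ```python
    # Hypothetical dedup sketch: bucket images by a coarse thumbnail key so
    # near-identical copies collapse to a single training example.
    from PIL import Image
    import numpy as np

    def coarse_key(path: str, size: int = 16) -> bytes:
        """Tiny quantized grayscale thumbnail; near-duplicates share a key."""
        img = Image.open(path).convert("L").resize((size, size))
        arr = np.asarray(img, dtype=np.float32) / 255.0
        return np.round(arr, 1).tobytes()

    def dedupe(paths: list[str]) -> list[str]:
        seen: dict[bytes, str] = {}
        for p in paths:
            seen.setdefault(coarse_key(p), p)  # keep the first image per bucket
        return list(seen.values())
    ```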

    • frog 🐸@beehaw.org
      Nevertheless, the Getty watermark is a recognisable element from the images the model was trained on, so you cannot claim that the models don’t spit out images with recognisable elements from the training data.

      • FaceDeer@kbin.social
        Take a close look at the “watermark” on the AI-generated image. It’s so badly mangled that you wouldn’t have a clue what it says if you didn’t already know what it was “supposed” to say. If that’s really something you’d consider “copyrightable”, then the whole world’s in violation.

        The only reason this comes up in a copyright lawsuit is that Getty is using it as evidence that Stability AI used Getty images in the training set; Getty isn’t alleging that the AI produces copyrighted images.

        • frog 🐸@beehaw.org
          I said “recognisable”, and it is clearly recognisable as Getty’s watermark, by virtue of the fact that many people, not just me, recognise it as such. You said that the models don’t use any “recognizable part of the original material that it was trained on”, and that is clearly false, because people do recognise parts of the original material. You can’t argue away other people’s ability to recognise the parts of the original works that they recognise.