French immigrants are eating our pets!

  • nicerdicer@feddit.org · 23 points · edited · 8 days ago

    It’s a snat. They are not easy to catch, because they are fast. Also, they never land on their shell.

  • AdrianTheFrog@lemmy.world · 5 points · 8 days ago

    Honestly, it's pretty good, and it still works if I use a lower-resolution screenshot without metadata (I haven't tried adding noise or overlaying something else, but those might break it). This is pixelwave, not midjourney, though.
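    A minimal sketch of the kinds of perturbations mentioned above, assuming Pillow and NumPy are available; "detect" is a hypothetical placeholder for whatever detector is being tested, not a real API:

    ```python
    from PIL import Image
    import numpy as np

    def perturb(path, scale=0.5, noise_sigma=8.0):
        """Downscale a screenshot and add mild Gaussian noise.

        Re-saving through Pillow also drops the original metadata (EXIF etc.)."""
        img = Image.open(path).convert("RGB")
        w, h = img.size
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)

        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0.0, noise_sigma, arr.shape)  # mild sensor-like noise
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # perturbed = perturb("snat_screenshot.png")
    # print(detect(perturbed))  # 'detect' stands in for the detector in question
    ```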

  • Todd Bonzalez@lemm.ee · 30 points, 1 downvote · 9 days ago

    I don’t get it. Maybe it’s right? Maybe a human made this?

    The picture doesn’t have to be “real”, it just has to be non-AI. Maybe this was made in Blender and Photoshop or something.

  • Pennomi@lemmy.world · 192 points · 9 days ago

    I mean, it could be a manual photoshop job. Just because it’s not AI doesn’t mean it’s real.

    But also, the detector is probably wrong - it's likely an AI image generated with a different model than the one the detector was trained to detect.

    • kronisk @lemmy.world · 52 points · edited · 9 days ago

      I mean, it could be a manual photoshop job.

      It could, but the double spiral in the shell indicates AI to me. Snail shells don’t grow like that. If it was a manual job, they would have used a picture of a real shell.

      Edit: plus the cat head looks weird where it connects to the body, and the markings don't look right to me.

      • Pennomi@lemmy.world · 25 points · 9 days ago

        Agreed. The aggressive depth of field is another smoking gun that usually indicates an AI image.

      • fishbone@lemmy.dbzer0.com · 12 points · 9 days ago

        Also, the grain on the side of the shell is perpendicular to the grain on top, and it changes where the cat ear comes up in front of it.

        A very telltale sign of AI is a change in a pattern when a foreground object splits it.

        Not saying it’s always a guarantee, but it’s a common quirk and it’s pretty easy to identify.

    • db2@lemmy.world · 53 points · 9 days ago

      There were a lot of really good images like that well before AI. Anyone remember Photoshop Friday?

      • Venator@lemmy.nz · 1 point · 8 days ago

        The shell looks AI-generated though; if it had been photoshopped, a real snail shell would have been used as the source image.

      • Paradachshund@lemmy.today · 30 points, 1 downvote · 9 days ago

        There's a sort of… sheen to a lot of AI images. Obviously you can prompt this away if you know what you're doing, but to my eye it's developing a bit of a look when people don't do that.

    • Hobbes_Dent@lemmy.world · 6 points · 9 days ago

      Where the fuck are you from that they aren’t called catsnails? Odd. Been catsnails here since I can remember.

  • mm_maybe@sh.itjust.works · 4 points · 9 days ago

    There are a bunch of reasons why this could happen. First, it's possible to "attack" some simpler image classification models: if you get a large enough sample of their outputs, you can mathematically derive a way to process any image so that it won't be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall into a synthetic image at a very low percentage, can trip up detectors that haven't been trained to be more discerning.

    But it all comes down to how you construct the training dataset, and I don't think any of this is a good enough reason to give up on using machine learning for synthetic media detection in general. In fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is keeping such a model from assuming that all anime is synthetic, since "AI artists" seem to be overly focused on anime and related styles…
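    A toy illustration of the low-percentage blending trick mentioned above, using Pillow; the file names and the 5% ratio are made up for the example:

    ```python
    from PIL import Image

    synthetic = Image.open("synthetic.png").convert("RGB")
    real_wall = Image.open("real_wall.jpg").convert("RGB").resize(synthetic.size)

    # Image.blend(a, b, alpha) computes a*(1-alpha) + b*alpha per pixel;
    # reportedly even a small alpha like 0.05 can confuse a naive detector.
    mixed = Image.blend(synthetic, real_wall, alpha=0.05)
    mixed.save("mixed.png")
    ```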