cross-posted from: https://programming.dev/post/3974080
Hey everyone. I made a casual survey to see if people can tell the difference between human-made and AI-generated art. Any responses would be appreciated; I'm curious to see how accurately people can tell the difference (especially those familiar with AI image generation).
14 / 20 here. I dunno why there are so many people, particularly on Reddit, who absolutely hate AI art. Yeah, some of it can look janky or uncanny-valley, but a lot of it looks really damn cool.
And not all of us have the talent to create visual art of our own, or the money to commission pieces from human artists, so generating images from text prompts is a much more accessible way for us to explore our imaginations.
I suspect they hate it not because of any features of the actual images themselves, but because of what it means for how society as a whole treats art.
For some it’s simply financial. Their career is at stake: an industry that they thought was a stable source of employment is now on the leading edge of a huge shake-up that might not need them at all in the future.
For others it’s seen as an attack on their personal self-worth. For years - for generations - there has been a steady drumbeat of insistence that art is what makes humans “special.” Both specific artists, and humanity in general. It was supposed to be a special skill that we had that set us above the animals and the machines. And now that’s been usurped.
It’s like the old folk tale of John Henry, the steel-driving man who made a heroic last stand against Skynet’s forces in the railroad construction industry. People want to think humans are irreplaceable, and art seemed like a rock-solid anchor for that. Turns out it was actually not.
Spot on!
Agree and I sympathize with all the points.
On the financial point, we, as a society, badly need to stop depending on jobs for survival before it’s too late. But I know that we’re unlikely to change until a lot of people get hurt.
And on the self-worth point, it feels awful to be replaced, even if the money isn’t an issue. People take pride in their work and want their work to be celebrated. Yet we’re quickly approaching a point where it’s going to be very difficult for people to create art by hand that can hold a candle to AI art. Sure, there are still many master artists, but they got where they are through hard work. How many new potential artists will be willing to put in that hard work when any random Joe Blow can generate something better in seconds? Human-made art (from scratch) won’t go away, but it is harder to feel good about what you create when it feels like your art has no place anymore.
I suspect that society isn’t going to stop depending on jobs for survival until it’s too late. That is, it’ll only implement UBI or an equivalent solution once most jobs have been replaced and there’s a legion of permanently unemployed people forcing the issue to be addressed. Unfortunately, that just seems to be the way of things; very few problems ever get addressed preemptively.
IMO this isn’t really a reason to try to slow down AI, because that will only slow down the eventual UBI-like solution to it. At this point I don’t think “change human nature first” is a viable approach.
Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you’re just exploiting labor without consent.
(Also if you’re just making random images for yourself, w/e)
((Also also, text models are a separate debate and imo much worse considering they’re literally misinformation generators))
Note: if anybody wants to reply with “actually AI models learn like people so it’s fine”, please don’t. No they don’t. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf
This paper is just about stock photos and video game art with enough dupes or variations that they didn’t get cut from the training set. The repeated images were included frequently enough to overfit, which is something we already knew. That doesn’t really prove whether diffusion models learn like humans or not. Not that I think they do.
Sure, it’s not proof, but it gives a good starting point. Non-overfitted images would still show this effect (to a lesser extent), and this would never happen to a human. And it’s not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case with the painting).
A lot of Redditors don’t even know why they think a certain way; they think that way because everyone else around them thinks that way. There are some legit criticisms of AI art, but most of the time it’s just bullshit lip service to artists from people who don’t actually care.