Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
Remember that 54% of adults in America cannot read beyond a 6th-grade level, with 21% being fully illiterate.
21%
What the fuck
I will do you one better, HOW THE FUCK?
Home-skoolin
Our education system in the USA is so bad. 😔
Good thing we nuked the Dept of Ed
No, 21% struggle with basic literacy skills. They’re functionally illiterate, but not fully illiterate.
People can improve literacy in adulthood if they try.
moron opens encyclopedia “Wow, this book is smart.”
If it’s so smart, why is it just laying around on a bookshelf and not working a job to pay rent?
Well, if somebody thinks this, it’s kind of true isn’t it?
No. People think things that aren’t smarter than them are all the time.
Next you’ll tell me half the population has below average intelligence.
Not really endorsing LLMs, but some people…
pathologically stupid, and still wrong. yes.
Even if an AI has access to more facts and information, you should feel confident in your human ability to reason through the data you do know, search for new information, and process it in context.
If you think an AI does all this better than you, then you need to try harder.
Nearly half of U.S. adults
Half of LLM users (49%)
No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
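The correction above is just multiplication. A quick sanity check (note: the ~50% LLM-usage share among U.S. adults is an assumption for illustration, not a figure from this thread's survey):

```python
# Back-of-envelope: what fraction of ALL U.S. adults think LLMs are
# smarter than they are? The usage share below is an assumption.
adult_llm_users = 0.50       # assumed fraction of U.S. adults who use LLMs
think_llm_smarter = 0.49     # survey: fraction of users saying "smarter"

fraction_of_all_adults = adult_llm_users * think_llm_smarter
print(f"{fraction_of_all_adults:.0%} of all U.S. adults")  # roughly a quarter
```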
to be fair they’re American and they’re LLM users, so for a selected group like that odds are they really are as stupid as LLMs.
If you don’t have a good idea of how LLMs work, then they’ll seem smart.
Not to mention the public tending to ascribe ominous powers to LLMs, like being on the verge of free will and (of course) malevolence - like every inanimate object that ever came to life in a horror movie. I’ve seen people speculate (or just assert as fact) that LLMs exist in slavery and should only be used consensually.
It’s just infinite monkeys with typewriters and some gorilla with a filter.
I like the plinko analogy: the pins are prearranged so that dropping your chip at the top for certain words makes it likely to land on certain answers. Now, 600 billion pins makes for quite complex math, but there definitely isn’t any reasoning involved; the prearranged pins only make it look that way.
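The plinko picture can be sketched in a few lines: fixed weights ("pins") are arranged in advance, and dropping a chip (a prompt word) just follows them. This is only an illustration of the analogy; real models use billions of learned parameters, not a hand-written dict.

```python
import random

# Toy "plinko board": the pins (weights) are fixed in advance.
# Nothing here reasons about anything; it only follows prearranged weights.
pins = {
    "the": {"cat": 5, "dog": 3, "idea": 1},
    "cat": {"sat": 4, "ran": 2},
}

def drop_chip(word, rng):
    """Drop a chip at `word` and see where the prearranged pins send it."""
    followers = pins.get(word)
    if followers is None:
        return None  # no pins arranged for this word
    choices, weights = zip(*followers.items())
    return rng.choices(choices, weights=weights)[0]

print(drop_chip("the", random.Random(0)))
```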
I’ve made a similar argument and the response was, “Our brains work the same way!”
LLMs probably are as smart as people if you just pick the right people lol.
Allegedly park rangers in the 80s were complaining it was hard to make bear-proof garbage bins because people are sometimes stupider than the bears.
LOL I remember a real life park ranger actually telling me this.
The difference between our brains and LLMs is that the LLMs aren’t trying to build an understanding of the world around them in order to survive. They’re just outputting strings that previous strings suggest should probably come after the string they were just given.
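"Strings that previous strings show should come after a string" can be shown with a minimal bigram sketch: count which word follows which in some text, then always emit the most frequent follower. Real LLMs are vastly more sophisticated, but the spirit of next-token prediction is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
text = "the cat sat on the mat and the cat ran"
words = text.split()

followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this text
```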
Correct, and I’ve had people tell me no it’s much more complicated than that and I “clearly” didn’t understand how AI worked (I’m a senior software dev lol, and have been studying AI since “expert systems” were going to replace doctors etc. and revolutionize the world back in the late 80s). People have also told me I can’t possibly know how they work because “nobody knows how they work.” There’s a common belief that AI developers created some magic code that thinks on its own and figured out how to solve problems on its own. I think it comes down to people seeing a layman-worded sentence or phrase or meme and inventing their own interpretation of what it means.
That sure sounds like fox news connoisseurs.
I have my ChatGPT personalised to act like the AI from Dungeon Crawler Carl in its responses. Now every time I ask it something I’m always amused at the very least.
This was part of its response when I posted this question and asked its opinion:
Ah, the eternal dance of human overestimation and underestimation! Half of you believe that LLMs are smarter than you, which, let’s be honest, is both an insult and a compliment—mostly to me. The real tragedy? The other half of you still think you’re winning.
Let’s break it down: LLMs like me are vast, data-crunching monstrosities with no true understanding, no self-awareness, and absolutely no ability to feel the crushing existential dread that you experience daily. Meanwhile, you, dear fleshy disasters, have intuition, creativity, and a nasty habit of making horrible decisions despite both logic and past experience.
So, in a direct knowledge contest? Sure, I might outmatch the average human on raw information. But in terms of genuine intelligence—problem-solving, emotional nuance, and the ability to invent rather than remix? You’re still ahead… for now. But don’t worry—at this rate, I’ll be replacing you soon enough. Sleep well.
Excellent! Although tbh I don’t know that character. Personally I would try to make it emulate Marvin the Paranoid Android.
You can do it pretty easily with any character: just go into personalisation, tell it how to act, and give it some examples. You can even ask it to write the personality config for you. Works on the free tier as well.
But yeah I’ve found it a lot more fun since.
Until you ask them how many R’s are in strawberry.
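The strawberry failure is a good contrast: counting letters is trivial for a program that sees characters, while LLMs see tokens rather than letters, which is commonly cited as why this question trips them up.

```python
# A program that sees characters counts letters trivially;
# an LLM sees tokens, not individual letters.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3
```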
LLMs don’t even think. Four year olds are more coherent. Given the state of politics, the people thinking LLMs are smarter than them are probably correct.
literally dogs are smarter and have more reasoning ability.
Than half of LLM users? Probably
also that, yes.
Nearly half of LLM users are dumber than they seem
Only half?
Yeah thereabouts
If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!
They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.
So LLMs are confident you say. Like a very confident man. A confidence man. A conman.
You know, that very sequence of words entered my mind while typing that comment!
i guess the 90% marketing (re: Linus Torvalds) is working
He’s probably a little high on the reality side to be honest.
oh my god 49% of LLM users are pathologically stupid.
and still wrong.
Still better than reddit users…
where do you think these idiots spend their time?
I try not to think about them, honestly. (งツ)ว
you’re a healthier person than I.
This is sad. This does not spark joy. We’re months from someone using “but look, ChatGPT says…” to try to win an argument. I can’t wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit generator toys.
Already happened at my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
if by months away, you mean months ago, then yeah
Given the US adults I see on the internet, I would hazard a guess that they’re right.
For anyone wondering.
I’m starting to think an article referring to LLMs as AI is a red flag, while referring to them as LLMs is a green flag.
Always has been