If people were using Photoshop to create spreadsheets, you wouldn't say Photoshop is terrible spreadsheet software; you'd say people are dumb for using a tool for something it isn't designed for.
People are using LLMs as search engines and then pointing out that they’re bad search engines. This is mass user error.
I’ve used it a few times when I struggled to find answers with regular searches and felt like giving up or just wanted to see what it has to say.
I took it for a spin just now, asking "Which is the safest LLM service for a company in regards to privacy: ChatGPT, Anthropic, or Mistral?" and it actually found stuff that I didn't find before when I was looking into it.
Tell that to the companies slowly replacing conventional search with AI.
AI search is a game-changer for those companies. It keeps you on their site instead of clicking away. So they retain your attention, and needn’t share any of the economic benefit with the sources that make it possible.
And when we criticize the quality of the results, who's gonna hold them accountable for nonsense? "It's just a tool, after all," they say. "Caveat emptor!"
Nevermind that they have a financial incentive to yield results that avoid disrespecting your biases, and offer no more than a homeopathic dose of utility — to keep you searching but never finding.
It's a sprawling problem that stems from the lack of protections around monopoly power, the attention economy, cribbing off other people's work, and misinformation.
Your comment is technically correct. “You’re using the wrong tool” is a valid footnote. But it’s not the crux of the issue.
Google and to some extent Micro$oft (and Amazon) have all sunk hundreds of billions of dollars into this bullshit technidiocy because they want AI to go out and suck up all the data on the Internet and then come back to Google (or wherever) and present it as if it's "common knowledge".
Thereby rendering all authoritative (read: human, expensive) sources unnecessary.
Search and making human workers redundant has always been the goal of AI.
AI does not understand what any words mean. AI does not understand what the word “word” means. It was never going to work. It’s been an insanity money pit from day one. This simple fact is only now beginning to leak out because they can’t hide it anymore.
It’s actively destroying their search ability as well.
My 15-year-old son got a lesson in not trusting Google search yesterday. He wanted pizza for dinner, so I had him call a chain and order it. He hit the call button in the AI bullshit section and ordered it.
When we got there, we found out that every phone number listed in the summary was scrambled. He had ordered a pizza from a place 150 miles away.
When you clicked into the webpage or maps, the numbers were right. In the AI summary, it was all screwed up.
I’m confused. These are large language models, not search engines?
But they are used like search engines… A lot… That is a huge issue.
Correction: companies are implementing it into their search engines. Users are just providing feedback.
Ironically, Google’s original non-LLM summary was pretty great. That’s gone now.
They do have search functionality. For Perplexity it’s even the main focus. Yeah, it’s hard to stop them from confidently making things up.
Except Perplexity, which is indeed a search engine… which might explain why it does so well there.
I’m curious how Kagi would hold up, but the AI BS is entirely opt-in there so maybe they didn’t include it because of that.
Edit: lmao Perplexity is gross. Who would use this instead of an actual search engine?