Musk’s Grok is as stupid as he is! And he claims it’s waaaaaayyyyy better than other AI. 🤡🤡🤡
How does this surprise anyone?
LLMs are just pattern-recognition machines. You give them a sequence of words and they tell you which word is most statistically likely to follow, based solely on probability, no logic or reasoning.
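To make the "statistically likely next word" idea concrete, here's a toy sketch. The bigram table and its probabilities are entirely made up for illustration; real LLMs learn weights over enormous contexts, but the core loop is the same: look at what came before, emit a likely continuation.

```python
import random

# Made-up bigram table: each word maps to the probabilities of the
# word that follows it. Purely illustrative, not a real model.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_word(word, greedy=True):
    """Pick a follower for `word`: the most likely one (greedy),
    or a random one weighted by probability."""
    options = bigram_probs[word]
    if greedy:
        return max(options, key=options.get)
    words, probs = zip(*options.items())
    return random.choices(words, weights=probs)[0]

# Generate until we hit a word with no known continuation.
sentence = ["the"]
while sentence[-1] in bigram_probs:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # -> "the cat sat down"
```

Note there is no notion of "true" anywhere in that loop, only "likely", which is the point being made above.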
It’s amazing that they get it right 40% of the time, then.
AI as a search engine is terrible.
Because if you treat it as such, it will just look at the first result, which is usually wrong or has incomplete info.
If you give the AI a source document, then it is amazing as a search engine. But if the source doc is the entire internet… it’s fucking bad.
Shit quality in, shit quality out. And we/corporations have filled the internet with shit.
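The "give it a source document" point can be sketched as retrieval over a small curated corpus. This is a deliberately crude bag-of-words matcher with made-up example passages, not any real search API; the takeaway is that when every candidate passage is trustworthy, even a dumb relevance score surfaces something usable, which stops being true when the corpus is the open web.

```python
def overlap_score(query, passage):
    """Crude relevance: count distinct lowercase words shared
    between the query and the passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, corpus):
    """Return the passage with the highest word overlap."""
    return max(corpus, key=lambda p: overlap_score(query, p))

# A small curated corpus: every passage is trustworthy, so even a
# crude scoring function hands back something usable.
curated = [
    "Aspirin inhibits the COX enzymes, reducing prostaglandin synthesis.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
]
print(retrieve("how does aspirin work", curated))
```

Swap `curated` for billions of unvetted pages and the top-scoring match is whatever happens to share the most words with your query, accurate or not.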
Just for clarity, some kinds of learning algorithms were used in web search before this generative AI boom. I know for a fact that Google used an AI to rank pages for its search before GPT was even a thing.
But you’re totally right. Generative models shouldn’t be used as search engines.
Hilarious that Gemini is so bad. It’s not like Google had a good starting position on internet search or anything.
The only thing Gemini is good for is bringing up sources that don’t appear in the regular Google search results. Which only leads to another question: why are those links not in the regular Google search results?
I find the same with Perplexity. It’s more of a search assistant, finding some sources a search engine likely wouldn’t. Sometimes its summarized answers are accurate; sometimes it’s a jumble of several slightly unrelated sources.
My only guess is that they’re trying to see if de-enshittifying results for AI can make it profitable
I was talking about this with a webdev buddy the other day, wondering if webmasters might start optimizing for AI indexing rather than SEO.
That’s an interesting thought. I wonder if there’s too much change/movement in the AI models, though; I’d think we won’t see something like that until there’s more stability, or until one of the models comes out on top of all the others. Right now you’d have to optimize for half a dozen different models and still be missing a few ‘popular’ ones.
Infinite money, all the data on the internet, and nothing to show for it. I wrote about my experience with Gemini assistant for people who enjoy suffering.
I’ve genuinely been wondering what the hell the average Googler has been up to in the last 5 years. They’re killing services and barely developing new features or hardware. For so long they talked about AI (and they genuinely were at the forefront) and about being in a unique position to make the most of data, AI, services, and hardware. Then they failed spectacularly to keep that advantage, and even more spectacularly to keep up.
That can only mean they used that position in an even more profitable way, one that the general public is not even aware of.
Amazing
Isn’t it? All this tech advancement over the past half century, followed by billions of dollars of investment in a tech that wastes a monumental amount of energy and water to give you the wrong answer to questions even the most basic calculator can answer.
Is this an ad for Perplexity? I’ve never heard of it, and now I’m googling it. So effective ad if so.
Would be weird for an ad to bash on the paid tier
Yeah, it’s one of those “no bad press” kind of things. It’s bashing on AI, but Perplexity actually looks pretty good by comparison.
I’m saying the Perplexity paid tier is about 2x more likely to be confidently wrong than free Perplexity
Oh, right, good point.
deleted by creator
Idk, but search is what they do; they use regular AI models like ChatGPT or Claude with their own search tool.
It’s just asking them to find sources from excerpts. I don’t think this is something they’ve been trained on with much emphasis, is it?
I mean, the tech is changing faster than science can analyze it, but isn’t this now outdated?
I don’t use AI, but a friend showed me a query that returned the sources, most of which were academic and appeared trustworthy.
It’s making ten billion calculations per second and they’re all wrong!
That’s one of my skills as a certified genius. I’m wicked fast at math.
37/2.4 boom 16.38.
Is it right? Maybe, maybe not. But I did it fast
Seriously, TRY and get an AI chat to give an answer without making stuff up. It is impossible. You can tell it “you made that data up, do not do that” … and it will apologize and say you were right, then make up more dumb shit.
You can tell it “you made that data up, do not do that”
I wish people would stop treating these tools as intelligent.
Yeah, LLMs are great if you treat them like a tool to create drafts or give you ideas, rather than like an encyclopedia.
I’ll get hate for this but in most tasks people use them for they are pretty dang accurate. I’m talking about frontier models fyi
Google Gemini gives me solid results, but I stick to strictly factual questions, nothing ambiguous. Got a couple of responses I thought were wrong, turns out I was wrong.
I got a Firefox plugin to block Gemini results, because whenever I look up something for my medical studies, it runs a really high chance of spitting out garbage or outright lies, when all I really wanted was the Google result for the NIH StatPearls article on the thing.
As a medical professional, generative AI and search adjuncts like Gemini only make my job harder.
I like how when you go pro with Perplexity, all you get is more wrong answers
That’s not true, it looks like it does improve. More correct and so-so answers.
Yes, having tested this myself, it is absolutely correct. Hell, even when it finds something, it’s usually a secondary or tertiary source that’s nearly unusable, or even one of those “we did our own research and vaccines cause autism” type sources. It’s awful, and idiots seem to think otherwise.
You shouldn’t use them to keep up with the news. They make that option available because it’s wanted, but they shouldn’t.
It should only be used to research older data from its original dataset, perhaps adding to it a bit with newer knowledge if you’re a specialist in the field.
When you ask the right questions in the right way, you’ll get the right answers, or at least mostly; you should always check the sources afterward. But it’s a specialist’s tool at this time. And most people are not specialists.
So this whole “Fuck AI” movement is actually pretty damn stupid. It’s good to point out its flaws, try and make people aware and help guide it better into the future.
But it’s actually useful, and it’s not going away. You’re just using it wrong, and as the tech progresses, the ways to use it wrong will decrease. You can’t stop progress; humanity will always come up with new things, we’re built that way.
Well, no, because what I’m referring to isn’t even news, it’s research. I’m an adjunct professor, and trying to get old articles doesn’t even work, even when they’re readily available publicly. The linked article here is about citations, and it doesn’t get more citation-y than that. Asking differently doesn’t change that, either, because LLMs aren’t good at this even if tech bros want them to be.
Now, the information itself could be valid, and for the basics it usually is. I was at least able to use it to get some basic ideas on a subject before ultimately having to browse abstracts for what I need. Still, you need the source if you’re doing anything serious, and the best I’ve gotten from AI is a list of authors prevalent in the field, which at least is useful for my own database searches.
I understand your experience and have had it myself. It’s also highly dependent on the model you use. The most recent ChatGPT 4.5, for instance, is pretty good at providing citations. The tech is being developed fast.
I don’t doubt that it’ll get better, although with the black-box nature of these models, I’m not sure it’ll ever reach perfection. I understand neural networks; they’re not exactly something you can pop the hood on to see what’s giving you weird results.
AI has its uses: I would love to read AI-written books in fantasy games (instead of the 4-page books we currently have), or talk to an AI in the next RPG; hell, it might even make better randomly generated quests and such things.
You know, places where hallucinations don’t matter.
AI as a search engine only makes sense when/if they ever find a way to avoid hallucinations.
Copilot is such garbage. Microsoft swirling the drain on business capabilities that they should be dominating is very on brand.
Perplexity is not looking bad, IMHO.
Perplexity Pro: we take all of the non-answers and give you completely incorrect answers!
Perplexity is by far the best for searching but still copiously hallucinates.
Is the plan for AI to give tech companies plausible deniability when it lies about politics and other mis-/disinformation?