Seriously, TRY and get an AI chat to give an answer without making stuff up. It is impossible. You can tell it “you made that data up, do not do that” … and it will apologize and say you were right, then make up more dumb shit.
I have found AI to be a terrible primary source. But something I’ve found very useful is to ask for a detailed response, structured a certain way. Then tell the AI to grade it as a professor would. It actually does a very good job at acknowledging gaps and giving an honest grade then.
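That two-pass trick is easy to wire up. Here's a minimal sketch of the idea; `call_llm` is a hypothetical stand-in for whatever chat API you're using (the two prompts are the point, not the client library):

```python
# Sketch of the two-pass "draft, then grade" prompting pattern.
# call_llm is a hypothetical placeholder for any chat-completion call.

def build_draft_prompt(topic: str) -> str:
    """First pass: ask for a detailed answer in a fixed structure."""
    return (
        f"Write a detailed explanation of {topic}.\n"
        "Structure it as: (1) summary, (2) key facts with sources, "
        "(3) open questions or things you are unsure about."
    )

def build_grading_prompt(draft: str) -> str:
    """Second pass: ask the model to grade its own draft like a professor."""
    return (
        "Grade the following answer as a strict professor would. "
        "Point out gaps, unsupported claims, and likely fabrications, "
        "then give a letter grade with justification.\n\n" + draft
    )

def draft_then_grade(topic: str, call_llm) -> tuple[str, str]:
    """Run the draft pass, then feed the draft back for self-grading."""
    draft = call_llm(build_draft_prompt(topic))
    critique = call_llm(build_grading_prompt(draft))
    return draft, critique
```

Asking for the structured draft first, then grading it in a separate turn, is what makes the model willing to flag its own gaps instead of defending the answer.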
AI shouldn’t be a primary source, but it’s great for starting a topic. Similar to talking to someone who’s moderately in the know on something you’re interested in.
That’s because ALL generative AI results, even the correct ones, are “made up”. They just exist on a spectrum of coincidental correspondence with reality. I’m still surprised that they manage to get as much right as they do.
Google Gemini gives me solid results, but I stick to strictly factual questions, nothing ambiguous. I got a couple of responses I thought were wrong; turns out I was wrong.
I got a Firefox plugin to block Gemini results because whenever I look up something for my medical studies, it runs a really high chance of spitting out garbage or outright lies when I really just wanted the google result for the NIH StatPearls article on the thing.
As a medical professional, generative AI and search adjuncts like Gemini only make my job harder.
I looked up on Google at one point what the minimum required depth for a cable running under a building is per NEC code. It told me it was 0 inches. I laughed and called it stupid, wtf do you mean 0 inches?? Upon further research, 0 inches is the correct answer. I felt real stupid after that -_-
Yeah, LLMs are great if you treat them like a tool to create drafts or give you ideas, rather than like an encyclopedia.
I wish people would stop treating these tools as intelligent.
I’ll get hate for this but in most tasks people use them for they are pretty dang accurate. I’m talking about frontier models fyi
Seems like a depth of 0 inches means you can just lay it on the floor?
No, it means 50% of the cable must be submerged or buried. Little speed bumps all around.