But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same thing is just stupidity.
For the last time, people need to stop treating AI like it removes their need for research, just because it sounds like it did its research. Check the work your tools do for you, damn it.
It’s Wikipedia all over again. Absolutely feel free to use the tool, e.g. Wikipedia, ChatGPT, whatever, but holy shit check the sources, my guy. This is embarrassing.
The best use, for me, is asking ChatGPT to give me five (or however many) scholarly, peer-reviewed articles on a topic. Then I search for said articles by title and author name on my school library database.
It saves me so much time compared to doing a keyword search on that same database and reading a ton of abstracts to find a few articles. I can get to actually reading them and working on my assignment way faster.
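If you want to automate that sanity check, here’s a minimal sketch (not the commenter’s actual workflow, just an illustration) that looks each suggested citation up in Crossref’s public works API to see whether a real, indexed paper matches the title and author the chatbot gave you. The function name and matching logic are invented for the example; the point is simply that you verify the citation against an authoritative source instead of trusting the model.

```python
# Hypothetical sketch: check AI-suggested citations against Crossref's public
# works API before trusting them. If nothing plausible comes back, treat the
# citation as unverified until you find it yourself.
import requests

def find_on_crossref(title, author=None, rows=5):
    """Search Crossref for a title (optionally plus author) and return
    candidate matches as (title, first author's family name, DOI) tuples."""
    query = f"{title} {author}" if author else title
    params = {"query.bibliographic": query, "rows": rows}
    resp = requests.get("https://api.crossref.org/works", params=params, timeout=10)
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    results = []
    for item in items:
        item_title = item.get("title", ["(untitled)"])[0]
        authors = item.get("author", [])
        first_author = authors[0].get("family", "?") if authors else "?"
        results.append((item_title, first_author, item.get("DOI", "?")))
    return results

# Example: check one suggestion and eyeball whether any hit actually matches.
for hit in find_on_crossref("Attention Is All You Need", author="Vaswani"):
    print(hit)
```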
AI is a great tool for people who use it properly.
Haven’t people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
I heard turning in AI slop worked out pretty well for the Arcane Season 2 writers.
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y’all. It’s like Boomers trying to figure out the internet all over again. Just because AI (probably) can’t lie doesn’t mean it can’t be earnestly wrong. It’s not some magical fact machine; it’s fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day, having forgotten how, or why, to check sources themselves, robot or not.
AI can absolutely lie