5 Tips to Avoid Becoming the Next AI Hallucination Headline

Generative AI tools like ChatGPT have become part of the legal conversation, sometimes in helpful ways and sometimes…less so. A handful of cases have cropped up where filings included citations or quotes that turned out to be, well, fictional. Not intentionally, of course, but thanks to the occasional AI hallucination, some briefs ended up referencing quotes and court decisions that simply don’t exist.
In one federal case, a judge imposed sanctions on attorneys after their brief included six fictitious court decisions that ChatGPT completely made up. In another, three lawyers in a personal injury suit were fined $5,000 for citing eight non-existent cases. And just recently, a state appellate court in Utah reprimanded a lawyer when an interlocutory appeal brief he filed referenced a completely made-up precedent that could be found nowhere outside the chatbot’s imagination.
These incidents are part of a growing problem. By early 2025, at least nine different lawsuits featured AI-fabricated citations or quotes in filings, drawing headlines in multiple legal publications. Each story points to a simple truth: if we’re going to use AI in our legal research or drafting, we need to do our due diligence, or we risk becoming the next example of “AI gone wrong.”
Hallucination or Automation? The Real Issue at Hand

The problem isn’t that lawyers are using AI. Both the ABA and judges nationwide have explicitly said there’s nothing unethical about attorneys tapping into this technology. As U.S. District Judge Kevin Castel put it,
“There is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
But it becomes problematic when attorneys use these tools blindly. Generative AI platforms like ChatGPT are startlingly good at sounding confident, so much so that they’ll present fiction as fact with complete conviction. The AI community has a term for this: hallucination.
In other words, if you ask ChatGPT for case law on a specific issue, it might give you a beautifully formatted answer with legalese and citations that look legitimate but are utterly fake.
We lawyers work in a realm where credibility is currency. Citing a non-existent case isn’t just embarrassing; it can trigger court sanctions, fines, ethical investigations, and a very unhappy client. The technology may be cutting-edge, but the lawyer’s age-old responsibility to make sure citations are real and quotes are accurate is the same as it ever was.
Think of AI Like a Junior Associate (A Really Eager One)
A useful way to frame AI tools like ChatGPT is to treat them the way you would treat a brand-new associate or intern. They’re quick, surprisingly knowledgeable in some areas, full of creative ideas, and can generate polished work with impressive speed. But they’re also inexperienced in judgment, prone to cutting corners, and can sometimes be confidently wrong.
Would you ever let a fresh law school grad file a brief without reviewing their work? Of course not. The same principle applies here. Generative AI can draft or brainstorm, but it cannot replace your judgment or diligence. It often misunderstands context, nuance, and consequences. And just like a new team member, it might take shortcuts if you don’t watch closely.
The key is supervision. AI should be an assistant, not an authority. That means checking every citation, verifying every claim, and making sure the output aligns with your professional standards before it ever reaches opposing counsel or the court.
Clear Instructions Make Better AI (Just Like with a Junior Associate)
Precise output starts with precise direction. Like any junior associate or new hire, an AI tool will only perform as well as the guidance you provide.
Ask a vague question, and you’ll likely get a vague answer. Feed in a poorly defined task, and you may get confident-sounding nonsense back. But if you clearly articulate the issue, frame it in the right legal context, set boundaries around jurisdiction or task scope, and spell out what you need, AI can produce remarkably helpful work.
More detailed and focused input almost always means better output. Compare “Tell me about premises liability” with “Summarize the elements of a premises liability claim under New York law, in plain language, for a client letter”: the second prompt gives the tool a jurisdiction, a scope, and a purpose to work with.
In some cases, this may mean giving the AI the source materials you want it to work from. For example, if you’re drafting an argument and want help analyzing a particular precedent, paste in the full text of the opinion. You wouldn’t expect a junior associate to summarize a case they haven’t read; the same logic applies here. Giving the AI a factual or legal foundation reduces the chance of hallucination and increases the chance of a usable answer.
I’m aware that this isn’t new information. As lawyers, we already know how to delegate thoughtfully. So take the same approach when using AI in your work: give it clear instructions, check its work, and never mistake confidence for correctness.
ChatGPT Is a Symptom of a Wider Problem
This new technology is a mirror held up to our professional practices. We’ve grown accustomed to quick answers and cut-and-paste solutions, sometimes at the expense of careful research and precise drafting. ChatGPT’s hallucination habit simply throws gasoline on that fire. It outputs exactly the kind of authoritative-sounding yet error-filled text that a complacent writer might produce on a bad day. The difference is that with AI, the volume of such text can be higher and the errors harder to spot because they come with a perfectly straight face.
Some have mused about a broader “dumbing down” of skills in the technology era. A new MIT study even found that ChatGPT can lead to “cognitive debt” and a “likely decrease in learning skills.” Yikes.
In the legal field, we have to be extra vigilant against that slide. It’s one thing if autocorrect makes a funny typo in an email; it’s quite another if auto-generated text puts fictitious precedent into a brief. The stakes are simply too high. As officers of the court, lawyers cannot pass the buck to an algorithm. The duty to verify doesn’t go away just because a clever AI helped draft the document.
AI tools can’t assume the role of legal gatekeeper. It’s up to each attorney to ensure that the materials they submit meet the profession’s standards for accuracy.
5 Tips to Stay Off the Hallucination Highlight Reel

So, how can you implement AI in your practice without ending up in a judge’s crosshairs or on the front page of the legal news? The key is to use AI thoughtfully and with proper guardrails in place.
Here are five practical tips for avoiding those AI-induced landmines:
- Implement a Clear AI Policy: Establish firm-wide guidelines for how AI tools should (and shouldn’t) be used. A good policy will help your team use these tools responsibly, minimize the risk of hallucinations, and stay aligned with your ethical obligations.
You can download Darrow’s free AI policy template and customize it as you see fit.
- Use AI for Ideation, Not Specific Research: AI can be an incredible brainstorming partner, providing new ways to strengthen arguments or articulate ideas, and helping with tone and structure. But when it comes to finding and citing law, use traditional research methods. If you do experiment with an AI search, be extra skeptical of the results.
- Double-Check Everything (Especially Citations): No matter how good the draft looks, treat any AI-generated content as a starting point, not a finished product. If ChatGPT cites a case or quotes a statute, double-check it using trusted primary sources.
- Give Specific, Detailed Prompts: AI prompts should be clear, detailed, and contextualized. The more precisely you phrase your question, the better your output will be. Provide limits, avoid vague requests, and define the exact kind of answer you’re seeking.
- Stay Educated and Vigilant: The legal landscape surrounding AI is shifting fast, with new regulations emerging at the state level as well as updated court guidelines. For example, some judges now require attorneys to certify that AI-drafted filings have been verified by a human. Make it a habit to keep up with the latest rules and ethics opinions on AI in law. If you’re going to ride the AI wave, wear a life vest: know the tool’s limitations and the current best practices.
Thoughtful Lawyering Wins in the AI Era
The bottom line for attorneys is that we are living in a moment of incredible technological change. AI has already made a significant impact on the legal industry. But no matter how advanced our tools become, words still matter. Precision, truthfulness, and clarity in language are the bedrock of effective advocacy and ethical lawyering.
In a sense, generative AI is just the newest test of our profession’s commitment to core legal values. It’s telling us, in its own sometimes unreliable voice, not to get complacent. If we treat AI outputs with the proper scrutiny, viewing them as helpful assistants rather than infallible oracles, we can avoid the nightmare headlines and focus on the ones that truly matter: the stories of justice served.
So by all means, explore that shiny new AI tool in your office. Just remember to treat it like your most enthusiastic but untested junior associate, and double-check everything.
That way, the only headlines you’ll be making are the ones you choose, and for all the right reasons.