We Should All Know Better By Now, But We Don't
Remember back in 2023, when everyone like me posted about the Mata v. Avianca case? It seemed to be the first case (or at least, the first to earn national attention) in which lawyers filed briefs with citations hallucinated by generative artificial intelligence. The lawyers ended up sanctioned, some of my nerd friends got a cool courtroom sketch out of it, and a lot of us thought that would be a sufficient cautionary tale: don’t use AI if you don’t know what you’re doing, and if you do, check every citation.
Easy enough, right?
Making the rounds on LinkedIn this week is an order from Billips v. Louisville Municipal School District out of the Northern District of Mississippi. As with many of these cases, the court ordered several lawyers to show cause why they shouldn’t be sanctioned for filing a memorandum with, yep, bad AI-generated citations.
I will cut to the chase: the court was mad at everyone:
- The young associate attorney, who performed research on behalf of the plaintiff using, among other things, Grok (Elon Musk’s AI chatbot, which I won’t link to), and who included four “problematic” citations (one nonexistent case, three misrepresented holdings) in the final product, which she signed. This was apparently part of a pattern that affected other cases as well.
- The supervising partner, who signed off on the work (and also signed the memorandum) but did not independently check the citations, despite being on notice that the associate had had problems with AI before.
- The other partner, who did not draft, review, or sign the memorandum but who had supervisory authority over the associate and was familiar with her history.
- The law firm (which dissolved between the submission of the memorandum and the hearing), jointly responsible with the attorneys pursuant to Rule 11. The court also expressed concern about how the firm was handling the incident and the remediation of other cases.
- The defendant’s law firm, for not alerting the court to the bad citations (though that firm was merely cautioned, not sanctioned).
The court declined to impose monetary sanctions. Instead, all three plaintiff attorneys were disqualified (and the whole case was stayed to allow the plaintiff to find new counsel), and they were ordered to provide copies of the order to the presiding judge in every pending case in which they were counsel of record (the clerk was also directed to send a copy of the order to the Mississippi regulatory authorities). The firm was directed to perform an audit. And, in the paragraph that made me shudder the hardest, the associate was ordered to seek withdrawal from every case in which she was appearing before that judge, and was forbidden from appearing in any other case before that judge for a period of two years.
So, what are we taking away from this?
First, and this should go without saying 2 ½ years after Mata but I guess it doesn’t: If you’re going to use AI to generate arguments or citations, verify every single one of them with a trusted legal research source.
Second, if despite your best efforts something sneaks through, fess up as soon as you learn about it, and move to correct or withdraw the pleading. Chances are, the opposing attorney won’t object.
Third, if you’re a supervisor, get a good AI policy in place before things go sideways, and train your junior attorneys and staff. And, if you’re a supervisor on a specific matter, you are not going to like this advice (I do not like this advice) but it may be time to trust less, and verify more.
And it’s definitely time to trust less and verify more if your subordinate has already shown problems with misuse of AI. Unlike in Mata, the drafting lawyer in Billips was not a seasoned attorney grappling with unfamiliar technology; she was relatively young and had even recently attended a CLE on AI use.
Finally: it’s not just this case. I’m seeing more of a duty on opposing counsel to detect, and to report, bad citations (AI-generated or otherwise) rather than wait for the court to find them. It has always been a best practice to read each case cited by the opposing party (at least, each case substantively cited) so you can adequately respond to their arguments; discovering that a case doesn’t exist, or doesn’t at all stand for the proposition it was cited for, shouldn’t be too much of a lift.
Interestingly, as an aside: as of my search a few minutes ago (6 p.m. on January 5, 2026), Wisconsin still has no public or published discipline containing the phrases “Artificial Intelligence,” “ChatGPT,” or, for that matter, “Grok.” Then again, the pandemic is six years old, and the word “covid” appears in only one decision and “covid-19” in four, despite my assumption that coronavirus would lead to all kinds of discipline, particularly for lack of diligence. As always, discipline lags the real world, so stay tuned.