The Fastest Way to Lose a Court Case? Use ChatGPT

Burnout and heavy workloads are driving lawyers to AI—and into trouble

A robot finger tipping a scale held by a statue of Athena.
(baona / iStock / The Met / The Friedsam Collection)

On December 6, 2023, Chong Ke, a Vancouver lawyer at Westside Family Law, filed a notice of application in court for a client who was asking for more access to his kids. Lawyers representing the mother went through the application to put together a response. That’s when they noticed two of the cases cited in the application couldn’t be found. That’s because they didn’t exist. Ke had used ChatGPT, the chatbot developed by OpenAI, to do legal research and didn’t realize the cases she found were fictitious.

Law can be a slog, so it’s no surprise lawyers are eager to leverage AI’s ability to sift through databases of legal texts, case law, statutes, and regulations. A tool able to summarize findings and highlight relevant precedents can save significant amounts of time and effort. But ChatGPT isn’t a search engine; it’s a large language model. It trains on data and generates human-like responses based on what it learns. That creative streak gives the chatbot a dark side: a tendency to serve up data that’s false or inaccurate, a phenomenon called hallucination.

Ke’s was the first reported example of a Canadian lawyer submitting false information as a result of generative AI. Incidents have also popped up in Massachusetts and Colorado. Last year, according to legal documents, New York lawyers Steven A. Schwartz and Peter LoDuca were prepping for a personal injury case against an airline. When Schwartz couldn’t find the court cases he needed using Fastcase, a well-known US legal database, he turned to ChatGPT instead. His brief contained six fabricated citations and included cases with the wrong dates or names. LoDuca, who was supposed to be supervising Schwartz’s work, said it “never crossed my mind” that he couldn’t trust the technology.

“As soon as ChatGPT was making headlines, we started seeing these stories about fake cases cited in court,” says Amy Salyzyn, an associate professor at the University of Ottawa law school who specializes in legal ethics and technology. “These stories are like car crashes. You can’t look away.”

As more and more lawyers integrate AI chatbots into their practice, Salyzyn worries about AI-drafted contracts and wills that never get a second look. “It seems inevitable that one day we’re going to have an error sneak into a legal decision,” she says.

If generative AI is prone to making mistakes, why would lawyers still use it? Blame long work hours and heavy caseloads. A 2022 national study from the Federation of Law Societies of Canada on the mental health of lawyers found that more than half of all lawyers experience burnout and nearly 60 percent of legal professionals suffer psychological distress. “It’s tempting to push the easy button when you’re facing a deadline,” says Salyzyn.

AI doesn’t change the reality that lawyers are still responsible for, well, being lawyers. Lawyers who use AI should “comply with the standards of conduct expected of a competent lawyer,” says Christine Tam, director of communications and engagement at the Law Society of British Columbia. That means checking that everything filed in court is accurate. Professional embarrassment might be the motivation some lawyers need to get their act together. Ke is currently under investigation by the Law Society of British Columbia for her conduct. Schwartz and LoDuca were sanctioned and fined $5,000 (US). Colorado lawyer Zachariah Crabill received a ninety-day suspension after he used ChatGPT to do legal research.

Lawyers are already required to take professional development courses about new technology. But AI is so transformative that using it responsibly requires hands-on experience. Law schools are beginning to develop courses to give students exposure to chatbots like ChatGPT, and AI legal clinics are popping up at Queen’s University and the University of New Brunswick.

Law societies are taking this seriously too. Those in Ontario, British Columbia, Alberta, Saskatchewan, and Manitoba have set out specific guidelines. Most of the guidelines focus, unsurprisingly, on harms like hallucinations. But they also require lawyers to ask clients for consent when employing the technology. The Law Society of Ontario has a checklist on how to choose an AI provider and how to audit your AI practices on an annual basis. Alberta takes a different approach, focusing on the positives, such as how to use AI to draft letters to clients or to create questions to ask a witness at trial.

Len Polsky, author of the AI playbook for the Law Society of Alberta, is bullish on the technology. He argues generative AI can excel at tasks beyond legal research, such as creating outlines for documents, coming up with trial questions, and proofreading legal briefs. “Generative AI can help lawyers provide better legal services to the public as long as they use it in a safe, reliable way,” he says. “We’re not saying don’t use AI. We want people [to] use it and be smart.”

Lawyers aren’t alone in struggling with the technology. Generative AI is an attractive option for people who need legal help but can’t afford attorneys. The problem, again, is that there’s no guarantee users are getting accurate information—a real danger for people who opt to represent themselves. In the UK, Felicity Harber used nine ChatGPT-fabricated citations in her appeal against paying property taxes. When Jonathan Karlen appealed a decision ordering him to pay damages to a former employee, the Missouri Court of Appeals, Eastern District, found he had invented the citations he relied on in his defence. Only two of the twenty-four cases he cited were real. Karlen explained he had hired an online consultant and didn’t know the consultant would use “artificial intelligence hallucinations” in creating the brief.

CanLII, a popular free legal research website in Canada, has seen the trend toward the public using AI and started rolling out its own generative tool last year. The organization is looking to get funding from provincial and territorial law foundations to create summaries of case law and legislation for each jurisdiction, and so far, it has been able to do so for Saskatchewan, Alberta, Manitoba, Prince Edward Island, and the Yukon.

The software, developed by Lexum, a subsidiary of CanLII, isn’t perfect. Users have caught errors, such as incorrect labelling or summaries missing the correct legal analysis. Pierre-Paul Lemyre, vice president of business development at Lexum, says the tool has a 5 percent failure rate because some cases are too long or complex to summarize. But he expects the software to improve and wants his team to hear about the errors in order to fine-tune the product. “We’re doing this work because we want people to be able to understand the law better,” he says. “People need access to legal information that’s convenient and fast.”

For the moment, that convenience and speed have to be matched by careful implementation and oversight. We need collective action from the courts, government, and regulators to educate lawyers and the public on how to use AI and to decide how the technology can serve and protect people. Otherwise, expect to see another headline about lawyers behaving badly.

Julie Sobowale
Julie Sobowale is a freelance journalist and lawyer writing about legal affairs.