Caution Signs Line the Road to Wider ChatGPT Adoption

ChatGPT and related artificial intelligence technologies are among the most exciting developments to hit the legal profession in decades. Their ability to synthesize large datasets of legal information and deliver convincing, human-like answers to nearly any inquiry has caught everyone’s attention. We’ve even written about it ourselves, recently noting the compelling ways ChatGPT can be used to assist in deposition preparation.

To be specific, ChatGPT — developed and distributed by OpenAI, a consortium of technology companies and private investors — is a form of “generative AI,” the term given to artificial intelligence technology that produces text, imagery, and audio. Its ability to produce high-quality text on law-related topics has excited, and worried, the legal profession.

The Promise of Generative AI

According to a recent Thomson Reuters survey of lawyers in the United States, United Kingdom, and Canada on the use of ChatGPT and other “generative artificial intelligence” technologies, 82% believe that these tools could be applied to legal work. A smaller percentage — just 51% — believe that generative AI should be applied to legal work. The survey revealed that 3% of law firms are currently using generative AI technologies, while 6% have banned their use entirely (for now).

“In fact, all those interviewed noted that they do not fully trust generative AI tools — and particularly the public-facing ChatGPT tool — with confidential client data,” the Thomson Reuters report stated. “Yet, even as this mistrust exists, our research shows that attitudes are changing, and potential use cases are being explored by many law firms.”

Generative AI technologies offer many legal use cases. The list of legal applications will undoubtedly grow as more attorneys become familiar with the technology and grasp how it could be applied to their practices. Current uses for generative AI in law include:

  • drafting routine legal correspondence and press releases
  • drafting legal memos and legal briefs
  • assisting with legal research
  • performing contract and document review
  • analyzing legal briefs
  • predicting litigation outcomes
  • assisting with billing
  • assisting with jury selection

AI technologies can also accept images, including video, as inputs. Legal applications now coming onto the market have the ability to analyze video depositions and make judgments about the credibility of the witnesses.

This innovation is a good thing, right? After all, lawyers have an ethical obligation to be technologically competent. According to Comment 8 of Rule 1.1 of the ABA Model Rules of Professional Conduct, lawyers are ethically obligated to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” (Note: A version of Comment 8 is part of the professional ethics code in nearly every jurisdiction in the United States.)

The Perils of Generative AI

During a recent presentation sponsored by the Texas Bar Association, Hon. John G. Browning, a partner at Spencer Fane in Plano, Texas, and former justice on the Fifth District Court of Appeals in Dallas, reminded lawyers that Rule 1.1 speaks of both benefits and risks. And the risks of using generative AI technologies in law practice are considerable, he said. Competent use of these technologies requires a thorough understanding of those risks.

The phrase “benefits and risks” means that lawyers should be mindful of both the capabilities and the limitations of generative AI technologies, he said.

Browning noted the recent warning from a leading legal malpractice insurer that, notwithstanding the hype and promise, ChatGPT should be used with extreme caution (if at all) by law firms. Attorneys’ Liability Assurance Society Ltd. warned policyholders earlier this year that ChatGPT is “not ready for prime time.” That warning quoted ALAS senior vice president of loss prevention Mary Beth Robinson as stating that ChatGPT, although promising, is nevertheless “not a substitute for critical thinking.”

Lawyers should have a clear understanding of ChatGPT’s limitations as a tool for law practice. “ChatGPT cannot critically think, analyze, evaluate, or act ethically,” Browning said. “Don’t confuse yourself about that.”

As an example of ChatGPT’s limitations, Browning pointed to the legal concept of “consideration” necessary to create an enforceable contract. ChatGPT might be good at producing an acceptable definition of “consideration,” but it cannot reliably determine whether there is sufficient consideration to enforce a particular contract.

The use of ChatGPT in law firms raises several ethical concerns. In addition to technological competence issues, generative AI also implicates a lawyer’s ethical obligation to clearly communicate to the client the means by which the client’s legal objectives will be accomplished (Rule 1.4) as well as the ethical obligation to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” (Rule 1.6)

In addition to ethical and possible malpractice liability, ChatGPT use by law firms also raises cybersecurity risks. If the ChatGPT service — or any other generative AI technology service that captures and responds to lawyer queries — suffers a data breach, then bad actors could come into possession of client confidential information or privileged attorney work product, Browning noted.

Lawyers should be very careful about the information they reveal when making queries to ChatGPT. There is a real possibility that client confidential information revealed to ChatGPT may no longer be protected by attorney-client privilege.

Then there is the “black box” aspect of generative AI: Users do not meaningfully understand how these technologies produce their outputs. They do not know what data the systems were trained on, nor do they know the assumptions the programmers made during development. In the field of law, relying on outputs that cannot be explained or verified can be problematic and potentially unlawful.

To take just one example, consider jury selection, a critical component of both civil and criminal litigation. In a series of rulings beginning with Batson v. Kentucky, 476 U.S. 79 (1986), the U.S. Supreme Court has declared that peremptory challenges to potential jurors cannot be exercised on the basis of the juror’s race, ethnicity, or gender. Litigators using generative AI technologies to select jurors will have a difficult time responding to Batson challenges because they cannot state with certainty that their jury-selection decisions were not influenced by unlawful considerations.

It’s Not All Unicorns and Rainbows

Another drawback of ChatGPT, one noted by many lawyers, is that it can be just plain wrong at times. A commonly cited article on SCOTUSblog noted that ChatGPT failed to correctly answer some basic questions regarding the U.S. Supreme Court’s work. In fact, it confidently reported several entirely false accounts of leading court decisions and invented a fictional justice, “James F. West.”

Browning also reminded lawyers that they should treat providers of generative AI technologies the same way they treat all other vendors used by law firms to deliver legal services and handle confidential client communications — with due diligence and extreme caution.

“Lawyers have to remember that when they are dealing with one of the many vendors that incorporate AI tools like ChatGPT into what they offer, they are dealing with a vendor, with all that that implies,” he said.

At a minimum, lawyers should be asking:

  • How is the client’s data secured?
  • Where are the servers located?
  • Who has access to the client’s data?
  • Is data provided to the vendor used for marketing purposes?
  • Who is liable for errors that generative AI might make?
  • Are the AI’s methods and assumptions screened for bias?
  • Is there an audit trail?

The dangers of carelessly deployed generative AI technologies, it seems, can be as great as their potential to revolutionize law practice when they are understood and used thoughtfully.

“With ChatGPT, it’s not all unicorns and rainbows,” Browning cautioned. “Plenty of dangers await unwary lawyers.”