The promise of artificial intelligence technologies captured the imagination of the legal community in a big way in 2023. None more so than “generative artificial intelligence,” which can draft pleadings and correspondence and summarize source materials in ways that approximate the work of a well-informed, legally trained human being.
Artificial intelligence also has compelling uses for pretrial discovery in civil litigation. Already widely in use for scouring and analyzing electronically stored information, artificial intelligence can be deployed to summarize deposition transcripts, spot inconsistencies in testimony, and suggest fruitful areas of inquiry during the deposition itself.
However, as is often the case with emerging technologies, generative AI is leaving in its wake a trail of unresolved ethical issues. For example:
- Does legal advice provided by artificial intelligence amount to unauthorized practice of law?
- How should a lawyer’s ethical duty of technology competence guide the use of generative artificial intelligence in delivering legal services?
- To what extent should lawyers inform clients of the use of AI by their law firms?
- Does providing client data to an AI technology waive attorney-client privilege? If so, how can this risk be mitigated?
- To what extent do a lawyer’s ethical duties of diligence and competence obligate the lawyer to review the terms of service (and data security representations) of AI technology vendors?
- Do lawyers have an ethical obligation to fact-check and verify the outputs of generative AI used to produce legal pleadings?
- Are today’s lawyers adequately trained in the benefits and risks of using AI in law practice?
Judging from the flurry of task forces and workgroups formed in recent months, it’s clear that state bar regulators are determined not to play regulatory catch-up with this promising yet ethically problematic technology.
Task Forces Proliferate
Already this year, at least a half dozen state bar workgroups have been formed with the sole purpose of providing much-needed ethical guidance to lawyers who are already – or soon will be – using generative AI in their law practices. There is urgency to these efforts. Some groups have mandates to produce actionable guidance for lawyers by year’s end.
The Florida Bar’s Special Committee on AI Tools & Resources, formed in late September, is meeting frequently with the objective of proposing changes to the Florida professional ethics code as soon as December 2023. Among the Florida group’s responsibilities is to provide guidance that ensures the state’s legal community can extract value from artificial intelligence while maintaining the lawyer’s independent legal judgment.
In California, the board of trustees for the California State Bar Association has asked its Committee on Professional Responsibility and Conduct to produce – by mid-November 2023 – proposed revisions to the California Rules of Professional Conduct “to ensure that AI is used competently and in compliance with the professional responsibility obligations of lawyers.”
The New York State Bar Association formed a task force this year to address a wide range of legal issues raised by artificial intelligence technologies. The NYSBA website reports that the task force’s mandate will be a broad one. In addition to lawyer ethics, the group will examine AI’s impact on all areas of the law as well as its potential for expanding access to justice.
This past June, State Bar of Texas president Cindy V. Tisdale established an expert working group that will be studying both the ethical challenges and the practical benefits of using AI in the practice of law. The State Bar of Texas Workgroup on Artificial Intelligence was given a one-year deadline to complete its work.
In August, the New Jersey State Bar Association’s Board of Trustees voted to create a task force to study artificial intelligence in the legal profession. The New Jersey task force will be examining, among other things, the ways in which artificial intelligence might replace human beings and (less ominously) might be deployed in ways that inadvertently waive attorney-client privilege.
Similar state bar workgroups to study the legal and ethical issues arising from artificial intelligence were also formed in 2023 in Illinois, Kentucky, and Minnesota.
Last, and certainly not least, the American Bar Association is also studying AI-related legal issues. On Aug. 28, the ABA announced the formation of its own Task Force on Law and Artificial Intelligence, charged with the responsibility of studying how artificial intelligence technologies are changing the practice of law and how these changes might affect professional ethics obligations to clients.
The ABA’s work in the area of artificial intelligence is potentially significant because of the organization’s twin policymaking levers over the legal profession: first, in the area of legal ethics, via possible revisions to the ABA’s influential Model Rules of Professional Conduct; and second, in the area of legal education, as a result of the ABA’s law school accreditation program.
The Road Ahead
The legal profession’s recent track record in addressing novel ethics issues raised by technological innovation suggests that bar groups and state regulators will adopt a light-touch approach to AI-related issues as well. After all, the Internet, email, social media, text messaging, ransomware, cyber-crime, and security risks created by pervasive remote work arrangements all raised ethical issues that, at the end of the day, failed to prompt changes to the rules of professional conduct.
Outside of the ABA’s 2012 decision to add – in Comment 8 to Model Rule of Professional Conduct 1.1 – “technological competence” to the list of required professional competencies, new rules to cope with technological change have been scarce. Where technology is concerned, bar regulators seem to believe that the best way forward is through non-binding guidance and advisory opinions explaining how technology-driven ethical concerns are adequately addressed by existing legal ethics rules.
Perhaps a similar light-touch regulatory fate awaits artificial intelligence. It may be enough to advise lawyers to seek client consent when using AI, to urge them to vet AI technology vendors carefully, to remind the bar that client confidential information should not be used as an input into AI applications, and to point out that delegating legal research to ChatGPT is ethically problematic.
Or perhaps not. Sundar Pichai, chief executive of Google parent Alphabet Inc., along with many other experts, believes that artificial intelligence will be more transformative than the Internet itself. And if that’s the case, more than mere best-practice pronouncements may be necessary to deliver the quality of legal services that clients need and deserve.