Florida’s Lawyer Leaders Push Ethical Guardrails for AI Use

It seems Florida’s lawyers are determined to stay ahead of the curve when it comes to the use of artificial intelligence tools in the practice of law.

On Jan. 19, The Florida Bar’s Board of Governors published new guidance on how Florida’s lawyers can ethically deploy artificial intelligence tools. The ethical guidance underscores the need for lawyers using artificial intelligence – particularly “generative AI” tools such as ChatGPT – to make sure they continue to safeguard the confidentiality of client information, develop law firm policies for the oversight of generative artificial intelligence, ensure that AI-related fees are reasonable, and take care that they don’t use AI in a way that results in misleading marketing communications.

The Board of Governors’ action was preceded by the release of a package of AI-related proposed ethics code revisions the group sent to the Florida Supreme Court earlier in January.

Proceed With Caution

Among the ethical issues flagged by Florida Bar Ethics Opinion 24-1:

  • The duty of confidentiality. Lawyers are required to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of client confidential information and to disclose such information only when necessary to serve the client’s interests. The ethics opinion drafters recommend that lawyers obtain client consent prior to using third-party artificial intelligence technologies that involve the disclosure of confidential information.
  • The duty to supervise nonlawyers. Lawyers must ensure that artificial intelligence technologies used in their law offices meet the same ethical standards applicable to lawyers. “Lawyers who rely on generative AI for research, drafting, communication, and client intake risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants,” the opinion notes. Lawyers must review the output of artificial intelligence technologies just as they would the work of paralegals.
  • The duty to charge reasonable legal fees. Lawyers must ensure that client charges for AI-related costs are reasonable and not duplicative. Additionally, lawyers should not charge clients for time spent developing minimal competence in the use of artificial intelligence.
  • The duty to not engage in misleading or manipulative legal marketing. Lawyers are responsible for information delivered to the general public by chatbots. Lawyers who use chatbots must inform prospective clients that they are communicating with a chatbot – not a lawyer or law firm employee. Lawyers may advertise their use of artificial intelligence technologies but may not claim their use of AI is superior to those used by other lawyers or law firms – unless the claim is objectively verifiable.

Much as was the case with data security risks when using third-party vendors to handle client confidential information, Florida’s new ethical guidance should prompt lawyers to ask searching questions of legal technology vendors who advertise AI-related enhancements in their products. Attorneys will want to take the time to learn in detail how these technologies process and store client confidential information, the assumptions and methodologies underlying their operations, and the limitations and caveats that should accompany their outputs.

Possible Changes to Ethics Code

The Florida Bar Board of Governors’ action is the latest effort by Florida’s legal leaders to raise awareness of the ethical issues that may arise when using artificial intelligence technologies to deliver legal services. Earlier this month, on Jan. 4, the bar sent to the Florida Supreme Court a package of ethics code amendments drafted by its Special Committee on Artificial Intelligence Tools and Resources. The proposed revisions cover much of the same ground plowed in Ethics Opinion 24-1.

The committee proposed that the commentary to Rule 4-1.1 of the Rules Regulating The Florida Bar be amended to make clear that the ethical duty of technology competence includes competence in the uses of generative artificial intelligence.

Similarly, the committee proposed that the commentary to Rule 4-1.6, which addresses confidentiality of client information, explicitly remind lawyers of the need to be aware of their responsibility to maintain confidentiality when using artificial intelligence.

The last proposed AI-related revision relates to Rule 4-5.1 on the ethical responsibilities of law firm supervisors and partners. The proposed change would add to the rule’s commentary language encouraging lawyers to “consider safeguards for the firm’s use of technologies such as artificial intelligence.”

It’s worth noting that the proposed revision to the technology competence rule specifically targets generative artificial intelligence (e.g., ChatGPT and other technologies that produce written output), while the changes to the rules on confidentiality and supervisory duties are addressed to artificial intelligence tools generally.

The state bar’s new ethical guidance emerges against a backdrop of heightened awareness in 2023 of the benefits, and risks, of using artificial intelligence technologies to deliver legal services. Well-publicized missteps have convinced state bar regulators across the country of the need to raise awareness of the ethical dangers posed by careless use of artificial intelligence tools. In Mata v. Avianca, No. 22-cv-1461 (S.D.N.Y. June 22, 2023), an attorney filed with the court a ChatGPT-authored document containing erroneous legal citations. And in People v. Crabill, No. 23-PDJ067 (Colo. O.P.D.J. Nov. 22, 2023), an attorney filed a motion containing fictitious legal citations suggested by ChatGPT and later attributed the errors to a legal intern.

ABA: AI Use Is Growing

Which raises the question: What is the state of the legal community’s usage and understanding of artificial intelligence? The American Bar Association recently attempted to answer that question. On Jan. 15, the ABA released a summary of its 2023 Artificial Intelligence TechReport, highlighting the results of recent polling of U.S. lawyers on their use of artificial intelligence in their law practices.

According to the ABA survey, 21.4% of large law firm lawyers are currently using artificial intelligence tools; an additional 14.3% of large law firm lawyers are “seriously considering” purchasing artificial intelligence tools. At the opposite end of the legal services market, among solo and small firm lawyers, 19.6% of attorneys surveyed said they are currently using AI-based technologies, while 15.7% are “seriously considering” the purchase of these tools.

In what seems to be a worrying indication of the need to educate lawyers on artificial intelligence, 42.9% of large law firm lawyers said they didn’t know whether their firm was using artificial intelligence tools. Among solos, 8.3% said they didn’t know if they were using AI-based tools. And 46% of solos and 38% of small firm lawyers said they didn’t have enough information to even answer the question.

Among all surveyed lawyers, there is a growing awareness of the benefits to be gained from artificial intelligence tools. According to the ABA, lawyers see value in AI-based tools for saving time, increasing efficiency, managing documents, reducing law firm costs, predicting outcomes, reducing risk, and assuring quality.

In other news from the ABA, the nation’s largest bar group on Jan. 16 released new ethics guidance advising lawyers to be mindful of data security and confidentiality issues when uploading data to third-party vendors offering artificial intelligence tools. While most lawyers are attuned to data security issues that might arise when client confidential information is entrusted to third-party vendors, they may not appreciate that these same risks are present when using artificial intelligence technologies.

We’d be remiss not to take this opportunity to highlight Esquire Deposition Solutions’ commitment to sound data security practices in all our operations. We’re happy to share with anyone the measures we take to ensure the security of client data entrusted to us.