State Legislatures Appear Keen to Regulate Artificial Intelligence

Artificial intelligence has the potential to transform the legal profession through its ability to uncover information and insights hidden in large amounts of data. Already today we have:

  • generative AI tools that can summarize vast amounts of case law or spot weaknesses in an opponent’s pleading
  • AI-powered software that can pull critical pieces of information from a deposition transcript or a trove of email messages
  • software that can make judgments about a witness’s credibility or a prospective juror’s inclination to embrace a party’s legal position
  • analysis tools that predict judicial outcomes based on prior rulings
  • chatbots that promote access to justice by delivering legal services cost-effectively at scale.

On the other hand, the potential for current AI technologies to make legal errors or perpetuate unlawful bias is also well known. And they will certainly have a transformative impact on the labor market, displacing workers whose jobs can be performed efficiently by automated tools.

With so much at stake, it seems inevitable that AI technologies will soon be the subject of close federal government regulation. Or does it?

Recent experience with government regulation in other computer technology sectors suggests that meaningful federal regulation of AI could be a long way off if it comes at all.

For example, the harms caused by data breaches have been well known for decades, yet there is still no generally applicable federal data breach notification statute. Instead, data breach notification obligations arise from dozens of state laws across the country, the first of them enacted in California in 2002 and effective in 2003. Today, every state in the country has a data breach notification law.

Consumer privacy policymaking has followed the same pattern. Although federal privacy legislation has been a topic of discussion in Washington as far back as the Clinton Administration, Congress has not yet passed a generally applicable privacy law. Consumer privacy protections today are largely a matter of state law, led, once again, by California beginning in 2004. The number of states following California’s lead grows every year: Virginia, Colorado, Utah, and Connecticut have all passed comprehensive consumer privacy legislation in recent years.

This history is useful context when considering the prospects for the regulation of AI. In the near term, state legislatures will be the most likely source of compliance obligations for software developers and law firms using artificial intelligence technologies to serve clients.

Last year the Vermont legislature likely articulated the consensus opinion among state policymakers when it included the following language in House Bill 410:

Large-scale technological change makes states rivals for the economic rewards, where inaction leaves states behind. States can become leaders in crafting appropriate responses to technological change that eventually produces policy and action around the country.

The [Vermont Artificial Intelligence] Task Force determined that there are steps that the State can take to maximize the opportunities and reduce the risk, but action must be taken now. The Task Force concluded that there is a role for local and State action, especially where national and international action is not occurring.

House Bill 410 (enacted May 24, 2022) created a permanent state agency to study AI issues and to implement the recommendations of the Vermont Artificial Intelligence Task Force.

Elsewhere, several states passed laws regulating the use of AI in the areas of facial recognition and automated decision-making tools (ADT) — particularly in the context of employment, where automated tools are routinely used to screen and vet job applicants.

In Colorado, for example, Senate Bill 113, enacted in 2022, created a task force to study the use of AI in the context of facial recognition services.

In Illinois, the Artificial Intelligence Video Interview Act, which took effect in 2020, requires employers to give notice to and obtain consent from job applicants whenever they use AI technologies to evaluate video interviews of job candidates. A 2022 amendment also requires employers to report to state officials demographic information about which job candidates were selected or rejected based on artificial intelligence.

New York City began regulating the use of automated decision-making tools in employment in 2021. Local Law 144 prohibits employers and employment agencies from using an ADT unless the tool has been subject to a bias audit within one year of its use.
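For readers curious about what such an audit measures: the bias audits contemplated by Local Law 144 generally turn on selection rates and impact ratios calculated across demographic categories. The short Python sketch below illustrates that basic arithmetic with hypothetical applicant data; it is a simplified illustration, not a compliant audit under the law or its implementing rules.

    # Illustrative sketch only: the basic arithmetic behind an "impact
    # ratio" for a binary selection outcome. All data, column names, and
    # categories here are hypothetical.
    import pandas as pd

    # Hypothetical applicant data: demographic category and whether the
    # automated tool selected the candidate to advance.
    applicants = pd.DataFrame({
        "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
        "selected": [1, 1, 0, 1, 0, 0, 0, 1, 1],
    })

    # Selection rate per category: the share of applicants in each
    # category that the tool selected.
    selection_rates = applicants.groupby("category")["selected"].mean()

    # Impact ratio: each category's selection rate divided by the rate
    # of the most-selected category. Ratios well below 1.0 can signal
    # disparate impact.
    impact_ratios = selection_rates / selection_rates.max()

    print(impact_ratios.round(2))

Note that the city’s implementing rules add requirements beyond this simplified arithmetic, including the use of an independent auditor and the handling of scored (non-binary) outputs.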

State Legislation Under Consideration in 2023

State legislatures continued to be active on AI-related issues in 2023. Dozens of bills relating to AI are working their way through state legislatures. Although many of these bills would merely create policy study groups, some are prescriptive, creating new individual rights to be informed about AI use in interactions with the government, new civil causes of action, and new crimes for abusing AI technologies.

On May 25, the Texas Senate passed HB 2060, a measure that would create an Artificial Intelligence Advisory Council to study legal issues surrounding the use of AI by Texas government agencies.

In Illinois, legislation (HB 3563) calling for a study group on generative AI technologies such as ChatGPT passed both the House and Senate on May 18. Among other mandates, the measure directs the study group to:

  • propose laws to protect consumer information and civil rights as they relate to generative artificial intelligence
  • assess the impact of generative artificial intelligence on employment levels and types of employment
  • assess the impact of generative artificial intelligence on cybersecurity

In Connecticut, SB 1103 passed the House on May 30 and appears headed for the governor’s signature. SB 1103 would create a task force with a broad mandate to study “the impact that artificial intelligence has on residents of this state and persons doing business in this state” and to develop an artificial intelligence bill of rights.

Elsewhere across the country, legislation has been proposed to:

  • require the disclosure of AI use in advertising (New York AB 216)
  • require the disclosure of AI use in election campaign messages (Washington HB 1442)
  • create a civil cause of action for distributing sexually explicit “deep fake” images without the consent of the depicted individual (Minnesota HB 1370)
  • create criminal liability for using deep fake technology to influence an election (Minnesota HB 1370)

California legislators are considering SB 313, the “California AI-ware Act,” which would increase the transparency of AI technologies in state government operations by mandating that state agencies:

  • clearly inform persons when artificial intelligence technologies are being used during their interactions with the government
  • clearly inform persons of their right to directly communicate with a human being from the state agency

Another bill working its way through the California Assembly is AB 331, a civil rights-oriented measure that would forbid “algorithmic discrimination” by developers and deployers of AI-driven decision-making technologies. AB 331 is supported by civil rights groups and opposed by the California Chamber of Commerce and other associations representing the state’s business community.

Federal Interest in AI Regulation Growing

State action on AI policy has taken place against a backdrop of federal inaction. But that situation is changing. On May 16, a Senate Judiciary subcommittee held a hearing on AI oversight issues. In written testimony, OpenAI chief executive Sam Altman told senators that he believed federal regulation of AI was necessary.

One week later, Microsoft Corp. weighed in, also calling for federal regulation to guide AI development and protect against potential abuses. Microsoft said it supported the creation of a new federal agency to regulate AI technologies and police how these tools are developed and deployed.

And on May 15, one of the first bills specifically addressing AI-related harms was introduced in the Senate. The REAL Political Advertisements Act (S. 1596) would require disclaimers on political ads that use images or video generated by artificial intelligence.

The takeaway for law firm leaders from all this legislative action (and inaction) seems clear. When watching for new compliance obligations and regulatory roadblocks to deploying AI tools, the place to look is the local statehouse. Not only are state governments the most likely source of AI-related regulations in the short term, but they’re also the most hospitable forum for local lawyers and technology experts seeking to shape AI policy in their states. State bar associations and lawyers can have an impact on state policymaking that’s simply not possible at the federal level.

For more information on AI in the delivery of legal services, see our recent blog posts on ChatGPT pitfalls and ethical considerations raised by the use of artificial intelligence in the delivery of legal services.