Robot Lawyer Battles Human Regulators

Correction: This article misstated that the State Bar of California’s Closing the Justice Gap Working Group was still in operation. It was dissolved after the state bar’s annual fee-licensing bill, signed into law on Sept. 18, directed the agency to halt its work on proposals to allow nonlawyers to practice law.

Life moves pretty fast for artificial intelligence start-up DoNotPay. On Jan. 8, CEO Joshua Browder announced the company would pay $1 million to any attorney who would allow the company’s “robot lawyer” to argue before the U.S. Supreme Court. The bold offer sparked a wave of skepticism and intrigue across the legal community.

On Jan. 25, Browder—revealing threats of a jail sentence from state bar officials—announced the company had ditched all plans of deploying its cybernetic barrister. Browder says the company will now focus on its core mission of helping consumers with issues like bank fees and unwanted subscriptions. While all this may have come as a shock to Browder and his team at DoNotPay, the discourse over AI’s place in the legal profession has been ongoing for years.

Notwithstanding the deep complexities of the AI’s back end, its user-facing application is straightforward. DoNotPay’s program would have “argued” by having the attorney parrot statements provided by the AI in real time through a pair of wireless earbuds connected to the attorney’s cellphone. Ostensibly, the program would simultaneously be listening and responding (through the lawyer) to questions and comments from the justices.

The Supreme Court was not the first place Browder sought to test DoNotPay’s AI. The initial real-world experiment was to take place on Feb. 22 in a California traffic court, a plan that sparked backlash from state bar officials across the country. Although Browder will not say who specifically sent letters or threatened a jail sentence, he admits states like California have initiated investigations of DoNotPay.

The reason for the fallout is clear: state bars are concerned about the unauthorized practice of law. When asked by reporters, officials for the State Bar of California declined to comment on the investigation but said they do send warnings to possible violators. Under California Business and Professions Code section 6126, the unauthorized practice of law is a misdemeanor punishable by up to one year in a county jail, by a fine of up to $1,000, or both.

Browder claims, however, that in selecting a test site, he focused on jurisdictions where legal representation is specifically defined as being performed by a “person”—thus, in his view, avoiding the issue entirely. California is one of those jurisdictions. Although courts have recently been unreceptive to considering an AI a “person,” the threats were still too much of a risk for Browder.

In contrast to the tenor of recent headlines, the State Bar of California has been looking into regulating “robot lawyers” for at least five years. In 2018, its Board of Trustees, citing goals of public protection and increased access to justice, directed the formation of the Task Force on Access Through Innovation of Legal Services.

The task force was charged with “identifying possible regulatory changes to enhance the delivery of, and access to, legal services through the use of technology, including artificial intelligence and online legal service delivery models.” The task force included the “Subcommittee on Unauthorized Practice of Law and Artificial Intelligence” (UPL-AI Subcommittee).

An early 2019 report from the UPL-AI Subcommittee addressed what it described as a “legal advice device.” This report and later ones discussed potential methods of assessing, approving, and regulating devices of this nature, which, at that time, were only theoretically possible. The subcommittee was particularly interested in the Food and Drug Administration’s then-recent approval of an autonomous medical device that made use of AI. This discussion inspired subcommittee members to propose a conceptual regulatory framework based on the FDA’s approach.

This framework would first determine if a software application was a “legal advice device,” which they defined as “any technology that researches and applies law to a person’s particular facts and renders a legal opinion on a legal question and/or provides a recommendation for action that is legally sound.” If a legal advice device were intended for lawyers, it would be slated for streamlined approval because, when used by a licensed attorney, a program could not be considered the unauthorized practice of law.

If intended for the public, the program’s classification would depend on whether it were intended to be used for document preparation, in a legislative or administrative proceeding, in a civil court proceeding, or in a criminal court proceeding. The idea was to separate the various levels of risk to the public that could be presented by an autonomous legal advice product and address them accordingly. Mistakes made in preparing certain types of legal documents present lower risks than mistakes made in a criminal proceeding, which could result in a prison sentence.

Like the FDA regulatory system, the subcommittee contemplated that legal advice products would be continually updated and reissued in subsequent versions. Devices with predicates (previously approved versions) would be able to engage a streamlined approval process under the proposed framework.

In its final report, the Task Force on Access Through Innovation of Legal Services recommended the creation of a working group to flesh out these ideas in the form of a “regulatory sandbox.” This would be the first step toward a full-scale system of regulations for artificially intelligent legal advice programs. In the sandbox, software developers would be given an opportunity to test their legal advice programs in a controlled environment.

In turn, the California Bar would be able to collect data and monitor the services to ensure consumer protection with the overall goal of increasing access to justice. On March 24, the Board of Trustees adopted the task force’s recommendations and formed the Closing the Justice Gap Working Group for the purpose of ironing out the details for the regulatory sandbox.

Despite the backlash, this record suggests that—in moving towards an anticipated AI legal market—DoNotPay and the California Bar have the same primary goal of providing consumers access to effective legal assistance when it would not otherwise be affordable.

To the dismay of all involved, these efforts came to a stop on Sept. 18, when Gov. Newsom signed AB 2958 into law. This legislation banned corporate ownership of law firms and fee splitting with nonlawyers. It also prohibited the State Bar from proposing future changes to limitations on the unlicensed practice of law—effectively killing the AI sandbox. With no way to lessen restrictions, even in a controlled environment, the State Bar cannot develop or propose the regulations needed to permit legal advice products in California.

A “robot lawyer” future seems all but inevitable when considering recent major successes for programs like ChatGPT. The only question for now is when the law will catch up.

Evan Louis Miller is an associate attorney at McManis Faulkner in the Silicon Valley. He may be reached at [email protected]
