Artificial Intelligence and the Practice of Criminal Law

You’ve probably heard of ChatGPT, the artificial intelligence chatbot built by the company OpenAI. The most recent version of OpenAI’s product, GPT-4, “scored in the 88th percentile on the LSAT . . . and did even better on the [Uniform Bar Exam] by scoring in the 90th percentile.” More details here. Results like that might reasonably make criminal lawyers wonder whether we could be replaced by AI.

It almost happened already. The first “robot lawyer” was supposed to debut this spring. An AI startup called DoNotPay planned to equip two litigants challenging their traffic tickets with smart glasses. The glasses would enable a litigant to record what the judge said, feed that into an AI, and hear suggested responses through an earpiece. After threats from various state bar officials suggesting that the company would be engaged in the unauthorized practice of law, the company backed away from its plans, as NPR reports here. Now it is the defendant in a civil suit accusing it of the unauthorized practice of law, according to Reuters.

Some applications are pretty obvious. Robot lawyers might not be here yet, but you’re already using AI when you do legal research on Westlaw. According to this article:

[AI] has been seamlessly woven into . . . research services . . . [like] Westlaw Edge, launched by Thomson Reuters more than three years ago. The keyword or boolean search approach that was the hallmark of the service for decades has been augmented by semantic search. This means the machine learning algorithms are trying to understand the meaning of the words, not just match them to keywords. Another example of an AI-powered feature from Westlaw Edge is Quick Check, which uses AI to analyze a draft argument to gain further insights or identify relevant authority that may have been missed.
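For readers curious about the mechanics, here is a deliberately simplified sketch of the distinction the article is drawing. This is not Westlaw’s actual system; the numbers are invented for illustration. Keyword search matches literal strings, while semantic search represents each word as a vector of numbers (an “embedding,” produced by a trained language model in a real system) and treats words whose vectors point in nearly the same direction as matches:

```python
import math

# Toy "embeddings": in a real system these come from a trained language
# model; the values here are made up purely for illustration.
embeddings = {
    "car": [0.9, 0.1, 0.0],
    "automobile": [0.85, 0.15, 0.05],
    "warrant": [0.0, 0.2, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Keyword search: "car" does not literally match "automobile".
print("car" == "automobile")  # False

# Semantic search: their vectors are nearly parallel, so they match,
# while an unrelated term like "warrant" scores low.
print(cosine(embeddings["car"], embeddings["automobile"]) > 0.9)  # True
print(cosine(embeddings["car"], embeddings["warrant"]) < 0.3)     # True
```

The point is that a semantic system can surface a case about an “automobile” when you searched for “car,” which a pure boolean match would miss.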

AI is already used to conduct document review in large commercial disputes, so I doubt that those entry-level document review contract positions that many young lawyers have relied on over the years are going to be as numerous in the future.

AI isn’t limited to analyzing text someone else has drafted; it can do the drafting itself. Commercial websites already use AI to generate content, newspapers use it to write summaries of sporting contests, and fiction authors use it to draft scenes. The human role is reduced, or elevated, to being more of an editor. I would be shocked if lawyers who write motions and appellate briefs didn’t start to use AI to do drafting work. Maybe some already have. Doubtless some old-school artisans will continue to insist on doing all the drafting themselves, but the cost of hiring an AI+lawyer team to handle an appeal or some other writing-intensive task will drastically undercut the cost of hiring a lawyer who adheres to old-fashioned practices like writing a brief him- or herself.

Outside the criminal law arena, lawyers will use AI to draft contracts and similar documents. One of the world’s largest law firms is already using an AI platform to assist its attorneys.

AI will also substitute for, or supplement, lawyers providing elementary legal advice. Want to know the basics of deferred prosecution in Kentucky? Ask AI. Need to know how to apply for a DVPO in Nevada? Ask AI. Can you get a concealed carry permit in Minnesota if you’ve been dishonorably discharged from the Navy? Ask AI. Many Americans need legal services but can’t afford them, and an optimistic spin on things is that AI may help close that gap. AI may also streamline the public-facing work of clerks and magistrates, helping members of the public get answers to simple questions.

The longer term is more uncertain. AI technology is evolving rapidly enough that it is frightening to many. Recently a large group of AI researchers was asked “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median response was 10%. If the machines may one day be capable of causing human extinction, it is certainly possible that they could handle a trial in district court. For example, a robot prosecutor could digest an officer’s report about stopping the defendant for DWI and generate questions designed to elicit the information included in the report and to cover every element of the offense. It would never be late to work or get sick and would never forget to ask the officer whether the defendant was on a public street or highway. Or perhaps the AI could negotiate pleas while the human prosecutor tried cases, expediting crowded dockets in district court.

On the defense side of things, AI might also be effective at client intake. Many defense lawyers use client intake forms that ask the client for lots of basic biographical information, plus information about their charges. An AI could conduct those interviews, eventually more cheaply and thoroughly than a human legal assistant, following up as appropriate and answering the potential client’s questions based on information provided by the lawyer.

In general, the more complex, creative, and “human” your legal work, the less likely you are to be replaced by AI. The more routine, form-driven, and mechanical your legal work, the more the machines are coming for you. At least, that’s what a UVa law professor who is an expert on “computational legal scholarship” says here.

Samples of ChatGPT’s work. To get a better sense of the sophistication of current AI, I asked a friend who uses ChatGPT to ask it two questions for me. (The friend asked to be identified only as ChatDJZ. He may or may not be a robot himself.) I’ve reprinted the questions and answers unedited below. Note that the public-facing ChatGPT is still powered by GPT-3.5, and GPT-4 is much more powerful.

I asked: Is a motorist who is the subject of a traffic stop in custody under Miranda v. Arizona?

It answered: Typically, a motorist who is the subject of a routine traffic stop is not considered to be “in custody” for the purposes of the Miranda warnings. The Supreme Court has held that Miranda warnings are only required when a person is both in custody and being subjected to interrogation by law enforcement.

During a routine traffic stop, a motorist is generally not considered to be in custody, as the stop is brief and the person is free to leave once the officer has issued a citation or warning. However, if the stop becomes more prolonged or the officer begins to ask questions that go beyond the scope of the traffic violation, the situation may change and the motorist could potentially be considered in custody.

If a motorist is in custody and being questioned by law enforcement, the Miranda warnings would need to be given before any statements made by the motorist could be used as evidence in court.

I asked: How will artificial intelligence affect the practice of criminal law?

It answered: Artificial intelligence (AI) has the potential to significantly impact the practice of criminal law in a number of ways. Here are a few examples:

Predictive policing: AI can be used to analyze large sets of data and predict where crimes are likely to occur, which can help law enforcement agencies deploy resources more efficiently. However, there are concerns about bias in the algorithms used to make these predictions and the potential for over-policing in certain communities.

Evidence analysis: AI can be used to analyze and interpret evidence, such as DNA samples, fingerprints, and surveillance footage. This can help investigators identify suspects and build stronger cases, but it also raises questions about the reliability of AI-generated evidence and the potential for errors.

Sentencing: AI can be used to predict a defendant’s likelihood of reoffending or to recommend sentences based on similar cases. However, there are concerns about the fairness and transparency of these algorithms, particularly if they are not properly audited and tested for bias.

Legal research: AI can be used to assist lawyers in legal research by quickly and accurately identifying relevant case law and statutes. This can save time and improve the quality of legal arguments, but it also raises questions about the role of lawyers in the legal system and the potential for job displacement.

Overall, the use of AI in criminal law has the potential to improve efficiency and accuracy in some areas, but it also raises important ethical and legal questions that will need to be addressed as the technology continues to evolve.

Want more examples? This Reuters article discusses a law professor’s experience asking ChatGPT to construct various kinds of legal arguments. Contending that Obergefell v. Hodges should not be overturned, the chatbot noted that the decision “is consistent with a long line of precedent establishing the fundamental right to marry. In Loving v. Virginia, the Court held that marriage is one of the ‘basic civil rights of man,’ and that the right to marry is protected by the Due Process and Equal Protection Clauses of the Constitution.” Not bad, but the best part of the article is when the professor asks ChatGPT to write a poem about how it will change legal services. The result: “ChatGPT will guide us through with ease / It will be a trusted companion and guard / Helping us to provide the best legal services with expertise.” Bottom line, poets may be safer than lawyers from the robot takeover.