
Artificial Intelligence and the Courts

The role of artificial intelligence (AI) in American life was a hot topic of discussion at a conference for judicial educators that I attended earlier this week. The conference opened with a screening of the documentary Coded Bias, which explores disparities in the data that inform algorithms for a range of computerized functions, from facial recognition to loan eligibility to insurance risk. The documentary highlights the vast amount of data collected and controlled by a small number of large U.S. companies and the lack of regulation governing its use. A panel of experts spoke after the screening about what judges should know about AI. Several of those topics related to its use in preventing, investigating, and punishing crime.

Crime prevention. AI has long been a feature of modern policing. Compstat, a data-driven performance management system for law enforcement agencies, was launched in the mid-1990s, and its use has been widespread for more than a decade. Compstat and similar programs require the collection, mapping, and analysis of near real-time data on incidents of crime and their locations. Law enforcement leaders then use that data to manage police resources and tactics and to evaluate performance. The goal is to prevent crime rather than merely respond to it – a strategy sometimes referred to as predictive policing. Detractors complain not only that predictive policing is susceptible to feedback loops, “where police are repeatedly sent back to the same neighborhoods regardless of the actual crime rate,” but also that such feedback loops can be a byproduct of biased police data. Rashida Richardson et al., Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 15, 41 (2019). Specifically, Richardson and her co-authors argue that the data may overrepresent groups or areas that have been disproportionately and unjustifiably targeted by law enforcement, and that data regarding crime and criminals not targeted by law enforcement, such as white collar crime and its perpetrators, may be omitted. Id. The authors argue that increased policing based on potentially skewed data also reinforces popular misconceptions about the criminality and safety of underrepresented individuals and communities, citing the improper targeting and arrest of transgender residents by the New Orleans Police Department as an example. Id. at 42.
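
The feedback-loop critique is easier to see with a toy simulation. The sketch below is a simplified illustration only, not a model of Compstat or of any actual predictive policing product; the neighborhoods, crime rates, and patrol counts are invented. It allocates patrols in proportion to previously recorded incidents and assumes crime is recorded only where officers are present to observe it.

```python
import random

random.seed(0)

# Two neighborhoods with identical true crime rates, but historical data that
# over-represents neighborhood A (all numbers are hypothetical).
true_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 120, "B": 80}
PATROLS_PER_WEEK = 20

for week in range(1, 9):
    total = sum(recorded.values())
    # "Prediction": allocate patrols in proportion to past recorded incidents.
    allocation = {n: round(PATROLS_PER_WEEK * recorded[n] / total) for n in recorded}
    for n, patrols in allocation.items():
        # Crime is recorded only where officers are present to observe it,
        # so more patrols produce more recorded incidents regardless of the true rate.
        observations = patrols * 25  # encounters per patrol shift
        recorded[n] += sum(random.random() < true_rate[n] for _ in range(observations))
    print(week, allocation, recorded)
```

Because the allocation rule feeds on its own recorded outputs, neighborhood A is patrolled more heavily every week and its lead in recorded incidents keeps growing, even though the underlying crime rates are identical. The initial skew in the data never self-corrects, which is the dynamic the feedback-loop critique describes.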

Criminal investigation. Coded Bias begins with the story of scholar Joy Buolamwini’s research into disparities in the accuracy of facial recognition programs across race and gender. Buolamwini discovered that facial recognition was most accurate for white males and that error rates were significantly higher for people with darker skin, particularly darker-skinned women. The reason? The facial recognition systems had been trained on data sets made up primarily of white male faces.
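
A related methodological point from that research is that a single overall accuracy figure can hide large subgroup disparities, which is why the error rates are reported separately by skin type and gender. The snippet below is a minimal sketch of that kind of disaggregated evaluation; the records and numbers are invented for illustration and are not Buolamwini’s data.

```python
from collections import defaultdict

# Hypothetical evaluation results: (subgroup, was_the_match_correct).
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

# One aggregate number looks respectable...
overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.0%}")        # 75% in this toy data

# ...but reporting error rates per subgroup tells a different story.
by_group = defaultdict(list)
for group, ok in results:
    by_group[group].append(ok)

for group, oks in by_group.items():
    error = 1 - sum(oks) / len(oks)
    print(f"{group}: error rate {error:.0%}")    # 0% vs. 50% in this toy data
```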

What does this have to do with investigating crime? Facial recognition technology is widely used by law enforcement agencies to identify suspects and other persons of interest. An inaccurate match can have disastrous results, such as the wrongful arrest of a Detroit man in June 2020. The primary response from the developers of this technology has been to add data to improve its accuracy. Some argue, however, that given privacy concerns and the potential for uses that might chill protected expression and other activities, use of this technology should be severely curtailed, if not banned altogether.

It is also conceivable that widespread and consistent surveillance analyzed with the benefit of facial recognition technology might, in some circumstances, be considered a Fourth Amendment search. Cf. Carpenter v. United States, 585 U.S. __ (2018) (holding that a person has a legitimate expectation of privacy in the record of his or her physical movements as captured through cell-site location information (CSLI)). Coded Bias documents a successful campaign by tenants of a Brooklyn, NY apartment complex to dissuade their landlord from using facial recognition technology to control entry to their building. One might argue that such technology, if used by the government to consistently track a person’s comings and goings, infringes upon a person’s reasonable expectation of privacy.

Criminal punishment. Finally, AI also informs the risk assessment tools that may be used to determine whether a defendant is detained before trial and how the defendant is punished, including how he or she is supervised. Jeff Welty wrote here about the good and the bad associated with these tools. On the one hand, there is evidence that their use may result in fewer secured bonds being imposed with no corresponding harm to public safety. On the other, assessments may be based on factors that are immutable and that may serve as proxies for race, such as whether a parent or friend has ever been incarcerated, how long the person has lived at his or her current address, and how much crime occurs in the person’s neighborhood.

In 2016, ProPublica published a scathing review of the risk assessment tool used in Broward County, Florida in 2013 and 2014. Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016). ProPublica reported that the tool (1) was “remarkably unreliable in forecasting violent crime [as] [o]nly 20 percent of the people predicted to commit violent crimes actually went on to do so”; (2) was only somewhat more accurate than a coin toss (61 percent) at predicting future crimes when a full range of offenses, including misdemeanor traffic offenses, was considered; and (3) produced significant racial disparities. As to the racial disparities, the authors found that “[t]he formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants,” and that “white defendants were mislabeled as low risk more often than black defendants.”
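
The disparity ProPublica describes is a difference in false positive rates: among people who did not go on to reoffend, how often each group was nonetheless labeled high risk. A minimal sketch of that calculation follows; the records are invented for illustration and are not the Broward County data.

```python
from collections import defaultdict

# Hypothetical records: (group, labeled_high_risk, reoffended_within_two_years).
records = [
    ("Black", True,  False), ("Black", True,  True),  ("Black", False, False),
    ("Black", True,  False), ("Black", False, True),  ("Black", True,  True),
    ("White", False, False), ("White", True,  True),  ("White", False, False),
    ("White", False, True),  ("White", True,  False), ("White", False, False),
]

stats = defaultdict(lambda: {"false_pos": 0, "did_not_reoffend": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                    # restrict to people who did not reoffend
        stats[group]["did_not_reoffend"] += 1
        if high_risk:                     # ...but who were flagged high risk anyway
            stats[group]["false_pos"] += 1

for group, s in stats.items():
    rate = s["false_pos"] / s["did_not_reoffend"]
    print(f"{group}: false positive rate {rate:.0%}")
```

The mirror-image statistic, how often people who did go on to reoffend were nonetheless labeled low risk, corresponds to the article’s finding that white defendants were more often mislabeled as low risk.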

What do judges need to know? The experts I heard from (Timnit Gebru, Executive Director of the Distributed Artificial Intelligence Research Institute; Paul Grimm, United States District Judge for the District of Maryland; and Maura Grossman, Research Professor at the University of Waterloo) suggested that judges could benefit from training in several topics related to AI. First, they should learn about and be prepared to guard against automation bias, which is the tendency to over-accept computer-generated information. See Kate Goddard et al., Automation bias: a systematic review of frequency, effect mediators, and mitigators, J. Am. Med. Inform. Assoc., Vol. 19 (2012). One of the more striking examples of this phenomenon was documented by researchers at Georgia Tech, who found that a majority of study participants followed a robot into a dark room with no discernible exit during a fire alarm rather than exiting the way they entered. See Paul Robinette et al., Overtrust of Robots in Emergency Evacuation Scenarios, HRI ’16: ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand (March 7–10, 2016).

They also encouraged judges not to be deterred by their lack of coding expertise, to look at AI through a critical lens, and to ask questions. Consider whether the AI at issue is valid (does it do what it is supposed to do?) and whether it is reliable (does it do so consistently?). Indeed, those are foundational issues in determining whether expert testimony based on technical knowledge is admissible at trial. See N.C. R. Evid. 702. Other relevant considerations are who may properly provide such expert testimony and whether sufficient information regarding the methodology has been disclosed so that it may be challenged by an adversary.

The end of the world as we know it? I’ll leave it to you to decide whether AI is the end of the world as we know it – and whether you feel fine. Regardless of where you come out, AI certainly has changed the world as we know it, including the legal profession. See Gary E. Marchant, Artificial Intelligence and the Future of Legal Practice, The SciTech Lawyer 21 (Summer 2017) (reviewing AI implications for legal practice, including technology-assisted document review, legal analytics, and interactive online legal analysis programs). The judiciary is unlikely to remain immune from those changes. Marchant notes the development of dispute resolution technology that may prove capable of resolving certain civil legal claims without court involvement. Indeed, British Columbia has an online tribunal, the Civil Resolution Tribunal (CRT), through which litigants may resolve small claims and claims related to motor vehicle accidents. The entry point to the system is Solution Explorer, a tool that contains free public information and calculation aids. See A.D. Reiling, Courts and Artificial Intelligence, International Journal for Court Administration 11(2), p. 8 (2020). That said, the essential work of judging is distinctly human. And a criminal defendant’s due process rights (including the right to an individualized determination of his or her sentence) cannot be satisfied by a computer-generated algorithm. So judges are not going the way of the machine, but they may need to better understand the machine-generated outputs that inform their work.