A few years ago, T-shirts and coffee mugs with the phrase “Please do not confuse your Google search with my law degree” were making the rounds. What simpler times those were. Thanks to the rapid development and proliferation of generative artificial intelligence (GenAI), a subset of artificial intelligence (AI) that “focuses on creating new content based on patterns learned from data,” a person seeking advice on the internet about a legal issue today has access to far more than websites or articles that include the terms entered in the search. Indeed, a Google search about a legal topic generates an “AI overview,” a product of Google’s GenAI tool, which may summarize applicable law and suggest steps the person might take to address the issue. And a simple prompt entered into the free version of ChatGPT will generate a draft civil complaint, including instructions about where to file the matter. Defendants in criminal cases likewise may utilize GenAI to prepare defense strategy, craft legal arguments, and summarize documents. While such tools may, on the one hand, expand access to justice, they also may increase the number of frivolous or unsupported filings and thus may expose unrepresented litigants to risks including monetary sanctions. Such litigants also may unwittingly expose information to their adversaries by using open-source AI tools as a replacement for counsel.
Of course, concerns about GenAI and its impact on the courts extend beyond its use in preparation for litigation and the filing of legal documents. GenAI may be employed to affect substantive evidence, for example by altering recordings and footage to enhance sound or other features, or by fabricating evidence altogether. In the face of these concerns, what are trial courts to do?
Trial courts have responded to such concerns in a variety of ways, with some prohibiting the use of GenAI to prepare filings, some requiring filers to declare their use of GenAI to generate any portion of a legal filing and their independent verification of the accuracy of the filing’s contents, and others imposing no requirement that GenAI use be disclosed but reminding filers of the governing rules and standards of conduct. One Texas court has required that all filers (including self-represented litigants) certify that “all language, quotations, sources, citations, arguments, and legal analysis created or contributed to by generative artificial intelligence were before submission verified as accurate through traditional (non-AI) legal sources by an attorney licensed to practice law in the State of Texas.”
The Illinois Supreme Court adopted a policy on artificial intelligence last year that fell into the no-disclosure-but-be-careful camp, stating in part:
The use of AI by litigants, attorneys, judges, judicial clerks, research attorneys, and court staff providing similar support may be expected, should not be discouraged, and is authorized provided it complies with legal and ethical standards. Disclosure of AI use should not be required in a pleading. The Rules of Professional Conduct and the Code of Judicial Conduct apply fully to the use of AI technologies. Attorneys, judges, and self-represented litigants are accountable for their final work product. All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations. Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.
North Carolina’s Supreme Court has not adopted a rule or policy governing the use of AI, leaving the matter to be addressed, or not, by individual judicial districts. (The North Carolina State Bar addressed an attorney’s professional responsibility when using AI in a 2024 Formal Ethics Opinion.) Only one trial court district in the state, Superior Court District 25 (Cabarrus County), has adopted an administrative order addressing AI. See Revised Administrative Order 25-09 (December 8, 2025) (replacing Administrative Order adopted July 23, 2024). The order was promulgated by Senior Resident Superior Court Judge Martin (Marty) McGee, a nationally recognized leader on technology and the courts. The order’s approach to AI differs from the one adopted by many other courts in that it distinguishes the use of AI to affect advocacy from its use to affect evidence. The order does not require parties and counsel to disclose their use of AI to draft pleadings and other advocacy-related documents that are not to be used as evidence or to establish facts. On the other hand, it does require parties and counsel to disclose the use of AI to generate or alter evidence. That disclosure generally must occur 90 days before trial since “[l]earning of an AI-related authenticity or reliability issue in the midst of trial or hearing may deprive litigants of a fair opportunity to be heard and the Court the time necessary to make informed rulings consistent with the Court’s gatekeeping responsibilities.” Revised Administrative Order 25-09 at ¶6b. The order contains a helpful chart advising whether disclosure is required in common circumstances and is accompanied by a Quick Reference Guide with even more abbreviated guidance. As a Fellow for RAILS (Responsible AI in Legal Services), Judge McGee currently is working to develop a model administrative order governing GenAI that can be modified to meet individual needs.
The National Center for State Courts has partnered with Thomson Reuters Institute to create the AI Policy Consortium for Law & Courts. That group has developed resources to help courts address AI-related issues including a guide for judges on AI-generated evidence, guidance on AI and hallucinations, and a guide for talking to court users about AI.
Of course, litigants and parties are not the only actors utilizing this rapidly evolving technology. Judges too are benefitting from GenAI’s capabilities and are wrestling with the appropriate limits on its use. A federal district court judge recently wrote about his experience using GenAI in connection with a bench trial in a voting rights case and offered guidance based on his experience. See Xavier Rodriguez, Judging AI: How U.S. Judges Can Harness Generative AI Without Compromising Justice, Judicature Vol. 109 No. 2 (Bolch Judicial Institute, Duke Law 2025). Judge Rodriguez reported that GenAI was able to summarize trial testimony, replete with citations to the record, in seconds. The judge’s intern took far longer, though the human-generated summary was more focused. Judge Rodriguez then used GenAI to draft findings of fact and conclusions of law on one of the causes of action in the case, but only after publishing his final order. He explained:
I do not advocate that GenAI be used as a substitute for judicial decision-making, for many reasons. A GenAI response might be partially or even completely inaccurate. A judge may unintentionally become “anchored” to the GenAI’s response — sometimes referred to as automation bias, a phenomenon in which humans trust GenAI responses as valid without validating the results. Similarly, a judge might be influenced by confirmation bias, where a human accepts the GenAI results because they align with the beliefs and opinions already held.
That said, he added: “I do not doubt that GenAI tools can be used to assist judicial officers in performing their work more efficiently. A GenAI tool could also be used after a draft of an order or opinion is completed to verify or question the draft’s accuracy, and confirmation bias can occur without the use of an AI tool.”
As to the results, Judge Rodriguez said they revealed (to his relief) that GenAI is not yet ready to replace judges. The results were overbroad, and some of the analysis was superficial. He wrote that to use GenAI effectively as a tool in opinion writing, judges must (1) be familiar with the factual and legal disputes to develop appropriate prompts, (2) consider the limitations of the tool being used and the nature of the evidentiary record, and (3) evaluate the relevancy, accuracy, and limitations of the results.