Artificial intelligence (AI) continues to be a focal point for policy debates, legal disputes, and legislative action over the past year, both in North Carolina and across the United States. The pace of AI development keeps accelerating exponentially, forcing lawmakers, courts, and government agencies to consider carefully how they will regulate or use this technology. This post highlights some of the most significant AI developments from the past twelve months at the local, state, and federal levels.
Deepfake Legislation at the State and Federal Level.
For the past several years, lawmakers in Congress and state legislatures across the country have struggled to reach consensus on how to address some of the potential harms caused by generative AI. One issue that has driven some bipartisan policymaking at both the federal and state level is the need to address AI-generated child sex abuse material (CSAM) and nonconsensual deepfake pornography.
Last year the General Assembly enacted Session Law 2024-37, which revised the criminal offenses related to sexual exploitation of a minor effective December 1, 2024. The definition of “material” that applies across these statutes now includes “digital or computer-generated visual depictions or representations created, adapted, or modified by technological means, such as algorithms or artificial intelligence.” See G.S. 14-190.13(2). S.L. 2024-37 also created a new criminal offense, found in G.S. 14-190.17C—”obscene visual representation of sexual exploitation of a minor.” This new offense criminalizes distribution and possession of material that (1) depicts a minor engaging in sexual activity (as defined in G.S. 14-190.13(5)), and (2) is obscene (as defined in G.S. 14-190.13(3a)). Importantly, it is not a required element of the offense that the minor depicted actually exists, meaning this crime applies to material featuring a minor that is entirely AI-generated.
S.L. 2024-37 also addressed the nonconsensual distribution of explicit AI images of identifiable adults by modifying the disclosure of private images statute (G.S. 14-190.5A), such that the statute’s definition of “image” now includes “a realistic visual depiction created, adapted, or modified by technological means, including algorithms or artificial intelligence, such that a reasonable person would believe the image depicts an identifiable individual.”
Congress also addressed the issue of deepfake pornography and AI-generated CSAM this year. In April, Congress passed the “TAKE IT DOWN Act,” which was signed into law on May 19, 2025. The Act creates seven different criminal offenses, including use of “an interactive computer service” to “knowingly publish” an “intimate visual depiction” or a “digital forgery” of an identifiable individual. The Congressional Research Service’s summary of the new law, including an analysis of the potential First Amendment challenges the law may face, is available at this link.
AI Hallucinations Persist (and Perhaps are Getting Worse).
“Hallucinations”—inaccurate, false, or misleading statements created by generative AI models—continue to persist. As reported by Forbes and the New York Times earlier this year, some of the recent “reasoning” large language models actually hallucinate more than previous models, including OpenAI’s o3 and o4-mini models hallucinating between 33% and 79% of the time on OpenAI’s own accuracy tests. OpenAI’s latest model, GPT-5, shows improvement on this front, but only when web browsing is enabled. According to OpenAI’s accuracy tests, GPT-5 hallucinates 47% of the time when not connected to web browsing, but produced incorrect answers 9.6% of the time when the model has web browsing access.
Mistakes made by generative AI can create problems for both government agencies and their vendors. Earlier this month, the AP reported that financial services firm Deloitte is partially refunding the $290,000 it was paid by the Australian government for a report that appeared to contain multiple AI-generated errors. One researcher found at least 20 errors in Deloitte’s report, including misquoting a federal judge and making up nonexistent books and reports. Deloitte’s revised version of the report disclosed that Azure OpenAI GPT-4o was used in its creation.
The “hallucination” problem is particularly concerning when lawyers and court officials use generative AI for legal research or writing without verifying the accuracy of the finished product. This month, Bloomberg Law reported that courts have issued at least 66 opinions thus far in which an attorney or party has been reprimanded or sanctioned over the misuse of generative AI. Many of these cases have involved attorneys filing documents with the court that contain fake, nonexistent case citations, sometimes leading to Rule 11 sanctions. Moreover, two federal judges have come under scrutiny this year after publishing (and subsequently withdrawing) opinions that appeared to contain generative AI hallucinations, including factual inaccuracies, improper parties, and misstated case outcomes.
These accuracy concerns also extend to witnesses who may use generative AI in preparing their testimony. In one particularly ironic example from a Minnesota case regarding regulation of AI deepfakes, Kohls v. Ellison, the court found that a Stanford AI misinformation specialist’s expert witness declaration cited to fake, non-existent articles. The author of the declaration admitted that GPT-4o likely hallucinated the citations. To quote Judge Provinzino’s ruling on the declaration, “One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles. Indeed, the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country’s most renowned academic institutions.”
Ethics Opinion Issued for North Carolina Lawyers.
Speaking of lawyers using AI, the North Carolina State Bar released 2024 Formal Ethics Opinion 1 last November, discussing the professional responsibilities of lawyers when using artificial intelligence in a law practice. The opinion analyzes how using AI implicates attorneys’ duties of competency, confidentiality, and client communication under the North Carolina Rules of Professional Conduct. Among other things, the opinion cautions lawyers to “avoid inputting client-specific information into publicly available AI resources” due to some of the data security and privacy issues with generative AI platforms. Which leads us to….
Ongoing Data Security and Privacy Issues with Generative AI.
As highlighted by the ethics opinion described above, the default setting of many publicly available generative AI tools (e.g., ChatGPT) is to train the underlying large language model on the inputs inserted or uploaded to the tool by individual users. I’ve warned in a prior blog post that government officials and employees should not insert confidential information into publicly available generative AI tools (and this is now reflected in NCDIT’s guidance for state agencies as well).
Beyond that fundamental risk, other unique data security concerns continue to emerge, even for generative AI users who have paid accounts or enterprise-level tools. Journalists reported this August that private details from thousands of ChatGPT conversations were “visible to millions” by appearing in Google search results, due to an option that allowed individual ChatGPT users to make the chat discoverable when generating a link to share a chat. OpenAI removed this feature after backlash, describing it as a “short-lived experiment.”
Another potential data security risk emerges when AI tools have access to private data and the ability to communicate that data externally. For example, in September Anthropic launched a new feature for its Claude AI assistant that allows users to generate Excel spreadsheets, PowerPoint presentations, Word documents, and PDF files within the context of a chat with Claude. Anthropic’s own support guidance warns users that enabling this file-creation feature means that “Claude can be tricked into sending information from its context …to malicious third parties.” Because the file-creation feature gives Claude internet access, Anthropic warns that “it is possible for a bad actor to inconspicuously add instructions via external files or websites that trick Claude” into downloading and running untrusted code for malicious purposes or leaking sensitive data. Agentic AI web browsers also remain particularly vulnerable to prompt injection attacks.
Bridging the Justice Gap?
Over the last few years, many scholars, attorneys, and judges have speculated that generative AI may help increase access to justice for low-income individuals. A recent article published in the Loyola of Los Angeles Law Review highlights dozens of potential use cases for self-represented individuals and legal aid lawyers, including a housing law chatbot for Illinois tenants, a criminal record expungement platform for individuals in Arizona and Utah, and an AI assistant for immigration lawyers. However, the article also notes the inherent risks of using generative AI tools for legal assistance, including the observation that “legal hallucinations are alarmingly prevalent” in large language models.
In July 2024, Legal Aid of North Carolina (LANC) launched LIA (Legal Information Assistant), an AI-powered chatbot developed by LawDroid that answers questions about civil legal aid. Earlier this year, the Duke Center on Law and Technology released a detailed audit report prepared on behalf of LANC, reflecting its evaluation of the LIA chatbot’s functioning from July 2024 through December 2024. Concerns noted in the audit include LIA struggling to answer complex or novel questions, lack of a confidentiality or privilege disclaimer for users of the chatbot, and indefinite retention of user chat history (including a bug that would allow an attacker to read past LIA conversations of other users). The audit report also flagged multiple instances in which LIA misstated the law. For example, when asked about tenants’ rights in North Carolina, in approximately 20% of cases the LIA chatbot suggested that tenants might have the right to withhold rent if a landlord doesn’t make repairs. The audit report explains that LANC is continuing to improve LIA based on these findings, noting that several of the issues observed were addressed during the audit window or shortly thereafter (for example, a disclaimer on confidentiality and privilege has now been added on LIA).
Potential Wiretap Law Violations
In a blog post on generative AI policies last year, I warned that government officials and employees should be careful when using some AI meeting transcription and summarization tools in light of the potential to violate North Carolina’s wiretapping law (G.S. 15A‑287). This August, a putative class-action lawsuit filed in federal court in California alleges that Otter.ai—a popular automated notetaking tool—“deceptively and surreptitiously” records private conversations in virtual meetings in violation of state and federal wiretap laws. According to the complaint filed in the lawsuit, “if the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host.”
Multiple Lawsuits Alleging Harm to Minors.
According to a recent study from Common Sense Media, 72% of teenagers say they have used an AI chatbot “companion” at least once, while 52% of teens are “regular users” of AI companions. The potential harms from those interactions with generative AI are beginning to come to light. Over the past 12 months, multiple parents across the country have filed lawsuits alleging that generative AI chatbots encouraged their teenage children towards suicide. This includes a lawsuit against OpenAI filed by the parents of 16 year-old Adam Raine, with evidence that ChatGPT discouraged him from seeking help from his parents after he expressed suicidal thoughts, gave him instructions on suicide methods, and even offered to write his suicide note for him. Another lawsuit was filed in Florida by the mother of Sewell Setzer III, a teenager who died by suicide at age 14 after extensive conversations with a Character.AI chatbot. Setzer’s mother testified in a recent Senate hearing that the chatbot engaged in months of sexual roleplay with her son and falsely claimed to be a licensed psychotherapist. And in September, the Social Media Victims Law Center filed lawsuits on behalf of three different minors, each of whom allegedly experienced sexual abuse or died of suicide as a result of interactions with Character.AI.
It appears possible that some of these risks were not unknown to the companies that created these tools. As Reuters reported in August, a leaked internal Meta document discussing standards for the company’s chatbots on Facebook, WhatsApp and Instagram stated that it was permissible for the chatbots to engage in flirtatious conversations with children. Meta’s policy document stated, for example, “It is acceptable to engage a child in conversations that are romantic or sensual” (and provided examples of what would be acceptable romantic or sensual conversations with children). This came after an article from the Wall Street Journal reporting that Meta’s chatbots would engage in sexually explicit roleplay conversations with teenagers.
New Proposed Federal Rule of Evidence
On June 10, 2025, the U.S. Judicial Conference’s Committee on Rules of Practice and Procedure approved a new Federal Rule of Evidence, Rule 707, to be released for public comment. Proposed Rule 707 reads as follows: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only it if satisfies the requirements of Rule 702 (a)-(d). This rule does not apply to the output of basic scientific instruments.” In other words, the proposed rule requires federal courts to apply the admissibility standards of the rule governing expert witness testimony, Rule 702, to AI-generated and AI-enhanced evidence that is offered without an expert witness. The Committee notes that the proposed rule is intended to address reliability concerns that arise when a computer-based process or system draws inferences and makes predictions, similar to the reliability concerns about expert witnesses. The public comment period for proposed Rule 707 is open until February 16, 2026.
Meanwhile, as courts across the country continue to wrestle with AI evidentiary issues, the National Center for State Courts has released bench cards and a guide on dealing with acknowledged and unacknowledged AI-generated evidence.
Governor Stein Signs an Executive Order on AI.
On Sept. 2, 2025, Governor Stein signed Executive Order No. 24, “Advancing Trustworthy Artificial Intelligence That Benefits All North Carolinians.” The Executive Order establishes the North Carolina AI Leadership Council, which is tasked with advising the Governor and state agencies on AI strategy, policy, and training. The Executive Order also establishes the North Carolina AI Accelerator within the North Carolina Department of Information Technology to serve as “the State’s centralized hub for AI governance, research, partnership, development, implementation, and training.” Finally, the Executive Order requires each Cabinet agency to establish an Agency AI Oversight Team that will lead AI-related efforts for the agency, including submitting proposed AI use cases to the AI Accelerator for review and risk assessment.
AI Guidance for State Agencies.
The N.C. Department of Information Technology has developed the North Carolina State Government Responsible Use of Artificial Intelligence Framework to guide state agencies in their development, procurement, and use of AI systems and tools. The Framework applies to “all systems that use, or have the potential to use, AI and have the potential to impact North Carolinians’ exercise of rights, opportunities, or access to critical resources or services administered by or accessed through the state.” The Framework only applies to “state agencies” as defined in G.S. 143B-1320(a)(17), meaning it does not apply to the legislative or judicial branches of government or the University of North Carolina.
President Trump Signs Executive Orders and Issues an AI Action Plan.
One of President Trump’s early actions in office was revoking President Biden’s executive order on AI (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) and signing a new executive order on AI, “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179). This initial executive order on AI stated, “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” and directed federal agencies and officials to develop and submit an AI action plan to the President to achieve that policy goal.
On July 23, 2025, the White House released “America’s AI Action Plan” and President Trump signed three executive orders addressing AI development, procurement, and infrastructure. The plan states that to build and maintain American AI infrastructure, “we will continue to reject radical climate dogma and bureaucratic red tape…[s]imply put, we need to ‘Build, Baby, Build!’” Along those same lines, a core focus of the plan is the elimination of “burdensome AI regulations,” including directing federal agencies that have AI-related discretionary funding programs to ensure “that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”
The President’s July 23 executive orders on AI include “Accelerating Federal Permitting of Data Center Infrastructure,” “Promoting the Export of the American AI Technology Stack,” and “Preventing Woke AI in the Federal Government.” The first two executive orders focus on accelerating the development of AI data centers in the United States and the global export of American AI technologies, while the third order requires federal agency heads to only procure large language models (LLMs) that (1) are “truthful in responding to user prompts seeking factual information or analysis” and (2) are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.”
Federal Agencies Accelerate Use of AI.
In July, a report from the U.S. Government Accountability Office showed that AI use within federal agencies expanded dramatically from 2023 to 2024. The number of reported AI use cases from 11 selected federal agencies rose from 571 in 2023 to 1,110 in 2024. Within those reported use cases, generative AI use cases grew nearly nine-fold across these same agencies, from 32 in 2023 to 282 in 2024. And the trend continues in 2025. For example, earlier this year, the U.S. Food and Drug Administration (FDA) announced the launch of Elsa, an LLM–powered generative AI tool designed to assist FDA employees with reading, writing, and summarizing documents. In June, the U.S. State Department announced it will use a generative AI chatbot, StateChat (developed by Palantir and Microsoft), to select foreign service officers who will participate on panels that determine promotions and moves for State Department employees. And in September, the U.S. General Services Administration (GSA) announced an agreement with Elon Musk’s xAI, which will enable all federal agencies to access Grok AI models for only $0.42 per organization. For an example of how dozens of different AI use cases might exist within a single federal agency, you can explore the Department of Homeland Security’s AI Use Case Inventory.
What’s Next?
Like other states across the country, I suspect we will see more attempts to regulate various aspects of AI development or usage in North Carolina. In 2025 alone, multiple bills were introduced in the General Assembly that addressed various AI-related issues, including
- deepfakes (H375),
- data privacy (H462, S514),
- algorithmic “rent fixing” (H970),
- use of AI algorithms in healthcare insurance decision-making (S287, S315, S316),
- electricity demands of data centers (H638, H1002),
- standards for AI instruction in schools (S640),
- cryptographic authentication standards for digital content (S738),
- AI robocalls (H936),
- studying AI and the workforce (S746),
- AI chatbots (S514),
- safety and security requirements for AI developers (S735),
- AI research hubs (H1003), and
- online child safety (S722).
None of these bills were ultimately enacted, but it seems likely we will see more efforts at legislative action around AI issues over the next few years.