Victim ‘Speaks’ via AI, Sparking an International Conversation
Judicature International (2025)
In many common law countries, victims are given the opportunity to tell the court how a defendant's actions have affected them and their families, friends, and communities. Called "victim impact statements," these personal, often emotional accounts of a crime's toll both empower victims and help judges determine an appropriate sentence.
Last month, in what is believed to be a world first, artificial intelligence enabled a murder victim to deliver an impact statement during his killer's sentencing in Arizona. While international interest has come largely from jurisdictions that permit victim impact statements, the use of AI in an official court proceeding may signal the beginning of a broader global shift in how such tools are used in justice systems.
Three years ago, Chris Pelkey, a 37-year-old Arizona man, was killed in a road rage shooting. In an extraordinary move, his family used voice recordings, videos, photographs, and AI technology to recreate him digitally, allowing him to address his killer, Gabriel Horcasitas, during the sentencing hearing in May 2025.
“To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the digital Pelkey said. “In another life, we probably could have been friends.”
On the Australian true crime podcast "Motive & Method," Judge Paul W. Grimm (Ret.), David F. Levi Professor of the Practice of Law and director of the Bolch Judicial Institute at Duke Law School (which publishes Judicature International), said the overriding concern with this type of AI use in courtrooms is whether it could unduly influence the outcome of a proceeding.
Judge Todd Lang, who presided over the Arizona case, sentenced Horcasitas to 10 and a half years in prison on manslaughter charges — more than the prosecution requested — and specifically praised the use of technology in his ruling.
“Well, I’m not surprised that it was impactful,” said Grimm. “I’m not surprised that it was emotional. I’m not surprised that it was one of those things that really riveted everyone’s attention in the courtroom.”
Grimm also considered how the use of AI as a tool for victim advocacy might affect courtroom procedure more generally.
“I can see judges in the future saying, now look, if we are going to have this kind of presentation, I want advance notice about it, so that if there are any objections, I can deal with them before we get in the middle of a courtroom full of people,” he said.
He predicts that attorneys will continue “pushing the envelope,” sometimes successfully and other times not. For their part, judges, he said, will likely try “to get ahead of it by perhaps requiring disclosure. And, you know, usually there are sentencing memoranda with other materials that are filed. But victim statements, oftentimes, there’s no requirement that they provide advance notice of it.”
He also envisions a future where AI-generated victim statements, like Pelkey’s, may become an ordinary part of proceedings.
“I think that as it becomes sort of more common, then the novelty of it dies away,” he said, “and it’s like, okay, yeah, this is just another AI recreation, and so it lowers the impact. You don’t know. We’re at the very beginning of this.”
Encouraging a Case-by-Case Analysis
In a separate interview with the BBC, Grimm said he was not surprised to see AI used in this way, noting that Arizona courts have already begun incorporating the technology in other capacities. The Arizona Supreme Court, for example, uses AI to help explain its rulings to the public, a reflection of the state's progressive approach to legal technology.
Grimm emphasized that in the Horcasitas case, the use of AI was legally permissible because it occurred during sentencing, with only a judge present, rather than during jury proceedings — a crucial distinction.
“We’ll be learning [AI] on a case-by-case basis, but the technology is irresistible,” Grimm observed.
Grimm is a leading voice in AI legal scholarship, having co-authored several seminal works, including "AI in the Courts: How Worried Should We Be?" (Judicature, 2024); "The GPTJudge: Justice in a Generative AI World" (Duke Law & Technology Review, 2023); and "Artificial Intelligence as Evidence" (Northwestern Journal of Technology and Intellectual Property, 2021).
This use of AI to give a victim a final voice in court has opened discussions about victim advocacy, the authenticity of digital representations, and the evolving role of technology in delivering justice. As legal systems worldwide observe Arizona’s pioneering approach, this case may be instructive for future uses of AI in court proceedings.
Michelle Kaminsky is a senior editor and writer at the Bolch Judicial Institute of Duke Law School, which publishes Judicature and Judicature International.