When the US state of New Jersey lifted a Covid-19 ban on foreclosures last year, court officials hatched a plan to handle the expected influx of cases: train a chatbot to respond to queries.
The program — nicknamed JIA — is one of a number of bots being rolled out by US justice systems, with advocates saying they improve access to services while critics warn automation opens the door for errors, bias, and privacy violations.
“The benefit of the chatbot is you teach it once and it knows the answer,” said Jack McCarthy, chief information officer of the New Jersey court system.
“(With) a help desk or staff, you tell one person and now you’ve got to train every other staff member.”
The trend towards such chatbots could accelerate in the near future — the US Department of Justice (DoJ) last month closed a public call asking for examples of “successful implementation” of the technology in criminal justice settings.
“It raises a flag that the DoJ is going to move towards funding more automation,” said Ben Winters, a lawyer with the rights group the Electronic Privacy Information Center (EPIC), which submitted a cautionary comment to the DoJ.
It urged the government to study the “very limited utility of chatbots, the potential dangers of over-reliance, and collateral consequences of widespread adoption.”
The National Institute of Justice (NIJ), the DoJ’s research arm, said it is simply gathering data in an effort to respond to developments in the criminal justice space and create “informative content” on emerging tech issues.
A 2021 NIJ report identified four kinds of criminal justice chatbots: those used by police, court systems, jails and prisons, and victim services.
So far, most function as glorified menus that do not use artificial intelligence (AI).
But the report predicts that much more advanced chatbots, including those that measure emotions and mimic empathy, are likely to be introduced into the criminal justice system.
JIA, for its part, was trained using machine learning on court documents and can handle 20,000 variants of questions and answers, from queries about wiping criminal records to child custody rules.
Its developers are trying to build more tailored services, allowing people to ask for personal information such as their court date.
But it plays no role in decision-making or arbitration — “a thick line” that the courts system does not intend to cross, said Sivakumar Appavoo, a program manager working on AI and robotic automation.
Snorri Ogata, the chief information officer of Los Angeles courts, said his staff tried to build a JIA-style chatbot, trained using years of data from live agents handling questions about jury selection.
But the system struggled to give accurate answers and was often confused by queries, he said. So the court settled on a series of simpler menus that do not allow open-ended questions.
“In justice and in courts, the stakes are higher, and we were stressed about directing people incorrectly,” he said.
Last year, the Identity Theft Resource Center — a nonprofit that helps victims of identity theft — tried to train a chatbot to respond to victims outside working hours, when staff were not available.
But the system — supported by DoJ funding — was unable to provide consistently accurate information, or respond with appropriate nuance, said Mona Terry, the chief victims officer.
In particular, it could not adapt to new identity theft schemes that cropped up during the Covid-19 pandemic, which produced new jargon and inquiries the system had not been trained for.
“There’s so much subtlety and emotion that goes into it — I’m not sure a chatbot could take that over,” Terry said.
Emily Bender, a professor at the University of Washington who studies ethical issues in automated language models, said carefully built interfaces to help citizens interact with government documents can be empowering.
But trying to build chatbots that mimic human interaction in a criminal justice context carries significant risks, she said.
“We have to keep in mind that anyone interacting with the justice system is in a vulnerable position,” Bender told the Thomson Reuters Foundation.
Chatbots should not be relied upon to give time-sensitive advice to those at risk, she said, while systems also need to have strong privacy protections and offer people a way to opt out so they can avoid unwanted data collection. — Thomson Reuters Foundation