By Daniel Gao

Virtual Verdicts: How AI is Transforming Criminal Sentencing

Law enforcement continues to realize the potential of AI in the criminal justice ecosystem, now using facial and license plate recognition, predictive policing, automated DNA analysis, natural language processing, and gunshot audio detection to assist investigations. Policing is becoming more and more efficient, but with roughly 1.9 million people incarcerated and the world's highest incarceration rate, the United States faces great pressure to safely reduce its prison population. To streamline the legal process and ensure consistency, authorities have increasingly adopted automated tools to manage defendants within the legal system.

One such tool is criminal risk assessment. These algorithms analyze defendants' profiles to estimate recidivism, a convicted person's risk of reoffending. Risk factors include demographic information, criminal history, age, and employability, but also a myriad of opaque attributes hidden inside the algorithms' deep neural networks. Most models output a single-digit recidivism score that the court uses to determine the defendant's sentence, jail time, and rehabilitation services. Higher scores generally lead to harsher sentences, whereas lower scores lead to more lenient ones.
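To make the scoring concrete, here is a minimal sketch of how a single-digit score gets bucketed into the coarse risk bands a judge actually sees. The 1-to-10 decile scale and the low/medium/high cutoffs follow the grouping ProPublica used when analyzing COMPAS output; the function itself is illustrative, not any vendor's actual code.

```python
def risk_category(decile_score: int) -> str:
    """Map a 1-10 decile risk score to a coarse risk band.

    Cutoffs follow the low (1-4), medium (5-7), high (8-10)
    grouping ProPublica used for COMPAS scores; this is an
    illustration, not a real vendor implementation.
    """
    if not 1 <= decile_score <= 10:
        raise ValueError("decile score must be between 1 and 10")
    if decile_score <= 4:
        return "low"
    if decile_score <= 7:
        return "medium"
    return "high"

print(risk_category(3))   # low
print(risk_category(8))   # high
```

Note how much information collapses at this step: a defendant scoring 4 and one scoring 5 may differ by a single point yet land in different bands, with very different consequences in court.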

The Results

Data-driven decisions are supposed to create uniformity and reduce bias across a nation's countless judges, but these algorithms often do the exact opposite. In ProPublica's investigation into Northpointe's risk assessment software, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), the risk scores of 7,000 people arrested in Florida were analyzed. Only 61% of those scores accurately predicted recidivism within the next two years, little better than a coin flip. The investigation also found that Black defendants were twice as likely as white defendants to be labeled higher risk without actually reoffending, while white defendants were significantly more likely to be labeled low risk and then go on to commit other crimes.
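The disparity ProPublica described is a gap in error rates between groups, which is easy to express in code. The sketch below uses hypothetical confusion-matrix counts (not ProPublica's actual data) chosen only to illustrate the two error rates in question: the false positive rate (non-reoffenders wrongly labeled higher risk) and the false negative rate (reoffenders labeled lower risk).

```python
def false_positive_rate(fp: int, tn: int) -> float:
    """Share of people who did NOT reoffend but were labeled higher risk."""
    return fp / (fp + tn)

def false_negative_rate(fn: int, tp: int) -> float:
    """Share of people who DID reoffend but were labeled lower risk."""
    return fn / (fn + tp)

# Hypothetical counts for two demographic groups, for illustration only.
# fp/tn are among non-reoffenders; fn/tp are among reoffenders.
group_a = {"fp": 45, "tn": 55, "fn": 20, "tp": 80}
group_b = {"fp": 23, "tn": 77, "fn": 48, "tp": 52}

fpr_a = false_positive_rate(group_a["fp"], group_a["tn"])  # 0.45
fpr_b = false_positive_rate(group_b["fp"], group_b["tn"])  # 0.23
print(f"false positive rate, group A: {fpr_a:.0%}")
print(f"false positive rate, group B: {fpr_b:.0%}")
print(f"ratio: {fpr_a / fpr_b:.2f}")
```

A model can report the same overall accuracy for both groups while still splitting its mistakes unevenly like this, which is why overall accuracy alone says little about fairness.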

COMPAS risk assessment scores for two shoplifting arrests. Rivelli was arrested for domestic violence aggravated assault, grand theft, petty theft, and drug trafficking; Cannon was arrested only for petty theft. Yet only Rivelli later reoffended, shoplifting $1,000 worth of tools from Home Depot.


The main source of these models' inaccuracies is poor training data. Machine learning algorithms recognize patterns and draw conclusions from data independently of their creators. Thus, they can only identify correlation, not causation. Historical data from the US Department of Justice may reflect societal prejudices and systemic inequalities, and a classification system trained on that data and its unrepresentative patterns will inherit society's biases and only perpetuate those prejudices. A risk assessment transforms correlative findings, like low income or minority race, into causal scoring mechanisms in court, and this "bug" may unjustly cost a defendant years of their life in prison. Trying to "blind" an algorithm to attributes like race or gender is nearly impossible, as AI may find indirect ways to reintroduce those biases into the model. For instance, a job-recruiting algorithm programmed to ignore race on resumes may learn to distinguish between races indirectly, for example via speech cadence and accent.
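The proxy problem above can be demonstrated with a toy simulation. In the sketch below, the protected attribute is hidden from the model entirely, but a correlated feature (here, a made-up ZIP code, purely hypothetical data) leaks it: a rule that looks only at ZIP code still recovers group membership about 90% of the time, so any bias tied to the group survives "blinding."

```python
import random

random.seed(0)

# Toy population: ZIP code is correlated with a hidden protected attribute.
# All data here is synthetic and for illustration only.
population = []
for _ in range(10_000):
    group = random.random() < 0.5  # protected attribute (never shown to the model)
    # 90% of group members live in ZIP "A"; 90% of non-members in ZIP "B".
    in_zip_a = random.random() < 0.9
    zip_code = "A" if (group == in_zip_a) else "B"
    population.append((group, zip_code))

# A "blinded" rule that never sees the protected attribute can still
# recover it from the proxy: guess group membership from ZIP alone.
correct = sum((zip_code == "A") == group for group, zip_code in population)
print(f"protected attribute recovered from ZIP alone: {correct / len(population):.0%}")
```

Dropping the sensitive column therefore removes it in name only; as long as correlated features remain, the model can reconstruct and act on the attribute it was supposedly blind to.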

Risk assessment AI is not a new tool; OASys, the Offender Assessment System, has been used in the UK justice system since 2001. However, scientists outside the Ministry of Justice have always been prohibited from accessing OASys's program and data to assess its bias and accuracy. Combined with the complexity and "black box" nature of these models, this lack of transparency is the system's biggest defect. In the US, for-profit companies like Northpointe develop risk assessment algorithms for states like New York, and both the data and the model architecture are kept as proprietary trade secrets. For its part, the Ministry of Justice argues that external evaluation of OASys poses data protection risks, since it would require releasing private data and protected characteristics.

AI uses a complex "black box" to make its decisions

How Should We Move Forward?

There have been few legal challenges to risk assessment algorithms around the world. One notable success is Ewert v. Canada (2018), in which Canada's Supreme Court deemed a predictive algorithm unlawful for use on Indigenous inmates because the model had never been tested on Indigenous Canadians. Many civil rights organizations, including the ACLU and the NAACP, have signed a statement urging the discontinuation of risk assessment. Nevertheless, 11 states and more than 100 additional counties have adopted risk assessment to combat their prison overcrowding emergencies. If risk assessment were abolished, judicial systems would have to turn to more time-consuming processes, like interviews by probation officers. There is work to be done to strike a balance between the consistency of machines and the soft skills of experienced human judges.

It is important to remember that people, families, and rehabilitation needs exist behind risk assessment numbers, and that increased scrutiny is a must to keep these algorithms ethical. Vote and contact your local and state officials to voice your opinion!
