Daniel Gao

Do Justice and AI belong in the Same Courtroom?

New AI technologies allow lawyers to profile potential jurors and select those statistically more likely to support their client. Is this just a reflection of pre-existing strategies, or is it manipulation of the court?

The Sixth Amendment to the Constitution and the United States judicial system have made significant efforts to protect the rights of all parties in the courtroom and to ensure fair trials. Criminal defendants are by default "innocent until proven guilty" and are guaranteed a lawyer and an impartial jury, giving them a level playing field on which to prove their innocence. However, a central idea of legal realism is that as long as the law is administered by humans, human frailty and limitations will play a role. The judge and jury, the key players in determining guilt and sentencing, are not yet robots, and will inevitably bring bias into the courtroom. Consequently, experienced attorneys can, to some extent, "play" the court: knowing the court may be just as valuable as knowing the law. For instance, knowing a judge's personality, or even their mood on the day of trial, can be valuable information to a client.

Recently, advances in data science have unlocked an arsenal of tools for lawyers to predict court decisions. Suppose a company developed a system that analyzes the social media profiles of prospective jurors to predict whether each juror is swayed more by emotional or by factual appeals, or whether they belong to a specific group (by gender, political opinion, and the like) that is likely to be hostile to your client. An attorney could use this app during jury selection to get favorable jurors seated and challenge the others, improving their chance of success. Is such an app ethical? France says no: French law prohibits the use of AI to develop profiles of individual judges, their likes and dislikes, precisely to prevent this sort of manipulation. To aid you with your decision, I will present some arguments for both sides:


Arguments in favor:

  • The product technically just automates what prosecutors and defense lawyers do anyway. The fact that the legal system permits the evaluation and challenging of potential jurors indicates that there is social consensus approving of such practices.

  • It contributes to equality of arms between defense and prosecution. It models the knowledge of a very experienced, and therefore expensive, attorney, who would otherwise be out of reach for many clients. Unequal financial resources are a major source of injustice, and this app helps address that.

  • The app is also fairer to young attorneys. Decisions are often made based on a mix of gut feeling and the attorney "knowing their court", built up over years of experience. This creates an unfair and self-perpetuating monopoly, where people need the right connections to be hired into a junior position and then slowly gain the same experience. The app shortcuts this process and leads to fairer and more open competition between lawyers. Healthier business competition reduces costs overall, which benefits our overstretched justice and legal aid systems.


Arguments against:

  • The product is likely to perpetuate existing stereotypes and prejudices, and give them the facade of scientific respectability. This is particularly harmful if the underlying model is untested and not subject to robust evaluation and challenge.

  • The product would also be unfair to the defendant, as the prosecution's side gets to conduct jury selection first. The right to a "trial by your peers" is premised on the idea that there is at least a possibility that one or more of the jurors share the defendant's background, experiences, and ideology, and can therefore understand why a person might act in a specific way. This can be crucial to giving evidence its due weight. By identifying these jurors as "suspicious" and excluding them, the app undermines this ideal.

  • The application is not benign. Serving on a jury is not just a duty but a right. In the US, Strauder v. West Virginia established that excluding jurors solely on the basis of race is unconstitutional. Using this algorithm to exclude jurors on the basis of their identity is therefore not only unethical but may also camouflage illegal behavior.

  • The use of AI to analyze public data will further raise questions, and provoke protest, about the intersection of surveillance technology, privacy, and consent.
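To make the hypothetical more concrete, here is a minimal sketch of the kind of "appeal style" profiling the imagined app might perform. Everything here is invented for illustration: the class names, the keyword lists, and the scoring rule are assumptions standing in for whatever statistical model a real product would use.

```python
from dataclasses import dataclass

# Hypothetical keyword lists; a real system would rely on trained
# language models, not a handful of hand-picked words.
EMOTIONAL_WORDS = {"feel", "heart", "unfair", "tragic", "love"}
FACTUAL_WORDS = {"data", "evidence", "statistics", "study", "percent"}

@dataclass
class JurorProfile:
    name: str
    posts: list  # public social-media posts (field names are invented)

def appeal_style(profile: JurorProfile) -> str:
    """Guess whether a juror responds to 'emotional' or 'factual'
    appeals by counting keyword hits across their posts — a crude
    stand-in for the predictive model the article imagines."""
    emotional = factual = 0
    for post in profile.posts:
        words = set(post.lower().split())
        emotional += len(words & EMOTIONAL_WORDS)
        factual += len(words & FACTUAL_WORDS)
    if emotional == factual:
        return "unknown"
    return "emotional" if emotional > factual else "factual"

juror = JurorProfile("J. Doe", ["The study data and statistics were clear"])
print(appeal_style(juror))  # prints "factual"
```

Even this toy version illustrates the ethical problem above: the classifier's output depends entirely on which words its designers chose to count, yet it would reach an attorney with the appearance of objective measurement.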


Now, what do you think the future of AI's involvement in jury selection should be? After viewing both sides, it becomes clear that ethical problems surrounding emerging AI technology are extremely complex. With multiple stakeholders, each with their own ethical perspectives, it is challenging, or even impossible, to find a solution that satisfies everyone's valid moral views. However, we still have a moral responsibility to try to maximize good and minimize harm in an intimidating, unpredictable world.



5 Comments

23 Dec 2023
Rated 5 out of 5 stars.

im gonna touch you


George Gao
17 Oct 2023

Very good point 👍

03 Sep 2023

I didn't realize this is so complicated. Good discussion.


21 Aug 2023

Nice ideas in this article.

I believe AI may come to the assistance of lawyers eventually. Even if AI is banned from the courtroom, lawyers may still use AI to prepare scripts or document evidence.

AI in the jury would be much more controversial, considering that AI would need to be very sophisticated to imitate impartial, yet human, decisions.


Judy Gao
21 Aug 2023

Wow, never thought about this before! Very interesting to know!
