Session
Organizer 1: Civil Society, African Group
Speaker 1: MOHAMED FARAHAT, Civil Society, African Group
Speaker 2: Jimena Sofia Viveros Alvarez, Intergovernmental Organization, Intergovernmental Organization
Speaker 3: Prateek Sibal, Intergovernmental Organization, Intergovernmental Organization
Speaker 4: Malek khachlouf, Government, African Group
Format
Roundtable
Duration (minutes): 60
Format description: The session will bring together speakers with extensive experience in the topic, so a roundtable is a suitable format for the discussion: it gives the audience an overview of the topic before they engage in the discussion themselves.
Policy Question(s)
To what extent does the use of AI technologies in the courts uphold the rule of law?
What safeguards should be in place to respect and protect human rights?
What ethical principles should judges, prosecutors, and other actors follow when using AI in the courts?
What will participants gain from attending this session? The main objective of the session is to discuss the concerns raised by AI-induced biases and their far-from-subtle implications for a defendant's right to a fair trial, and to agree on essential principles and safeguards that should be adopted and taken into account when using AI in courts, so as to guarantee and protect the right to justice and a fair trial.
Description:
The integration of AI into the legal system, while technologically impressive on the surface, raises significant concerns about the infringement of a defendant's fundamental rights, particularly the right to a fair trial. The biases embedded in AI algorithms can potentially violate several cornerstone principles of justice. Consider the landmark case of State v. Loomis (2016) in Wisconsin: Eric Loomis was sentenced to six years in prison, partly based on a risk assessment produced by an AI tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). Loomis argued that the use of COMPAS violated his rights, as he was unable to challenge the scientific validity and potential biases of the tool. Similarly, if an AI tool used for evidence analysis is fed historical data that contains racial biases, the AI is likely to perpetuate those biases. A study of risk assessment tools used in criminal sentencing demonstrated this, showing that such tools can (and tend to) inherit and amplify the racial biases present in historical arrest data.
There are many real-life instances where biased AI has led to questionable trial outcomes. In the case of "People v. Bridges" in Michigan (2019), Robert Bridges was wrongfully arrested based on a flawed facial recognition match: the software erroneously identified him as a shoplifting suspect, despite significant physical differences. His case highlights the dangers of relying on AI without adequate safeguards in place.
Expected Outcomes
The session seeks to produce a set of recommendations, guiding principles, and safeguards for a fair trial when AI systems are used in the courts. Exchanging experience with the audience is essential to arriving at concrete principles. Based on the discussion, the organizer will form a working group to dive deeper into the topic and examine it in detail. The organizer, together with participants interested in the topic, will work to develop a policy paper at the national and regional levels.
Hybrid Format: the session will have an onsite moderator and an online moderator; the online moderator will follow up on interventions by online participants.