IGF 2025 WS #3 Right to Seek Asylum in the Age of AI: Risks and Safeguards

    Organizer 1: Civil Society, African Group
    Speaker 1: Malek Khachlouf, Government, African Group
    Speaker 2: Maya Dalloul, Civil Society, Asia-Pacific Group
    Speaker 3: Gustavo Fonseca Ribeiro, Civil Society, Latin American and Caribbean Group (GRULAC)
    Format
    Roundtable
    Duration (minutes): 90
    Format description: Given the nature of the topic, the organizer will invite expert speakers in the field to discuss the issue in front of the participants, providing them with an overview and the background information needed for the discussion. This approach will encourage participants to engage actively, making the roundtable format well suited to the purpose.
    Policy Question(s)
    Using AI systems in the context of asylum raises several challenges and questions: 1. Is the use of automated tools to assess asylum applications legal? 2. How can we use digital technologies responsibly in the area of displacement? 3. What is the purpose or motivation behind the deployment and use of digital technologies, particularly AI, and who makes these decisions? 4. What should, and could, a responsible and human-centred approach to digital technologies in displacement settings look like in practice? 5. What are the adverse impacts of handing over decisions on asylum applications to machines?
    What will participants gain from attending this session? Raised awareness of the impact of using AI on a particularly vulnerable group, refugees, and an understanding of the safeguards that should be in place to protect their rights in the age of AI.
    Description:

    While employing Artificial Intelligence (AI) and digital technology is a political choice, people on the move, along with their families and communities, often in vulnerable situations, find themselves at the 'sharp edges' of policies and practices over which they have no control and little to no agency in shaping. Automated tools are increasingly being used in public decision-making related to migration and asylum. Using automated systems in these processes carries a risk of algorithmic bias and of entrenching existing inequalities, racism, and other forms of structural discrimination. More generally, the use of automated tools in the public domain to identify, categorise and evaluate individuals raises important legal issues concerning fundamental rights. In the context of refugees, automated tools might violate fundamental rights such as the right to seek asylum and protection and the principle of non-refoulement, and may lead to detention and deportation. In addition, algorithmic bias could lead to the wrongful rejection of asylum applications.
    Expected Outcomes
    The workshop will be the first step towards publishing an article on the topic, and also towards organizing a capacity-building programme for actors in the field of refugees and migration on the impact of AI on refugees and their protection.
    Hybrid Format: An online moderator will be onsite to facilitate the engagement of online participants.