IGF 2019 WS #399
Talking ethics, writing laws and what’s left for us and AI

    Subtheme

    Organizer 1: Fanny Hidvegi, Access Now
    Organizer 2: Paus Inger, Vodafone Institute for Society and Communication

    Speaker 1: Anna Bacciarelli, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Nuria Oliver, Private Sector, Western European and Others Group (WEOG)
    Speaker 3: Malavika Jayaram, Civil Society, Asia-Pacific Group
    Speaker 4: Max Senges, Private Sector, Western European and Others Group (WEOG)

    Moderator

    Fanny Hidvegi, Civil Society, Eastern European Group

    Online Moderator

    Paus Inger, Private Sector, Western European and Others Group (WEOG)

    Rapporteur

    Fanny Hidvegi, Civil Society, Eastern European Group

    Format

    Other - 90 Min
    Format description: Micro-multistakeholder community debate

    Policy Question(s)

    In 2019, artificial intelligence is still a buzzword: depending on the agenda and taste of a given event organiser, it serves as an umbrella term for policy debates about the societal and individual harms and benefits of automated decision-making, big data, machine learning and robots.

    While all these conversations about Artificial Intelligence with a capital A and I are painfully stuck between voluntary ethics guidelines, sandboxing for innovation, and calls for the application of human rights frameworks, the use of AI systems is already being written into laws.

    Instead of generally comparing the most prevalent policy tools on the table that are characterised as frameworks for artificial intelligence (e.g. ethics guidelines, impact assessments, regulations), we will pick one very specific and well-defined AI-related situation/decision/case and examine what answer or solution each of those policy tools would give to that problem.

    The three policy tools we will consider using as frameworks:

    Ethics guidelines: On Monday 8 April 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG) published its “Ethics Guidelines for Trustworthy AI”. The Guidelines introduce and define the concept of trustworthy AI as a voluntary framework for achieving lawful, ethical, and robust AI. Alternatively, we would pick a set of ethics guidelines developed by a private sector actor.
    AI Now’s algorithmic impact assessment model
    A human rights-based, normative framework: we believe that by the time of the event the Council of Europe will have released a draft framework relevant to AI

    SDGs

    GOAL 16: Peace, Justice and Strong Institutions

    Description: We developed a format last year that worked really well. Based on the lessons learned, we adapted the format slightly for this year's session as follows:

    Introduction [10mins]:
    - session organisers
    - objectives and framing (but no presentation or speech)
    - explaining the format and the AI problem/case

    Small group discussions [30mins]:
    - we will break into three groups, one per policy tool
    - based on our outreach we hope to have relevant experts in the room but we also want to make sure that newcomers to the topic can enjoy the session as well
    - the "speakers" will be the small group leaders
    - we ask each group to pick someone who will report back - this person ideally is not the group leader so we have different people getting the chance to be active

    Debate / reporting back from small groups [3x10mins]:
    - each group presents how its policy tool answers or solves the problem at hand

    Outcome/conclusion [10-20mins]:
    - based on the reporting back, we will hold a vote in the room on which solution participants found most suitable to the problem


    Expected Outcomes: The expected outcome is to go one level deeper than the usual discussion of the differences between ethics and human rights, and between voluntary, self-regulatory and regulatory approaches, by examining a practical case to see if and how they reach different conclusions.

    The session description gives a detailed explanation of the format. In addition to this participatory and inclusive format, we will hold a preparatory call with our group leaders to discuss facilitation in the small groups, ensuring that many people get the opportunity to contribute. The debate and reporting back will be facilitated by the organisers.

    Relevance to Theme: The policy questions we plan to discuss during this session are relevant to this theme on multiple levels. First, we will explore ethical, legal and regulatory approaches to an emerging technology. Second, through this method, we will see the differences between local, regional and international governance models on a topic that is very closely tied to data. Finally, the session will contribute to the narrative of this theme because we will go beyond merely discussing these policy options: we aspire to assess them based on the solutions they provide and to judge whether those solutions are sufficient, adequate and desirable from the perspective of the outcome.

    Relevance to Internet Governance: Artificial intelligence has been one of the most prominent topics subject to policy debates, self-regulatory initiatives, technical research and innovation, and public debate in the past few years. Practically all stakeholders involved in internet governance are working on principles, norms, rules and decisions around AI systems.

    Online Participation

    We're not planning to use the online tool because it didn't work well last year: due to the small group discussions, it was not technically possible to enable actual interaction between online participants and the room.

    Proposed Additional Tools: We will publish our "case study" on Twitter and ask for feedback there.