Session
Other
Sub-theme description: “Internet & Societal Challenges”; “Gender Equality”; and “Reduced Inequalities”
Organizer 1: Bernard Shen, Microsoft Corporation
Speaker 1: Scott Campbell, Intergovernmental Organization, Intergovernmental Organization
Speaker 2: David Reichel, Intergovernmental Organization, Intergovernmental Organization
Speaker 3: Layla El Asri, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Wafa Ben-Hassine, Civil Society, African Group
Speaker 5: Sana Khareghani, Government, Western European and Others Group (WEOG)
Mr. Bernard Shen, Assistant General Counsel – Human Rights; Corporate, External and Legal Affairs; Microsoft
We will identify an online moderator to enable online participation.
Mr. Bernard Shen, Assistant General Counsel – Human Rights; Corporate, External and Legal Affairs; Microsoft
Panel - 90 Min
A multiplicity of stakeholders will work together to shape the agenda. Mr. Bernard Shen, Assistant General Counsel – Human Rights, in the Corporate, External and Legal Affairs department of Microsoft, is listed as the organizer. However, he will work closely with the speakers to ensure that the multi-dimensional perspectives, experiences, and expertise of the speakers and their respective organizations/sectors are reflected in the agenda for the session and in the questions and issues that will be discussed during the workshop. The speakers reflect diversity in gender, nationality, and sector/stakeholder roles and expertise (including government, intergovernmental organization, civil society, private sector, and technical roles and expertise), as well as regional and international perspectives. Discussion during the workshop session will be moderated by Mr. Bernard Shen, whose work at Microsoft focuses on human rights policy and practice in cloud technology and services, including AI technologies and services. Given the focus of his work, he is well positioned to serve as moderator for this topic. He will focus on facilitating an interactive discussion among the speakers, as well as between the speakers and the audience.
The workshop has a panel that is diverse in gender, nationality (six nationalities among the moderator and the five speakers), and stakeholder and sector perspectives and expertise, including government, intergovernmental organization, civil society, private sector, and technical roles and expertise, as well as regional (including the Global South) and international perspectives.

The workshop will be organized and moderated by Mr. Bernard Shen, Assistant General Counsel – Human Rights, in the Corporate, External and Legal Affairs department at Microsoft. His work at Microsoft focuses on human rights policy and practice in cloud technology and services, including AI technologies and services. Given the focus of his work, he is well positioned to organize the workshop and serve as moderator. He will work closely with the speakers to prepare for and shape the workshop and to develop a plan for a dynamic discussion of the questions on the agenda. During the workshop, he will focus on facilitating an interactive discussion among the speakers, and between the speakers and the audience.

Ms. Peggy Hicks is a national of the United States and is based in Geneva, Switzerland. She is the Director of the Thematic Engagement, Special Procedures and Right to Development Division in the Office of the United Nations High Commissioner for Human Rights (OHCHR). She brings global experience and perspectives to the discussion of the opportunities and concerns in the use of AI, including the risk of unfair bias on the basis of age, gender, disability, race, ethnicity, origin, religion or economic or other status; the mitigation of those risks; and the pursuit of responsible use of AI in support of the Sustainable Development Goals, especially SDG#5 (Gender Equality) and SDG#10 (Reduced Inequalities).

Mr. David Reichel is a national of Austria and is based in Vienna, Austria. His work at the European Union Agency for Fundamental Rights (“FRA”) assesses the pros and cons for fundamental rights of using artificial intelligence and big data for public administration and business purposes in selected EU Member States (including concerns regarding discrimination in data-supported decision making). Given his role and work at FRA, he is uniquely positioned to provide EU perspective and expertise on the responsible use of AI to mitigate the risk of unfair bias, as well as on the opportunities to use AI to advance fundamental rights.

Ms. Sana Khareghani is a national of Canada and is based in the United Kingdom. As Deputy Director and Head of the joint Office of Artificial Intelligence for the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS) in the UK government, she brings a governmental perspective to this multi-stakeholder dialog on both the concerns and the opportunities regarding human rights in the use of AI.

Ms. Wafa Ben-Hassine is a national of both Tunisia and the United States and is based in Tunisia. As Access Now’s policy and advocacy lead for the Middle East and North Africa (MENA) region, she brings to the discussion a civil society perspective as well as experience with issues and concerns in the Global South.

Ms. Layla El Asri is a national of both Morocco and France. She has a Ph.D. in computer science and is a research manager at the Microsoft Research lab in Montreal, Canada. Her research is on conversational agents and machine learning; she studies conversational agents that accomplish tasks, such as personal assistants. Her expertise in cutting-edge AI technology will help orient the discussion with a shared and deeper understanding of AI.

The following are more detailed bios for the moderator and the five speakers:

[Organizer & Moderator] Mr. Bernard Shen is an Assistant General Counsel in the Corporate, External, and Legal Affairs department at Microsoft Corporation. His work focuses on Microsoft’s policy and practice on human rights across its products and services, and on engagement with external stakeholders on human rights issues and policies. Bernard has also provided legal support for various Microsoft products and technologies, including Windows, cloud services, silicon technology, and health solutions. He serves as Co-Chair of the Policy Committee of the Global Network Initiative, and as Immediate Past Chair of the International Practice Section of the Washington State Bar Association. He worked in a business capacity in the telecommunications industry before becoming an attorney. Bernard received his JD from Northwestern University School of Law, his MBA from Cornell University Johnson Graduate School of Management, and his Bachelor of Commerce from the University of Toronto.

More detailed bios for the five speakers:

Ms. Peggy Hicks: Since January 2016, Peggy Hicks has served as Director of the Thematic Engagement, Special Procedures and Right to Development Division at the UN’s human rights office. From 2005 to 2015, she was Global Advocacy Director at Human Rights Watch, where she was responsible for coordinating Human Rights Watch’s advocacy team and providing direction to its advocacy worldwide. Ms. Hicks previously served as Director of the Office of Returns and Communities in the UN mission in Kosovo and as Deputy High Representative for Human Rights in Bosnia and Herzegovina. She has also worked as Director of Programs for the International Human Rights Law Group (now Global Rights), as a clinical professor of human rights and refugee law at the University of Minnesota Law School, and as an expert consultant for the UN High Commissioner for Human Rights. Ms. Hicks is a graduate of Columbia Law School and the University of Michigan.

Mr. David Reichel works as a researcher in the Freedoms and Justice Department of the European Union Agency for Fundamental Rights (FRA). His areas of expertise include statistical data analysis, data quality, and statistical data visualisation. He has extensive experience in working with data and statistics in an international context. He manages FRA’s project on “Artificial Intelligence, Big Data and Fundamental Rights”, which assesses the pros and cons for fundamental rights of using artificial intelligence (AI) and big data for public administration and business purposes in selected EU Member States, including discrimination in data-supported decision making.

Ms. Sana Khareghani is Deputy Director and Head of the Office of Artificial Intelligence, DCMS/BEIS, HMG. She has twenty years’ experience in technology, business, and consulting across the UK, North America, continental Europe, and the Middle East. She is now responsible for running the joint Office for Artificial Intelligence for the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS) within HMG.

Ms. Wafa Ben-Hassine is a Policy Counsel for Access Now, a global organization defending and extending the digital rights of users at risk. She leads Access Now’s policy and advocacy arms in the Middle East and North Africa (MENA) region. She is a New York-qualified attorney specializing in international law, human rights, and technology.

Ms. Layla El Asri is a research manager at Microsoft Research Montreal. After earning a Ph.D. in computer science from Université de Lorraine in 2016, she joined the team at Maluuba in Montreal as a research scientist. Maluuba was acquired by Microsoft in 2017 and is now a research lab within Microsoft Research. The mission of this lab is to create literate machines that can read, reason, and communicate. Layla’s research is on conversational agents and machine learning. She studies conversational agents that accomplish tasks, such as personal assistants, and looks for ways to build such agents automatically from data.
The workshop will consist of two parts, each of which will include interactive discussion among the speakers and with the audience.
PART I:
• A brief explanation of how machine learning and AI work, to provide a common understanding and context for the discussion of specific issues to follow.
• Discuss the benefits and opportunities of AI to advance human rights and sustainable development goals.
• Explore questions and concerns on the responsible use of AI, in particular regarding unfair treatment on the basis of age, sex, disability, race, ethnicity, origin, religion or economic or other status. How can unfair bias in the use of AI be identified and addressed, and what does being transparent and accountable mean in this context?
PART II:
• Explore ways and opportunities for different stakeholders to collaborate and share learnings and good practices on (1) identifying and mitigating the risk of unfair bias in the use of AI, (2) being transparent in the use of AI, and (3) being accountable for the use of AI.
• Discuss laws or government policies and actions that promote innovation and responsible and effective use of AI, particularly to address the concerns of unfair bias, and the opportunities to advance human rights and sustainable development goals, including SDG#5 (Gender Equality) and SDG#10 (Reduced Inequalities).
Section VIII (Content of the Session) outlines the agenda, questions and issues to be discussed. The workshop will consist of two parts, each of which will include interactive discussion among the speakers, as well as between the speakers and the audience (including online participants). Bernard Shen, the organizer/moderator, will work with the speakers (including through one or more preparation calls) to prepare for and shape the workshop, and to develop a plan to create a dynamic discussion of the questions on the agenda, both among the speakers and with the audience. This will include the organizer/moderator and the speakers working together to plan detailed questions, issues, and examples for discussion within the agenda outlined in Section VIII. During the workshop, the moderator will focus on facilitating an interactive discussion among the speakers, and between the speakers and the audience (including online participants). Audio-visual materials such as PowerPoint slides or videos may be used during different parts of the discussion to provide examples or other content relevant to the discussion.
With advances in cloud computing empowering the use of machine learning and artificial intelligence technology (collectively, “AI”) across many fields of human endeavour, the workshop will explore prevalent questions and concerns regarding the human rights risks and opportunities in the use of AI. In particular, how do we identify and address potential unfair bias in the use of AI? What does it mean for the use of AI to be transparent and to be accountable for unfair treatment on the basis of age, gender, disability, race, ethnicity, origin, religion or economic or other status? Through a multi-stakeholder interactive discussion among the speakers and with the audience, the session aims to achieve and share a deeper understanding and multi-faceted consideration of the human rights opportunities and risks in the use of AI, in order to support collaboration and efforts to work towards effective mitigation of risks and pursuit of opportunities to advance human rights.
We will have an online moderator to enable online participation and will encourage questions from online participants as well as the audience in the room. We have not confirmed the name of the online moderator.
The workshop will consist of two parts, each of which will include interactive discussion among the speakers and with the audience.
PART I:
• A brief explanation of how machine learning and AI work, to provide a common understanding and context for the discussion of specific issues to follow.
• Discuss the benefits and opportunities of AI to advance human rights and sustainable development goals.
• Explore questions and concerns on the responsible use of AI, in particular regarding unfair treatment on the basis of age, sex, disability, race, ethnicity, origin, religion or economic or other status. How can unfair bias in the use of AI be identified and addressed, and what does being transparent and accountable mean in this context?
PART II:
• Explore ways and opportunities for different stakeholders to collaborate and share learnings and good practices on (1) identifying and mitigating the risk of unfair bias in the use of AI, (2) being transparent in the use of AI, and (3) being accountable for the use of AI.
• Discuss laws or government policies and actions that promote innovation and responsible and effective use of AI, particularly to address the concerns of unfair bias, and the opportunities to advance human rights and sustainable development goals, including SDG#5 (Gender Equality) and SDG#10 (Reduced Inequalities).
Report
IGF 2018 Pre-Session Synthesis & Short Report Template
- Session Type (Workshop, Open Forum, etc.): Workshop
- Title: Accountability for Human Rights: Mitigate Unfair Bias in AI
- Date & Time: 13 November 2018, 09:00 – 10:30
- Organizer(s): Bernard Shen, Microsoft Corporation
- Chair/Moderator: Bernard Shen and Camille Vaziaga, Microsoft Corporation
- Rapporteur/Notetaker: Bernard Shen, Microsoft Corporation
- List of speakers and their institutional affiliations (Indicate male/female/transgender male/transgender female/gender variant/prefer not to answer):
Speaker 1: Scott Campbell, Office of the United Nations High Commissioner for Human Rights
Speaker 2: David Reichel, EU Agency for Fundamental Rights (FRA)
Speaker 3: Layla El Asri, Microsoft Corporation
Speaker 4: Wafa Ben-Hassine, Access Now
Speaker 5: Sana Khareghani, Office of Artificial Intelligence, Department for Digital, Culture, Media and Sport (DCMS) and Business, Energy and Industrial Strategy (BEIS), HMG
- Theme (as listed here): Human Rights, Gender & Youth
- Subtheme (as listed here): “Internet & Societal Challenges”; “Gender Equality”; and “Reduced Inequalities”
- Please state no more than three (3) key messages of the discussion. [150 words or less]
· Technology can help detect unfair bias and human rights violations.
· Build trust with people impacted by AI through transparency: design AI systems that explain their recommendations (e.g., the characteristics that influence a recommendation or prediction); see the illustrative sketch after these key messages.
· It is important not to leave out parts of the world from the benefits of AI (e.g., due to lack of access to the internet).
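To make the first two key messages concrete, the following minimal sketch (illustrative only, not material presented at the session) assumes Python with NumPy and scikit-learn and uses entirely synthetic data with hypothetical feature names. It shows one simple way to flag a disparity in outcomes across a protected attribute, and one simple way to surface which characteristics influence a model’s recommendation.

```python
# Illustrative only: a toy disparity check and per-feature explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical hiring-style data: two features plus a protected attribute (0/1).
n = 1000
experience = rng.normal(5, 2, n)      # years of experience
test_score = rng.normal(70, 10, n)    # assessment score
group = rng.integers(0, 2, n)         # protected attribute, e.g. gender
labels = (0.3 * experience + 0.05 * test_score + rng.normal(0, 1, n) > 5).astype(int)

X = np.column_stack([experience, test_score])
model = LogisticRegression().fit(X, labels)
predictions = model.predict(X)

# 1) Detecting potential unfair bias: compare selection rates across groups
#    (a demographic-parity style check; other fairness metrics also exist).
rate_a = predictions[group == 0].mean()
rate_b = predictions[group == 1].mean()
print(f"Selection rate, group A: {rate_a:.2f}; group B: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")

# 2) Transparency: show which characteristics drive an individual prediction.
#    For a linear model, coefficient * feature value is a simple contribution score.
feature_names = ["experience", "test_score"]
contributions = model.coef_[0] * X[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: contribution {value:+.2f} to the score for applicant 0")
```

In practice, the appropriate fairness metric and explanation method depend on the context (e.g., hiring, insurance, criminal justice) and on the type of model used; this sketch only indicates the general shape of such checks.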
- Please elaborate on the discussion held, specifically on areas of agreement and divergence. [150 words] Examples: There was broad support for the view that…; Many [or some] indicated that…; Some supported XX, while others noted YY…; No agreement…
There was broad consensus on the importance of a multi-sectoral and multi-stakeholder approach.
- Please describe any policy recommendations or suggestions regarding the way forward/potential next steps. [100 words]
· Importance of conducting human rights impact assessments.
· State actors have a higher duty to protect human rights than non-state actors. Therefore, state actors’ use of AI (e.g., in law enforcement and criminal justice) needs to meet a far greater duty of care and responsibility.
· Peer learning: creating an environment in which human rights can be discussed among peers (e.g., peer-to-peer programs for companies).
- What ideas surfaced in the discussion with respect to how the IGF ecosystem might make progress on this issue? [75 words]
· Ongoing multi-stakeholder dialog, collaboration, and relationship/trust building are critical.
· Conferences tend to segregate into those for policy experts and those for data scientists. Data scientists are investing significant research effort in how the science can help humans better address unfair bias and other risks. We need more cross-pollination, dialog, and understanding between the two communities.
- Please estimate the total number of participants.
130
- Please estimate the total number of women and gender-variant individuals present.
Approximately half.
- To what extent did the session discuss gender issues, and if to any extent, what was the discussion? [100 words]
Much of the discussion on unfair bias used gender bias (as well as age discrimination) as the context (e.g., bias in job hiring and in insurance premiums).