IGF 2018 WS #11 AI Ethics: privacy, transparency and knowledge construction

    Room
    Salle VII
    Issue(s)

    Organizer 1: Yik Chan Chin, Xi'an Jiaotong-Liverpool University
    Organizer 2: Ansgar Koene, University of Nottingham
    Organizer 3: Kuo-Wei Wu, APNIC
    Organizer 4: Yang SHEN, School of Journalism and Communication, Tsinghua University

    Speaker 1: Yik Chan Chin, Civil Society, Asia-Pacific Group
    Speaker 2: Ansgar Koene, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Yeseul Kim, Civil Society, Asia-Pacific Group
    Speaker 4: Changfeng Chen, Civil Society, Asia-Pacific Group
    Speaker 5: Lina Chen, Private Sector, Asia-Pacific Group

    Additional Speakers

    Mr. Jasper Wang, Deputy Editor of Sina Weibo and Editor-in-Chief of the Weibo Think Tank in China. Mr. Wang has 10 years of working experience in the news business, social media and the journalism industry. With a special passion for the Internet industry, he moved to Weibo after working in the government's education department. His research covers information dissemination and user governance on social media platforms, development models of social and public opinion, and the ecological pattern of we-media. Drawing on the data of Weibo's more than 200 million daily active users, the Weibo Think Tank he manages is committed to working with universities and overseas academic institutions to research the evolution of social media, media convergence and development.

     

    Dr. Félicien Vallet, a privacy technologist at the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority. His mission is to analyse technical systems and study the challenges they raise from a privacy and data protection standpoint, in order to develop CNIL's position on these issues. While working closely with CNIL's legal experts, he regularly meets with professionals to help them design their systems in accordance with data protection requirements. He also participates in the work of several groups of data protection authorities, such as the European Data Protection Board (EDPB) and the International Working Group on Data Protection and Telecommunications (IWGDPT), in order to develop international standards for data protection. He holds engineering and doctoral degrees in Computer Science from Télécom ParisTech (obtained in 2007 and 2011 respectively).

     

    Mr. Yuqiang Chen, co-founder and chief research scientist of the 4th Paradigm, an AI start-up company in China. He received his bachelor's and master's degrees from the Computer Science department of Shanghai Jiao Tong University, where he majored in machine learning; his main research interests include transfer learning, large-scale machine learning and deep learning. Mr. Chen has published several papers at NIPS, AAAI, SIGKDD and ACL, and one of his works was reported by MIT Technology Review in 2011. Mr. Chen worked at Baidu, the largest search engine company in China, where he deployed online the first deep learning system for commercial advertising, built on a large model taking billions of raw features as input. He also worked at Toutiao, a leading mobile news feed company in China, helping to build one of the largest machine learning systems in the world. Later, with Wenyuan Dai and others, Mr. Yuqiang Chen co-founded "the 4th Paradigm", an AI development platform that enables enterprises to build their own AI applications in settings such as finance, telecommunications and Internet applications, and thereby significantly increase their operational efficiency.

     

    Mr. Jake Lucchi, Head of AI, Public Policy, Google Asia Pacific. Jake Lucchi leads Google's policy work on AI, among other areas, for the Asia Pacific region. In this role, he works with government, academia, industry and civil society to build an ecosystem in which AI can be leveraged to achieve economic and social benefit around the region, by promoting the positive development and application of AI while also establishing strong governance frameworks. He also works to amplify the voice of APAC on AI governance and adoption in various global fora. In addition to AI, Jake leads Google's work managing content for the region. Prior to joining Google he worked on a range of law and policy issues for the United Nations and INGOs, and as a consultant for the Thai government, for which he helped redesign the migrant labour regulatory scheme to prevent human trafficking. Originally from the US, he has lived full time in the APAC region since 2010 and speaks fluent Thai and intermediate Spanish. He holds a Juris Doctor (law) from Yale University and a BA, summa cum laude, in politics and philosophy from the University of Missouri. He is based in Hong Kong.

     

     

    Moderator

    Dr. Wu, Kuo-Wei

    Online Moderator

    Professor Shen, Yang

    Rapporteur

    Dr. Yik Chan Chin

    Format

    Panel - 90 Min

    Interventions

    The notion of ethics encompasses socially accepted moral rights, duties and behavioural norms deriving from a culture-specific tradition. It is therefore important to consider the role of culture in shaping AI ethics. For this panel, we have invited speakers of different cultures, professions, genders and geographical locations to discuss and share their insights, perspectives and local/global experiences on AI, its related ethical issues and possible solutions. Each speaker will be given six minutes to present their knowledge and views based on their expertise in dealing with AI, ethics, policy and regulatory issues. The presentations will be followed by a one-minute immediate audience response and ten minutes of discussion amongst the speakers themselves, moderated by the onsite moderator. Questions will then be invited from both the audience and online participants by the two moderators and put to panel members. The discussion time will be about 40 minutes in order to provide sufficient interaction amongst speakers, audience and online participants.

    Diversity

    The organisers of this panel come from different geographical locations (China, USA and UK) and different stakeholder groups (academia, social media companies and the technical sector), reflecting diversity in geography, sector and culture. The workshop proposer, Dr. Yik Chan Chin, is a first-time IGF proposer, and more than half of the organising team and speakers are first-time IGF organisers and speakers.

    The panel includes six speakers from China, the UK, France and the USA, one onsite moderator and one online moderator. The speakers are Dr. Yik Chan Chin and Prof. Changfeng Chen, Xi'an Jiaotong-Liverpool University and Tsinghua University, China; Dr. Ansgar Koene, University of Nottingham, UK, working group chair for the IEEE Standard for Algorithm Bias Considerations, Horizon Digital Economy Research Institute, UK; Mr. Jasper Wang, Deputy Editor of Sina Weibo; Dr. Félicien Vallet, a privacy technologist at the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority; Mr. Yuqiang Chen, co-founder and chief research scientist of the AI start-up the 4th Paradigm; and Mr. Jake Lucchi, Google. Dr. Kuo-Wei Wu from APNIC is the onsite moderator. Each speaker will speak for 6 minutes, followed by 1 minute of immediate audience response. This is followed by a 10-minute discussion amongst the speakers and a 30-minute question and answer session, both moderated by Dr. Wu.

    The issues discussed include prominent AI-related ethical issues in different sectors and societies, such as normative frameworks and regulatory models; algorithmic transparency and accountability; diversity; information filter bubbles; online rumours and post-truth; privacy-preserving machine learning; and practices used in dealing with these ethical issues. The panel speakers will also discuss recommendations for improvement.

    The session will be divided into three sections to maximise participation and discussion between panel members, onsite and online audiences. In the first section, each speaker will be given six minutes to present their knowledge and views based on their expertise in dealing with AI and its related ethical issues. Each presentation will be followed by a one-minute immediate response from the audience to increase their participation. In the second section, ten minutes will be allocated for discussion amongst the speakers themselves; the onsite moderator will prepare some AI ethics questions for them to answer. In the third section, questions will be invited from both onsite audiences and online participants by the two moderators and put to panel members. This section will last for 30 minutes. The discussion time of the whole session will thus be about 45 minutes, half of its 90-minute allocation, in order to provide sufficient interaction amongst speakers, audience and online participants.

    This panel will address how to apply or create policies, regulatory models, frameworks and best practices to tackle the ethical issues arising from the application of artificial intelligence in different sectors and societies, in order to achieve the dual objectives of protecting people's rights and enabling beneficial uses of artificial intelligence that contribute to social good. The ethical issues to be discussed in this panel include transparency, accountability, diversity, privacy protection, information filtering, privacy-preserving machine learning, and ethical norms. We will highlight perspectives from different national and organisational cultures through the experience of local and global institutions and private companies, such as China's leading social media company Sina Weibo, the AI start-up the 4th Paradigm, the Commission Nationale de l'Informatique et des Libertés (CNIL, the French data protection authority), the global Internet company Google, the global professional association IEEE, and academic researchers. Audiences and online participants will be invited to contribute during the Q&A following each opening statement from the panel, and again during the discussion segment, which will comprise more than half of the session.

    Online Participation

    Social media such as Twitter, Facebook, WeChat and Sina Weibo will be used to report on this workshop session and allow the online community to share observations. More importantly, to properly facilitate online participation, the following measures will be adopted: 1) Online attendees will have a separate queue and microphone for their questions/comments, which will rotate equally with the several microphones in the room. 2) The panel's onsite moderator will keep the online participation session open and will be in close communication with the trained online moderator, to relay questions and answers between the online participants and panel members and to make any adaptations necessary as they arise. 3) The online moderator will be trained in advance, will take part in the discussion of the issue and workshop development, and is prepared to manage the responsibilities of online moderation.

    Agenda

    Panel Introduction by Dr. Kuo-Wei Wu

    Presentations From Six Speakers

    Dr. Yik Chan Chin / Prof. Changfeng Chen (Xi'an Jiaotong-Liverpool University / Tsinghua University): AI policies and ethical frameworks

     

    Dr. Ansgar Koene (University of Nottingham): Governance frameworks for algorithmic transparency and accountability

     

    Mr. Jasper Wang (Sina Weibo): The Rumor Shredder – social media platforms have a new role in an age of post-truth

     

    Dr. Félicien Vallet (Commission Nationale de l'Informatique et des Libertés, CNIL): How can humans keep the upper hand? Presentation of CNIL's report on the ethical matters raised by artificial intelligence (link: https://www.cnil.fr/en/how-can-humans-keep-upper-hand-report-ethical-matters-raised-algorithms-and-artificial-intelligence)

     

    Mr. Yuqiang Chen (The 4th Paradigm): How will AI serve humans better in the future?

     

    Mr. Jake Lucchi (Head of AI, Public Policy, Google Asia Pacific): Ethical AI at Google

     

    Panel Discussions

     

    Q&A 

     

    Session Report

    - Session Type (Workshop, Open Forum, etc.):

    Panel  

    - Title:

    AI and Ethics: privacy, transparency and construction of knowledge

    - Date & Time:

    5 pm – 6:30 pm on 13th November.

    - Organizer(s)

    Yik Chan Chin, Xi'an Jiaotong-Liverpool University
    Ansgar Koene, University of Nottingham

    Kuo-Wei Wu, APNIC
    Yang Shen, School of Journalism and Communication, Tsinghua University

    - Chair/Moderator:

    On Site Moderator: Dr. Wu Kuo-wei, APNIC
    Online Moderator: Mr. Le Song

    - Rapporteur/Notetaker:

    Dr. Yik Chan Chin

    - List of speakers and their institutional affiliations (Indicate male/female/ transgender male/ transgender female/gender variant/prefer not to answer):

     

    Dr. Yik Chan Chin and Prof. Changfeng Chen, Xi’an Jiaotong-Liverpool University and Tsinghua University

    Dr. Ansgar Koene, University of Nottingham

    Mr. Jasper Wang, Deputy Editor of Sina Weibo, Editor-in-chief of Weibo Think Tank in China.

    Dr. Félicien Vallet,  a privacy technologist at the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority.

    Mr. Yuqiang Chen,  the co-founder and chief research scientist of the 4th Paradigm.

    Mr. Jake Lucchi, Head of AI, Public Policy, Google Asia Pacific.

     

    - Theme (as listed here):

    Emerging Technologies

    - Subtheme (as listed here):

    Internet Ethics

    - Please state no more than three (3) key messages of the discussion. [150 words or less]

    What are the most prominent ethical issues related to AI development and application?

    What is the current landscape of policies and regulations of AI ethics in China, the EU and the USA?

    What is the necessary policy environment to ensure the ethical development of AI?

     


    - Please elaborate on the discussion held, specifically on areas of agreement and divergence. [150 words] Examples: There was broad support for the view that…; Many [or some] indicated that…; Some supported XX, while others noted YY…; No agreement…

    Our panel involved stakeholders from a government agency (the Commission Nationale de l'Informatique et des Libertés, CNIL), IT and AI companies (Google, the 4th Paradigm and Sina Weibo) and academia/civil society (Xi’an Jiaotong-Liverpool University, Tsinghua University, the University of Nottingham and the IEEE working group on Algorithm Bias). The speakers' and organisations' geographical origins cover both Asia and Europe.

     

    The panel first discussed the AI policy and ethical frameworks developed in China and France, as well as by the professional organisation IEEE, and the underlying importance of developing ethical standards. It was understood that the ethical framework for AI is still in the process of discussion and formation, and no comprehensive standard has yet been proposed.

     

    Secondly, we discussed the consultation carried out by governmental agencies such as CNIL (the French data protection authority) in producing digital technology ethics, i.e. under the Digital Republic Bill 2016, and its widely inclusive approach to conducting the process. Two important founding principles, fairness and continued attention and vigilance, were discussed. Most importantly, we discussed the six policy recommendations made by CNIL on how to address AI ethics.

     

    Finally, three panel speakers from the private sector, from Google, Sina Weibo and the 4th Paradigm, shared their companies' policies and practices in addressing AI ethical issues such as privacy protection, preventing algorithmic bias, and improving algorithmic fairness and accountability.

     

    There was broad support for the view that international discussion of industrial standards, regulation and ethical guidance for AI and algorithms is both important and needed. Similar discussions have already been undertaken in different countries and regions. Such discussions must include stakeholders from different backgrounds, and at the national level a shared ethical code and normative framework needs to be established.

     

    Panel members also raised concerns about the potential chilling effect on freedom of expression caused by measures taken to refute fake news, such as credit rating of online users, when users may not have the capacity to have their stories fact-checked before publishing them.

     

     

     

    - Please describe any policy recommendations or suggestions regarding the way forward/potential next steps. [100 words]

    Panel members and workshop participants strongly endorsed the importance of involving different stakeholders, including academia, industry actors, NGOs and policymakers from different geographic regions, in discussing and addressing AI ethics. While European and American actors are often represented in the IGF forum, actors from Asia and other regions are less represented, particularly in discussions of ethical issues. Taking into account participants' feedback and the importance of cultural diversity in ethics research and debate, the panel members and the organiser are exploring opportunities to form a cross-country, cross-sector research collaboration on AI ethics (EU-China, and industry-government-academia).

    - What ideas surfaced in the discussion with respect to how the IGF ecosystem might make progress on this issue? [75 words]

    The IGF provided an open platform allowing different stakeholders to take part in this debate on AI ethics. This open platform is important for both policy deliberation and public education purposes. It would be helpful if the IGF could have more policymakers involved.

    - Please estimate the total number of participants.

    The total number of participants was between 50 and 60, the maximum capacity allowed for Room VII. About half of the would-be participants, queuing outside the room, were not allowed to enter by the UNESCO security staff (we assume because of safety concerns).

    - Please estimate the total number of women and gender-variant individuals present.

    About two-thirds of the participants in the room were men and one-third were women.

    - To what extent did the session discuss gender issues, and if to any extent, what was the discussion? [100 words]

    Our session did not discuss gender issues. 

     

     

    Long Report

     

    - Session Type (Workshop, Open Forum, etc.):

     

    Panel  

     

    - Title:

     

    AI and Ethics: privacy, transparency and construction of knowledge

     

    - Date & Time:

     

    5 pm – 6:30 pm on 13th November.

     

    - Organizer(s)

     

    Yik Chan Chin, Xi'an Jiaotong-Liverpool University
    Ansgar Koene, University of Nottingham

    Kuo-Wei Wu, APNIC
     

     

    - Chair/Moderator:

     

    On Site Moderator: Dr. Wu Kuo-wei, APNIC
    Online Moderator: Mr. Le Song

     

    - Rapporteur/Notetaker:

     

     

    Dr. Yik Chan Chin

     

     

    - List of speakers and their institutional affiliations (Indicate male/female/ transgender male/ transgender female/gender variant/prefer not to answer):

     

     

     

     

    Dr. Yik Chan Chin and Prof. Changfeng Chen, Xi’an Jiaotong-Liverpool University and Tsinghua University

     

    Dr. Ansgar Koene, University of Nottingham

    Mr. Jasper Wang, Deputy Editor of Sina Weibo, Editor-in-chief of Weibo Think Tank in China.

    Dr. Félicien Vallet,  a privacy technologist at the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority.

    Mr. Yuqiang Chen,  the co-founder and chief research scientist of the 4th Paradigm.

    Mr. Jake Lucchi, Head of AI, Public Policy, Google Asia Pacific.

     

    - Theme (as listed here):

     

    Emerging Technologies

     

     

    - Subtheme (as listed here):

     

    Internet Ethics

     

     

    - Please state no more than three (3) key messages of the discussion. [150 words or less]

     


     

    What are the most prominent ethical issues related to AI development and application?

    What is the current landscape of policies and regulations of AI ethics in China, the EU and the USA?

    What is the necessary policy environment to ensure the ethical development of AI?

     

    - Please elaborate on the discussion held, specifically on areas of agreement and divergence. [300 words] Examples: There was broad support for the view that…; Many [or some] indicated that…; Some supported XX, while others noted YY…; No agreement…

     

    Our panel involved stakeholders from a government agency (the Commission Nationale de l'Informatique et des Libertés, CNIL), IT and AI companies (Google, the 4th Paradigm and Sina Weibo) and academia/civil society (Xi’an Jiaotong-Liverpool University, Tsinghua University, the University of Nottingham and the IEEE working group on Algorithm Bias). The speakers' and organisations' geographical origins cover both Asia and Europe.

     

    The panel first discussed the AI policy and ethical frameworks developed in China and France, as well as by the professional organisation IEEE, and the underlying importance of developing ethical standards. It was understood that the ethical framework for AI is still in the process of discussion and formation, and no comprehensive standard has yet been proposed.

     

    Secondly, we discussed the consultation carried out by governmental agencies such as CNIL (the French data protection authority) in producing digital technology ethics, i.e. under the Digital Republic Bill 2016, and its widely inclusive approach to conducting the process. Two important founding principles, fairness and continued attention and vigilance, were discussed. Most importantly, we discussed the six policy recommendations made by CNIL on how to address AI ethics.

     

    Finally, three panel speakers from the private sector, from Google, Sina Weibo and the 4th Paradigm, shared their companies' policies and practices in addressing AI ethical issues such as privacy protection, preventing algorithmic bias, and improving algorithmic fairness and accountability.

     

    There was broad support for the view that international discussion of industrial standards, regulation and ethical guidance for AI and algorithms is both important and needed. Similar discussions have already been undertaken in different countries and regions. Such discussions must include stakeholders from different backgrounds, and at the national level a shared ethical code and normative framework needs to be established.

     

    Panel members also raised concerns about the potential chilling effect on freedom of expression caused by measures taken to refute fake news, such as credit rating of online users, when users may not have the capacity to have their stories fact-checked before publishing them.

     

     

     

    - Please describe any policy recommendations or suggestions regarding the way forward/potential next steps. [200 words]

     

    The policy recommendations include:

     

     

    To develop a shared ethical code and normative framework for AI in the private sector as well as at the national level;

     

    Protect minorities’ rights and avoid majoritarian tyranny

     

    Prevent AI from doing evil by developing more powerful regulatory tools and more regulation of unethical data collection.

     

    Different stakeholders from different sectors and disciplines need to be included in the process.

     

     

    Making algorithmic systems comprehensible

     

    Improving algorithmic systems' design: to prevent the black-box effect; to empower individuals with more autonomy

     

    Creating a national platform to audit algorithms: to ensure compliance with the law and the fairness and accountability of AI systems

     

    Increasing incentives for research on ethical AI: to foster research in computer science and engineering (such as explainable AI) as well as in the social sciences; to create fairer systems and raise collective awareness

     

    Strengthening ethics in companies: to organize dialogues between practitioners, specialists, stakeholders and communities involved; to deploy new governance tools such as ethics committees

     

    Construction of moral agency.

     

    Fostering education: to address everyone involved in the algorithmic chain (system developers and designers, professionals, citizens, etc.), to make sure everybody understands what is at stake

     

     

    - What ideas surfaced in the discussion with respect to how the IGF ecosystem might make progress on this issue? [150 words]

     

     

    The IGF provided an open platform allowing different stakeholders to take part in the debate on AI ethics. This open platform is important for both policy deliberation and public education purposes. It would be helpful if the IGF could have more policymakers involved.

     

     

    - Please estimate the total number of participants.

     

    The total number of participants was between 50 and 60, the maximum capacity allowed for Room VII. About half of the would-be participants, queuing outside the room, were not allowed to enter by the UNESCO staff (we assume because of safety concerns).

     

    - Please estimate the total number of women and gender-variant individuals present.

     

    About two-thirds of the participants in the room were men and one-third were women.

     

    - To what extent did the session discuss gender issues, and if to any extent, what was the discussion? [100 words]

     

    Our session did not discuss gender issues. 

     

    - Session outputs and other relevant links (URLs):

     

    Workshop organisers are exploring funding opportunities for a cross-country, cross-sector research collaboration on Internet governance and AI ethics based on the session's discussion.

    Participants also reported the session's discussion and its policy recommendations via their organisations' platforms.

    For instance, the report by the AI company “the 4th Paradigm”:

    https://mp.weixin.qq.com/s?__biz=MzAwMjM2Njg2Nw==&mid=2653146249&idx=1&sn=73b254eae8eb33b9ec2891617366668a&chksm=811ce075b66b6963fa6d4ed5f692ae5c074a42dc3826bf25a8e03bf8fc36cfc49b1068126e8a&mpshare=1&scene=1&srcid=1123mjP52tHSJnMwfVjA4vgF#rd