Emerging Technologies

IGF 2018 LIGHTNING SESSION #21 From Open data to open government for a better governance: the case of French policies

Etalab is the French State task force for open data and open government. Under this mandate, the French State has made the opening of public data and public algorithms a fundamental principle for improving government policies and actions. This lightning session will present the French open data and open algorithm policies, the ongoing work on data and algorithms, and how they contribute to a more open global governance.

 

IGF 2018 LIGHTNING SESSION #14 How blockchain can impact the Internet

Although almost ten years have passed since blockchain was introduced with Bitcoin, the technology remains mysterious to many people, particularly regarding how it could impact the Internet and its future. In this session, we highlight some key attributes that make blockchain stand out compared to earlier database technologies, and how it may in fact disrupt traditional banking, real estate, media, and a host of other sectors.
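As a rough illustration of the attribute most often cited in such comparisons (this sketch is not part of the session materials; the structure and field names are illustrative assumptions), a minimal hash-linked, append-only ledger can be written in a few lines of Python. Each block commits to its predecessor's hash, so retroactively editing any record invalidates every later link, which is what makes a blockchain tamper-evident in a way an ordinary mutable database is not.

```python
# Minimal sketch of an append-only, hash-linked ledger (illustrative only,
# not a real blockchain implementation and not taken from the session).
import hashlib
import json
import time


def make_block(data, prev_hash):
    """Create a block whose hash depends on its content and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("timestamp", "data", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    return block


def is_valid(chain):
    """Re-derive every hash; any retroactive edit breaks all later links."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({k: block[k] for k in ("timestamp", "data", "prev_hash")},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
print(is_valid(chain))                     # True
chain[1]["data"] = "alice pays bob 500"    # tampering with history...
print(is_valid(chain))                     # False: the edit is detectable
```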

IGF 2018 LIGHTNING SESSION #3 Emerging Technologies and Rights Future

 

- Title:  Emerging Technologies and Rights Future

- Organizer(s): Dynamic Coalition IRPC 

- Chair/Moderator: Hanane Boujemi 

- Rapporteur/Notetaker: Minda Moreira

- List of speakers and their institutional affiliations:

Sarah Moulton (female): Senior Technology Innovation Analyst, NDI (NGO)

IGF 2018 EMERGING TECHNOLOGIES

IGF 2018 Report

“Emerging Technologies: Fostering benefits, managing risks through multistakeholder governance”

- Session Type: Panel discussion - Main Session

- Title: Emerging Technologies: Fostering benefits, managing risks through multistakeholder governance

- Date & Time: Monday, 12 November 2018, 10:00-11:20

- Organizer(s): Christoph Steck, Nataša Glavor, Wafa Dahmani, Raquel Gatto

IGF 2018 OF #21 What future for the Internet?

The Internet of the next decade will be built around its users. It will have to be a human-centric Internet, an Internet of values, that ensures privacy and data control and provides a more inclusive, transparent and democratic digital environment for all.

The European Commission has launched the Next-Generation Internet initiative, aiming to design this Internet of values. To this end, we have engaged the different stakeholders who will help us build the Internet of the future: top innovators, startups and civil society.

IGF 2018 WS #427 AI will solve all problems. But can it?

Additional Speakers

Prof. Karen Yeung (Birmingham Law School & School of Computer Science)

Agenda

Opening of the session by the co-organizers [5-10mins]

  • Fanny Hidvegi (Access Now)
  • Jan Gerlach (Wikimedia Foundation)
  • Charlotte Altenhöner-Dion (Council of Europe)
  • Nicolas Suzor (QUT Law School)

 

The opening introduction will be short; we will focus on explaining the format and inviting all participants to contribute actively to the session.

 

Small group discussions [50mins]

  • The four group leaders will kick off the group conversations, which will be framed by the main pros and cons prepared by the assigned speakers. There will be no traditional presentations, but we might use one slide (or flip chart) each for the pros and the cons.
  • The participants of the groups will discuss each issue with the goal of preparing a short set of talking points for the debate in the second part of the workshop.

 

Debate [20-25mins]

  • The groups will reconvene for a short debate.
  • Selected representatives (not necessarily the original moderators) will present the outcomes of the group discussions.

 

Vote and closing of the session [5-10mins]

  • Instead of a win/lose vote on each issue, we will develop a range of confidence about the applicability of AI for each issue in the near term. 
  • Participants will express this range of confidence based on the small group discussion and the debate. 

IGF 2018 WS #227 BLOCKCHAIN FOR SOCIAL AND HUMANITARIAN APPLICATIONS

Agenda

The agenda of the session will develop as follows:

The moderator opens the session by introducing the discussion (2 minutes). He then gives the floor to the speakers, who will present their points of view according to the sector they represent (academia, civil society, private sector, technical community, government); each will have 8 minutes for their presentation. A question-and-answer session follows (20 minutes), and finally the conclusion (3 minutes).

Our Speakers will develop the following topics:

  • Use cases from active projects developed with Blockchain technology.

  • Criteria for determining when Blockchain technology is relevant to the development of a solution.

  • Viability and sustainability of the projects developed with Blockchain technology for social and humanitarian application.

  • Examples of synergies that have developed between governments, non-profit organizations and private companies to solve social and humanitarian problems with Blockchain technology.

  • Identification of social and humanitarian problems that could be mitigated by applications built with Blockchain technology but for which there are no use cases so far.

IGF 2018 WS #182 Artificial Intelligence for Human Rights and SDGs

Additional Speakers

Final speakers

  • Mr. Marko Grobelnik, Co-Chair, Artificial Intelligence Laboratory, Jožef Stefan Institute (Slovenia)
  • Ms. Nnenna Nwakanma, Interim Policy Director at the World Wide Web Foundation (Nigeria and USA)
  • Ms. Silvia Grundmann, Head of Media and Internet Division and Secretary to CDMSI, Council of Europe
  • Mr. Thomas Hughes, Executive Director at ARTICLE 19 (UK)
  • Ms. Liudmyla Romanoff, Data Privacy and Data Protection Legal Specialist, UN Global Pulse (USA) (Remote)
  • H. E. Mr Federico Salas Lotfe, Ambassador Extraordinary and Plenipotentiary, Permanent Delegate of Mexico to UNESCO (México)
  • Ms. Elodie Vialle, Head of Journalism & Technology Desk at Reporters Without Borders (France)

IGF 2018 WS #231 AI: Ethical and Legal Challenges for Emerging Economies

There was a good discussion on the usability of AI and its effects in developing countries and emerging economies. Speakers highlighted how emerging technologies affect us.

At the beginning of the panel, Nnenna Nwakanma of the World Wide Web Foundation, quoting Sir Tim Berners-Lee, said the "initial thinking was that if we bring technology to human beings, they will do good things with it." She shared some of the outcomes of the Web Foundation's work on AI. In particular, she highlighted the use of AI in service-delivery sectors such as health, agriculture and other government services. However, she said, one of the major issues for AI is multilingualism: given the dynamism of multilingualism, AI deployed in Africa faces a serious risk of failure. She also discussed the design of code, noting that code heavily reflects the people who write it. She stressed that coding should be unbiased and that an inclusive approach could help achieve this.

Prof. Dr. Liu Chuang highlighted the use of AI in China and how China is leading the field. Even in China, she said, AI is a very hot topic, not only in commerce but everywhere, in areas such as earthquake monitoring, restaurants, AI-enabled ports and education. Because the technology is new, it is changing a great deal about society, and AI brings both benefits and harms, so its advantages need to be thought through in advance, along with principles for AI development. Developing countries should pay more attention to AI and education; otherwise they will not be able to seize these advantages. Besides education, China is also paying more attention to legislation. She noted that a formal legal framework specifically for AI has not yet arrived, but identified four underlying areas: big data, human behavior, cloud computing or e-computing, and high-speed communication, for which China already has a serious legal system in place. At present, she said, most attention goes to the advantages AI offers for the economy, education and research.

Mr. Bikash Gurung shared how AI is growing in Nepal, a tiny country situated between the giant economies of China and India. Speaking about AI's progress in Nepal, he said a few companies are leading the AI revolution, followed by many community organizations such as AI for Development, Pilot Technologies, Cloud Factories and others. Technology has been built for the visually impaired, and another system provides smart, data-driven, multicultural intelligence; there are also emergency-response and drone-delivery systems and a robot restaurant. There is, however, a knowledge gap in Nepal: this technology is not built locally, and people are trying hard to catch up with it. Nepal also faces biased algorithms that do not reflect its own choices, so developers have to contend with developed-country perspectives. That has been one of the challenges.

Prof. KS Park analyzed the use of AI from a freedom-of-expression perspective. He raised the question of liability for AI systems whose functioning violates privacy or freedom of expression, arguing that liability should be based on the Safe Harbor principle. The second set of issues he discussed was economic: just as in capitalism those who hold capital can exploit other people by making them depend on that capital to create value, as robots start providing labor that replaces human labor, there will be more inequality. Another issue he raised was the use of algorithms, which may be biased through adaptation to particular cultures, languages and other aspects. The fourth issue was the ethical challenge of data monopoly. Many people can have copies of an AI system; what makes or breaks it is whether you have the data. Who is building the silos of training data? That will also decide resource allocation and distribution. These are the challenges. Prof. Park also argued that AI is just a technology and does not need a specific law; however, to avoid data monopoly, there could be more legislative initiatives that encourage people to share more data, but equitably.

To summarize the workshop, it was agreed that AI is basically a technology and could be governed through an ethical approach and inclusive design. It was also agreed that the data used is significant and should be neutral in its use.

Additional Speakers
  1. Prof. KS Park, OpenNet Korea
  2. Nnenna Nwakanma, Interim Policy Director, Web Foundation

IGF 2018 WS #11 AI Ethics: privacy, transparency and knowledge construction

Additional Speakers

Mr. Jasper Wang, Deputy Editor of Sina Weibo, Editor-in-Chief of the Weibo Think Tank in China. Mr. Wang has 10 years of working experience in the news business, social media and the journalism industry. With a special passion for the Internet industry, he moved to Weibo after working in a government education department. His research covers information dissemination and user governance on social media platforms, development models of social and public opinion, and the ecology of we-media. Drawing on data from Weibo's more than 200 million daily active users, the Weibo Think Tank he manages is committed to working with universities and overseas academic institutions to research the evolution of social media, media convergence and development.

 

Dr. Félicien Vallet, a privacy technologist at the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority. His mission is to analyze technical systems and study the challenges they raise from a privacy and data protection standpoint in order to develop CNIL's position on these issues. While working closely with the CNIL's legal experts, he regularly meets with professionals to help them design their systems in accordance with data protection requirements. He also participates in the work of several groups of data protection authorities, such as the European Data Protection Board (EDPB) and the International Working Group on Data Protection and Telecommunications (IWGDPT), in order to develop international standards for data protection. He holds engineering and doctoral degrees in Computer Science from Télécom ParisTech (obtained in 2007 and 2011 respectively).

 

Mr. Yuqiang Chen, co-founder and chief research scientist of the 4th Paradigm, an AI start-up company in China. He received his bachelor's and master's degrees from the CS department of Shanghai Jiao Tong University, majoring in machine learning; his main research interests include transfer learning, large-scale machine learning and deep learning. Mr. Chen has had several papers published at NIPS, AAAI, SIGKDD and ACL, and one of his works was reported by MIT Technology Review in 2011. Mr. Chen worked at Baidu, the largest search engine company in China, where he deployed online the first deep learning system for commercial advertising, a large model taking billions of raw features as input. He also worked at Toutiao, a leading mobile news-feed company in China, helping to build one of the largest ML systems in the world. Later, with Wenyuan Dai and others, Mr. Chen co-founded the 4th Paradigm, an AI development platform that enables enterprises to build their own AI applications in settings such as finance, telecommunications and Internet services, and thereby significantly increase their operational efficiency.

 

Mr. Jake Lucchi, Head of AI, Public Policy, Google Asia Pacific. Jake Lucchi leads Google's policy work on AI, among other areas, for the Asia Pacific region. In this role, he works with government, academia, industry and civil society to build an ecosystem in which AI can be leveraged for economic and social benefit around the region by promoting the positive development and application of AI while also establishing strong governance frameworks. He also works to amplify the voice of APAC on AI governance and adoption in various global fora. In addition to AI, Jake leads Google's work managing content for the region. Prior to joining Google, he worked on a range of law and policy issues for the United Nations and INGOs and as a consultant for the Thai government, for which he helped redesign the migrant labor regulatory scheme to prevent human trafficking. Originally from the US, he has lived full time in the APAC region since 2010 and speaks fluent Thai and intermediate Spanish. He holds a juris doctorate (law) from Yale University and a BA, summa cum laude, in politics and philosophy from the University of Missouri. He is based in Hong Kong.

 

 

Agenda

Panel Introduction by Dr. Kuo-wei Wu

Presentations From Six Speakers

  • Dr. Yik Chan Chin / Prof. Changfeng Chen (XJTLU / Tsinghua University) – AI policies and ethical frameworks
  • Dr. Ansgar Koene (University of Nottingham) – Governance frameworks for algorithmic transparency and accountability
  • Mr. Jasper Wang (Sina Weibo) – The Rumor Shredder: social media platforms have a new role in an age of post-truth
  • Dr. Félicien Vallet (Commission Nationale de l'Informatique et des Libertés, CNIL) – How can humans keep the upper hand? Presentation of CNIL's report on the ethical matters raised by artificial intelligence (link: https://www.cnil.fr/en/how-can-humans-keep-upper-hand-report-ethical-matters-raised-algorithms-and-artificial-intelligence)
  • Mr. Yuqiang Chen (The 4th Paradigm) – How will AI serve humans better in the future?
  • Mr. Jake Lucchi (Head of AI, Public Policy, Google Asia Pacific) – Ethical AI at Google

 

Panel Discussions

 

Q&A