Session
Organizer 1: Anna Zhdanova, Skolkovo Institute of Science and Technology
Organizer 2: Pavel Osinenko, Expert of TC 164 "Artificial Intelligence" (Rosstandart), Working Group 03
Organizer 3: Nikita Utkin, Technical Committee 194 "Cyber-Physical Systems"
Speaker 1: Nikita Utkin, Technical Community, Eastern European Group
Speaker 2: Maxim Fedorov, Technical Community, Eastern European Group
Speaker 3: Pavel Osinenko, Technical Community, Eastern European Group
Speaker 4: Anna Abramova, Civil Society, Eastern European Group
Pavel Osinenko, Technical Community, Eastern European Group
Anna Zhdanova, Technical Community, Eastern European Group
Birds of a Feather - Classroom - 60 Min
Data-driven emerging technologies
Topics: artificial intelligence, IoT, algorithms, facial recognition, blockchain, automated decision making, machine learning, data for good
Example: What is the impact of AI and other data-driven technologies on the exercise of the rights of the most vulnerable groups? How can they be implemented to further advance the inclusion of these groups and avoid further harm?

Cybersecurity policy, standards and norms
Topics: cybersecurity best practices, norms, cybercrime, cyberattacks, capacity development, confidence-building measures, CERTs, cybersecurity awareness
Example: What is the role of cybersecurity norms, do they need to be strengthened, and how can their implementation be assessed?

Security, stability and resilience of the Internet infrastructure, systems and devices
Topics: IoT, DNS, DNS abuse, DNS security, Internet standards, Internet protocols, encryption, content blocking and filtering, IPv6 adoption, routing security
Example: How can best practices at all layers (transport, DNS, security, applications and services) inform and support governments' engagement around Internet reliability and stability?

Digital safety to enable a healthy and empowering digital environment for all
Topics: human rights, digital safety, child online safety, CSAM, hate speech, terrorist violent and extremist content (TVEC), platforms, freedom of expression
Example: How can a digital environment be created that enables human interaction and communication while ensuring the ability to participate and to access information, freedom of expression, and the privacy and safety of individuals?

Trust, media and democracy
Topics: disinformation, misinformation, "fake news", terrorist violent and extremist content (TVEC), deep fakes, hate speech, freedom of expression, democracy, election interference, hacking, platforms
Example: The proliferation of disinformation and misinformation (e.g., "fake news" and deep fakes) threatens the integrity of journalism and the decisions people make based on that information. How can technology play a role in tackling these threats and restoring trust?

Trust and identity
Topics: facial recognition, biometrics, digital identity, decentralized identities, certified identities, blockchain, bias, e-banking, e-health, artificial intelligence, AI, business models
Example: How can regulatory approaches stimulate innovation and maximize community benefit, while mitigating the risks associated with the use of artificial intelligence?

Formal methods of AI verification
Topics: use-case constraints, system stability/safety, robustness, loss-of-control prevention, fault diagnosis
Example: How can guaranteed constraint satisfaction be established for a particular use case (e.g., what is allowed when it comes to AI support for cancer therapy)? What measures must be integrated into an AI system to maintain safety when some of the control functions are lost? (A minimal illustrative sketch follows this list.)
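To make the last theme concrete, the following is a minimal, hypothetical sketch (in Python) of one common mechanism behind "guaranteed constraint satisfaction": a runtime safety filter that wraps an unverified learned controller and projects its proposed actions onto a provably safe set, with a verified fallback when the controller is lost. The toy one-dimensional dynamics (x_next = x + u), the numeric bounds, and all function names are illustrative assumptions, not material from the workshop.

# Hypothetical sketch: a runtime safety filter for a learned controller.
# Toy dynamics x_next = x + u with the hard state constraint |x| <= X_MAX.

X_MAX = 1.0   # hard safety bound on the state (the use-case constraint)
U_MAX = 0.5   # actuator limit

def nominal_policy(x: float) -> float:
    """Stand-in for an unverified learned controller (e.g., a neural net)."""
    return -2.0 * x  # may violate actuator or state limits

def safety_filter(x: float, u_nominal: float) -> float:
    """Project the proposed action onto the set of actions keeping the next
    state x + u inside [-X_MAX, X_MAX] while respecting actuator limits.
    Applied at every step, the constraint then holds by construction."""
    u = max(-U_MAX, min(U_MAX, u_nominal))      # enforce actuator limit
    u = max(-X_MAX - x, min(X_MAX - x, u))      # keep next state in bounds
    return u

def safe_step(x: float, controller_alive: bool = True) -> float:
    """One closed-loop step; if the learned controller is lost, fall back
    to a verified default action (here: zero), so safety is maintained."""
    u_nom = nominal_policy(x) if controller_alive else 0.0
    return x + safety_filter(x, u_nom)

x = 0.9
for t in range(5):
    x = safe_step(x, controller_alive=(t < 3))  # controller "fails" at t = 3
    assert abs(x) <= X_MAX + 1e-9               # the guarantee being checked
    print(f"t={t}  x={x:+.3f}")

The design choice illustrated here, separating an unverified learned policy from a small, verifiable safety layer, is one way the formal-verification questions above can be posed precisely; real systems would replace the toy dynamics with a verified model of the plant.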
The main challenge to be addressed in this workshop is advancing our understanding of what constitutes a safe AI system, including the machine learning methods involved and the integrity of the underlying data. This can only be achieved through an interdisciplinary approach that gathers expert knowledge from various fields, not necessarily directly related to AI, especially when it comes to the ethical and global aspects of AI. The workshop also seeks to raise the technical soundness of measures for trustworthy AI.
GOAL 3: Good Health and Well-Being
GOAL 5: Gender Equality
GOAL 9: Industry, Innovation and Infrastructure
GOAL 16: Peace, Justice and Strong Institutions
Description:
AI applications occupy ever more areas of the economy, and with the spread of AI, the related risks grow. These risks concern privacy, security, safety, reliability, explainability, accountability, etc. Trustworthy AI cannot be developed without effective measures to mitigate these risks. The recent whitepaper of the Stanford Center for AI Safety and the work of ISO/IEC JTC 1 Subcommittee 42 on artificial intelligence have set the goal of establishing frameworks for formally verified AI systems. Following these trends, this workshop is dedicated to raising awareness of the trustworthiness of modern AI systems. Explainability here refers not only to the transparency of algorithms to end users but, most importantly, to open data and their availability and description to the public. Special attention is paid to dynamic risks, which arise in autonomous applications, as well as to the ethical aspects of AI.
Selected results of the presented talks and discussions can form the basis of a whitepaper on trustworthy AI systems. A follow-up event, with the goal of making some of the highlighted aspects more precise, is expected.
Social tools
Relevance to Internet Governance: Global aspects of trustworthy AI should be taken into account by governments in implementing digitalization policies. Ethical aspects play a particular role here. Furthermore, improved transparency and explainability of AI systems should help improve the public perception of digitalization.
Relevance to Theme: This workshop will help move AI trustworthiness matters onto a more rigorous, technologically sound track. It is assumed here that reasonable unification and standardization efforts are required to achieve a better understanding and maintenance of what the public can perceive as safe AI.
Usage of IGF Official Tool.