IGF 2023 Networking Session #153 Generative AI and Synthetic Realities: Design and Governance

Time
Thursday, 12th October, 2023 (04:30 UTC) - Thursday, 12th October, 2023 (05:15 UTC)
Room
WS 8 – Room C-1
Subtheme

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning
Future & Sustainable Work in the World of Generative AI

Theme
AI & Emerging Technologies

Organizers

IBM Research
Heloisa Candello - IBM Research, Private Sector, Latin America
Caio Machado - University of Oxford, Civil Society, Western Europe
Diogo Cortiz - Brazilian Network Information Center, Technical Community, Latin America
Reinaldo Ferraz - Brazilian Network Information Center, Technical Community, Latin America
Ana Duarte Eliza - Brazilian Network Information Center, Technical Community, Latin America

Speakers

Heloisa Candello - IBM Research, Private Sector, Latin America
Caio Machado - University of Oxford, Civil Society, Western Europe
Diogo Cortiz - Brazilian Network Information Center, Technical Community, Latin America
Hiroshi YAMAGUCHI - University of Tokyo, Civil Society, Asia

Onsite Moderator

Diogo Cortiz - Brazilian Network Information Center, Technical Community, Latin America

Online Moderator

Reinaldo Ferraz - Brazilian Network Information Center, Technical Community, Latin America

Rapporteur

Ana Duarte Eliza - Brazilian Network Information Center, Technical Community, Latin America

SDGs

16. Peace, Justice and Strong Institutions

Targets: While generative AI holds great promise, it also carries risks that must be addressed to promote an inclusive and sustainable society. In this workshop, we propose to anticipate and discuss these challenges from three main perspectives: the use of generative AI to create synthetic realities; the use of AI to affect people at large scale; and the risks of automating social media platforms.

Format

This Networking Session will be divided into four segments to engage the audience in the discussion. In the first part (15 minutes), the speakers will introduce the topic and give an overview of generative AI and its consequences. The audience will be divided into three groups based on background and interest. We will assign each group to work with one of the three main sub-themes of the session (synthetic content, chatbots on the Web, and automated social media). In the second part of the session (15 minutes), each group will receive one technical and one governance question related to the sub-theme. Participants should discuss and propose immediate (now) and mid-term (5 years) actions to leverage the potential use of technology and reduce the impacts related to each question. Then, in the third part of the session (10 minutes), each group will share the results with the audience and participants of the other groups. In the final part of the session (5 minutes), the speakers will summarize the findings from those three groups and propose ways to introduce those themes into local agendas.

Duration (minutes)
45
Language
English
Description

The advancement of generative AI in different modalities is a new paradigm for creating content and forms of interaction on the Web. Synthetic texts, images, and videos created by AI models are often indistinguishable from content captured from reality or produced by a human. While this increases productivity and changes the creative process, it makes it difficult for users to identify the source of content, raising many questions about the potentially harmful use of the technology to create false narratives, fake news, and large-scale manipulation.

Generative AI and the emergence of LLMs (Large Language Models) also enable the creation of appealing chatbots, AI companions, and automated digital influencers. Users engage with them in fluid, anthropomorphized interactions that could create intimate and affective bonds with the technology. The rise of artificial relationships concerns researchers from different areas and raises questions about the limits of anthropomorphic design in those tools.

Another point of attention is that generative AI can automate profiles on social networks, allowing digital influencers to be fully digital and driven by algorithms. In this sense, it is opportune to discuss the impact of generative AI on social media platforms and its social and cognitive effects on online users. We invite people from different backgrounds and stakeholder groups to discuss these emerging topics and ideate on how we should address these challenges from a technical, design, and governance perspective.

To ensure the active participation of online attendees, we will use a conference tool with breakout room features. In the first part, onsite and online participants will discuss with the speakers in the same room. Then, in the second part, the online participants will be divided into three breakout rooms and will work with the three thematic groups created for the discussion. Online and onsite participants will stay in contact through the conference tool, and the speakers will ensure an active and fluid interaction among them. In the last part, all participants will be in the same room with the speakers and moderators.

Key Takeaways
"Understanding the training data of generative AI systems is crucial."
"Further investigation is needed into how people interact with generative AIs and their potential consequences."
Call to Action

"Encourage further research in the field of Human-Computer Interaction (HCI) applied to generative AI."

"Explore methods to prevent cybercrimes using deep fakes."

Session Report

The session focused on generative AI, with emphasis on platforms' ability to interact with humans and provide relevant answers to their questions. The discussion covered the role of children in teaching robots, interface humanization, challenges faced in AI usage, and AI capabilities. Concerns regarding safety, regulation, transparency, and ethics in AI use were also addressed, especially in contexts such as small businesses and potentially deceptive situations.

Topics Discussed

Generative AI and Interactivity:

The ability of platforms to receive human interactions and respond appropriately to requests.

Children learning about AI and teaching robots to understand humans.

Challenges of AI Usage:

Accuracy errors, interface humanization, scope visibility, misuse, and resolution of ambiguities.

Transparency and interpretation of references depending on the context.

AI Capabilities:

Scale, homogenization, emergence, conversation assistants, and hallucination.

Challenges related to hallucination, lack of transparency, and misalignment with human expectations.

Small Business Situation:

Small business owners often lack awareness of their business status, which affects credit granting and loans.

Key Audience Questions:

The audience asked about the impact of generative AI on crime and cybersecurity, especially deep fakes and voice deception, and discussed regulation and creative approaches to dealing with these new technologies.

The speakers emphasized the need to become accustomed to new technologies and to find creative solutions for regulating their use. They acknowledged the reality of deep fakes and the need to learn to deal with them, drawing an analogy with knives at home: commonplace tools whose safe use still depends on laws.

Conclusion and Recommendations

The session emphasized the importance of addressing the challenges of generative AI, including hallucination, lack of transparency, and deception, through effective regulatory and educational measures.

Speakers noted the need to promote transparency and ethics in AI use, especially in sensitive scenarios such as small businesses, and suggested continued interdisciplinary discussions and collaborations to address the emerging challenges of generative AI while safeguarding democratic values and individual freedom.

This report summarizes the key points discussed in the generative AI session, highlighting participants' perspectives and audience concerns regarding the use of AI in criminal and cybersecurity contexts.