Session
Organizer 1: Sarthak Luthra, Asia Internet Coalition (AIC)
Speaker 1: Chris Marsden, Private Sector, Asia-Pacific Group
Speaker 2: Alex Toh, Private Sector, Asia-Pacific Group
Speaker 3: Ibrahim Nadhirah, Private Sector, Asia-Pacific Group
Jeff Paine, Civil Society, Asia-Pacific Group
Sarthak Luthra, Civil Society, Asia-Pacific Group
Amin Dalek, Private Sector, Asia-Pacific Group
Format: Classroom
Duration (minutes): 90
Format description: A classroom layout provides space for a panel session with an intimate audience, allowing for more organic discussion.
How can industry stakeholders collaborate to develop transparent and accountable governance frameworks for generative AI in content moderation, ensuring alignment with ethical principles and societal values?
What innovative practices and technologies are being employed by industry leaders to enhance content moderation using generative AI while maintaining online safety and adherence to ethical standards?
What role do industry-led self-regulatory approaches play in mitigating the risks associated with the deployment of generative AI models for content moderation, and how can policymakers support these efforts?
What will participants gain from attending this session? This panel session delves into industry efforts to establish transparent and accountable governance frameworks for generative AI in content moderation, ensuring alignment with ethical principles and societal values. It will address key considerations and potential best practices for governments, showcasing how AI tools such as machine learning can support the content moderation efforts of internet platforms. It will also explore the role of industry-led self-regulatory approaches in mitigating the risks associated with generative AI deployment and discuss how policymakers can support these efforts.
Description:
This panel session addresses the importance of implementing transparent and accountable governance frameworks for the development, deployment, and use of generative AI models in content moderation. It aligns closely with the subtheme of "Harnessing innovation and balancing risks in the digital space" by exploring industry-led self-regulation models that promote innovation whilst safeguarding ethical principles, societal values, and online safety.
The session will provide insights on AI and content moderation, explore the role of industry-led self-regulatory approaches in mitigating the risks associated with generative AI deployment, and discuss how policymakers can support these efforts.
Hybrid Format: The hybrid session will include presentations, moderated discussion, and Q&A with the audience, fostering active participation and an exchange of ideas. Audience members will have the opportunity to share their perspectives and insights, contributing to the depth and richness of the discussion.