IGF 2025 Lightning Talk #9 The Digital Agora: AI, Content Moderation and Societal Harm

    ChatVE
    Speakers
    Israel Olatunji Tijani, ChatVE, Regional Group: West Africa School on Internet Governance (WASIG); Joan Muthoki Kithanze, IWMF, Regional Group: East Africa Internet Governance Forum (EAIGF); Ifeoluwa Gbolahan Ojoawo, ChatVE, Nigeria Internet Governance Forum; Kehinde Adegboyega, HRJNN, Human Rights Journalists Network Nigeria
    Onsite Moderator
    Joan Muthoki Kithanze
    Rapporteur
    Ifeoluwa Gbolahan Ojoawo
    SDGs
    9. Industry, Innovation and Infrastructure
    12. Responsible Production and Consumption
    16. Peace, Justice and Strong Institutions
    17. Partnerships for the Goals


    Targets:
    SDG 9 - Industry, Innovation and Infrastructure: The session's focus on infrastructure resilience through AI supports this goal by advocating for technology that builds robust, sustainable systems, including digital public goods (DPGs) and digital public infrastructures (DPIs).
    SDG 12 - Responsible Consumption and Production: Ethical AI deployment and the prevention of algorithmically biased information align with sustainable practices in technology use, ensuring that AI development is responsible and considers long-term societal impacts.
    SDG 16 - Peace, Justice and Strong Institutions: By discussing AI's role in governance, combating misinformation and online radicalisation, reducing societal polarisation, and using content moderation to tackle tech-facilitated gender-based violence (TFGBV), the session contributes to promoting peaceful and inclusive societies, providing access to justice, and building a healthy digital space and accountable institutions.
    SDG 17 - Partnerships for the Goals: The multi-faceted approach suggested, involving government, the private sector, civil society, and academia, exemplifies the partnerships needed to achieve these goals, particularly in the context of AI governance and ethical technology use.
    Format
    Duration: 20 minutes for the talk, followed by a 10-minute Q&A session.
    Structure (PowerPoint presentation):
    Introduction (3 minutes)
    Key Issues (7 minutes)
    Case Study: Demo of ChatVE's toxic comment scoring engine, which leverages machine learning (7 minutes)
    Conclusion: Towards Ethical Internet Governance (3 minutes)
    Q&A Session (10 minutes): Time for the audience to ask questions or share insights on how AI can be better integrated into their communities, focusing on local relevance, especially in West Africa. Participants will also be able to share feedback on the demonstrated tool and suggest ways to improve it.
    This structure aims to provide an informative yet engaging discussion on the nuanced role of AI in modern society, balancing the excitement of technological potential with the sobering realities of its challenges.
    Duration (minutes)
    30
    Description
    Overview: Artificial intelligence (AI) is changing present-day journalism. AI is being used to source information, produce news articles, and identify trends. Bloomberg was an early adopter of AI for journalism with Cyborg, a program that dissects financial reports and instantly writes news stories with all the relevant facts and figures. In today's world, more than traditional news platforms such as newspapers, radio, and television, people depend on social media platforms for the latest trending news, from health, entertainment, and sport to politics and lifestyle. At the same time, these platforms also allow unverified information to spread to a large section of the audience. The need to address the exposure of vulnerable children and youth to online toxic content and misinformation is critical in West Africa. As of January 2024, over 120 million Nigerians were online, according to the Nigerian Communications Commission (NCC).
    As the digital space gains prominence and raises hopes for robust digital governance (trust), online incitement to violent extremism that polarises society and disrupts democratic processes is also on the rise, necessitating a comprehensive, multi-faceted approach to tackle this challenge. By exploring the transformative power of AI for social good, focusing on balancing innovation with responsibility, we examine cross-sectoral applications of AI to societal goals such as promoting national values and civic engagement, improving governance systems and the digital information ecosystem, and combating misinformation. This session will also analyse frameworks, including Microsoft's Responsible AI principles, for ensuring fairness, transparency, and accountability in AI design and use, ethical AI deployment, and robust regulatory frameworks to mitigate potential risks.
We will discuss the global implications of AI, particularly data, algorithmic bias, Large Language Models (LLMs), the need for widespread AI literacy, and strategies for bridging the digital divide to ensure inclusive access to AI's benefits.
Introduction (3 minutes): Brief overview of the explosion in digital communication, highlighting the challenges of content governance in an era where AI is increasingly pivotal. Introduce the session's focus on how AI, particularly through tools like Google's Perspective API, intersects with issues of online radicalisation, gender-based violence, and general toxicity.
Key Issues (7 minutes):
Content Moderation and AI: Discussion will centre on how AI is used in content moderation to scale responses to the vast amount of online content, touching on both the benefits and the potential pitfalls (e.g., over-censorship, misclassification).
Online Radicalisation: This will explore how digital spaces can inadvertently or deliberately serve as breeding grounds for extreme ideologies, necessitating nuanced AI interventions to detect and mitigate radical content.
Tech-Facilitated Gender-Based Violence (TFGBV): Highlight how technology exacerbates gender-based violence through doxxing, revenge porn, and cyberstalking, calling for specific AI tools to identify and curb these behaviours. This segment will also cover our "Kumlinda Project", which aims to build lexicons (annotated datasets) in low-resourced African languages (Swahili, Hausa, Igbo and Yoruba) in response to TFGBV perpetrated in local languages that generic models were not trained on.
Online Toxicity: Explain toxicity in online interactions and its impact on societal discourse, leading to the silencing of marginalised voices or the spread of hate speech.
Case Study (7 minutes): A demo of ChatVE's toxic comment scoring engine.
Overview: Introduce Google's Perspective API as the tool used in designing ChatVE to identify toxic comments using machine learning.
Explain the attributes used for scoring:
Toxicity: Measures how likely a comment is to be perceived as toxic.
Severe Toxicity: For comments that are significantly more toxic.
Identity Attack: Identifies comments that target someone based on their identity or group.
Insult: Detects demeaning or offensive language.
Profanity: Picks up on swear words and other obscene or vulgar language.
Threat: Identifies threats of physical harm or intimidation.
Challenges and Applications: This will highlight the API's limitations, such as potential biases in scoring comments containing identity terms, and how it is used by platforms to flag content for human review or to provide real-time feedback to users.
Conclusion: Towards Ethical Internet Governance (3 minutes): Emphasise the need for a balanced approach to internet governance in which AI tools like ChatVE are part of a broader strategy that includes:
Policy Development: For ethical AI use and protection against harm.
Transparency: In how AI systems make decisions.
Accountability: Mechanisms to address AI's shortcomings or misuse.
Inclusivity: Ensuring that AI governance does not further marginalise groups but instead promotes equity.
Call for collaborative efforts among tech companies, governments, and civil society to create an internet that is safe, inclusive, and conducive to democratic discourse.
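To make the case study concrete, the scoring flow can be sketched in a few lines of Python. The Perspective API's comments:analyze endpoint accepts a comment and a set of requested attributes and returns a probability-like summary score per attribute. The helper names below (build_request, flag_for_review) and the 0.8 threshold are illustrative assumptions, not ChatVE's actual implementation; a real call requires a Google Cloud API key.

```python
# Perspective API endpoint; a Google Cloud API key must be appended as ?key=...
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

# The six attributes discussed above, as named by the Perspective API.
ATTRIBUTES = [
    "TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
    "INSULT", "PROFANITY", "THREAT",
]

def build_request(comment_text, languages=("en",)):
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": comment_text},
        "languages": list(languages),
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }

def flag_for_review(response_json, threshold=0.8):
    """Return the attributes whose summary score meets the threshold.

    A platform might hold such comments for human review rather than
    removing them automatically (the threshold is an assumption here).
    """
    scores = response_json.get("attributeScores", {})
    return [
        attr for attr, data in scores.items()
        if data["summaryScore"]["value"] >= threshold
    ]

# Example with a mocked API response (no network call is made):
mock_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.91}},
        "INSULT": {"summaryScore": {"value": 0.85}},
        "THREAT": {"summaryScore": {"value": 0.12}},
    }
}
print(flag_for_review(mock_response))  # ['TOXICITY', 'INSULT']
```

Routing high-scoring comments to human reviewers, rather than deleting them outright, is one way to mitigate the over-censorship and misclassification pitfalls raised under Key Issues.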

    Hybrid Moderation: We will have a session moderator who is adept at managing both physical and virtual environments, ensuring seamless interaction. The moderator will actively monitor the online chat/Q&A platform for questions and comments from online attendees, which will be addressed when the session opens for Q&A.
    Q&A Platforms: Utilise a live Q&A feature through which both onsite and online attendees can submit questions via a digital platform (e.g., Zoom's Q&A feature or Whova's interactive session tools), which the moderator or speakers can address in real time. We will also provide for collaborative note-taking, brainstorming, and document creation during the session. This allows both onsite and online attendees to contribute actively and creates a channel for following up on questions that participants asked but that were not addressed because of time.
    Polling and Live Reactions: Implement live polls or reaction features (such as emojis) to gauge audience feedback on key points, ensuring that online participants have a voice in the session dynamics.
    Designing the Session for an Optimal Hybrid Experience:
    Clear Structure and Timing: A well-defined format with clearly allocated time slots for each segment (presentations, Q&A, discussions). Announce transitions clearly ("We're now moving to the Q&A section").
    Visual Presentations: All presentations will be visually engaging and accessible to both groups. Clear fonts and high-quality images will be used. Presentation slides will be shared in advance so online attendees can follow along even if the video stream is delayed.
    Audio Quality: High-quality audio is paramount. Onsite speakers will use microphones, and the online platform will have clear audio input. Audio connections will be tested beforehand.
    Video Considerations: While video can enhance engagement, it is important to be mindful of bandwidth limitations for online attendees, especially participants joining from sub-Saharan Africa.
Interactive Elements: Incorporate interactive elements throughout the session to keep both groups engaged. Polls, quizzes, and word clouds (using Mentimeter or similar tools) can be easily integrated into a hybrid format. These can be projected on-site and accessed by online participants.