IGF 2025 Lightning Talk #121: The AI Dilemma: Balancing Automation and Accountability

    Oversight Board
    - Helle Thorning-Schmidt, Co-Chair, Oversight Board for Meta
    - Mr. ‘Gbenga Sesan, Executive Director, Paradigm Initiative
    Speakers
    - Helle Thorning-Schmidt, Co-Chair, Oversight Board for Meta (Europe / International)
    - Mr. ‘Gbenga Sesan, Executive Director, Paradigm Initiative (Africa / International)
    Onsite Moderator
    Mr. ‘Gbenga Sesan, Executive Director, Paradigm Initiative
    Rapporteur
    Mr. ‘Gbenga Sesan, Executive Director, Paradigm Initiative (Nigeria)
    SDGs
    9. Industry, Innovation and Infrastructure
    10. Reduced Inequalities
    16. Peace, Justice and Strong Institutions


    Targets:
    - 9.c: Expanding ICT access and ensuring universal internet availability.
    - 10.3: Reducing discrimination and ensuring fair content moderation policies.
    - 16.10: Ensuring transparency and accessibility of information.
    Format
    Lightning Talk (fully in-person), with scope for a quick audience Q&A if appropriate. Engagement on social media will happen before, during and after the event.
    Duration (minutes)
    20
    Description
    Content moderation on social media platforms has long relied on a combination of human reviewers and automated enforcement tools. In recent years, Meta significantly increased its use of AI-driven moderation to detect and remove harmful content, especially following criticism over its handling of crises such as the Rohingya genocide in Myanmar. In January 2025, however, Meta announced a major shift: automated enforcement will now focus only on "illegal and high-severity violations," while the company relies more heavily on user reporting for less severe policy breaches.

    This change represents a fundamental shift in how content moderation is managed. While it has been framed as an effort to promote freedom of expression and reduce the risk of AI-driven over-moderation, it also raises significant concerns. In many regions, users are less likely to engage with reporting tools, making user-driven enforcement unreliable, particularly in areas prone to online harms. The shift could also exacerbate challenges in combating misinformation, hate speech, and other harmful content in places where AI enforcement previously played a critical role.

    This session will explore key issues, including:
    - The potential for increased user agency and fewer automated errors in content moderation.
    - The risks of placing greater reliance on user reporting, particularly in regions where engagement with reporting tools is low.
    - How these changes could affect different parts of the world, especially in contexts where misinformation, hate speech, or incitement to violence have historically caused harm.
    - Lessons from past enforcement failures, such as Meta’s role in the Rohingya crisis, and whether these new changes risk repeating past mistakes.
    - The balance between automation and human intervention in maintaining information integrity and online safety.

    Helle Thorning-Schmidt, Co-Chair of the Meta Oversight Board, will outline the Board’s role in assessing these developments and their global implications. The session will invite discussion on how Meta and other platforms can ensure that changes in enforcement models uphold human rights, fairness, and safety worldwide.

    Although this session is fully in-person, engagement strategies will include:
    - Audience Q&A to foster real-time interaction.
    - A live poll to gauge attendee perspectives on the risks and benefits of Meta’s policy shift.
    - Distribution of key resources post-session for continued engagement.
    Social media channels such as Instagram and LinkedIn will be leveraged to amplify discussions and invite wider participation beyond the event.