Session
Organizer 1: Civil Society, Intergovernmental Organization
Speaker 1: Srivastava Surabhi, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Lei Ma, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Inssaf Ben Nassar, Civil Society, Western European and Others Group (WEOG)
Format
Classroom
Duration (minutes): 90
Format description: This room layout allows us to break into small groups for the role-playing segment. Ideally, the room will also have wall space or a whiteboard where we can use post-its to capture feedback and ideas from session participants.
Policy Question(s)
How can public awareness initiatives and digital literacy programs empower audiences to critically assess AI-generated content and resist misinformation?
What role should media organisations, technology companies, and policymakers play in preventing the misuse of AI in content creation while fostering innovation?
How can ethical AI frameworks be integrated into national and international policies to safeguard digital platforms from misinformation and manipulation?
What will participants gain from attending this session? Participants will gain critical insights into the ethical implications of AI in storytelling and content creation, directly linked to sustainable and responsible innovation. Through scenario simulations and discussions, they will explore how AI can be used ethically in digital media to safeguard public discourse and democracy. By contributing feedback to the Ethical AI Checklist, attendees will help develop a practical tool to prevent misinformation and manipulation. The session will highlight the importance of content authentication in maintaining the integrity of digital platforms and fostering audience trust. Participants will leave with a deeper understanding of how ethical AI practices can strengthen media credibility, ensuring transparency and accountability in AI-driven content creation. They will also be encouraged to share the finalized checklist with their networks, contributing to a broader effort to promote responsible AI use in media and protect democratic values in the digital age.
SDGs
Description:
This interactive workshop will engage participants in exploring the ethical implications of AI in storytelling and content creation through the following steps:
Introducing participants to the ethics of AI and reflecting collectively on its implications for digital media makers, including presenting the Haarlem Declaration – our blueprint for applying AI ethically and responsibly across various digital media activities and functions.
Discussing and unpacking the interlinkages between the ethical use of AI and content generation, with a specific focus on content authentication. This will entail a scenario simulation in which participants step into the editorial workflow to analyse and discuss the use of AI tools from content creation to publishing by applying an Ethical AI Checklist. Participants will be asked to give feedback on how effectively the checklist supports media makers in ensuring that AI tools are used ethically, centering human experiences and values. The scenario simulation will also offer participants an opportunity to reflect on the parameters embedded in content authentication and on how to raise audiences' awareness of these parameters, sustaining ethical storytelling and building trust with audiences in the long term.
The session will conclude by inviting participants to contribute beyond the workshop to further refining and fine-tuning the Ethical AI Checklist, by asking their colleagues and networks to pilot the checklist in their everyday content authentication and generation work to assess its effectiveness, feasibility and usefulness. As RNW Media, we will follow up with the goal of sharing the updated version of the Ethical AI Checklist by September 2025.
Expected Outcomes
This session will pilot an Ethical AI Checklist with participants, gathering feedback on the responsible use of AI in content creation and authentication. Based on this feedback, the checklist will be refined, piloted in real-world editorial workflows, and further developed beyond the conference.
The session will also foster increased awareness of AI’s ethical implications in digital storytelling, strengthening participants' ability to identify and mitigate risks related to misinformation, manipulation, and undue influence on democratic processes.
Follow-up actions will include:
Engaging participants and their networks in refining and testing the checklist.
Piloting the checklist to assess its effectiveness with participants' networks.
RNW Media will share a finalized Ethical AI Checklist (with the feedback provided by participants) by September 2025, contributing to broader policy discussions on responsible AI use in media and digital governance.
Hybrid Format: To ensure seamless interaction between onsite and online participants, this session will use a hybrid engagement model with interactive digital tools.
A live-streamed discussion will allow both online and onsite attendees to participate in real time. RNW Media runs a training centre, RNTC, where we have experience delivering hybrid sessions. Our facilitation strategies, designed by our in-house learning design experts, include:
Interactive polling and Q&A using tools like Mentimeter to gather instant feedback from all participants.
Collaborative document editing via Mural boards to co-create the draft Ethical AI Checklist.
Breakout group discussions (both in-room and virtual) to ensure diverse perspectives contribute to the conversation, with moderation and facilitation by our experts.