The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Good afternoon.
I'm the director of RNW Media, and I'm also with the Dynamic Coalition on the Sustainability of News Media and Journalism. Here's my colleague Inssaf.
>> INSSAF BEN NASSAR: Good afternoon, everyone. My name is Inssaf Ben Nassar.
I'm a programme manager at RNW Media.
>> MODERATOR: We have two Goetz speakers followed by AI user stories from our pans from global south and besides we will also showcase the AI checklist and facilitator fishbowl exercise.
So if you can go to the next slide. RNW Media is an international media development organisation based in the Netherlands, dedicated to harnessing the power of independent media to uphold and advance the public good. Our mission is to enable media to champion human rights and the public good.
So we work in 40 countries across Europe, North Africa and Central America. RNW Media also has a training centre, set up in 1968. So in the past 57 years we have provided journalism and media training to over 10,000 journalists, media makers and NGO professionals from over 110 countries. So what do we do?
Actually we do three things. We cultivate locally relevant digital media solutions that drive engagement and impact. We facilitate strategic media coalitions, partnerships and capacity building. And we conduct advocacy for sustainable media funding and tech platform accountability.
So basically we focus on two things: media viability and information integrity. For media viability, we support public interest media to deploy digital transformation, develop innovative business models, provide professional capacity building and facilitate strategic partnerships, so we can ensure media in the Global South can operate with independence and freedom. And we address the growing challenge of information disorder by rebuilding public confidence in the information ecosystem.
Our initiatives include digital inclusion and universal connectivity, tackling misinformation, supporting local journalism to amplify truth in communities, and safeguarding digital safety and well-being.
We are also working on cross-cutting themes, including ethical AI deployment, the discourse agenda and promoting inclusive migration narratives. RNW Media is also a service provider: we provide different types of services to media agencies, local and non-governmental organisations, Civil Society organisations and universities.
So we have over 50 solutions, supported by our certification, across AI and emerging technologies, digital safety and accessibility, journalism and digital media, and so on.
There is a table at the entrance where you can find our flyers and videos if you want to have a look. So what is our impact? Together with our partners we reach 500 million people every year, mostly young people from the Global South, almost 350 million in 2024. 91% of our audience report knowledge and attitude change, and 79% report significant behaviour change. As I said, we have provided journalism and media training to over 10,000 alumni globally. We have a 91% partner satisfaction rate, and 90% of alumni reported a positive career change.
One thing I wanted to mention: in the past three or four years, together with our partners, we successfully advocated for Meta to change its content and advertising policies.
Last but not least, I would like to present an international initiative, the Haarlem Declaration. You can also find a copy at the entrance on that table. It is a collective commitment to promote ethical AI in digital media, championing AI for an inclusive, safe and reliable digital space. It was born in the city of Haarlem, a city known for digital innovation in the Netherlands, and created together with 88 public media organisations from the Global South, from 34 countries.
It outlines six values and principles and six tangible actions, including an ethical AI checklist. This checklist will be the centrepiece of today's session.
So far it has been signed by 16 media outlets, CSOs and academic organisations. I would like to encourage you to talk to us, and if you want to join this international commitment, this international movement, please sign your name or reach out to us.
Okay. So as an organisation based in the Netherlands, it would be nice to get to know the position of the Dutch government on information integrity, digital governance and AI. It is our honour to present our first guest speaker, Mr. Ernst Noorman, the Ambassador at Large for Cyber Affairs. Mr. Noorman.
>> ERNST NOORMAN: Thank you, and it's a pleasure to be at the RNW Media event. If you ever have a chance to go to the Netherlands, visit their office: it's in a beautiful building which was formerly a prison but is now a centre with lots of activity and startups, and of course RNW Media. So please take the opportunity, if you have a chance, to visit them.
Now on information integrity. For us, information integrity online is essential for promoting and enjoying freedom of expression, which includes the right to seek, receive and impart information and ideas. This is why, in 2023, we launched the Global Declaration on Information Integrity Online together with Canada, which has been signed by 36 countries.
With this approach we try to formulate a positive agenda for information online, rather than just talking about banning or debunking disinformation.
Now, what do we see as the main elements of information integrity? First, human rights should be at the core of your policy on this. You must uphold freedom of expression, opinion and access to information as fundamental rights. You must ensure that measures to protect information integrity comply with international human rights law, and especially with the International Covenant on Civil and Political Rights.
And for that you also have to have legal and regulatory measures in place. You have to implement appropriate laws and platform governance in line with privacy and human rights obligations, and at the same time you have to avoid laws that restrict individual freedoms.
I think the Digital Services Act from the European Union is an example of how the EU tries to do this.
And on AI it's important to manage technologies responsibly, and to monitor and regulate Generative AI and emerging tech through multistakeholder dialogue.
And you have to ensure any response is appropriate, proportionate to risk and upholds international law.
And again I mention the EU and the EU AI Act as an example of how we try to do this: with a multistakeholder approach that gathers all the insights and voices from the different partners in the digital community.
A further important part is to promote diversity and media pluralism. We have to support independent, pluralistic media and diverse content, including local languages and cultures. I think promoting local languages is also an extremely important part and will be in the WSIS discussion.
We have to safeguard journalism and access to credible information to counter disinformation.
Another point is to strengthen digital and media literacy. And this is where RNW Media comes in, with their experience in this field.
We have to invest in civic education and empower individuals to critically assess online content. We have to build societal resilience against misinformation and online harms. And further, we have to protect vulnerable and targeted groups. Unfortunately, this subject is very relevant today.
We have to address misinformation that targets women, LGBTQIA+ groups, civilians, indigenous peoples and other marginalised groups. And further, we have to embrace a multistakeholder approach. We have to collaborate with governments, tech companies, Civil Society and experts. I will say the multistakeholder approach is not a religion. It is there because we believe in it, because it created the internet as it is today and will further the resilience of the internet and the plural approach of the internet.
And we have to share knowledge and better inform responses to information threats. And we have to foster global cooperation. We have to promote digital inclusion and freedom through partnerships and global forums on information and democracy. And we have to encourage cross-border knowledge sharing and joint action.
Another important point is to ensure algorithmic transparency and accountability. We have to disclose, in user-friendly language, how algorithms rank, recommend and suppress content. We have to implement oversight mechanisms to ensure responsible algorithm use.
We have learned a hard lesson in the Netherlands with the social security programme which harmed large groups in society. That's why we created an algorithm registry, which also includes a human rights assessment for new algorithms being introduced by the government.
Close to 1,000 algorithms have now been registered, and we encourage the private sector to register their algorithms as well. So I think it's a strong example of how we are working within the government, but also with the private sector, to be transparent on the use of algorithms.
Further, we have to safeguard political and electoral integrity. Last year was the year with the most elections in history, and a lot of the discussion was of course on the use of AI to influence elections. So we have to develop clear policies for political and issue-based ads to protect democratic processes.
And we have to support transparency in content moderation and appeal notices.
And the final point is to build trust and integrity. We have to monitor misinformation. We have to ensure governance is ethical and transparent. And we have to partner with Academia and government sources to create tools for users.
As I said, the declaration is an important step in taking action to protect freedom of expression. And we are working with coalition advisors and the network to develop the information integrity network further. Thank you so much.
>> MODERATOR: Thank you very much, Mr. Noorman. We should have another speaker; I would encourage you to check the website of the Global Forum for Media Development, they are doing amazing work.
So let's move on. We already heard a story from the Netherlands; let's switch our focus to countries from the Global South. So Inssaf, over to you.
>> INSSAF BEN NASSAR: Thank you very much. We were supposed to have three partners, but unfortunately one of our partners is not able to make it due to the current crisis in the Middle East; he is literally stuck at an airport. So we are very pleased to have two partners with us, one in person and the other one online. I'm pleased to introduce Taysir from 7amleh, the Arab Center for the Advancement of Social Media, a leading organisation advocating for digital rights and working to create a safe and fair digital space. So welcome, Taysir. And joining us online we have Sanskriti Pandy from YUWA. I would like to check that our online participants also have access to the mic to participate; please make that possible for us. Thank you so much.
I will start with Taysir. Could you please share the very important and amazing work that 7amleh is doing, and tell us about the organisation and your work?
>> TAYSIR: Yes, thank you so much. As you already said, 7amleh is a Palestinian digital rights organisation. We are based in Palestine and in Israel, but also in different regions in Europe and in the United States. One of the most important areas of our work is platform accountability, especially in conflict-affected settings, and how, when a conflict happens now, it's not only offline but online as well.
So we have been working mostly on content moderation issues, especially after October 7th: how specific platforms such as Meta have been modifying their internal policies and taking down a lot of content from Palestinian content creators but also journalists. And we have been advocating that taking down those accounts and censoring that content is clearly against freedom of opinion and expression, but also against international humanitarian law and international human rights law.
We have also been working on the use of AI in digital warfare: how Generative AI can lead to digital dehumanisation, especially the dehumanisation of the Palestinian people, and how AI can also be used as a weapon of war, especially when it comes to identification tools that are being used by different armies to target people within the Gaza Strip.
>> INSSAF BEN NASSAR: Thank you so much. And online, Sanskriti Pandy, please introduce the work you are doing.
>> SANSKRITI PANDY: Hello, everyone. I'm Sanskriti Pandy, from YUWA. We have three goals. The first is active citizenship, connecting young people to civic rights; the second is sexual and reproductive health and rights; and the third is a research unit that supports the goals in the areas we are focusing on, so our advocacy is very evidence-based. That is the work we do, and we focus on youth.
>> INSSAF BEN NASSAR: Thank you very much, Sanskriti. And do you use AI tools in your team?
>> We have a small team in the organisation, so there isn't a big communication team that can have dedicated Photoshop illustrators. So we do use AI, but just as a tool. We don't completely rely on it, because we work on sensitive issues and sensitive topics. We are trying to bust myths about abortion and sexual and reproductive health and rights, which is quite taboo in Nepal.
(Applause)
We are using AI just as a tool. We use ChatGPT, Canva AI tools and Grammarly for that.
>> INSSAF BEN NASSAR: And Taysir, how are you using AI tools?
>> TAYSIR: Looking at our conflict-affected setting, the main challenge we face with Generative AI is algorithmic bias, because most AI tools we are currently using are developed in the northern hemisphere and do not understand and grasp all the issues we are facing within our context, but also all the cultural aspects and different languages.
So we have been working on our own AI models designed to classify hate speech on different social media platforms in two different languages, Hebrew and Arabic. Those models are really unique because they are built on words, terms, meanings and data that reflect the specific region we live in and the countries we work on.
So narratives are contextualised and identities are defined within their intersectionality, rather than relying on generic or externally imposed standards.
So we are really using ethical AI as a localised approach, which is crucial for creating AI systems that truly understand the nuances of global majority communities, especially in areas where big content generation tools unfortunately often fail.
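[For readers, a minimal sketch of the kind of small, low-compute classifier described here might look like the following. This is purely illustrative and not 7amleh's actual system: the model choice, training texts and labels are hypothetical placeholders, and a real system would be trained on thousands of regionally collected, human-annotated Arabic and Hebrew posts.]

```python
# Illustrative sketch of a deliberately "weak", low-compute hate-speech
# classifier trained on locally curated examples (placeholder data below).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in training set; real data would be regional Arabic/Hebrew posts.
train_texts = [
    "example neutral post",
    "example hateful post",
    "another ordinary comment",
    "another hateful comment",
]
train_labels = [0, 1, 0, 1]  # 0 = not hate speech, 1 = hate speech

# Character n-grams cope better with rich morphology (relevant for Arabic
# and Hebrew) than whitespace tokens alone.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict(["another ordinary comment"]))  # -> [0]
```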
>> INSSAF BEN NASSAR: So you do have discussions in the organisation about how to use AI ethically and mindfully in the work you are doing?
>> Yes, we do. And we don't do it only under the umbrella of freedom of opinion and expression; we also do it around the environmental impact of AI. So when we build our own tools, especially these AI models for content moderation issues, we have been focusing on what we call weak AI systems: models, but obviously not very big processors. Although the training part could lead to a lot of water consumption, when we use these AI tools now it's obviously way less than a tool such as ChatGPT or the bigger models we currently use in our daily lives. We are also taking into account all the disparities and differences within the Hebrew language, and grasping all of the content that could be considered harmful and hateful in Arabic. So we are trying to have this balance, focusing on both sides of the war and how harmful content could come from both parts.
Obviously we also take into account the anonymity of the content we are using, so we are not trying to do profiling or to identify the sources. We are mostly using the data as tools, and we don't share any information about sources.
And the last part is that our tool is not open source, but we would like to make it open source later on, because we want other NGOs and other communities to be able to build their own systems for their own contexts, languages and issues.
>> INSSAF BEN NASSAR: Thank you very much. And Sanskriti, do you have discussions in your own organisation about how to use AI more ethically in the work that you are doing?
>> Yeah, so we have quite long discussions about it, because it's a new tool for us, and everyone has been using it quite frequently at the individual level and the organisational level. So we have had this conversation multiple times on how we should do it ethically. And especially when we are working with so many stakeholders and their data, especially around sexual and reproductive health and rights, we have to be very mindful about how and where we are using the data. We are not exposing names.
So we have had that conversation. And as the previous speaker said, with what we are using globally, it's estimated that in 2027 we will be using 4.2 to 6 trillion litres of water, which is huge. So the environmental impact is really huge, and we are trying to make our use as ethical as possible. There is always a human touch, and there are always biases. Especially when we as an organisation are trying to deal with biases and stereotypes, if we use AI completely, depending on it without our own human touch, it makes it quite difficult to do our work ethically, because of the language it uses, how things are presented, and what kind of views are narrated in the general sources. A lot of things are generalised.
So we have this conversation about when we should use it.
Because I feel like AI is the new way of life. If we don't use it, there are a lot of constraints as a small NGO, so we are somehow forced to use AI; otherwise it's hard to keep up. But we do maintain authenticity. We label it: this is AI-generated, this is where we took our source from. Even if we are getting any information from AI, we double-check it, because we don't want to spread any misinformation. And then there are content guidelines.
So we don't have a very proper guideline, but we have briefed our communication team to be very mindful. However it's quite tricky, because it's not just the communication team; we also have our local youth partners, youth champions, who are in different parts of Nepal. They all have social media accounts to promote their content and the work they have been doing in their own places. And we give them complete control over what they want to post and what they want to promote.
We look at it, but they have the autonomy to post their work in whatever way they like. So it's very hard to internally control everything, and that has been quite challenging. The discussions are obviously there, but we don't have a framework yet for what we should and shouldn't do.
But double-checking what we get from AI is non-negotiable for us, because we know how much damage it can cause in the end. We use it when we are having blocks, and tools like Grammarly are used.
>> INSSAF BEN NASSAR: Given these challenges, what kind of support would help your organisation implement AI in an ethical way? For you first, Sanskriti.
>> I think it would really help if we had AI literacy training. And, like I said, if there are better alternatives, it would be very helpful to be aware of them; if there is a better alternative with the same capacity, we would like to know. And toolkits: what should be done, what is ethical and what is not. We may not know that what we are doing is causing harm.
If we had that toolkit, we could also transfer it to the youth champions we work with. Because we often have orientations for them on how to make content and so on, we could also integrate how they should be using AI ethically.
We cannot control an individual, but what we can provide is screening and guidelines, and have our communication team do their best to moderate it. So I think those are the things that would really help. Also, when we are doing art competitions it is very hard, because we need to accept digital art too. If we just don't review digital art, it's unfair to the participants who have put in so much time and effort, but right now we don't know whether it's AI-generated or not. So it becomes tricky at those times. If we had a way to verify which is AI-generated and which is authentic, that would help a lot.
>> INSSAF BEN NASSAR: Thank you so much. And Taysir.
>> Talking about ethics, we want AI to be designed ethically from the beginning, and we don't want regulations at the local, regional or international levels to only try to regulate the risks that are already materialising with AI.
For us, what is important is obviously investment in AI literacy but also AI within education. We need more people being trained and educated in the use of AI within global majority countries. And we need people who take a very holistic approach when it comes to AI, to decrease algorithmic biases as much as possible. So we need people working with content in other languages, without prioritising northern hemisphere languages.
We also consider it important to push for more AI models and LLMs being developed within global majority countries, by people who understand their own contexts and communities.
Obviously we do consider that AI has huge potential and could maximise the benefits for human rights principles, but at the same time, if we just reproduce what we already have, we might face the same issues in the coming years as well.
So global majority countries should also take the lead when it comes to AI projects and implementation.
>> INSSAF BEN NASSAR: Thank you very much. I don't know if that was an AI-generated baby sound, because I don't see the child, but I can definitely hear it. Oh, yes, there. Thank you very much for sharing your experience and work and the goals that you have when it comes to the use of AI in an ethical way.
I was looking in the audience to see if Lauren was already among us. I would like to invite you to the stage. Thank you very much. And if we could have access to the PowerPoint again, showing it in the background. Thank you.
>> LAUREN: I wanted to first introduce, for those who don't know GFMD: we are a network of 200 organisations, including RNW Media, and we are working with others to protect and promote journalism worldwide. We do this through collaboration, knowledge exchange and coalition building, both with our members and with other partners, and I'm happy to see organisations we have worked with here.
We want to make sure our community is represented and actively engaged in the key policy discussions, on digital governance for example. So we want to bring in their knowledge, expertise and recommendations.
This is essential because policies on media or AI are being developed and deployed, and we want them to uphold media freedom, media independence and human rights; otherwise our voices are not in those discussions.
As I was saying, one of these examples is the media focus group. We work on the Digital Services Act, also a bit on the AI Act, and on the European Media Freedom Act, which includes protections online for media and journalism, including against surveillance.
Also, at a more global level, we are engaging with the UN on the Pact for the Future and the Global Digital Compact, which also has a section on AI, and on the WSIS+20 review. And we are the Secretariat and one of the co-creators of the Dynamic Coalition on the Sustainability of News Media and Journalism. We just launched a report, I think we have the QR code, if not we can share it afterwards, on how AI affects the sustainability of journalism, and there are a lot of case studies in there. So I would like to encourage you to visit it. It's here; we have the printed copy.
And I also presented this. We have also (?) and all of these activities. What we really want is to strengthen the presence of journalists' voices in these discussions, because this is not a niche issue. It is important for freedom of expression, inclusivity and accessibility, and for the future of public interest journalism and the impact it can have at a moment when our democratic future is at risk.
So among these collaborative efforts on shaping policies we also have practical alternatives: we just launched the Journalism Cloud Alliance, a joint initiative with CPJ and other members that, through collaboration, collective action and strategic partnerships, aims to make AI tools, cloud-based infrastructure and services more accessible, secure, affordable and sustainable for news organisations worldwide. So that would be a bit of an overview. Thank you.
>> INSSAF BEN NASSAR: Thank you so much, Lauren.
>> MODERATOR: Unfortunately this session overlaps with another session. The cyber affairs ambassador has already left, but he is going to stay here for several days. So if you --
You can definitely meet him and reach out to him if you have any questions. So let's move on. I will leave the last 17 minutes to my colleague Sarabi for the AI checklist; we will be working with you on this ethical AI checklist. Sarabi.
>> SARABI: Yes, thank you, Lay, and apologies for not being here earlier; as Lauren mentioned, there is a session overlapping. I'm sure my colleagues have already shared a bit about the Haarlem Declaration. You can find a copy at the back table, along with a QR code to look at the whole document.
But I wanted to talk a bit about our approach to AI at RNW Media, where we centre people over technology, and people over AI, and hence we call it the AI-supported approach. And also a bit about the need our partner organisations have for an ethical AI checklist: a reflection tool that gives you a pathway for what happens next after you have this document of guiding principles and values. So we can move to the next slide.
Yes. So as I mentioned earlier, we are interested more in assisting people with AI technology than in replacing people with AI.
And of course the job loss and the replacement of roles in the future, or not-so-distant future, by AI is a pressing concern. But as much as that is important, I think it's very relevant for us to understand the importance of human oversight and human agency, especially when it comes to newsrooms, media organisations and journalists, and how important it is to have a human element in everything we do along the media workflow.
And hence it's very important for us that we pay attention to the assistance and support part, rather than automation.
Maybe I can -- sorry. I also wanted to take you through this: ChatGPT came around in late 2022, so it has been a big learning exercise for us internally as an organisation on what it means to use AI tools.
And this provides a bit of a glimpse of what we have been doing over the last two years as a media development organisation: internal learning and processes, and creating our own repository of understanding of what these AI tools actually mean in practice.
This has come in many different forms, but internally through the lens of actually using and experimenting with some of these AI tools, to understand how they work in practice, what their limitations are and how well they integrate into our current workflows, but also developing our own collective understanding of different facets of AI. So part of it is learning about what Artificial Intelligence actually is, what algorithms are, and this whole new jargon of terminology that has become quite normalised now. But also what the ethical and problematic implications are of using these tools if we leave human oversight or agency outside of the workflow and depend on AI.
We have also been talking to our partner organisations around the world, especially in global majority, Global South countries, where we work with a lot of independent media and public interest outlets, with Civil Society organisations like you heard from Nepal, but also digital rights organisations like (?), talking to them about how they are using AI. Not just how they are using it, but how they are thinking about the implications of using AI tools, what their concerns and apprehensions are about these tools, and how we envision providing more support, part of the support Lauren referred to, where several organisations have highlighted the ethical attributes of using AI tools. And the learning continues, of course.
So this is not an exhaustive list, but it is just to indicate that we are very interested in continuing our learning as an organisation, with our partners and with various different stakeholders, to develop solutions: looking at AI in journalism but also taking the next step. We have a blueprint in terms of our guiding principles, but how do we implement them in practice? I think both Taysir and Sanskriti highlighted that in their presentations. So the Haarlem Declaration is a blueprint for us in terms of the ethical guidelines we would like to commit to in practice.
These are the six guiding principles for us. They range from ethical data practices, securing and restoring information integrity, and understanding the explainability of AI tools, to looking at the broader environmental implications, which I think are often left outside of the ethical considerations. But this is fast becoming important as we read more about the data centres, including some incredible investigative journalism on how these data centres are being built in communities around the world that are already impoverished and marginalised. So as I mentioned, we have this Haarlem Declaration. The question is: what is next? How do we think about translating these principles into practice?
I think the really important element is to understand what we would like people to do with the AI checklist. We all use checklists in different ways. They can be boring, they can be tiring, they can feel unnecessary and burdensome.
So an important consideration is how we prevent checklist fatigue; I will come back to that in a bit. But really the key elements you want to look at are: understanding the ethical implications; having something real and practical to look at as we work through our everyday tasks in media organisations; and having reflection points to discuss and deliberate over these aspects.
And then also to document some of these challenges and continue the conversations, because these are often ethical dilemmas that we will not resolve in one go. So how do we keep coming back to them, and how do we evolve the discussions over time with our teams as well as at the individual level?
I have some examples here, which I will skip because I do want to get to the guiding questions, and I would love to hear from the audience. But these are just some examples of how you could implement a toolkit like this in, let's say, a local newsroom in India that is trying to do storytelling in local languages.
Part of this is to highlight that it needs to begin from the very beginning: it's not when you start doing content production that you start thinking about AI, but at the very beginning, at the planning and setup stage. And this includes, for example, deciding what AI tools to use.
That itself can be several days of deliberation. But it brings forward questions: who is building these tools? Where is the funding for these tools coming from? What ideologies are shaping the development of these tools, and what are the financial aspects of pushing these tools into the market? How do we understand the labour rights issues behind the development of these tools, as Karen Hao spoke about in Empire of AI? How do we look at these attributes and discuss whether the tool we are planning to use is ethical or responsible in its development?
These are quite big questions, and we often may not have enough time and resources to actually discuss them in detail. But the idea of the checklist is not just to check off certain things and then move on; it is to really sit with these discussions and reflections and think about whether we are making the most responsible decision as media organisations or Civil Society organisations when we are trying to use these tools.
Because we also work with media makers who are content creators online, on Instagram and TikTok.
Another example could be how we understand content for social media channels: how do we look at both the publishing aspects and the production aspects? Are we talking about transparency in terms of any AI-generated content? I think Sanskriti talked about how it's hard to tell whether an art piece is AI-generated or not. This brings in questions of how transparency can be communicated to the audience: how this was generated with AI, how much transparency is needed, and whether or not that transparency is accorded to the audience.
If they know it's AI-generated, does it help them consume that information more critically, or judge whether it's accurate and fact-based? So I think a lot of nuances need to be delved into when we are thinking about this checklist as well.
So there are some challenges if you are looking at a checklist like this, as we look at our own organisations. It may be considered a bureaucratic hurdle that you want to move past in a rushed manner. How do we address that? And looking at media constraints: we work with a lot of media organisations with narrow timelines and very restrictive settings. Do they have time to look through this checklist in the first place?
Then there is the nature of evolving AI technology: we all know that AI is evolving exponentially, so a checklist really needs to be considered a living document. It cannot be a static thing we plan to use unchanged even two months from now, for instance. So these are some of the challenges I think we will need to keep coming back to, and, within our organisations, think about how we address them, to really make sense of this and ensure that we are using the checklist in the best way possible.
So we wanted to do a fishbowl activity, but since we are running out of time, we have guiding questions instead, and this is where we want to open the floor to the audience. Think about your own work: if you have adopted a checklist, related to AI or anything else, what are some of your learnings about it? How do you ensure there's buy-in from an organisation to use a checklist like this? Do you use some kind of incentives? Are you using certain processes to document whether the checklist is being used efficiently? Is it really effective for your people, your colleagues and the organisations you work in? These are some guiding questions, but if something else comes to mind, this is really the point where we would like to hear from you as well. I don't know if there are any quick reflections or ideas or thoughts.
>> QUESTION: How do you like the idea of another checklist?
>> MODERATOR: Yeah, sometimes I feel as though, with a checklist, people may be too busy with their daily work; they don't know how to use it. But some checklists can start from something really small, right? I remember I learned from somebody that you need to stop saying thank you to ChatGPT. A lot of people are saying thank you to ChatGPT. I'm looking at the audience. So if there is no one?
>> Not to put you on the spot, Taysir, but do you have a thought on how you create buy-in to use a checklist like this?
>> I think you raised an important point when you said it could be seen as a very bureaucratic process. So it's mostly about how we can make it, first of all, compatible with the capacity that we have within the organisation, but also included within our reporting and within our workload, so it's not something that we just do in addition to our work but really part of the process.
But obviously I think that everyone needs to be included within the process. Obviously that's something difficult to put into place. But what I learned when it comes to ethical AI and the use of AI is that sometimes we have a very narrow idea of how we can use it and how we use it personally.
But you need conversations. You need to be able to sit with your colleagues, with other organisations, with different stakeholders to really take into account and understand how AI can be implemented and used, and also the risks that could evolve from the beginning to the moment it's implemented.
And also, as you said, this is a process that should be ongoing; it's not something that you finish. It is a process that we have to redo every couple of months, maybe every year, because the risks of AI are changing. But that's a huge issue: how can we take that into account within our work without adding too much burden, especially when it comes to global majority and local organisations? We don't have that much capacity, so how do you reorganise the workforce that you have as well?
Those are really important questions. Otherwise this will be experienced as just another bureaucratic process to get done.
>> Right, yeah. Thank you so much, Taysir, for that. Do we have a question from our online audience? Any reflections from our online audience at this point? No? Okay.
So, yeah, I think we are running out of time; we just have 1 minute, not even 1 minute, I think. We have 10 seconds. So I will move on. This is just to say: if you would like to keep in touch with us and think along with us on how we co-create and implement this checklist tool, please get in touch; our business cards are also at the back table.
Also do check out the Haarlem Declaration: read through it and consider endorsing it, especially if you are a media organisation or you work with media outlets who are using AI in their own work. I think that's a wrap for us, then. Thank you so much for joining us, and if there are any questions we will be happy to take them.
[ Applause ]