Round Table - 60 Min
Large online platforms have become a dominant source of information and news. These platforms now perform many information-management functions previously carried out by traditional actors such as editors and publishers, so their content governance processes have a tremendous influence on media freedom, access to information and freedom of expression. With the exponential growth of information shared online, many platforms have turned to developing and deploying technologies such as artificial intelligence (AI) for content governance. AI is used to support the prioritization, downgrading and dissemination of content to audiences (content curation), as well as the filtering and removal of illegal, harmful or otherwise unwanted content (content moderation). These AI-led processes now underpin how society interacts with information online.

However, the data- and advertising-driven business model of online platforms is not necessarily conducive to safeguarding media pluralism or public interest and newsworthy content. On the contrary, their AI-driven curation processes mostly serve their own and advertisers' economic interests rather than diversity, accuracy or the public good. Challenges arise in particular because the same content governance processes are applied to news content as to all other types of online information. In this context, AI-based tools are not designed to give due prominence to public interest content, but rather to promote, amplify and target users with content that optimizes engagement, facilitating advertising and generating profit for platforms at the expense of media pluralism and the public interest. Moreover, these AI-driven processes, which shape and arbitrate political and public discourse online, run on technology that is designed, developed and deployed in potentially biased and error-prone ways, negatively impacting freedom of expression.
Part of the problem is the lack of transparency of these AI tools. These technologies continue to drastically transform media and information consumption as we know it, so it is particularly important not only to address their societal harms but also to consider ways to harness them to fulfil the media's democratic role and promote human rights online. Recent policy and regulatory developments aimed at addressing the impact of AI on freedom of expression and other human rights, including recommender systems, create momentum to call for a healthier digital public sphere. This session will explore ways forward in promoting a healthier online information space, one that serves the public interest, advances democracy, and enables peace and security.
The session moderators will facilitate discussion both on-site and online to ensure an inclusive debate. Speakers will likewise participate both on-site and online; they will be key contributors who set the scene, but the focus is on moderated discussion rather than a formal panel set-up.
OSCE Representative on Freedom of the Media
Multi-stakeholder discussion involving representatives from international organizations, civil society, academia, the media sector etc.
Targets: Comprehensive security, lasting peace and sustainable development require that human rights, including freedom of expression and media freedom, are respected, protected and fulfilled at all times. While technologies provide ample opportunities for increasing access to information and freedom of expression, it is essential to address the challenges to human rights posed by the use of artificial intelligence and machine-learning technologies to shape and arbitrate information spaces. It is equally important to explore ways in which public interest content can be promoted to create healthier online information spaces.