IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation

Thursday, 12th October, 2023 (02:00 UTC) - Thursday, 12th October, 2023 (03:30 UTC)
WS 1 – Annex Hall 1

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning

Organizer 1: Noémi Bontridder, University of Namur
Organizer 2: Yves POULLET, University of Namur/Vice chairman of IFAP UNESCO
Organizer 3: Xianhong Hu, UNESCO

Speaker 1: Gabriela RAMOS, Intergovernmental Organization
Speaker 2: Marielza Oliveira, Intergovernmental Organization
Speaker 3: Stefaan G. Verhulst, Technical Community, Western European and Others Group (WEOG)
Speaker 4: Fabio Senne, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 5: Dawit Bekele, Technical Community, African Group
Speaker 6: Dorothy Gordon, Civil Society, African Group
Speaker 7: Siva Prasad Rambhatla, Civil Society, Asia-Pacific Group
Speaker 8: Changfeng Chen, Civil Society, Asia-Pacific Group


Onsite Moderator

Yves POULLET, Intergovernmental Organization, Western European and Others Group (WEOG)

Online Moderator

Xianhong Hu, Intergovernmental Organization, Western European and Others Group (WEOG)


Rapporteur

Noémi Bontridder, Civil Society, Western European and Others Group (WEOG)


Round Table - 90 Min

Policy Question(s)
  1. How can we articulate private and public governance of foundation models?
  2. How can we organize a global debate on this technology in order to avoid counterproductive competition between regional approaches?
  3. Do we need a moratorium?

What will participants gain from attending this session? Contributions from speakers from different regions of the world and from different disciplines will give participants a first overview of the different ways in which States and private actors in the world of generative AI assess the risks linked to this technology and envisage its governance, including in concrete developments. Furthermore, the speakers will offer a first analysis of the different legal and ethical aspects at stake (notably privacy, multiculturality, intellectual property and harmful content). The meaning and application of the ethical principles of the UNESCO Recommendation will be highlighted.


Generative AI systems such as LLaMA, LaMDA, GPT-4 and PaLM 2, developed by major players in our digital world, belong to a category of AI algorithms (supervised or not) that generate new outputs based on the data they have been trained on. Unlike traditional AI systems, which are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more. Some of these systems (BERT, DALL-E, FLORENCE or NOOR) qualify as “foundation models”, since they are trained on vast quantities of data, resulting in a model that can be adapted, including by individuals, to a wide range of downstream tasks, preprogrammed or not.

The irruption of this new category of AI system into all sectors – computer programming, artistic creation, education, personal use, scientific innovation, healthcare, security systems, personal interaction, and more – creates not only new risks to our individual liberties (privacy, freedom of expression, exploitation of our vulnerabilities, …) and potential collective discrimination, but also societal risks (homogenisation of opinions, environmental damage, competition infringements, impacts on the democratic functioning of our societies, violations of the rule of law, …). These systemic risks call for new methods of public and private governance, or even for a moratorium to allow a public discussion on the stakes of this technology and its limits.

Accordingly, the IFAP Working Group on Information Ethics (WGIE) is proposing a workshop devoted to how some countries and regions are approaching these issues in light of the ethical principles established by the UNESCO Recommendation on the Ethics of Artificial Intelligence and, based on that analysis, to triggering policy reflections on how these principles could be applied to the development of AI across countries, regions and national jurisdictions.

Expected Outcomes

The recording of the session will be published on the webpages of the Working Group on Information Ethics (WGIE) of UNESCO and of its member institutions (in particular academic partners from Morocco, India, China, Germany, Belgium and America). The publication of an academic text by the WGIE can also be expected.

This workshop is part of the work of the WGIE, which is developing a series of workshops in several regions of the world.

Hybrid Format: We encourage hybrid participation to include in the discussion people from different countries, in particular from developing countries. Our online moderator has already organized several online events and therefore has experience in encouraging the kind of interactive debate among participants that we aim for.

Key Takeaways

Monitoring procedures for these systems should be mandatory, not only within companies but also at the societal level. Public debates should be organised about the development and use of these systems, and young people should be involved in these debates as they are the systems’ main users.

AI regulations focus on the development and commercialisation of generative AI systems, but proper use of these systems cannot yet be ensured. This should be addressed by making end users more accountable.

Call to Action

As there is a consensus on universal ethical values (i.e. the UNESCO Recommendation on the Ethics of AI), what is now needed is local, inclusive and accountable implementation. Financial support for achieving this is expected from international organisations and big tech companies.

Generative AI developments might reinforce digital asymmetries and big tech dominance. Open data, open science and standardisation could counteract this and should therefore be mandated by public regulation.

Session Report

As moderator of the panel, Yves POULLET explained what we call “generative AI” and why these systems and their applications can be fruitful for society while also carrying many risks, not only for our individual liberties but also at the societal level. He asked the audience “who has already used generative AI systems?”, to which a majority answered positively.

He explained that OpenAI initially proposed restricting the use of ChatGPT to professional users because it could be dangerous if deployed to the general public. However, OpenAI launched a general public application in 2022, and since then many companies have developed foundation models as well. Foundation models are general-purpose models and are not per se developed for specific purposes. The panel spoke about transformers, large language models and multimodal generative systems, such as DALL-E, ChatGPT, Midjourney, Bard, ERNIE 3.0 and KoGPT. A lot of generative AI applications derived from foundation models are possible.

Gabriela RAMOS, Assistant Director-General for the SHS UNESCO department in charge notably of the AI Ethics Recommendation, delivered initial remarks introducing the UNESCO Recommendation on the Ethics of AI and the recent UNESCO report comparing ChatGPT with the provisions of the Recommendation. She enumerated the various issues raised by generative AI systems and called for more reflection, action and initiatives in this field. She advocated governance of these technologies and, on that point, introduced the impact assessment tool developed by UNESCO.

Dawit BEKELE introduced the technical peculiarities of generative AI: these systems are designed to generate human-like content, producing coherent and context-related outputs based on the input they receive. He pointed out that they are used on large-scale platforms. He then stressed the numerous benefits of generative AI systems, as they can be used directly for filtering, rewriting, formatting, and so on. At the same time, they are a major source of risk for our societies: potential misuse, creation of harmful content, the fact that many people trust what they see online, risks to information and education, and risks for jobs (writers, etc.). Moreover, because of the bias challenge (biased data sets), some countries have banned their use. The applications of language models are diverse and include, for instance, text completion, text-to-speech conversion, language translation, chatbots, virtual assistants, and speech recognition. These models work with big data drawn from many different sources, public (such as Wikipedia and administrative databases) as well as private. However, these resources are not representative of the whole world.

The first roundtable focused on generative AI governance and addressed the following questions: Do we need regulation? What do you think about soft law based solely on ethical recommendations or voluntary codes of conduct? What do you think about the “moratorium” requested by certain companies and academics? What about a global regulatory model such as the UN is considering?

Changfeng CHEN first mentioned the concept of cultural lag, a term coined by sociologist William F. Ogburn in the 1920s to describe the delayed adjustment of non-material culture to changes in material culture. It refers to the phenomenon whereby changes in material culture (such as technology, tools and infrastructure) occur more rapidly than changes in non-material culture (such as beliefs, values and norms, including regulations). She applied this concept to generative AI. In her opinion, first, we need regulation for generative AI because it is a powerful technology with the potential to be used for good or for harm. But generative AI is still developing, and the scientists and engineers who create it cannot fully explain or predict its future. Therefore, we need to regulate it prudently rather than nip it in the cradle through regulation. Furthermore, we need to be more inclusive and have the wisdom to deal calmly with the mistakes it causes; only that demonstrates the confidence of human civilisation. Second, a moratorium on generative AI, being a temporary ban on the development and use of this technology, would be a drastic measure and is unlikely to be effective in the long term. Generative AI is a powerful technology with the potential to be used for good, and it would be unwise to stifle its development entirely. Third, a global regulatory model for generative AI would be ideal, but it will take time to develop and implement; meanwhile, AI, including generative AI, is developing very rapidly in China and is already widely used. Fourth, she explained that China has been at the forefront of developing and regulating generative AI, releasing the Interim Administrative Measures for Generative Artificial Intelligence Services, published in July 2023.
These measures require providers of generative AI services to: source data and foundation models from legitimate sources; respect the intellectual property rights of others; process personal information with appropriate consent or another legal basis; establish and implement risk management systems and internal control procedures; and take measures to prevent the misuse of generative AI services, such as the creation of harmful content.

Stefaan VERHULST stressed the importance of a responsible approach and raised the question of the extent to which the development of AI should be open or closed. He advocated open data and open science to avoid digital asymmetries. He underlined the fact that the US is again a member of UNESCO and has endorsed its Recommendation on AI Ethics. He pointed out that the principles underpinning the ethical values are aligned across multiple documents: the US Blueprint for an AI Bill of Rights, the UNESCO Recommendation on AI Ethics, the EU documents, etc. The US approach is based on co-regulation, and he stressed the need for notice and explainability rather than a complete regulatory system. On that point, he underlined that US states are much more active in working on legislative regulation than the federal authority. Cities are particularly active in this field as well, and Stefaan Verhulst underlined the interest of this bottom-up approach, which is more respectful of local disparities and permits real participation by citizens.

During the Q&A, Omor Faruque, a 17-year-old from Bangladesh and the founder and president of Project OMNA, responding to the policy questions as a global child representative, suggested to: 1. establish clear ethical guidelines for the development and use of foundation models, with input from all stakeholders, including children and young people; 2. create a public registry of foundation models, including information about their ownership, purpose, and potential risks; 3. develop mechanisms for public oversight and accountability of foundation models; 4. convene a global forum on generative AI to discuss the ethical, legal, and social implications of this technology; 5. support research on the impacts of generative AI on everyone, including children and young people; 6. promote digital literacy and critical thinking skills among children and young people so that they can be informed users of generative AI; and 7. consider a moratorium on the development and use of generative AI systems until appropriate safeguards are in place to protect children and young people from potential harms.

Steven VOSLOO (UNICEF) stressed that UNICEF is also concerned that we do not yet know the impacts of generative AI (positive and negative) on children's social, emotional and cognitive development. Research is critical but will take time. He asked how we can best navigate the reality that these tools are already in public hands and that children need to be protected and empowered today, even though we will only fully understand the impacts later.

Torsten Krause said that responsibility is a question for all, not only for children, and asked whether official institutions’ certificates or permission should be required before the distribution of technologies like generative AI systems.

On this matter, Stefaan Verhulst and Changfeng Chen agreed that young people must be involved.

A Finnish attendee stressed that it would be complicated to regulate a technology that is used by the general public.

Doaa Abu Elyounes, Programme Specialist in the Bioethics and Ethics of Science and Technology section, said that it is of course tempting to use these systems because they make writing faster, for example, and that we should therefore be more aware of the risks involved.

The second roundtable was dedicated to specific socio-economic topics linked with generative AI: first, the under-representation of certain languages in big data, which excludes certain populations and creates cultural dominance; second, the fact that most of these generative AI applications rest on a business model that requires payment for the services offered.

Siva PRASAD stressed that those developing the technologies seek profit and accentuate the digital divide, because they are not interested in populations that are not a source of profit. The use of technology is affecting social innovations, and it is the role of public authorities to pay attention to the digital divide, especially as regards marginalised communities. He evoked the specific problem of the use of generative AI in the education system, underlining young people's right to use the technology to build up their personality and teachers' obligation to help them become aware of the risks. Echoing Stefaan Verhulst, he said that the local approach is the only way to develop sustainable and equal societies, and that international and national authorities have to finance it. On that point, he asserted that while there is a universal bill of rights, local answers are needed.

Fabio SENNE focused on the measurement of the socioeconomic impacts of AI and data governance. He called attention to the current scenario of digital inequalities and how it shapes the opportunities and risks related to generative AI. Disparities among countries and regions can affect the diversity of the data used to train AI-based applications; in the case of Brazil, this concerns the availability of content not only in Portuguese but also in more than 200 indigenous languages. In terms of access to connectivity and devices, we see persistent patterns of inequality (ethnicity, traditional populations, rural/urban divides, income, level of education, age). Diversity and inclusiveness have to be principles. The use of generative AI tools can also be affected by poverty and vulnerability (e.g. income): early adopters tend to benefit more when a new application becomes available, and the impacts tend to be more disruptive in the early phases of dissemination of these tools. Fairness, non-discrimination and inclusive access for all are also principles.

As concluding remarks, the panellists indicated what they consider the most crucial issues raised by generative AI. These are summarised in the key takeaways and calls to action above.

In her final remarks, Marielza OLIVEIRA thanked the Working Group on Information Ethics of IFAP-UNESCO for the panel and developed recommendations regarding future IFAP and UNESCO work. She urged the IGF to pursue this topic as a major challenge for all people and for our society, and pleaded for continuing the discussion and working to solve the delicate issues raised by these technologies.