
IGF 2023 WS #383 AI & disinformation: opportunities, threats and regulations

    Subtheme

    Artificial Intelligence (AI) & Emerging Technologies
    Chat GPT, Generative AI, and Machine Learning
    Future & Sustainable Work in the World of Generative AI

    Organizer 1: Marta Gromada, NASK
    Organizer 2: Mateusz Mrozek, NASK
    Organizer 3: Klaudia Rosińska, NASK
    Organizer 4: Aleksandra Osman, NASK

    Speaker 1: Marta Gromada, Intergovernmental Organization, Eastern European Group
    Speaker 2: Marian-Andrei Rizoiu, Technical Community, Asia-Pacific Group
    Speaker 3: Givi Gigitashvili, Civil Society, Intergovernmental Organization
    Speaker 4: Doowan Lee, Government, Western European and Others Group (WEOG)

    Moderator

    Mateusz Mrozek, Technical Community, Eastern European Group

    Online Moderator

    Aleksandra Osman, Technical Community, Eastern European Group

    Rapporteur

    Klaudia Rosińska, Technical Community, Eastern European Group

    Format

    Debate - 90 Min

    Policy Question(s)

    How can we counter disinformation without multi-million-dollar technology?
    With AI developing so rapidly, how should citizen education change?
    What actions do we require from social media platforms?
    Can regulation alone clean up the disinformation space?
    Are less developed and less wealthy countries genuine participants in the development of AI, or merely a labour and testing force?
    Can technology created by white, wealthy men from the US represent and help us all?
    How do we ensure access to technology and secure the infosphere in the Global South?
    How do we combat AI bias?

    What will participants gain from attending this session? Participants and attendees:
    will gain information on technological innovations and tools;
    will learn about the terms hallucination, AI bias, AI explainability and the concept of Ethics by Design;
    will become familiar with research problems in the field of disinformation, presented by representatives of governments, business and social organisations;
    will learn about the opportunities and threats of AI development in the daily use of social media from different perspectives;
    will gain insights into the current state of AI regulation worldwide, from different perspectives;
    will gain access to knowledge on how to supplement competences in their organisations/countries and plan their next steps;
    will have the chance to exchange knowledge and gain new competences;
    will get the opportunity to meet practitioners who will be able to provide useful insights;
    will exchange experiences in the field of monitoring and combating disinformation;
    will round out their knowledge of the global fight against disinformation.

    Description:

    The discussion will focus on the development of AI and the fight against disinformation, in terms of both the challenges and the opportunities we see. We would like to touch on the definition of threats in the coming years, but also talk about flexible solutions that will protect democratic principles (e.g. freedom of speech). The discussion will also include references to the legal initiatives being undertaken to regulate AI globally (e.g. the AI Act).
    We would like to discuss:
    1. ChatGPT from OpenAI and other language models
    Recent research (Sadasivan et al., 2023) indicates that even minimal edits to generated content can defeat attempts at AI detection. Generated texts do not explicitly reuse existing content, so there is no clear basis for classifying such work as derivative.
    In China, the first arrest has already been made for fake news generated using ChatGPT;
    2. Midjourney, DALL-E 2 from OpenAI and Stable Diffusion, which generate graphics from text prompts;
    The tools are able to reproduce existing copyrighted graphics and images with high similarity.
    3. Microsoft/GitHub Copilot tool, an assistant for programming and soon for the entire development process;
    Risks arise when contributors do not understand the code they are implementing and do not verify the generated comments.
    4. Tools that create animations, such as D-ID, or that imitate and create deepfakes and deep voices;
    Cybercriminals are keen to use AI algorithms to trick victims with generated fake images or audio. Experts warn that the number of such incidents has increased by 13 per cent in the past year.
    Possible conclusions:
    Neither large numbers of people nor advanced technology need to be behind effective disinformation activities;
    Disinformation operations are carried out through the careful preparation of event scenarios;
    We need regulation of the use of AI;
    Technology, both in carrying out and in combating disinformation, is used to optimise working time.

    Expected Outcomes

    The discussions among guests with different backgrounds and interests will result in an exchange of knowledge, new contacts and further cooperation. If the participants are willing and able, the material can be published on the NASK website and online in the form of a report. We will also invite the speakers to present their work during an online webinar hosted by NASK.

    Hybrid Format: The panel discussion format allows time to be allotted to each speaker, both online and on-site.
    The on-site moderator will assign roles and alternate between questions from online and on-site guests.
    The online moderator will relay questions to the on-site moderator; these will also be visible to everyone in the online chat.
    Next, we will answer questions from people on-site.
    In addition, at the end of the discussion, we will use a voting and real-time feedback tool (e.g. Mentimeter) to check participants' satisfaction and gather their opinions.