Organizer 1: Dmitry Gulyaev, Youth Digital Ombudsperson office
    Organizer 2: Andrey Kuleshov, Common Fund for Commodities
    Organizer 3: David Otujor Okpatuma, Friends for Leadership

    Speaker 1: Kura Tegegn, Government, African Group
    Speaker 2: Milos Jovanovic, Private Sector, Eastern European Group
    Speaker 3: Mary Lou Rissa Cunanan, Private Sector, Asia-Pacific Group
    Speaker 4: Yudina Alena, Private Sector, Western European and Others Group (WEOG)
    Speaker 5: Jenny Chinchilla, Civil Society, Latin American and Caribbean Group (GRULAC)

    Moderator

    Andrey Kuleshov, Intergovernmental Organization, Intergovernmental Organization

    Online Moderator

    David Otujor Okpatuma, Intergovernmental Organization, African Group

    Rapporteur

    Dmitry Gulyaev, Civil Society, Eastern European Group

    Format

    Round Table - U-shape - 60 Min

    Policy Question(s)


    Digital development and social inclusion: Is there a connection between the implementation of specific digital policies and the social and economic rights of citizens set out in the Universal Declaration of Human Rights? Which approach to introducing digital technologies into the social sphere is the most sustainable and the safest for users?

    Balancing convenience and security in public administration and citizen welfare policies: AI technology makes it possible to greatly simplify social services, extend them to remote areas and help categories of citizens who rely on social support. However, imperfect algorithms, shifts in the economic situation and new technological challenges can undermine the positive and safe use of automation tools. Regulators and public platforms therefore need to focus on implementing balanced policies that take changes in people's lives into account.

    Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?

    Connection with previous Messages: The AI topic is specifically mentioned in the Katowice Messages. It is stressed that “Artificial Intelligence (AI) needs to be developed and deployed in manners that allow it to be as inclusive as possible, non-discriminatory, auditable and rooted into democratic principles, the rule of law and human rights. This requires a combination of agile self, soft and hard regulatory mechanisms, along with the tools to implement them.” The discussion at this workshop will build on the IGF 2021 final document and carry forward its spirit and ideas.

    SDGs

    10. Reduced Inequalities


    Targets: The proposal links most closely to SDG 10 “Reduced Inequalities”. The subject touches on both (a) the reduction of existing inequalities through the use of technologies in specific contexts where social problems are perpetuated, and (b) the prevention of new social inequality challenges emerging from the expanded use of advanced technologies. Practical solutions that may be discussed in this session will also have an impact on SDG 16 “Peace, Justice and Strong Institutions”, SDG 11 “Sustainable Cities and Communities”, SDG 5 “Gender Equality” and others.

    Description:

    Alongside positive examples of AI deployment in the provision of public social services, the integration of AI into everyday life can expose certain groups of the population to risk. Unbalanced use of AI has the potential to cause disruptive and discriminatory effects, violate the rights of individual citizens, and lead to leaks of personal data and breaches of citizens' privacy.

    Applying AI technologies to tasks such as automating social benefits and payments, calculating pensions, public insurance, medical care, vehicle registration and others can lead to leaks of personal data and to biased decisions that limit people's opportunities and rights. Smart systems should be introduced into such processes with caution, and only after a comprehensive impact assessment and the mitigation of any possible risks to users.

    In a number of countries, there have been cases of unfair decisions on social benefits, errors in compiling lists of those in need of various types of state support, and massive data leaks from social service resources and from various public and private organizations. AI-based automated systems should therefore operate under constant human oversight: so far, algorithms and machines cannot be fully trusted in complex social situations and scenarios.

    AI technologies could also help in reducing socioeconomic inequalities in manufacturing and trade by identifying the most vulnerable stakeholders in a particular sector or region, who could benefit the most from targeted Government transfers and aid programmes.

    Expected Outcomes

    During the discussion, stakeholders will share their experience in applying AI technologies to the automation of vital public administration, bureaucratic processes and social services, and will give examples of positive and negative practical cases. Most importantly, the discussion will aim to develop a set of useful recommendations on the application of AI technologies in social services.

    Hybrid Format: The discussion will be facilitated in a hybrid format, with an online and an offline moderator who will ensure the active and equal participation of virtual participants and speakers. The discussion will also incorporate elements of reverse mentoring, allowing virtual and offline participants to brainstorm together on the development of recommendations.

    Online Participation



    Usage of IGF Official Tool.