Session
Inclusion, rights and stakeholder roles and responsibilities: What are/should be the responsibilities of governments, businesses, the technical community, civil society, the academic and research sector and community-based actors with regard to digital inclusion and respect for human rights, and what is needed for them to fulfil these in an efficient and effective manner?
Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?
Round Table - Circle - 60 Min
Recent years have been marked by an acceleration of the digital transformation, as evidenced by advances in artificial intelligence (AI). As platforms and applications become prevalent globally, more attention is given to the power of AI systems. AI is used in many aspects of our lives, from facial recognition to driving. However, it is always important to remember that we have technology on one side and human beings on the other. In particular, the focus should move towards consumers. Consumers both benefit from the advantages of AI (e.g. personalised offers) and are exposed to the risks associated with the use of AI systems (e.g. unfair commercial practices).
For our part, we offer a wide range of invited guests: representatives of academia, NGOs, technology companies, the EU, international organisations and government, who will express their views on the strengths and possible shortcomings of the use of algorithms by public authorities and businesses. Such a variety of speakers will allow for a fruitful debate on the responsibilities of particular actors in the Internet environment when it comes to AI, and on their roles in achieving digital inclusion of consumers while respecting their rights.
For example, the Polish Office of Competition and Consumer Protection (UOKIK) will start by sharing its unique experience of introducing AI-powered solutions to better protect consumers from unfair contract terms. The aim of the project is to help create a more equitable and inclusive society in which the average consumer can count on being treated fairly by businesses operating on the Internet (banks, insurance companies, telecoms, e-commerce, game and service providers, social media, etc.) and can avoid harm. This is particularly important in the case of vulnerable and disadvantaged consumers.
UOKIK is currently in the middle of implementing an AI tool that will search for terms and conditions on the Internet and analyse them to facilitate investigative work. This intelligent software robot will identify clauses that may be unfair to consumers, so that we can eliminate them from contracts. We have come a long way in defining what we want to achieve by employing a machine to do our work, and we are open to sharing this knowledge during the session. It is easy to say 'Use AI!' but much harder to do it in practice.
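To make the idea more concrete, the minimal sketch below shows one simple way such clause screening could work in principle: candidate clauses extracted from crawled terms and conditions are compared against a registry of clauses already judged unfair, using text similarity. The data, names and threshold are assumptions made for this description, not UOKIK's actual implementation.

```python
# Illustrative sketch only: flag contract clauses that resemble entries in a
# registry of clauses already judged unfair, using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical registry of clauses previously found unfair.
registry = [
    "The seller may change the terms of the contract at any time without notice.",
    "The consumer waives the right to pursue claims in court.",
]

# Hypothetical clauses extracted from crawled terms and conditions.
candidates = [
    "We reserve the right to modify these terms at any time without informing you.",
    "Delivery takes place within 14 business days.",
]

THRESHOLD = 0.35  # assumed cut-off; a real tool would tune this on labelled data

vectorizer = TfidfVectorizer().fit(registry + candidates)
scores = cosine_similarity(
    vectorizer.transform(candidates), vectorizer.transform(registry)
)

for clause, row in zip(candidates, scores):
    if row.max() >= THRESHOLD:
        # The software only flags the clause; the legal assessment stays with a human.
        print(f"FLAG ({row.max():.2f}): {clause}")
```

A production tool would of course work on Polish-language text, rely on the official register of prohibited clauses and a far richer model, and leave the legal assessment of every flagged clause to a human expert.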
Unlike many theoretical discussions, this panel will try to answer the question of how to use new technologies, in particular AI, in an efficient and effective manner while securing civil society rights and freedoms. The discussion will therefore start with the practical aspects of the process: technical knowledge of AI technology (identifying actions and processes that can be automated and choosing areas where AI can be helpful; verifying available databases and preparing data for the AI-powered tool; dialogue with business and academics to discuss the most suitable mechanisms and roles for AI, e.g. supervised vs. unsupervised machine learning, tool vs. assistant; and finding the most effective type of public procurement process, as required by law, to develop an innovative tool), as well as ethical questions. From this practical perspective of using AI for consumer protection, we are also faced with several prominent questions of a legal-ethical character, which are equally faced by businesses and researched in academia and by organisations such as the OECD. We will therefore also address questions related to the norms and values needed to avoid developing technologies that could prove harmful. These issues may concern human work ('Will all employees be replaced by robots?') or the responsibility of the government or business using AI for its decisions ('How can a machine's discriminatory or opaque assessments be avoided?'). The presence of a business representative will be highly valuable here, considering businesses' leading role in the use of AI technology, and it will be an interesting addition to the discussion to understand how they approach such problems.
These controversial questions and concerns related to AI are a good starting point for insights from academia and NGOs on how such an AI-based tool/solution can be used and supported in the field of consumer protection, taking into account legal and ethical aspects. Establishing an appropriate ethical and legal framework is key to ensuring fundamental principles and values are respected when dealing with these technologies.
By giving voice to other public institutions, we also hope to gain interesting insights into working with AI in other areas. Finally, the OECD will describe the work that its Committee on Consumer Policy and Working Party on Consumer Product Safety have undertaken in relation to AI. Their presentation will provide a global perspective on the consumer benefits and risks of AI. In particular, it will highlight:
• an OECD business survey on AI use, incorporating questions aimed at better understanding the use of AI in consumer products and services and businesses' initiatives to ensure that AI operates fairly and safely for consumers;
• how AI may be used by some businesses, in particular through data-driven dark commercial patterns, to manipulate consumer choice or discriminate against particularly vulnerable consumers (whereas all consumers could also be considered vulnerable), and possible AI-powered tools that could be employed to detect and mitigate dark patterns; and
• how AI may help to enhance product safety whilst also presenting new risks, such as product hazardisation (whereby a product originally placed on the market as safe becomes unsafe across its lifecycle due to, for example, the input of erroneous or poor-quality data).
A proper mutual debate cannot take place without the participation of global companies, on whose shoulders lies the responsibility to be accountable for their practices and to uphold international standards.
The discussion, starting with a brief outline of UOKiK's presentation and of the challenges and opportunities in the implementation of AI, should lead to conclusions for both public enforcers and the private sector. For the former, these will concern the adoption of new regulatory approaches to consumer protection using modern technologies, as well as their practical implementation. The latter will benefit from best practices in the elimination of unfair practices by online marketplaces and e-commerce retailers. The diversity of stakeholders, ensured by representatives of public and government bodies, international organisations, NGOs, academia and business, will lead to a discussion that addresses all aspects of this very complex problem: technical/technological, ethical and legal.
To summarise, the Open Forum will embrace economic and social inclusion and human rights as its main focus area, with particular regard to two of the policy questions: 4. Inclusion, rights and stakeholder roles and responsibilities, and 5. Promoting equitable development and preventing harm. Additionally, the discussion will also touch on several emerging and cross-cutting issue areas, such as emerging regulation: market structure, content, data and consumer/user rights regulation.
We plan to organise an Open Forum, specifically a Round Table (Circle), in a hybrid format in order to facilitate participation for both speakers and participants, whether online or onsite. The Round Table will have a defined structure and parts in line with the goals of this format. A moderator will introduce the subject matter experts at the table and then put the speakers in conversation with one another. We are aware that the discussion has to take place with equal weight and equal opportunities for all participants.
First of all, we will provide two moderators - a male and a female - who will jointly facilitate the discussion. One of them will be present physically while the other will be online, in order to create a sense of representation for both groups of participants and to facilitate the subsequent discussion.
The online moderator will also manage the chat.
In our opinion, the dual-moderator approach will ensure that the audience is actively encouraged to follow the speakers, share their reflections and ask questions.
Beyond the moderation itself, we shall use other online tools, such as an online whiteboard and a voting app, to ensure that even more passive participants are stimulated to take an active part. Moreover, polls on Zoom or Mentimeter are a good way of stimulating involvement from an online audience. Mentimeter can be used on phones, so participants physically present in the room can also take part.
AI is a topical and highly emotive issue that will undoubtedly attract public attention, especially when discussing a practical tool.
The discussion will be geared towards addressing the above-mentioned questions and providing answers based on the discussion and interaction with online and offline participants. The Open Forum participants will jointly draft recommendations for different groups of stakeholders. These recommendations, together with a summary of the Open Forum discussions and conclusions, can be captured in a brochure/publication which can be disseminated as a written product of the Open Forum.
All speaker organisations will share information about the event throughout their membership networks to scale up engagement and the number of participants/viewers, and to disseminate the outcomes of the panel. We will also share information on the event via our social media channels, and we are open to collaborating with other partners on a joint social media campaign.
Government of Poland
Office of Competition and Consumer Protection in Poland (UOKIK)
- Martyna Derszniak-Noirjean – Government, Director of International Cooperation Office, Office of Competition and Consumer Protection (UOKIK), Eastern European States
- Natasza Skrzek – Government, Chief Expert, International Cooperation Office, Office of Competition and Consumer Protection (UOKIK), Eastern European States
- Karol Muż – Government, Director of ECC Poland, European Consumer Center in Poland - Eastern European States
- Jacek Marczak – Government, Deputy Director of Branch Office in Bydgoszcz, Office of Competition and Consumer Protection (UOKIK), Eastern European States
- Piotr Adamczewski – Government, Director of Branch Office in Bydgoszcz, Office of Competition and Consumer Protection (UOKIK), Eastern European States.
- Jacek Marczak – Government - Deputy Director of Branch Office in Bydgoszcz - Polish Office of Competition and Consumer Protection (UOKIK) - Eastern European States - confirmed;
- Bob Wouters – International organization (EC) - e-Lab - Project Manager EU eLab at the European Commission - Intergovernmental Organization - confirmed;
- Prof. Monika Namysłowska – Academia - University of Łódź - Eastern European States - confirmed;
- Thyme Burdon – International organization - Project Manager - OECD Committee on Consumer Policy and Working Party on Consumer Product Safety, Directorate for Science, Technology and Innovation - Intergovernmental Organization - confirmed;
- Marcin Krasuski – Business - Government Affairs and Public Policy Manager - Google Poland - confirmed.
Martyna Derszniak-Noirjean – Director of International Cooperation Office, UOKIK, [email protected]
Karol Muż – Director of ECC Poland, European Consumer Center in Poland, [email protected]
16. Peace, Justice and Strong Institutions
16.6
Targets: 16 – Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels.
16.6 - Develop effective, accountable and transparent institutions at all levels.
The main topic of our workshop is related to the use of AI systems in consumer protection.
The Polish Office of Competition and Consumer Protection (UOKIK), representing a governmental institution, will introduce AI-powered solutions to better protect consumers from unfair contract clauses.
This main idea of the proposal lies within the scope of Sustainable Development Goal 16 – Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels.
For our part, we offer a wide range of invited guests: representatives of academia, NGOs, the EU and international organisations, business and government, who will raise the crucial issues covered by the following SDG target: 16.6 - Develop effective, accountable and transparent institutions at all levels.
Report
1. An interdisciplinary approach and cooperation are required when using AI. Using AI for consumer protection carries risks that should be kept in mind while implementing AI-based solutions.
2. Human supervision is essential to balance and minimise the risk of discrimination and bias. A system of checks and balances is crucial.
3. It is important to learn and work with AI in order to know how to implement it; experimentation with low-risk, repetitive tasks can be helpful.
4. It is important to develop guidelines and a framework for implementing AI.
Report on the IGF Open Forum “Artificial Intelligence (AI) for consumer protection” held on December 10th, 2021, in Katowice
Speakers:
- Jacek Marczak - Deputy Director of Branch Office in Bydgoszcz, Office of Competition and Consumer Protection (UOKIK)
- Bob Wouters - e-Lab - Project Manager EU eLab at the European Commission
- Prof. Monika Namysłowska - University of Łódź
- Thyme Burdon - Project Manager, OECD Committee on Consumer Policy and Working Party on Consumer Product Safety, Directorate for Science, Technology and Innovation
- Marcin Krasuski - Government Affairs and Public Policy Manager Google Poland
Moderators:
- Martyna Derszniak-Noirjean - Director of International Cooperation Office, UOKIK
- Karol Muż - Director of ECC Poland, European Consumer Center in Poland
A summary of the main takeaways of the session:
The session started with the moderator introducing the topic of the Open Forum and the panellists introducing themselves in turn. The conversation commenced with an overview presentation of the AI tool that UOKiK is currently developing. The aim of this tool is to help find and eliminate unfair contract terms through the use of a web crawler that reads terms and conditions and compares them with a registry of unfair contract terms that has been built up over the years. It will help automate the process of scanning the terms and conditions of contracts within the database as well as in external contracts. The tool then highlights any potentially unfair term it finds and notifies its human supervisor for further analysis. As this AI tool is due to go into use at the end of next year, the presentation triggered a more general discussion on the types of problems, challenges and issues perceived in the context of the use of AI and similar technologies by public administrations.
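As a simplified illustration of this human-in-the-loop design, the sketch below shows how clauses flagged by a crawler might be queued for a human supervisor rather than triggering any automatic action. The class names, threshold and example data are assumptions made for illustration, not details of UOKiK's actual system.

```python
# Illustrative sketch: flagged clauses go into a review queue and nothing
# happens until a human supervisor confirms the assessment. Hypothetical names and data.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Flag:
    url: str
    clause: str
    score: float              # similarity to the registry of unfair terms
    confirmed_unfair: bool = False

@dataclass
class ReviewQueue:
    threshold: float = 0.35   # assumed flagging threshold
    flags: List[Flag] = field(default_factory=list)

    def submit(self, url: str, clause: str, score: float) -> None:
        # The crawler submits every scored clause; only those above the
        # threshold are put in front of a human reviewer.
        if score >= self.threshold:
            self.flags.append(Flag(url, clause, score))

    def review(self, decide: Callable[[Flag], bool]) -> List[Flag]:
        # `decide` stands in for the legal expert's judgement on each flag.
        confirmed = []
        for flag in self.flags:
            flag.confirmed_unfair = decide(flag)
            if flag.confirmed_unfair:
                confirmed.append(flag)
        return confirmed

queue = ReviewQueue()
queue.submit("https://shop.example/terms", "We may change these terms at any time without notice.", 0.72)
queue.submit("https://shop.example/terms", "Delivery takes 14 business days.", 0.05)  # never reaches a human
confirmed = queue.review(lambda f: f.score > 0.5)  # placeholder for expert review
print(f"{len(confirmed)} clause(s) confirmed for further investigation")
```

The point of the sketch is simply that the tool's output is a queue of suggestions for a human expert, not an automated enforcement decision.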
One of the speakers noted that even though AI technologies are being used to improve or enhance consumer protection, there are many associated risks. These can take the form of discriminatory or non-transparent outcomes, for example in automated decision-making, or of issues related to data processing. Interestingly, many of these issues arise from over- or under-reliance on such tools by public administration staff, which leads to the subject of human accountability and supervision; it was suggested that public administrations should reflect on this. A relevant piece of EU legislation, the Artificial Intelligence Act, is currently being drafted and addresses the potential risks of using AI.
There are many areas in the field of activity of consumer protection agencies where such an AI tool could be used, for example as chatbots on websites, in market surveillance, or in the identification of unsafe products supplied on online marketplaces.
Later in the meeting, other speakers tried to answer the question of the most important technical challenges in the implementation of AI by public administrations. There is a clear need for AI in public administration; taking into account that the implementation of such a tool is time-consuming, an agile approach is needed. At the same time, public administration cannot afford low-quality evidence, so learning from each other through collaboration is crucial. The business representative's advice was to harmonise regulations across the EU in a more scalable way.
Important ethical issues related to the use of AI by public authorities were raised, such as transparency and fairness of decision-making, proper oversight, and concerns about employment reduction. The key principles that should be applied when using AI are fairness, non-discrimination, awareness of biases, accountability on the part of the agency, and the possibility to opt out. Breaches of ethics could potentially undo years of credibility and progress. The speakers highlighted that, to maintain proper oversight, regular training of staff would be beneficial so that they are aware of their responsibilities as well as the risks. UOKiK representatives emphasised that they are very mindful of such issues and wish to anticipate them and incorporate appropriate solutions into the tool from the start. This is why they engage widely in discussions within initiatives such as the IGF.
Some of the speakers argued that all tools should be accountable to a human being and should not work without regular supervision: AI is a tool which requires human responsibility to ensure that it functions in a safe and secure manner. However, one speaker put forward the counterargument that not all AI requires oversight. For example, tasks that do not deal with highly sensitive data, or that solely monitor time, do not need constant human supervision; in these cases, such intervention could hinder the AI process, which was developed to increase the pace of tasks and shift some of them from humans to machines. It is, thus, essential to correctly balance human supervision against the capacities of the tool.
The session was concluded with a short summary of the main points discussed and concluding statements from the speakers.