IGF 2023 DC-DAIG Can (generative) AI be compatible with Data Protection?

Time
Tuesday, 10th October, 2023 (08:00 UTC) - Tuesday, 10th October, 2023 (09:30 UTC)
Room
WS 10 – Room I
DC
Dynamic Coalition on Data and Artificial Intelligence Governance

Round Table - 90 Min

Description

This session will explore the tension between the development and use of AI systems, particularly generative AI systems such as ChatGPT, and data protection frameworks. The development, adoption and popularisation of AI has led to increasing calls for AI regulation, but has also exposed the need for more effective implementation of existing data protection frameworks. Participants will provide different perspectives on how the automated processing of personal data is regulated in different national or regional frameworks and on what types of regulatory proposals have been put forward to address the challenges of AI, stressing the interactions between AI and data policies. This session will be the first IGF meeting of the new IGF Data and Artificial Intelligence Governance Coalition (DAIG), which aims at fostering discussion of existing approaches to data and AI governance and promoting analysis of good and bad practices, so as to identify which solutions should be replicated and which should be avoided by stakeholders to achieve sustainable and effective data and AI governance.

  1. To facilitate interaction between onsite and online speakers and attendees, we will leverage a hybrid event platform that provides real-time communication channels. For the onsite attendees, we will project the virtual attendees and their questions/comments onto the screen to ensure that both groups can engage with each other. In addition, we will use a moderated chat on Zoom for online participants to interact with onsite speakers and vice-versa.
  2. The session will be designed with both online and onsite participants in mind, and will be structured with interactive segments, such as Q&As and debates, to engage all attendees and cater to both groups.
  3. To increase participation and interaction during the session, we plan to use an online document to allow participants to contribute their thoughts in a shared digital space. We will also utilize social media platforms, such as Twitter and Instagram, for pre-session and post-session engagement and live updates.
Organizers
  • Ana Brian Nougrères, UN Special Rapporteur for Privacy
  • Walter B. Gaspar, FGV Law School, Rio de Janeiro
  • Shilpa Jaswant, Jindal Global Law School, India
  • Luca Belli, FGV Law School, Rio de Janeiro
Speakers

 

Brief intro on the DAIG’s work by Luca Belli, Professor and Director, Center for Technology and Society at FGV Law School (5 min)
 
First slot of presentations (6 minutes each)

  • Armando Manzueta Digital Transformation Director, Ministry of Economy, Planning and Development of the Dominican Republic, Dominican Republic
  • Melody Musoni, Policy Officer at the European Centre for Development Policy Management (ECDPM), South Africa
  • Gbenga Sesan, Executive Director, Paradigm Initiative, Nigeria 

Q&A break (10 minutes)

Second slot of presentations (5 minutes each)

  • Jonathan Mendoza, Secretary for Data Protection, National Institute of Transparency, Access to Information and Protection of Personal Data (INAI), Mexico
  • Camila Leite, Brazilian Consumers Association (Idec)
  • Smriti Parsheera, Researcher, CyberBRICS Project, India
  • Wei Wang, University of Hong Kong, China

Q&A break (10 minutes)

Third slot of presentations (3 minutes each)

  • Michael Karanicolas, Executive Director, UCLA Institute for Technology, Law and Policy
  • Kamesh Shekar, Senior Programme Manager, Privacy & Data Governance Vertical | AI Vertical, The Dialogue.
  • Kazim Rizvi, Founding Director, The Dialogue.
  • Giuseppe Claudio Cicu, PhD Student at University of Turin & Corporate Lawyer at Galgano Law Firm.
  • Liisa Janssens, LLM MA, Scientist, Department of Military Operations, Defence, Safety and Security Unit, TNO, the Netherlands Organisation for Applied Scientific Research.

Open debate (10 min)

 

Onsite Moderator

Luca Belli, FGV Law School, Rio de Janeiro

Online Moderator

Shilpa Jaswant, Jindal Global Law School, India

Rapporteur

Shilpa Jaswant, Jindal Global Law School, India

SDGs

8.3

Targets: "Promote development-oriented policies that support productive activities, decent job creation, entrepreneurship, creativity and innovation, and encourage the formalization and growth of micro-, small- and medium-sized enterprises, including through access to financial services". The current impact of AI, and especially generative AI, can already be felt in day-to-day activities, ranging from the integration of these new technologies into existing processes in productive sectors to matters of cybersecurity and regulation, such as copyright and personal data protection. The projected impact of these technologies, however, is even greater, with a potential 7% rise in global GDP and a 1.5x impact on productivity over the next ten years, but also with significant effects on existing jobs and necessary professional capabilities (https://www.goldmansachs.com/intelligence/pages/generative-ai-could-rai…). Thus, discussing the determinants of access to these new technologies, effective training and learning, and innovation diffusion, as well as the regulatory framework that ensures respect for fundamental rights in the face of their potential harms, especially from a multistakeholder account and from a multitude of national perspectives, closely relates to a development-oriented policy debate.

Key Takeaways (* deadline 2 hours after session)

AI transparency and accountability are key elements of sustainable AI frameworks, but different stakeholders and policy debates define and interpret these concepts in heterogeneous fashion.

Most AI governance discussions are focused on, and led primarily by, developed countries. The Data and AI Governance (DAIG) Coalition has proved to be one of the few venues with a strong focus on AI in the Global South.

Call to Action (* deadline 2 hours after session)

The DAIG Coalition will keep promoting the study of key data and AI governance issues, such as algorithmic explicability and observability, which are critical to achieving sustainable policy frameworks.

The DAIG Coalition will maintain and expand its focus on Global South perspectives, striving to increase participation from African countries.

Session Report (* deadline 26 October) - click on the ? symbol for instructions

Session report: Can (generative) AI be compatible with Data Protection?

IGF 2023, October 10th, 2023, WS 10 - Room I


The session explored the tension between the development and use of AI systems, particularly generative AI systems such as ChatGPT, and data protection frameworks. The DC aims to present a diverse set of views, in the spirit of multistakeholder debate, from various sectors, countries, disciplines, and theoretical backgrounds.

Professor Luca Belli, Director of the Centre for Technology and Society at FGV Law School, opened and moderated the session. He discussed the concept of AI Sovereignty – “the capacity of a given country to understand, muster and develop AI systems, while retaining control, agency and, ultimately, self-determination over such systems”. Regulating generative AI involves a complex web of geopolitical, sociotechnical, and legal considerations, whose core elements compose the AI Sovereignty Stack.

Armando Manzueta, Digital Transformation Director, Ministry of Economy, Planning and Development of the Dominican Republic – gave insights on how governments can try to use generative AI in their infrastructure and public services. When an AI system complies with data privacy laws and offers a transparent decision-making mechanism, it has the power to usher in a new era of public services that can empower citizens and help restore trust in public entities, improving workforce efficiency, reducing operational costs in public sectors, and supercharging digital modernization.

Gbenga Sesan, Executive Director, Paradigm Initiative, Nigeria – emphasized the role of existing data protection laws, and noted that the ongoing discussion on generative AI creates an opportunity for countries that do not yet have a data protection law to consider introducing one to regulate mass data collection and processing. There is also a need to de-mystify AI and make it more understandable to people. Sesan also pointed out the lack of diversity in generative AI models such as ChatGPT, as well as the need to establish review policies or mechanisms when these models deal with information about people.

Melody Musoni, Policy Officer at the European Centre for Development Policy Management (ECDPM), South Africa – spoke on how African countries are taking steps to carve out their position as competitors in the development of AI. There is a need for AI to solve problems specific to the African region: for example, the digital transformation strategy showed the urgency for Africa to start looking into AI and innovation to develop African solutions. The speaker also mentioned setting up data centers through public-private partnerships.

Jonathan Mendoza, Secretary for Data Protection, National Institute of Transparency, Access to Information and Protection of Personal Data (INAI), Mexico - explored current and prospective frameworks, giving a descriptive account of ongoing efforts to promote transparency and accountability. Given the diverse nature of the population in the Latin American region, generative AI can pose a threat, and a policy for processing personal data must therefore be in place. There is also a need to balance the ethical design of AI models with the implementation of AI, to make these models more inclusive and sustainable while reducing potential threats.

Camila Leite, Brazilian Consumers Association (Idec) - explored the general risks of AI for the Brazilian consumer population. Financial and mobility services can benefit immensely from generative AI; however, there have been instances in which the output of generative AI was found to be manipulative and discriminatory, and to violate people's privacy. It is important to put consumer rights and protection at the heart of policies regulating generative AI.

Wei Wang, University of Hong Kong - elucidated the disparate conceptualizations of AI accountability among various stakeholders in China, facilitating an informed discussion about the ambiguity and implementability of normative frameworks governing AI, specifically generative AI. China has a sector-specific approach, in contrast to the comprehensive approach seen in the EU, the UK, and elsewhere, and has established measures to comply with sectoral laws and intellectual property laws.

Smriti Parsheera, Researcher, CyberBRICS Project, India - discussed the why and how of transparency obligations, as articulated in AI governance discussions in India and in select international principles. She argued that the need for transparency permeates the lifecycle of an AI project, and identified the policy layer, the technical layer, and the operational layer as the key sites for fostering transparency in AI projects.

Michael Karanicolas, Executive Director, UCLA Institute for Technology, Law and Policy - argued for the need to develop AI standards beyond the “auspices of a handful of powerful regulatory blocs”, and called for the inclusion of the Majority World in standard-setting processes in international fora.

Kamesh Shekar, Senior Programme Manager, Privacy & Data Governance Vertical, The Dialogue - argued for a principle-based approach coupled with a detailed classification of AI harms and impacts. He proposed a detailed multistakeholder approach that resonates with the foundational values of responsible AI envisioned by various jurisdictions, geared toward ensuring that AI innovations align with societal values and priorities.

Kazim Rizvi, Founding Director, The Dialogue - spoke about domestic coordination of regulation, followed by international coordination. Alternative regulatory approaches can also be explored through public-private partnerships.

Giuseppe Cicu, PhD Student at the University of Turin and Corporate Lawyer at Galgano Law Firm - spoke about a framework to regulate AI by Corporate Design, fitting business management and AI governance concerns together into a step-by-step implementation process, from strategic planning to optimization. He provided a game plan for responsible AI by bringing transparency and accountability into the organizational structure of the firm and keeping a human in the loop. The approach is grounded in the global human rights framework and in privacy policies. He suggested that corporations introduce an ethics-oriented algorithmic legal committee.

Liisa Janssens, LLM MA, Scientist, Department of Military Operations, Defence, Safety and Security Unit, TNO, the Netherlands Organisation for Applied Scientific Research - provided a focused responsible-AI framework for military applications, developed through a scenario-setting methodology for considering the virtues and shortcomings of AI regulation. The disruptive nature of AI is considered in the face of the demands of Rule of Law mechanisms, in order to trace the requirements that make up responsible use of AI in the military.

Comments and questions: What are the key privacy principles at a normative level (e.g., transparency, data minimisation, and purpose limitation) that should be ensured so that generative AI can comply with them? Will data protection laws expand their scope to include non-personal data, since most of the data used to train generative AI is non-personal?