IGF 2022 Day 2 Open Forum #94 Privacy Risk Management in AI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> DALBIR SINGH:  Hi, everyone.  Is the room, the conference room, set up, and can you hear me?

>> Hello?  Can you hear me?  Hello.  This is Alan, I don't know if you can see me?

   >> DALBIR SINGH:  We could for a bit but we don't see you anymore, but that's okay.

>> ALAN:  Okay.  Thank you very much.  There was a session just finishing.  As a MAG member it's a pleasure to be on site and moderate this session and this workshop with ‑‑ oh, okay.

   >> DALBIR SINGH:  Perfect.  Thank you so much, Alan.  Thank you to everyone for joining.  I think we can get started.  I think what I'll do here is just ‑‑

>> (Speaking off mic).

   >> MODERATOR:  We'll wait a minute to begin online, right?  We'll wait one minute and then start.

   >> DALBIR SINGH:  Okay.

   >> MODERATOR:  Thank you.

   >> DALBIR SINGH:  I also have some slides so I'll need to share my screen if someone can ensure that I'm able to do that.

   >> MODERATOR:  Sure.  Can you confirm that you can share your screen?

   >> DALBIR SINGH:  Yes, it works now.  Thank you.

   >> MODERATOR:  Okay.  Hello, everyone.  Good morning, good evening.  Well, as you can see, I'm the only person on site who is going to moderate this session.  It's a pleasure for me as a MAG Member to be involved in this.  As you know, the theme of this open forum is Privacy Risk Management in Artificial Intelligence, which is a very interesting topic, and co-moderating with me online is going to be Mr. Dal.  I'm sorry, Mr. Dal, right, and many speakers, as you can see, are connected online as well.  So, Dal, the floor is yours, and I understand you have a presentation for all of us.  Thank you.

   >> DALBIR SINGH:  Perfect.  Thank you so much, Alan.  Can we confirm people can see the screen?

   >> MODERATOR:  We can.

   >> DALBIR SINGH:  Okay.  Well, yes, hi, everyone.  It's a pleasure to be with you today.  I wish we could be in person, but plans get disrupted.  It's particularly nice for us to be able to present the first time here in this Open Forum Number 94 in our capacity as members of the Global Privacy Assembly, so first I think we'll just do some introductions.

I'm Dal Singh, Senior Policy Analyst at the Office of the Privacy Commissioner of Canada.

   >> ETHAN PLATO:  Hi, everyone.  Ethan Plato, legal counsel for the privacy commissioner of British Columbia.

   >> KRISTINA ZENNER:  Kristina from Germany, from the technical data protection department.

>> Good morning.  Roberto, from the Italian data protection authority and head of artificial intelligence.

   >> DALBIR SINGH:  Sophia, we can't hear you.  Oh, no, I'm afraid not.

   >> MODERATOR:  Maybe it's the microphone.  Maybe you need to restart the Zoom session.

   >> DALBIR SINGH:  All right.  We will wait for Sophia to rejoin.

Go ahead, Sophia and try speaking.

   >> SOPHIA IGNATIDOU:  Can you hear me now?

   >> DALBIR SINGH:  Yes, we can.

   >> SOPHIA IGNATIDOU:  Great.  Good morning, again.  I'm Sophia and I'm a Group Manager for AI and data science at the Information Commissioner's Office, the UK data protection authority.

   >> DALBIR SINGH:  And we should note we're only five of the roughly 26 authorities that compose the broader working group.

So today we'll be discussing privacy risk management in AI.  AI governance has had considerable attention in the past few years, but we find the discussion tends not to emphasize the role of data protection and privacy in that sort of puzzle, or tends to cover it only briefly without describing the specific risks and the kinds of things we must do to address them.  It's with that in mind that our working group decided to pursue this further at the international level, looking at the things that we as privacy regulators believe should be part of the conversation.

So, what we will be covering today is, first, some background on the GPA and our AI Working Group, then managing and mitigating the risks of AI systems and the risk-management process, and then we'll have considerable time for questions and discussion.

So first let me just provide some context about the GPA for those who are unfamiliar.  The Global Privacy Assembly was first established in 1979 ‑‑

   >> AUDIENCE MEMBER:  We don't see the slide now.

   >> DALBIR SINGH:  Oh, no.  We can now.  Okay.  So our organization was first called the International Conference of Data Protection and Privacy Commissioners.  We're an international organization comprised of over 130 of the world's data protection authorities or privacy regulators, and we're bodies independent of the national governments that we represent.  Because privacy is a global issue, we recognize that international cooperation is, you know, a good method to address some of these issues, and we do this through the GPA.

And so the GPA meets annually for a conference; the most recent was just in Istanbul last month, where it adopted a strategic plan and documents, and those can all be accessed on the website at globalprivacyassembly.org.

There are a number of working groups, you know, that the GPA has established.  We are the AI Ethics and Data Protection in AI Working Group shown here, but there are a number of other ones as well.  You can access all the reports and documents from all the working groups on the website.

So, past items that we've worked on include the Declaration on Ethics and Data Protection in AI, which has several other co-sponsors, just cut off here for brevity, and more recently the Resolution on Accountability in the Development and Use of AI.  The significance of these types of documents is that they reflect data protection authorities' positions and views and signal our general approach to emerging and topical issues.

We've also conducted a number of internal activities, including a survey on our authorities' own capacity to deal with AI issues as well as created a subgroup on facial recognition technology, which has produced its own sort of framework.

In our ongoing work, we'll focus on issues such as the use of AI within the employment context, among others, and so we value the opportunity to engage with you and other groups so please get in touch with us, even just to introduce yourself.  Our emails will be posted at the end of the presentation.

So with that, I'll pass it over to Sophia.

   >> SOPHIA IGNATIDOU:  Thank you, Dal.  I hope you can hear me again.  So the thinking behind the general risk-management framework we are presenting here is actually grounded in the Declaration on Ethics and Data Protection in AI that Dal just mentioned, signed in 2018 in Brussels, and also the Resolution on Accountability in the Development and Use of AI adopted in October 2020.  That resolution affirmed that responsibility for the operation and effects of AI systems remains with human actors, and that accountability should be assessed against clearly defined principles and frameworks.

So, with that in mind, the GPA AI Working Group that we're representing here went on to develop a framework for managing risk in the AI systems across the supply chain, and that was presented at the last GPA conference this year in Istanbul that Dal also mentioned earlier.

So the framework that we agreed on sets out nine overarching aspects that we think should be considered across the AI lifecycle.  Some of these aspects you can see on this slide, and they link to well-known, existing data protection and privacy frameworks from around the world; namely, the GDPR that data protection and privacy experts are familiar with.

Yeah, so most of these aspects are not controversial and are quite common sense, I would say, such as fairness, lawfulness, and transparency.  But, yeah, we'll get into more detail in the next couple of minutes.

Can we pass to the next slide, please.  Thank you.

So, just to give more detail on what we mean by those aspects that we think should govern risk mitigation in AI going forward: first of all, there is an issue around fairness and lawfulness.  We believe that organizations using and deploying AI should make sure that they process personal data in that context in a way that is fair and that leads to fair outcomes.  We believe AI should be used in good faith and should not seek to exploit human vulnerabilities, and there should always be a legal basis on which data is processed for AI training and use.

And in terms of transparency and explainability: transparency is needed as to when AI is actually used.  Sometimes people are not even aware that AI is implicated in a decision-making process.  Transparency is also relevant to making sure models, training data, and other details around the system are accessible and transparent to regulators so they can discharge their respective functions accordingly.

There is also a need for AI explainability.  Explainability relates to making people aware of the nature of the decision, what kind of training data was used in development, what kind of personal data in general informed the decision across the decision-making process, and the general rules around how the decision-making process was structured and what its parameters were.  Beyond that, people should be aware of the likely impact of an AI-driven decision more broadly.  Again, this kind of thinking also maps onto existing data protection and privacy frameworks that I think are already utilized to regulate AI to a certain extent.

All of this considers the fact that when AI-driven inferences are linked to an identifiable person they constitute personal information, so the relevant legislation applies, and any approaches that we suggest here should not prejudice law enforcement functions or other legal obligations that organizations using or developing AI have.

Can we go to the next slide?  Yes, thank you.

So another aspect that we think organizations using and developing AI should take into account is putting in place appropriate measures to accommodate recourse and redress.  In order for individuals to exercise any rights they may have in relation to AI, they should be aware of any tools and processes that are in place to enable them to challenge AI-driven decisions and AI-driven deployments.

Mechanisms enabling human review to scrutinize AI-driven outputs should be in place, and in certain jurisdictions those may actually be legally mandated as well, especially when it comes to Article 22-related decisions.  Article 22 is one of the key provisions in the GDPR that relates to solely automated decision-making processes.

The other aspect that we feel is important for organizations to take into account is data minimization and storage limitation, which are already principles in data protection, and more specifically in the GDPR, even though I'm aware that the GDPR is an EU framework.  That really relates to using the least amount of data to train and use AI systems and making sure any data retention periods are proportionate to the goal that you're trying to achieve when you're using or developing AI.  I think I will just stop here and pass it on to Ethan.
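The data minimization principle described above can be made concrete in code: keep only the fields the stated purpose actually needs, and replace direct identifiers with a pseudonym.  Below is a minimal sketch; the record schema, field names, and `minimise` helper are illustrative assumptions, and note that a salted hash is pseudonymization rather than full anonymization, since whoever holds the salt can re-link records.

```python
import hashlib

# Hypothetical allow-list of fields actually needed for the stated purpose.
ALLOWED_FIELDS = {"age_band", "region", "outcome"}

def minimise(record, salt="rotate-this-secret"):
    """Drop fields outside the allow-list and pseudonymize the identifier."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # One-way pseudonym: lets records be linked without storing the raw ID.
    digest = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
    out["pid"] = digest[:12]
    return out

raw = {"user_id": "u-1001", "name": "Ada", "email": "ada@example.org",
       "age_band": "30-39", "region": "BC", "outcome": "approved"}
print(minimise(raw))  # name, email, and raw user_id never leave this function
```

Rotating the salt periodically would also serve the storage-limitation goal, since old pseudonyms can no longer be linked to new ones.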

   >> ETHAN PLATO:  Thank you, Sophia.  Good very early morning to everyone, at least for me.  Over here it's currently 3:07 a.m., so I apologize for the bags under my eyes a little bit, but I'm very pleased to be here.

So I'm going to continue on and take the baton from Sophia in speaking to the last few elements of the risk framework that the Working Group has identified.  Continuing on that theme, we've got, as Sophia said, some fairly common privacy principles that we see cropping up as they apply to artificial intelligence.  The first one here is purpose limitation in data processing: essentially, the processing or use, whichever term is relevant for the particular jurisdiction, should be limited to a specific purpose or rely on exceptions that may exist, depending on the jurisdiction.  Sophia identified law enforcement, but some jurisdictions also have things like comparable, compatible, or consistent purposes, or research provisions, or they may well have, as we're seeing come up slowly now, AI-specific legislation with training provisions in it.

The next one is the accuracy of data and data quality of personal information.  This is essentially the issue of garbage in, garbage out; it applies to any kind of use, but with artificial intelligence it is particularly acute, so inadequate-quality data will impact the output.  A regular review of datasets and outputs for discriminatory results is also essential, as we know there is some risk of harm to fundamental human rights, which we know are immutable and should not be cast aside just because of the use of a new technology.

The next one is accountability and liability.  This, again, is one of the pillars of most data protection or privacy pieces of legislation: organizations are responsible for the adverse impacts of AI systems, and there also needs to be some sort of liability system, depending on the jurisdiction again, that attaches to individuals themselves.  So it's more than just organizations; there is some individual accountability that the Working Group recognized is necessary.

And then moving to the next one, we've got ‑‑ oops, I'm sorry, Dal, back one last click there.  No problem at all.  This is a quick one, I promise: data security.  As data protection authorities we are always very careful to state that data security is essential, especially with artificial intelligence, and technical measures should be appropriate to the current state of the technology.  In this case data security is very closely tied to the protection of individuals.  Okay.  Now we can go to the next one.

Okay, so the last one here on my end is a doozy, a big one: consideration of ethical aspects.  This is one that's a little more, I would say, aspirational or high level, because we're talking about a tension that we're seeing play out everywhere, or anticipate will continue to play out in a lot of ways: on one hand, the huge potential that AI systems have for doing real good for humanity, but on the other, the idea that there need to be some protections.  AI is still a human creation, and a lot of the ethical challenges that we face in everyday life with technology, or just in interactions, are going to be even more heightened when you overlay such a powerful technology on top of them.  And there are things that an AI system can do that an individual or more analogue systems can't do.

One example that I always like to bring up is law enforcement: an individual police officer, or one police force, can only knock on so many doors in one day, but an AI-enabled system is able to knock on a lot more doors, so to speak, digitally, than an officer would be able to walking through a neighborhood.

And so, this is what we're talking about here: an empirical and careful review of any new system.  AI systems must still abide by and be contained by the legal frameworks set out for humans, in particular fundamental human rights.  The group also pointed to things like group social scoring or group-level correlations that have the potential for some very serious infringements at a social level on some protected grounds.

So, what we have on the right here are a number of ethical principles.  They're not exhaustive, but I invite you to take a look at them; our paper has more detail on each of them.  Essentially what we're talking about is non-maleficence; beneficence; justice and fairness of outcome, including non-discrimination; liability and whistleblowing; and autonomy, self-determination, and freedom of choice.

And so with that, I'm going to pass it on to Roberto, who is joining us from Italy.

   >> ROBERTO LATTANZI:  Yeah.  Many thanks, also for getting up so early today, and good morning to everybody from Rome.  So, like the more traditional data protection systems, artificial intelligence systems are a general-purpose technology, and the actors and stakeholders that can be engaged are great in number and at different levels as well.  If we look at the slide, we see that there are the regulators, first of all the legislators, as well as public authorities that can be in charge of the governance of artificial intelligence systems, and then researchers, obviously, and standards organizations.

And then, most interestingly, direct liability could be introduced for developers and providers of different natures providing artificial intelligence systems, and in turn for the users of artificial intelligence systems.  In terms of data protection, usually the end user could be, or should be, considered the data controller, and to them apply the data protection principles that Sophia spoke about at the beginning of our discussion.

The kind of liability and engagement of all of these actors and stakeholders is not the same and could differ more or less, but all of them have something to do, something to say, when speaking about the risk-management process in artificial intelligence systems.

What is also relevant is that the task is not done once and forever.  The process that relates to the management of an artificial intelligence system is a dynamic one, a continuous one, and goes through the lifecycle of the artificial intelligence system.  The list of actors we have identified here is non-exhaustive, as other stakeholders and actors can be identified due to the ongoing legislative processes in different jurisdictions.  At the end of the GPA document you can find a useful accountability matrix.  Next slide, please.

Okay.  What are the main factors concerning the specific risks of artificial intelligence?  The GPA has identified two main areas that could be relevant in order to identify the specific risks introduced by an artificial intelligence system.  One is to know the proper characteristics of the different artificial intelligence systems that can be used, at different levels: looking at the software, at the functionalities that are used, and at the data processing operations that are done, relating both to the training data and then to the application data.

So the first factor, I would say, is the characteristics of the artificial intelligence system.  The second factor that is very much relevant is the context of application of the artificial intelligence system.  As we have said, the systems are dynamic, and they need to be managed and measured, in one sense, in the concrete and not in the abstract.  Every system needs to be looked at in its own context by the different stakeholders that we have identified.  So that's very relevant, and from this point of view, the sociopolitical values that characterize the application context and the individuals affected by the artificial intelligence system should be considered in order to identify the consequences for individuals, which more or less brings in the ethical aspects related to artificial intelligence systems.  Next slide, please.

And I am sure a lot of you will be familiar with the risks identified here, a non-exhaustive list, I would say.  In practice we have seen violations of fundamental ethical principles related to the self-determination of human beings, or the exploitation of human vulnerabilities, but then we also have violations, and this is the core of the interest of the data protection authorities, of the fundamental principles related to personal data protection.  As you know, the data protection right is not a standalone right; it's a cluster of rights, and it aims to protect other fundamental rights and legitimate interests of individuals.

The other well-known category of risk concerning artificial intelligence is the risk of unfair discrimination against individuals for many reasons, the more traditional ones related to gender or ethnic origin, where we already have case law, but the GPA raised attention to the need to look at other margins for discrimination as well.  Deprivation of freedoms and rights of individuals, and the safety of individuals, are risks that could be raised by artificial intelligence systems.  And we don't have only individual risks, or group risks in one sense; we could also have risks related to society at large, to the social systems, such as deep fakes or disinformation.  Another relevant area beyond individuals is the impact on the environment due to the use of artificial intelligence systems.  The next and, I think, last slide.

So, as is traditional concerning the determination of a risk level, we have two main factors.  The first one is the likelihood of occurrence of the event caused by artificial intelligence, and the second one is the severity, or magnitude, of the consequences concerning individuals, groups, and society at large.

So it is necessary for each of the relevant stakeholders that we identified at the beginning to calculate the risk appropriately, in order to allocate responsibility and identify the suitable mitigating measures, which will be the subject of Kristina's talk.
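The two-factor determination Roberto describes, likelihood combined with severity, can be sketched as a simple scoring matrix.  The 1-to-5 scales, the thresholds, and the band names below are illustrative assumptions, not part of the GPA framework itself:

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Combine likelihood and severity (each rated 1-5) into a risk band."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be rated 1-5")
    score = likelihood * severity  # simple multiplicative risk matrix
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A rare event with severe consequences still warrants attention:
print(risk_level(likelihood=2, severity=5))  # medium
print(risk_level(likelihood=4, severity=4))  # high
```

In practice each stakeholder would rate likelihood and severity for their own role in the lifecycle, which is what makes the allocation of responsibility possible.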

   >> KRISTINA ZENNER:  Thank you.  Hello from Germany.  Very pleased to be here.  Let's have a look at mitigating measures.  We've heard quite a lot of important aspects and frameworks that need to be considered when building or using AI systems, so let's take a concrete look at possible mitigating measures.

Mitigating measures are a core element of the responsible use of AI systems, because they are what we can actively do to find a good way to deal with AI.  They can offer us ways of preventing harm to individuals and society by providing a framework for dealing with ethical, privacy, and data protection risks, as we heard before.

So, everything we heard so far tells us that a risk-based approach is about identifying risks, classifying them, and managing them as appropriate.  And the mitigation measures offer approaches to do just that.  It is not about eliminating risks from the outset, but about minimizing and controlling risks for the benefit of the people.  This allows for innovation and at the same time protects the rights of those affected.  Next slide, please.

So, what are possible actions we can take?  Implement a profound risk-management process.  This might sound simple, but it is an essential basis for adequately analyzing and dealing with risks during the entire lifecycle of an AI system.  All actors must be involved in this.  AI systems should be developed in a way that does not jeopardize the rights of individuals or groups, and assuming there is even the slightest reason to doubt this, there has to be a precise definition of controls on potential high-risk use cases.  Use cases can always be very helpful for estimating the risks.

And then, of course, we must not forget the area of security; we heard something about that too.  It is very important to implement appropriate technical and organizational measures and procedures, proportional to the type of system, the risk level, the nature of the personal data processed, and the categories of individuals affected.

Depending on the structure of the system, of course, this is partly to be decided on a case-by-case basis, but in some cases we can also take a systematic approach.  Ensuring algorithmic transparency is a challenge, no question, but we have to find ways and systems so that AI does not become the so-called black box.  This includes providing adequate information on the purpose and effects of the AI system and ensuring that individuals are always informed appropriately, for example, when they're interacting directly with an AI system such as a chatbot, or when their information is processed by such systems.  Next slide, please.

An enormously important aspect of this is ensuring the accuracy of training data.  Ethan mentioned this before.  There is a high risk potential there, because biases occur over and over again.  We find a lot of examples of how algorithms surprisingly consistently discriminate against different groups through machine biases.  I guess we all know the example, I think it was in 2014, when a large company in the U.S. developed software that used artificial intelligence to rank female and male job applicants, and it became clear that the algorithm discriminated against female applicants because the training data was consistently obtained from existing staff, primarily men.
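The regular review of outputs for discriminatory results that the speakers recommend can start with something as simple as comparing selection rates across groups.  The function names, the toy data, and the use of the 0.8 "four-fifths" rule of thumb below are illustrative assumptions, not a prescribed audit method:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, chosen = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def impact_ratio(rates, group, reference):
    """Ratio of a group's selection rate to a reference group's rate.
    Values well below 1.0 (e.g. under the 0.8 'four-fifths' rule of thumb)
    flag a disparity worth investigating."""
    return rates[group] / rates[reference]

# Toy audit of hiring outcomes, labelled only by group and decision:
decisions = [("f", True), ("f", False), ("f", False), ("f", False),
             ("m", True), ("m", True), ("m", True), ("m", False)]
rates = selection_rates(decisions)
print(rates)                          # {'f': 0.25, 'm': 0.75}
print(impact_ratio(rates, "f", "m"))  # well below 0.8, so worth investigating
```

A disparity in rates is only a signal, not proof of unlawful discrimination; it tells the reviewer where to look more closely at the training data and the decision process.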

Ensuring the accuracy of training datasets and applying the data minimization principle, including by using anonymized or synthetic data, is a very central aspect.  This also shows how important it is that we produce specific guidance and principles for addressing biases and discrimination.  There has to be awareness raising and an understanding of the massive potential effects of such biases for individuals and for society.

One option can also be fostering collective and joint responsibility, involving the whole chain of stakeholders, including through the development of sectoral standards around the sharing of best practices.  That means promoting accountability of all relevant actors as we heard from Roberto, including audits, continuous monitoring, impact assessment, and periodic review of existing oversight mechanisms.

And then, of course, establishing governance processes, such as relying on trusted third parties, certification mechanisms, setting up ethics committees, and so on.

Last but not least, and it will not surprise you that we would like to make this point especially strongly here: supporting data protection authorities and placing them at the center of AI governance.  Effective data protection supervision requires very well-trained staff and properly equipped authorities.  This might be the only way that we can empower people to exercise their rights, including the right to information, the right to access, the right to object to or at least restrict processing of personal data, the right to erasure, and the right not to be subject to a decision based solely on automated processing.  All actors involved in the process, regulators, researchers, academia, standards organizations, designers, producers, service providers, and end users as well, must be involved in this way to implement strong AI risk-mitigating measures.  Thank you.

   >> DALBIR SINGH:  All right.  Thank you so much, everyone.  As promised, this is our contact information for each of us should you want to reach out, and we would love to hear from you no matter what stakeholder group you belong to.  Feel free to reach out to any of us.

So, yes, I'll just maybe allow you a few moments.  We have some time now for the remainder of the session for questions and a discussion.  I'm not sure, because I can't see the room in Addis, but I wonder if we should give preference to people physically present, if anyone has questions first, and then maybe we can look at the chat later.

   >> AUDIENCE MEMBER:  Hello.  My name is Thiago, a fellow from a DPA and President of a DPA.  We've been following initiatives such as the GPA's regarding AI frameworks, and also the OECD's.  My question is: considering that the OECD is also developing some studies on the management of AI systems, has that been taken into consideration in any way, any kind of comparative study to see the compatibility of the framework designed by the GPA and the one of the OECD?  Not saying that it is necessary, but at the same time, if we are trying to have some kind of standard for a risk-management framework, at some point this should be done, right?

   >> DALBIR SINGH:  Yes.  I'm not sure if others want to chime in as well.  The OECD is actually an observer to our Working Group, and of course there are various international organizations as well; the Council of Europe I know has done work, and UNESCO as well with its Recommendation on the Ethics of AI.

And so, you know, there are a lot of frameworks out there.  We're primarily speaking from a data protection point of view, to reinforce the idea, as I think I mentioned at the outset, that the specifics of data protection often get left out of the frameworks, or people just say to comply with privacy and leave it at that, but there is a whole lot more to the issue as well.

So, yes, when developing this we absolutely did look at other frameworks as well, and so there very well could be some overlap because of that.  But I don't know if anyone else wants to discuss or add to that?  No?  Okay.

   >> ROBERTO LATTANZI:  Just to add a couple of words, if you want: what is happening in the area of artificial intelligence is more or less what happened in the past concerning data protection; it is a transnational issue.  So in the different international fora, the idea, I would say to Thiago, is to be as far as possible consistent on the main chapters of the discussion.  So, yes, on one side the GPA looks to other instruments being developed, and on the other side we see in the international arena that each one is looking at developments at the national and subnational level in order to be consistent in the solutions proposed.

   >> AUDIENCE MEMBER:  Hello.  This is Batsa, a private consultant from Ethiopia.  I am really wondering: as far as I understand, AI systems, especially the modern AI systems, rely on individual datasets, particularly individuals' private datasets, and aggregate these to generate their results.  So I wonder how we can enforce the privacy of an individual to be protected in the eyes of AI?  In what form?  Even under the legislation or under the law, how can users really know that their private dataset is used in the systems, so that they can claim their rights?

The other thing that I'm really wondering about: I think I heard from the first presenter that all AI systems should be transparent for the users, but who are the users, after all, and how can they understand these AI systems well enough to deny, allow, or give permission, so they can clearly understand which AI systems they cannot use with private datasets?  I am really wondering about these few things in this regard.  Thank you.

   >> SOPHIA IGNATIDOU:  Dal, I'm wondering if I should start answering that question.

   >> DALBIR SINGH:  Sure, go ahead.

   >> SOPHIA IGNATIDOU:  Yeah.  In relation to transparency towards individuals, there are various stakeholders, as Roberto mentioned, that need to be considered when it comes to the use of AI.  You have what in data protection terms is called the data subject, which is basically the individual that will be impacted by the use of AI.  You have the stakeholders that decide to use the AI in the first place.  For example, AI may be used extensively, and indeed has started being used, in the public sector, so in that context you have public sector organizations that decide to use AI to distribute welfare benefits, and that decision effectively has an impact on citizens.

You also have the developers of AI systems.  In that kind of decision‑making process, when it comes to allocating resources, budgets, and state funding, you do need citizens who are aware of AI being implicated in a decision that impacts them.  In particular, data protection, and Article 22 of the GDPR, has relevance for what we call legal or similarly significant decisions.  So we're not saying that you have to explain to all individuals that AI is being used in every context, because that would create a lot of friction, but in the context of really impactful decisions for their lives, they should be aware that certain decisions are automated and do not entail the deliberation that you would expect from humans.

In terms of the first aspect of your question, how individuals can retain their privacy rights in an age of AI: there is a stream of work the ICO is engaged with, and others are probably working on as well, in a field of research and technological development called privacy enhancing technologies.  Machine learning can actually be used to kind of harness the power of data while also preserving the privacy of individuals, so there are technical approaches to this, and we're trying to figure out ways to extract information and knowledge from datasets without compromising the privacy of individuals.

And just a final comment to round this up: at least in the UK and EU context where the GDPR applies, those frameworks have the aim of protecting individuals' rights and freedoms in relation to the processing of their personal data, not just data protection and privacy.  That means that privacy is one of the rights that we seek to protect, but it's not an absolute right.  Privacy needs to be balanced against other rights and freedoms, and that's why data protection and privacy is really a nuanced legal framework.  So there are no clear solutions yet, but technological progress can help towards preserving privacy.  Yeah, I think I'm going to stop here.  Thanks.
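[Editor's note: as an illustration of the privacy enhancing technologies mentioned above, here is a minimal differential-privacy sketch in Python.  The Laplace mechanism is one common such technique; the dataset, function names, and epsilon value are purely illustrative and are not anything the speakers described.]

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1, so the noise scale is 1/epsilon;
    smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Illustrative query: how many individuals are 40 or older?
ages = [23, 35, 41, 29, 52, 67, 31]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

The noisy answer is useful in aggregate, but no single individual's presence in the dataset can be confidently inferred from it, which is the sense in which such techniques "extract knowledge without compromising privacy."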

   >> ETHAN PLATO:  Dal, if you don't mind.  Sophia, that was a fantastic answer.  I'm not going to add much more than to say that while this is a new kind of technology, a new world that we're working through as regulators, the issue of individuals being able to understand what is going on with their personal information is not just an AI issue.  It's part of the reason why our framework puts data protection agencies at the center of this, because this is where the expertise really is.

And, you know, often, at least from our perspective in our subnational jurisdiction, the only party in the ecosystem, if you want to call it that, that really knows what's going on is the organization itself, or the public body in our case.  Unless there is a complaint and someone has a reason to check, it's hard sometimes for people to know what's going on, so that's where requirements around accountability, transparency, and, you know, limited requirements of notice are really important.  And then, of course, if that's not done, you need actual enforcement ability and consequences, accountability that attaches to actions that are inconsistent with the various national frameworks that exist in the privacy world.

   >> KRISTINA ZENNER:  One last comment on this, because we heard so much before.  I think what we heard from Sophia and Ethan is that there are existing tools that we already have and that we can fall back on, for example from our classic supervisory activities, adapted of course to the challenges of AI.  In the end it's not always necessary to start from scratch.  We have a lot of mechanisms and tools that we can use, and these are very, very strong tools.  That's just something to keep in mind after all the good things we heard before.  Thank you.

   >> DALBIR SINGH:  Thanks for the question.  Any others?

   >> MODERATOR:  I don't see any more questions from the audience on site.

   >> DALBIR SINGH:  I don't see any online either.  I might just add, since part of that question was about how data subjects should know that they're in a dataset: consent is a big principle that is common across a lot of jurisdictions, and so obtaining meaningful consent, and ensuring the individual knows what's happening with their information in the first place, is pretty central to ensuring that they can then exercise their data protection rights.  Otherwise, unless there is a lawful exception, and those exist (I think Ethan outlined the research exception), organizations can use the privacy‑enhancing technologies Sophia outlined, synthetic data, and deidentification to make personal information nonidentifiable and then use that to train AI.  That alleviates much of the concern about what would otherwise be required for personally identifiable information.  Yeah, just to add on that.
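[Editor's note: a naive sketch of the deidentification step mentioned above, dropping direct identifiers and replacing them with a salted hash so records can still be linked.  The field names and salt are illustrative.  As the comment notes, this alone is pseudonymisation, not anonymisation: quasi-identifiers can still permit re-identification.]

```python
import hashlib

def pseudonymise(record: dict,
                 direct_identifiers=("name", "email"),
                 salt: str = "per-dataset-secret") -> dict:
    """Drop direct identifiers and add a salted-hash pseudonym.

    NOTE: this does NOT make the data anonymous on its own; remaining
    quasi-identifiers (age, postcode, ...) can still re-identify people,
    which is why it is only one part of a broader deidentification process.
    """
    out = {k: v for k, v in record.items() if k not in direct_identifiers}
    key_material = salt + "|".join(str(record.get(k, ""))
                                   for k in direct_identifiers)
    out["pseudonym"] = hashlib.sha256(key_material.encode()).hexdigest()[:12]
    return out

row = {"name": "A. Example", "email": "a@example.org", "age": 34}
clean = pseudonymise(row)  # direct identifiers removed, linkable pseudonym kept
```

Because the hash is salted per dataset, the same person gets the same pseudonym within one dataset (preserving linkability for training) without exposing the raw identifier.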

We have a couple of minutes left, so if there are no further questions from the room or online, I wonder if anyone is willing to share whether they work in the AI space or, you know, has questions about how regulators are looking to deal with AI, or even more general questions; we're happy to discuss those as well.

Otherwise, I think we can wrap up and give the next session some time to prepare and take the room.

   >> MODERATOR:  Okay.  Thank you, Dal.  Thank you to the distinguished panel.  Speaking as an ethical engineer and lecturer in Peru, this was essential in connecting the dots on AI challenges in the coming years.  So I can say that now we can close the session.  Thank you very much.