IGF 2024-Day 2-Workshop Room 9-OF 38 Harnessing AI Innovation While Respecting Privacy Rights

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> LUCIA RUSSO:  Can you hear me now?  I can't hear myself.  Thank you so much.  Good afternoon to the audience here in Riyadh and online.  Welcome to this Open Forum by the OECD.  Is it okay, the sound?  Okay.  The focus of this panel today is how to harness AI innovation while respecting privacy rights.  It is a concern that has been heightened by recent developments in technology, and the OECD Recommendation on AI was revised earlier this year to reflect the evolving technological landscape and the increased challenges that advanced AI systems raise for privacy rights.

In our discussion today, we would like to navigate three main aspects: the privacy challenges in AI systems, the evolving policy landscape for AI governance and its interrelation with privacy, and how to develop practical, forward-looking solutions.

I am joined for this discussion today by an exceptional panel of experts who bring diverse perspectives on AI governance, spanning government, policy, the Technical Community, academia, and regulators.

So, I would like to welcome today Juraj Corba from the Slovak Ministry of Innovation.  Juraj is the Chair of CDI (Audio breaking up) and the Global Partnership on AI.

I also have Clara Neppel, Co‑Chair of the OECD expert group on AI, data and privacy, and Thiago Guimarães Moraes, specialist in AI governance and data protection at the Brazilian Data Protection Authority.

We will also have Jimena join us a little later.  She's a member of the UN High‑Level Advisory Body on AI.

So, the way this panel will unfold is that our speakers will first bring their perspectives on this topic.  Then we will also have time for discussion with the audience, both here and online.  We are monitoring the chat, so we will give voice to those who have questions online.

So, I will now start with Juraj.  There should be some slides on the screen.  Okay.

As the Chair of the Working Party on AI Governance, you played a key role in guiding the discussions that have led to (Audio breaking up) on AI.  Could you (Audio breaking up) and also tell us what the primary concerns are that ‑‑ and now this (Audio breaking up).

>> JURAJ CORBA:  One, two, three, do you hear me, please?  Could you change my machine?  I'm afraid it's not working properly.

1‑2‑3.  This is better now, I hope.  Not really?  Is it better?  1, 2, 3.  Anyway, at least you can hear me.  First of all, I would like to thank the organisers for once again providing an opportunity for international organisations, including the OECD, to share the latest results of their work.  We are happy to be here.  This has been an outstanding year for those of us at the OECD who work on the AI agenda, for multiple reasons.

One of the reasons is the fact that we have created a so‑called integrated partnership with the Global Partnership on AI.  The family of countries that cooperate and share knowledge, and hopefully also solutions, is expanding: we now cover 44 different jurisdictions from all around the world.

I was actually trying to calculate what proportion of the world's population we now cover in the Global Partnership on AI.  It is 40% of the world's population.  So, it's really a significant club.

Now, notwithstanding the enlargement, and possible further enlargement in 2025, we managed, as Lucia already mentioned, to update the first ever intergovernmental document on AI, adopted in 2019 by the OECD: the so‑called OECD AI Principles.  These were later incorporated into the G20 AI Principles, into the first international convention on AI at the Council of Europe, with the participation of non‑European countries, and to some extent also into the AI Act of the European Union, with which some of you may be familiar.

So, there are some successes that we can really look back at, and I must say I'm proud of the whole group for what we managed.  The reasons why we had to update the OECD AI Principles in 2024 were primarily clarity and reflecting the latest technological developments.  And of course, we had to take account of many different interests that were raised.

As you know, the OECD, and now also the Global Partnership on AI, works on a consensus basis.  To be able to come to any modifications, any updates, we had to listen to basically hundreds of people, not only people acting on behalf of governments, but also people involved in the expert groups.  You'll learn more about those from Clara.

So, this was a very interesting exercise.  Surprisingly enough, we managed to have this revision adopted in May by the Ministers in Paris.

Now, one of the key milestones that I would like to convey to you, on the basis of the work that we did, is actually the definition of artificial intelligence as such.  When we discuss the impact of artificial intelligence on privacy or personal data, we really need to make sure that we are discussing the same thing.  In other words, what actually is artificial intelligence when we talk about it?  How can we recognise, or can we actually recognise, a clear difference between AI and what we would call classical software systems?

Now, you can judge our work: if you go to the OECD website, you will find an explanatory memorandum on the updated AI definition there.  You will see how we actually arrived at the final text.  I recommend that you read it.

Of course, it is clear from the definition itself that any AI is highly dependent on data and on its quality, and there is therefore a clear bridge to privacy concerns.

The last thing I would like to mention in relation to the AI definition is, of course, the fact that the definition is imperfect by definition.  In other words, it's a work in progress.  It will be reviewed again.  And we also need to understand that drawing a clear line between software, as we know it or as we knew it, and the new elements that we call artificial intelligence is not necessarily as clear‑cut as we would wish.  We would rather see it as a scale, because the systems that we call AI also depend on and interact with classical software.

So, it's very delicate.  With privacy, of course, we need to realize that, as I mentioned, AI is hungry for data.  It needs data to actually be built and to work properly.  The thing is, any restrictions on the use of data can be detrimental to the building of AI models.

At the same time, to complete the triangle, it's not only about the building of models and systems; it's also about the way the security community accesses information about us and evaluates possible threats and risks.  Any limitations there interact with this field, which is not always discussed, but we need to be aware of it.

So, it's a delicate balance we need to strike between the protection of privacy on the one hand, and security needs and the needs of building AI models and systems on the other.

There are three principles among our OECD AI Principles which are foundational, now also for the whole Global Partnership on AI community.  These principles express the need for privacy.  Of course, we recognise that even inside this broad family of countries and jurisdictions, approaches to privacy vary.  They are also contingent on certain cultural notions and political approaches, so many issues are in play there.

With that, I would like to commend the work of the expert groups.  We have multiple groups of experts feeding into the work of our bodies at the Global Partnership on AI and the OECD.  This is a treasure, a big asset that we can build on.  You're welcome to find out about the way we work.  Of course, the more we can engage with you in a meaningful way, the more knowledge and the more understanding we can build.

Last but not least, I would like to commend the work of the UN advisory body on AI, of which Jimena is a distinguished member from Mexico.  Look at the UN advisory body report that was published in September, and at the UN Global Digital Compact that was adopted in New York City.  When it comes to the first pillar of the Global Digital Compact, which is to create knowledge and understanding of AI systems and their impacts on the economy, society, et cetera, it is actually the OECD and the Global Partnership on AI that are relied on to feed that first pillar, to provide the necessary knowledge and to share it with the global community.  So, besides the opportunity to engage with all of you at the OECD and the Global Partnership on AI, we can also engage together at the global level.

With that, Lucia, I thank you very much again for having me here today.

>> LUCIA RUSSO:  Thank you, Juraj, for providing this overview of the most recent work of the OECD and what we have been engaged in during this very busy year.  Now, I would like to welcome and turn to Jimena.  You're an international lawyer, scholar, and adviser on AI, peace, and security.  You lead a consultancy firm, IQuilibriumAI, which specializes in AI, peace, and security.  And, as we heard, you served as a member of the UN High‑Level Advisory Body on AI.

Could you unpack the societal risks that you have identified at the intersection of AI and privacy, and also comment on how the proposed UN recommendations aim to create a more robust global framework for responsible AI deployment?  Thank you.

>> JIMENA VIVEROS:  Hello.  I don't know if anyone can hear me?  Yes.  Okay.  Great.  So, it is great to be here.  Sorry for the delay, and thank you for the introduction.  I would also like to start by commending the work of the OECD and the new integrated partnership with the Global Partnership on AI, which I think is going to be very fruitful and very good for advancing global governance and recommendations in this space.  I'm happy to be an expert in the working groups, and I look forward to contributing to that.

As Juraj was saying, AI is data.  We cannot have AI without data, and data comes with privacy issues.  That's just a problem.  When we look at it from the perspective of peace and security at large, it brings a lot of problems.  Even in the civilian domain, we live in a society where everything we consume consumes our data back, whether we willingly accept it or there's no other choice.  All of that data gathered by all of these platforms is then fed into systems which could be civilian, which could be military, which could belong to some security or intelligence organisation, and we don't know what the purpose will be at the end.

So, we see this problem also in terms of decision support systems, for, say, autonomous weapons, and the other types of security implications that come along with the systems that work in this space.

So, we have a lot of complications regarding that.  What we also see now is the big hype around Generative AI and all the breaches that come in that space, which we are all very familiar with, and which is only exacerbated by the different approaches being taken across jurisdictions.  What we're witnessing is a patchwork of initiatives.  That's why we should really strive for global governance.  The work we did at the Secretary‑General's advisory body leading up to the Summit of the Future, and what became part of the Global Digital Compact and the Pact for the Future, included this.  We mentioned the security problems that come with all of this data: breaches, hacking, misuse of information, malicious or unintended uses, both in the civilian and the military domain, which affect the broader international stability frameworks.

In the report, we highlighted that even beyond the implications of data and privacy security problems at the individual or community level, there is also a very large‑scale impact on society.  We say in the report that it could even impact democratic institutions as a whole, in terms of misinformation and the erroneous use of data, which can also affect geopolitics and the economy in different parts of the world and different regions, as we have seen already.

Another problem that we have with data and with privacy in terms of security is the fact that we are now shifting the power dynamics of the world in terms of technological dependency.  It's not about who has the best systems; it's about who has the best data, or who has more data.  And that is something that has been accumulating for years, even before AI was booming like it is now.

So, we have a problem.  We also have a problem in that the lack of data is a risk in itself, because misrepresentation, bias, all of these things are a clear problem in terms of data.  And this also affects the privacy of children, which is a big risk that we have identified, and everything regarding future generations.

So, now the question is what we can do about this.  First of all, we should really recognise data as a digital public good.  This is something that is also stated in the Global Digital Compact and has been high on the Secretary‑General's agenda, all these common digital goods.  Data is one of them.

And what we could do is create a global AI framework to protect all kinds of human rights that can be affected by the use of data, obviously implicating privacy issues.  The GDC also offers some solutions: for example, awareness raising, capacity building, and controlled cross‑border data flows to foster responsible, equitable, and interoperable frameworks that maximize the benefits of data while minimizing the risks to data security and privacy.

Because, as I said, the lack of data is also a risk in itself.  That's why the work that the OECD and GPAI are doing in this respect is so important.  It's precisely that: awareness raising, capacity building, and bringing experts together to come up with solutions.  The risks and the problems we have identified many times; the thing is how to do it and how to come up with actionable recommendations.  This is vital.

The OECD Recommendation that was revised this year, with all of the human‑centered AI issues, is vital.  I recommend that whoever hasn't read it read it, because it's really important material that you can find there.

And obviously, cooperation and synergies across organisations, across jurisdictions, across communities, across everything, are vital, because everything is complementary and everything helps.

So, with that, I will close.  Thank you.

>> LUCIA RUSSO:  Okay.  Thank you so much, Jimena.  It is really great work that you have been doing, and you have outlined the key risks as well as some policy solutions already.

So now I will move to Clara.  As we heard from Juraj, the OECD has established an expert group looking particularly at the interrelations between AI, data, and privacy, and you are co‑chairing that expert group.  What we would like to hear from you is: what are the motivations that led to the establishment of this group?  What methodological approach are you using to assess privacy risks comprehensively across the AI life cycle?  And lastly, could you please share with the audience the key findings that have emerged from the first report, which was published with the support of the expert group?

>> CLARA NEPPEL:  Thank you.  Thank you, Jimena.  Can you hear me?  I cannot hear myself.  Okay.  Thank you for inviting me here as well.  I'm very pleased to share our experience with this cross‑section you just mentioned, the collaboration between different communities.  As both of my co‑panelists mentioned before, we had privacy issues with AI even before Generative AI.  But this has been exacerbated by the vast collection of data across geographies, and also by the possibility to reidentify individuals and to infer characteristics which were not disclosed in the first place.  Very often you are surprised by the things that the system knows about you, which can be accurate or not accurate.  If it's accurate, then you're kind of in alien space; if it's not accurate, you're in a (?) space.  We now know at least that Generative AI is not always to be relied on.  I think that's maybe the positive effect of the vast adoption of AI.

At the OECD, which has been so active in AI governance, as mentioned by Juraj, there are already a lot of expert groups.  I'm part of the AI and climate expert group as well as the AI futures expert group, and I'm now co‑chairing this expert group on AI, data governance and privacy.  You asked me about the motivation for creating it.

In the AI community you will find a lot of technologists, and of course also Civil Society and so on, who are looking into different aspects of AI and are starting to realize them and establish governance frameworks for these different aspects.

In the data privacy community, we already have an established framework.  We have jurisdictions.  We know how to enforce.  We also have institutions and, of course, methodologies.  What we saw in the AI space is a lot of innovation, as you just mentioned, also addressing privacy, but without knowing there is already a lot of work going on in the other community, and the other way around.

So, this was, I think, the main motivation to bring these two communities together and establish this working group.  Indeed, the first deliverable of this working group is the report that was published in June.

One of the deliverables, one of the outcomes, was really to map the AI principles to the privacy principles.  Can you hear me?  Yes.  Okay.

As you can see here, it's a lot.  I will just go into some which I think are specifically relevant.  Principle one is really about inclusive growth, sustainable development and well‑being.  Here is something really close to my heart, namely weighing the economic and social benefits of AI against privacy rights.  This translates into having the right balance between the metrics of success: not concentrating only on profit and performance, but also on planet and people.  I think that has a lot to do with what we just heard before, privacy being one of the important aspects here.

The second is really about respecting the rule of law, human rights and democratic values.  Here, it's also interesting to learn from each other's terminology.  We both have established definitions of what transparency means, but they are not, for instance, exactly the same; just as for fairness.  In the AI space, transparency relates more to how the system is set up and what it delivers, so understanding what the outcome is.  In the privacy space, it's more about data collection and the intended use.  Again, we needed to map the different definitions so that we speak the same language.

Here I also see the human rights impact assessment.  We just had a session yesterday about (?), which was set up by the Council of Europe, the human rights assessment network.  That needs to be harmonized with data protection requirements.  I already talked about transparency.  Security is something that Jimena also alluded to; here it's also about coordinating data security technologies, with privacy‑enhancing technologies being, for instance, one of the most important ones.

Last but not least, it's also about accountability.  Here, I think, is what we bring; I'm a technologist myself.  What we bring to the data privacy community is the understanding of the technical aspects, specifically of the AI life cycle and where in the AI life cycle privacy can play an important role, beyond data collection.  In the other phases, too, privacy is important.

Next one.  This is basically the AI life cycle, which is now the basis for further developing privacy‑related recommendations, but also others.  As you can see, it starts from planning and design.  What is new now, as this was also revised, is that we have a new phase of retirement and decommissioning.  So, it goes through collection and processing of data, building of models, testing, making the system available for use and deployment, and operation and monitoring.  Basically, what we want to do now, as a next step of our working group, is to go through every phase and see which policy recommendations we have for each of these phases.

Especially when it comes to collection and processing of data, we have to see what collection limitation means for AI.  What does it mean if we are looking at large language models scraping data from the web?  What are the privacy implications of that?  They are, of course, many.  What is the role of synthetic data?  A lot of large language models are fed with synthetic data, which is itself generated by models.  This is an important evolution that we also need to take into account.  Of course, data quality, which was mentioned before, is important for accuracy but also for discrimination and bias.

Going further, as you can see here, it is also going to be important to see what it means to have a right to be forgotten in AI systems, and what kind of oversight, accountability, and transparency measures we can put in place.

For the moment we have data cards, but we should work toward more than that for transparency.  This is a work in progress.  As I said, we want to go into each of these phases and share the results, and we also welcome inputs.  Thank you.

>> LUCIA RUSSO:  Thank you, Clara, for this overview.  This is really instructive, including for those of us who are not privacy experts as you are.  It's good to see how privacy affects each stage of the life cycle of an AI system.

So, I will now turn to Thiago, who is a specialist in AI governance and also brings the perspective of the Brazilian Data Protection Authority.  What I would like to ask you is: what are the most critical privacy challenges that you are observing in the context of advanced AI systems?  And, on the practical side, how are Data Protection Authorities developing practical approaches and solutions to protect privacy rights alongside innovation?

>> THIAGO GUIMARÃES MORAES:  Okay.  First of all, thanks a lot, Lucia, for the invitation, not only to be here but also to be part of this community, the group of experts on AI, data and privacy, which I've been following basically since its inauguration.  It's been amazing to be part of this community and to see the work that has been done, some highlights of which you accurately gave today.

I could start from here.  Many of the topics just highlighted by Clara are part of the day‑to‑day critical thinking that regulators such as Data Protection Authorities have been struggling with.  What I would like to share here, starting from the challenges perspective, is that as a privacy community, we are starting to understand what AI governance and AI regulation mean.  When you start from the privacy and data protection starting point, you have to see how all these other values are now coming in.  That's why I put the circle there, where we have privacy, fairness, human agency; these are some of the main values we've seen in several frameworks.  And when you look at it at a more technical level, you see that the Technical Community is always thinking about tradeoffs, which does make sense from a technical perspective, because what you're trying to do as a technician is to create parameters and see how much you can achieve of any of these values.

At the same time, as anyone who works in policymaking, especially from a legal approach, knows, human rights cannot be traded off.  And that's one of the main challenges: we're talking about tradeoffs of values at a technical level, but that cannot mean undermining human rights.  This, for me, is the biggest challenge, not only for regulators in the privacy field but in any other, and certainly for Data Protection Authorities, which have been looking day to day at how these measures are working and balancing the human rights that we should be concerned about.  And just to give an idea, the figure in the middle shows something which is, I would say, a bit of common sense: when we are talking about one of the main features of de‑identification, which is the idea of anonymisation, we have this privacy‑utility tradeoff.  This is illustrative; I drew it as this arc because, as in work we shared before, what we might be looking for here is finding the optimal point where you can still assure some level of privacy while guaranteeing utility for the system, of course.
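To make that privacy‑utility arc concrete, here is a minimal sketch, not shown at the panel, of the classic Laplace mechanism for differential privacy; the toy salary data, the clipping bound, and the epsilon values are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 10,000 salaries, assumed clipped to [0, 100_000], so the
# sensitivity of the mean query (max change from one person) is bounded.
salaries = rng.uniform(20_000, 90_000, size=10_000)
true_mean = salaries.mean()
sensitivity = 100_000 / len(salaries)

def dp_mean(data, epsilon):
    """Differentially private mean via the Laplace mechanism:
    add Laplace noise with scale = sensitivity / epsilon."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise

# Smaller epsilon = stronger privacy = more noise = less utility.
for eps in [0.01, 0.1, 1.0, 10.0]:
    errors = [abs(dp_mean(salaries, eps) - true_mean) for _ in range(200)]
    print(f"epsilon={eps:5.2f}  mean abs error ~ {np.mean(errors):8.2f}")
```

Tightening epsilon (more privacy) inflates the error of the released statistic (less utility); choosing an operating point on that arc is exactly the balancing exercise Thiago describes.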

But when we go to real use cases, things are not so simple, especially when we're considering other values.  Just consider privacy and fairness, for example: fairness itself is challenging to define at a technical level.  There are several parameters that try to guarantee some aspects of what fairness could mean technically, like some ideas of group fairness, and parameters that try to translate what we should expect, such as statistical parity, as sketched below.
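As one concrete instance of such a parameter, this minimal sketch computes the statistical parity difference on made‑up classifier outputs; the group labels, approval rates, and the 0.1 threshold in the comment are illustrative assumptions, not a regulatory standard:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy approval decisions for two demographic groups from some classifier.
group = rng.choice(["A", "B"], size=1000)
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)

# Statistical parity asks: is P(approved | A) close to P(approved | B)?
rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"P(approved|A) = {rate_a:.2f}, P(approved|B) = {rate_b:.2f}")
print(f"statistical parity difference = {rate_a - rate_b:+.2f}")
# A common (context-dependent) heuristic flags |difference| > 0.1.
```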

But when you add privacy issues to that, like how to bring in privacy techniques, it gets even more challenging.  I'm just sharing here, in the last part, some of the work that we found; this is not our own work, but the work of the Technical Community and the privacy community that has been working on that: how they were trying to find an adequate balance, for example how you can embed differential privacy, which the privacy community knows well, and use it while still ensuring a good level for these fairness parameters.

What was particularly interesting in this research is that they found that when you're looking at federated learning models, which are models trained at the local level whose parameters are then aggregated into the main AI model, you can apply differential privacy to the local parameters to ensure, first, better privacy; if you apply it only at the global level, you leave the local levels unprotected.  Another thing is that you have to fine‑tune the level of noise you're adding with differential privacy, because if you go too far, you don't only lose accuracy, but it brings several issues like ‑‑ (no audio), okay.
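A minimal sketch of the local‑versus‑global point, assuming Gaussian noise and toy gradient updates; clip_norm and sigma are illustrative placeholders, not calibrated to any specific privacy budget:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, clip_norm=1.0, sigma=0.8):
    """One client's toy model update. The client clips the update to bound
    its sensitivity, then adds Gaussian noise BEFORE sending it out, so the
    raw update never leaves the device (local differential privacy)."""
    update = rng.normal(0.0, 0.1, size=weights.shape)  # stand-in for a gradient step
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update *= clip_norm / norm
    return update + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def federated_round(global_weights, n_clients=100):
    """The server only ever sees noised updates; averaging over many
    clients shrinks the injected noise while preserving the signal."""
    updates = [local_update(global_weights) for _ in range(n_clients)]
    return global_weights + np.mean(updates, axis=0)

w = np.zeros(10)
for _ in range(5):
    w = federated_round(w)
print("global weights after 5 rounds:", np.round(w, 3))
```

The design choice is visible in where the noise is added: inside local_update, before anything leaves the client.  Noising only the server‑side aggregate would leave every raw client update exposed to the server, which is the gap Thiago describes; and raising sigma too far drowns the signal, which is his fine‑tuning point.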

So maybe now, can you hear me?  Hello.  Okay.  Half the room ‑‑ yes?  Okay.  Good.

Well, then, let's turn to the last slide so I can ‑‑ good.  This is also, very generally speaking, what DPAs such as (Audio breaking up) offer ‑‑ I don't know.

Yeah.  Okay.  Now?  Okay.  Every Internet Governance Forum we have tech trouble; we can see how challenging it can be in practice.  So, talking about ‑‑ I see, thanks for the tip.  Among the DPAs, ANPD has been doing this, but several others as well.  They've been working first on guidance, so that best practices can be shared on some specific topics.  Just recently, ANPD published work on how Generative AI is bringing challenges for privacy, like what Clara just said: sometimes synthetic content is created from which personal data can be inferred, and it can be inferred in accurate or inaccurate ways.  In both cases there are consequences.  So, we try to tackle a bit of what is part of this discussion.

We know some other peers have been doing the same.  In France, for example, they've been doing very interesting work on that.  And in Singapore, the authority there is running a sandbox on privacy‑enhancing technologies for Generative AI.  So we see this work being done both at the theoretical level, with guidance, and more hands‑on, with sandboxes.  At the Brazilian Data Protection Authority, we're doing a sandbox next year on transparency, so we can discuss this concept and what it means in the context of a data protection framework like ours, in this case the Brazilian LGPD.  Besides that, all the DPAs, I could say, have been asking themselves what their role will be now that AI regulations are coming up.  We have to think: should we be the main central authority?  And even if that's not the case, because sometimes it can be a political‑level discussion, what will the role be, and how can we ensure that our role is still guaranteed and protected, now that we have a more complex environment where we have to work together with other regulators that are also dealing with data‑ and governance‑related issues?

I'll stop here but thanks again for the invitation.

>> LUCIA RUSSO:  Thank you, Thiago.  So, I have some follow‑up questions, but I would like this to be a conversation with you.  So ‑‑ I'll give you this.

>> AUDIENCE:  Thank you very much.  Can you hear me okay?  Thank you very much for these very interesting interventions.  This issue is a really great example ‑‑ (audio difficulty)

That's okay?  Thank you.  This is a really nice case study of what's happening across technologies, this issue of convergence.  I'm with UNICEF.  I've led work on AI and an advisory board on how AI impacts children, and I'm looking at how neurotechnology affects children.  AI, neurotechnology: across these issues, privacy is the issue.  But if you look at the technologies, whose responsibility is it to set the governance rules?  I was interested to hear about the working group.

My question, Clara: what is the end goal of this interesting, useful exercise?  It sounds like there are some governance recommendations in the AI space and in the privacy space, and you're looking at mapping them.  But what's the output?  Is it a new merged set, or an update on both sides, or do we update the principles from time to time, which is necessary?  UNICEF also has recommendations on AI and children.  We've been reflecting: since 2021, the world has changed; it's time to refresh those.  The principles stay, but how you apply them ‑‑ yes, where do we go from here?  Thank you.

>> CLARA NEPPEL:  Thank you.  Thank you for bringing up the issue of children.  Actually, I also wanted to bring it up, because I think it's a big issue, not only for privacy, but also for the mental health of our future generations.

So, I think we have different ways we can tackle this.  Just to give you an example, specifically on age‑appropriate design, because that is something which I think we need to take into account in AI system design: we are working at IEEE, for instance, with the (?) foundation to set up, I hope, a universal standard for how to collect children's data.  Okay.  If you can hear me.

So, I think that is one practical example of what we can do for the moment on a voluntary basis.  But in certain jurisdictions, I think in the UK, it is obligatory.  What we want to do in the working group is, first of all, understand what the issues are.  Do we already have solutions that we can leverage from each other?  And then identify the gaps.  Some of the answers will certainly be policy recommendations, but we also very clearly want to target the developers, for instance when it comes to scraping data, so that they understand what the policy implications ‑‑ sorry, what the legal implications are.  Because a lot of them don't have that awareness.

So, it's both sides.

>> LUCIA RUSSO:  Is there any other question?  Yes.

>> AUDIENCE:  Hello.  Thank you so much for the presentation.  I'm from the service center.  In my field of study there's this technology called blockchain, used specifically for data.  By storing data across multiple different servers, any slight change to the data can be tracked and detected.  This very much protects transparency, but the technique in and of itself is at the center of the privacy debate.  So, like you said, it's a tradeoff.  I want to know what you think of this technology and how we can actually find that balance?

>> THIAGO GUIMARÃES MORAES:  Does this work?  Thanks for the question, because it's actually very important.  I can try to give an idea.  As a DPA, I share the experience of the Brazilian Data Protection Authority, but from what we've heard from other peers, a similar approach happens elsewhere.  Usually, most Data Protection Authorities, DPAs as we call them, have (?) units that work on monitoring technological progress.  I am part of one of those units in Brazil.

We know other institutions, in the UK and in France for example, have something similar.  What's interesting about these innovation and technology monitoring units is that they have to look not only at AI but at several other technologies, like blockchain.  So blockchain, for example, is a topic we also follow.  Part of our team is working on specific privacy‑related issues with blockchain technologies.

For example, one very big challenge when we talk about privacy and blockchain is that usually, when you register information on the blockchain, it stays on the blockchain.  And we do have a right to erasure: how can we honour that if personal data is embedded in the blockchain?

This is part of the discussion we're having.  It's very challenging when we decide to provide a solution, because we have to be very sure of what we're proposing at a policy level.  As far as I know, this is a topic that the privacy regulators, the privacy community, have been discussing, but I am not aware of a very strong argument for how it should work.  And I do believe that what we need, to come to this answer, is to be better engaged with the Technical Community that's working on that.  We've seen this work happening at the AI governance level; the work of the OECD is a big example.  I think we should have more of the same in the blockchain discussion, because eventually we will be seeing these two emerging technologies coming together more and more as time passes.  Thanks for bringing blockchain to the discussion.
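No panellist endorses a specific fix here, but one pattern often discussed in the privacy community for squaring immutability with erasure is to keep personal data off‑chain and anchor only a salted hash on‑chain; deleting the off‑chain record and its salt leaves the ledger intact but unlinkable.  A minimal sketch, with plain dictionaries standing in for a real off‑chain store and ledger:

```python
import hashlib
import secrets

off_chain = {}   # personal data lives here and CAN be deleted
ledger = []      # append-only stand-in for the chain: holds hashes, never raw data

def register(record_id: str, personal_data: bytes) -> None:
    """Anchor the data on-chain without putting the data itself on-chain."""
    salt = secrets.token_bytes(16)  # fresh random salt per record
    commitment = hashlib.sha256(salt + personal_data).hexdigest()
    off_chain[record_id] = (salt, personal_data)
    ledger.append({"id": record_id, "commitment": commitment})

def verify(record_id: str) -> bool:
    """Integrity check, possible only while the data still exists off-chain."""
    entry = next(e for e in ledger if e["id"] == record_id)
    salt, data = off_chain[record_id]
    return hashlib.sha256(salt + data).hexdigest() == entry["commitment"]

def erase(record_id: str) -> None:
    """Honour an erasure request: delete the off-chain data and salt.
    The on-chain hash remains, but without them it can no longer be
    reversed or linked back to the person."""
    off_chain.pop(record_id, None)

register("user-42", b"name=Alice;dob=2001-05-17")
print(verify("user-42"))   # True: data matches the on-chain commitment
erase("user-42")           # ledger untouched, but the record is now unlinkable
```

Whether a residual salted hash still counts as personal data is exactly the kind of open legal question Thiago points to; the pattern is a technical mitigation, not a settled answer.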

>> JURAJ CORBA:  If I may briefly intervene ‑‑ 1, 2, 3.  Do you hear me?  Okay.

On the topic of converging technologies, such as blockchain and others, it is very important to realize that when we talk about privacy and AI, we cannot really discuss only this.  We really need to have a picture of the whole digital stack.  In other words, we can hardly talk about the governance of privacy in AI without fully understanding the implications of digital platforms for privacy, and the way platforms are being driven by AI or are enabling AI via the collection of data about their users.

The same applies to the Internet of Things, because the data taken from the sensors of connected things will feed into AI systems.  The same applies to digital finance, and possibly now also to some new efforts in the field of biology, which can be even more delicate when it comes to privacy and our biological predispositions and design.

So, blockchain is a very good example too, of course.  But the point that was raised really leads us to the necessity of having a full picture of how these different digital spheres interact and how they are integrated into the most sophisticated services and products available on the market.  The most successful ones on the market manage to integrate all these environments together, and then, of course, the implications for privacy are even more imminent.

>> JIMENA VIVEROS:  Just to add to the conversation: if we're talking about human rights, and I think we should start from there, the right to privacy stems from the right to identity, which is also very linked to the right to be forgotten.  What we're seeing now is that we're trying to create or foster the protection of our personal digital identity, or signature, or print.  And that is a new type of concept that we hadn't thought about before.

So, when this information, our personal information, especially when it comes to biometrics, genomics, the neurotechnologies that are emerging, and all types of information, is locked into something such as a blockchain or any other technology or environment, it's complicated.  Especially because sometimes the capturing of this information isn't necessarily consensual or well informed.  This is a problem that's been happening for a long time.  But the question now is what (Audio breaking up) is being given to this information, whether it's locked, or just being captured now, or whatever is happening.

Coming back, again, to the implications for peace and security, we can think of predictive policing in law enforcement.  We can think of border control.  We can again think of biometrics, which is pretty dangerous.  Even in governmental services: access to healthcare, access to loans, access to financial services, housing ‑‑ all of these things are being predetermined by the data that is stored and by how it represents or misrepresents a person.

So, I think it's very important to remember that at the basis of privacy is identity.  That is one of the most precious things that we have, and that's why we should strive to protect it.

>> LUCIA RUSSO:  Thank you so much to all the speakers.  We have two questions here and one online.  Okay.  I think I'll take the one online first, but we only have two minutes to go, so please give a quick reaction, speakers: how do we deal with privacy by design within the changing state of AI?  One quick reaction, so that we can then hear another question from the floor.

>> THIAGO GUIMARÃES MORAES:  This discussion is welcome, because when we start discussing by‑design processes like privacy by design, we're asking: how do we go hands‑on from now on?  We're building amazing policy frameworks, but how do these frameworks translate into concrete considerations?  What I can say has proven to be a nice experience on the part of the DPAs is using sandboxes.  In all the privacy sandboxes that have been organised, by the CNIL, by the Norwegian DPA, the ICO, Brazil now, Singapore, what we are trying to test with a particular given technology, AI for example, but it can also be blockchain technologies or data‑sharing practices, ends up producing good practices: a way of doing practical experimentation on privacy by design.

So, I would stop here because I know we don't have more time.

>> CLARA NEPPEL:  I would like to add one sentence here.  I think some of the issues are so important that they should be enforced; coming back to the children, I think the collection of children's data should really be regulated, because it has enormous implications for them and for our society.  For others, the context will be important.  Again, as was mentioned before, privacy is dependent on context.  Some things need to be enforced by regulation; sometimes we need to take privacy into account for a specific use, without tradeoffs or with the optimal tradeoff.  Thank you.

>> LUCIA RUSSO:  Thank you.  I think we are at time.  Five minutes?  Thank you.  Okay.  Please.

>> AUDIENCE:  Thank you very much.  I'm from Slovakia.  I have a question for Jimena.  How do you see that lawyers can protect human rights when, for today's new emerging technologies, we often don't have laws, but only principles?  Thank you.

>> JIMENA VIVEROS:  Yes.  So, that's a problem indeed.  These principles, these guidelines, are very useful as stepping stones, but they're not binding, so we come to the problem of enforceability.  What we need is the adoption of the standards, protocols, guidelines, principles, however you want to call or frame them, at the national level, and a push for them to be consolidated into global governance.  Because all of this is transboundary, and if we don't have a framework that's global, internationally, we have this patchwork of initiatives; even if they are regional, it's not enough for everyone to be protected in the same way, because our information is everywhere.  So, we just need to convert principles into action, action that is enforceable, that can be monitored and verified, and that has proper oversight mechanisms.  That's why I was mentioning before that a centralized authority that controls all of this and conducts all of the oversight at an international level would be a good approach.

But in the meantime, all we can do ‑‑ and it's very valuable work ‑‑ is these principles, which are ethical values, but always stemming from human rights, which already exist.  And the problem we're now facing is the revamping of even those basic human rights that have been there for the past 70 years.  With the excuse of AI, everyone is opening up the box again and rethinking whether they're applicable.  They're always applicable.  We just need to find a way to integrate them into the reality that we're living in.

So, the solution is to get governments to regulate it in a human way and make it a global regime.

>> LUCIA RUSSO:  Okay.  The very last question.

>> AUDIENCE:  Thank you so much for your presentation.  I'm from the Association for Family Stability in Riyadh, Saudi Arabia.  The rapid advancement of AI has led to the collection and processing of personal data without sufficient safeguards to protect privacy; it relies on vast amounts of data, heightening the risk of privacy violations and the misuse of data in ways that can harm individuals.  There is a growing concern that current legislation lags behind technological progress, creating gaps that allow the exploitation of personal information without explicit consent or comprehensive understanding by individuals.

We call for strengthened legal frameworks: update and enhance legislation to ensure effective privacy protection in the age of AI; ensure transparency and accountability by requiring companies and organisations to fully disclose how data is collected and used, while implementing robust accountability mechanisms for violations; and engage Civil Society by including Civil Society organisations and users in the development of AI‑ and privacy‑related policies and recommendations.  We recommend the following: develop impact assessment tools, creating and utilizing tools to assess the impact of AI technologies on privacy before their implementation; raise awareness and provide training, organising training programmes for developers and policymakers to emphasize the importance of privacy and strategies to protect it during the design and deployment of AI systems; and, finally, encourage exceptional initiatives such as His Royal Highness the Crown Prince's global Child Protection in Cyberspace (CPC) initiative, which aims to strengthen collective action, unify international efforts, and raise global awareness among decision makers about the growing threats to children in cyberspace.  Thank you.

>> LUCIA RUSSO:  Thank you so much.  I think we couldn't have a better way to end this debate.  I think we could go on and on discussing with you.  It's a topic that deserves a lot of policy attention, as we are seeing; it is really at the core of the discussions that we are undertaking in the international governance and privacy sphere.  So, with that, I would really like to thank the distinguished speakers here, Juraj, Jimena, Clara, and Thiago, for their excellent contributions, as well as the audience for participating so actively in this discussion with us.  Thank you.