The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> LUCA BELLI: Good morning to everyone and welcome to this IGF 2022 session of the IGF coalition Platform Responsibility, which is this year dedicated to Platform Responsibilities in Times of Conflict. My name is Luca Belli. I am a Professor at FGV Law School, where I coordinate the Center for Technology and Society, and I have the great pleasure of coordinating this coalition together with my colleague, Yasmin Curzi, also from FGV Law School, who will also be kind enough to co‑moderate this session with me.
So, before we start and introduce our distinguished panelists (we have an impressive set of speakers discussing with us today, as every year), let me just share a couple of introductory thoughts about our work, what we aim to do with this session, and what we have been doing over the past year. So, as you know, the coalitions of the IGF have this peculiarity of working in an intersessional way, right? So, we have been working on platform responsibilities for ten years ‑‑ well, almost ten years; nine years, to be precise. And we coined the term "platform responsibilities" eight years ago. Since then, we have agreed to analyze the impact of platforms' private ordering regimes on human rights, on democracy, and on the evolution of economies. And we have debated and constantly emphasized the great role that platforms play and the responsibility they have in respecting human rights. However, some obstacles become quite evident when we try to get into the details of how platforms have to respect human rights.
Now, today's session aims precisely at tackling two key issues. On the one hand, we will analyze some ongoing regulatory initiatives that will help us understand what regulatory tensions exist and the ways in which these tensions can be reconciled. And that is the second main point of today. We are going to present the outcome document we have elaborated, dedicated to meaningful and interoperable transparency for digital platforms, where we try to define the elements that make transparency a key ‑‑ a core ‑‑ element of platform governance: something that could be standardized, that could be legally interoperable, or at least semantically interoperable, across different jurisdictions, different players, and the different jargon of different stakeholders.
So, a very good example of why this is a very relevant and important issue is content moderation and content regulation in social media. On the one hand, we have daily examples of why content regulation is absolutely needed, starting from the weaponization of harassment and disinformation to manipulate democratic processes, or to harass specific individuals, political opponents, and minorities with violent online content, and of how this online violence then translates into offline violence.
And the problem is that while we know that governments have a duty to protect human rights and platforms have a responsibility to respect them, and that they jointly have to provide effective remedies, it is very difficult to put this into practice, also because there are strong incentives, to be honest, for many governments and many platforms alike not to fulfill their duties and responsibilities. Many governments themselves contribute to spreading disinformation, and we saw this especially during the pandemic. And many platforms have a very strong economic interest in not regulating that much. Both have a very strong fear of being considered censors when they regulate content, and that is neither good for governments nor good for business, because it reduces engagement and therefore undermines the core business of platforms, at least large platforms: stimulating the sharing of information and of data.
So, one of the key elements that has been stressed, and maybe the only constant element that we find in almost all initiatives to regulate platforms, is transparency, the need for transparency. And transparency is essential for accountability. The problem is that if everyone defines transparency in different ways and does not explain what meaningful transparency means, then a commitment to transparency is meaningless, because everyone can decide what the platform will be transparent about and how. So, the key reason for having elaborated, over the past months, this document towards meaningful and interoperable transparency for digital platforms is precisely to put forward a suggestion, a proposal, on how to standardize transparency: to make sure that there is not only meaningful information about how content is moderated under the terms of service, but also meaningful information about how automated algorithmic moderation functions, which is the most relevant form on platforms, and about how users can have agency over it, possibly opting out or contesting how it is organized.
And also, we have been stressing over the past years ‑‑ and we will hear also from Nic, who has been one of the most vocal scholars stressing this ‑‑ that quantitative data alone about takedowns is not enough. We need qualitative data. It is useless to know that a platform takes down 1,000 pieces of content if we do not know how many pieces of content have been signaled, which content has been taken down, which content has been flagged, and what remedies are available. And no regulator has this kind of information. So, it is also essential to make this kind of information auditable and accessible by independent regulators, but also by independent researchers.
And all of these elements, we have tried to merge them, put them together into the outcome report for this year, and they are all based on the research that many of us have been conducting over the past year and many other scholars ‑‑ very distinguished scholars ‑‑ have also been advocating for over the past year. So, we really hope that this will be a useful starting point for discussion and useful suggestions.
Now, without further ado, I would like to just pass the mic to my colleague, Yasmin, for a little bit of introduction, and then we can start with the lively part of the discussion with our speakers. Yasmin, the floor is yours.
>> YASMIN CURZI: Many thanks, Luca. Hi, everyone. So, I'd like to welcome you all to this year's DCPR session, entitled "Platform Responsibilities in Times of Conflict." In this session, we aim to find paths to explore how platform regulations are affecting Internet fragmentation worldwide. Our understanding is that such regulations may be causing negative externalities, both for users and for law enforcement. Some examples, as Luca mentioned, are data concentration, conflicts of jurisdiction, and others. Our aim in this session is to explore how platform governance may improve the current scenario.
We've invited our stellar speakers to discuss possible guidelines to help policymakers in creating a more (audio breaking up) use of the environment that may be able to foster user control and also interoperability. So, without further ado, I would call to the floor Mr. Oli Bird, Head of International Policy at OFCOM, to initiate our discussion. Thanks, Oli. Good to have you here.
>> LUCA BELLI: Sorry. I was just reminding the technical assistance to unmute the speakers when we call their names. Thank you very much.
>> OLI BIRD: Thanks, Luca. And thank you, Yasmin. And thank you to all those who fed into this year's output, the 2022 outcome document, which I think is a really useful one that sheds light on a number of important aspects.
I was going to just make some brief remarks from the perspective of a regulator. So, just to introduce myself, my name is Oli Bird. I'm from OFCOM, which is the UK communications regulator. I've been following this Dynamic Coalition and involved in some of the multi‑stakeholder conversations around platform governance, such as through the Internet and Jurisdiction Policy Network.
I wanted to speak briefly about some aspects of platform responsibility, fragmentation, and interoperability from the perspective of an independent regulator that is already engaged in platform regulation through different regimes, including a limited online safety regime, while we prepare for further, more comprehensive duties.
Just to introduce OFCOM a little bit, for those of you who are not familiar with us. We are the UK independent converged communications regulator. We regulate some aspects of online, for example, net neutrality. The European Audiovisual Media Services Directive has created a regime. This applies to OFCOM in the UK because it was implemented just before Brexit, and that sees the regulation of user‑generated content on video‑sharing platforms. Many of you will be familiar with the UK discussions around the broader online safety bill, which is still being debated in Parliament.
I think I'd like to emphasize at the outset the growing role of independent regulation in the platform governance space. Independent regulators can play a distinct role from governments, bringing detailed technical expertise to secure outcomes on behalf of users, or as we say in the UK, citizens and consumers, reflecting a broader understanding of "user" than just a consumer of services in an economic or financial sense, bringing in broader dimensions like Human Rights and democracy.
Independent regulators are often asked to balance considerations such as freedom of expression and protection from harm, and good regulators are evidence‑driven and use public consultations to enable broader input from a wide range of expert stakeholders to make sure they've got things right. So, I think it's something we're trying generally to promote, the understanding of this role of independent regulation.
Whenever I register for the IGF, I have to choose between government and technical community, and I'm never sure which one, because I think OFCOM and other regulators lie somewhere in between those two things.
So, I mentioned that the UK government is currently proposing a broader online safety regime, but I don't propose to go into that because it's still before the Parliament, and OFCOM will operate within whatever legal framework Parliament then creates, but it's not for us really to make the law.
But at the global level, we do see a gradual proliferation of national legal regimes for platform regulation of different types. And I think there is some global convergence of regulation for certain purposes, notably CSAM and perhaps terrorist content and hate speech. But I think the reality is that national regimes will inevitably diverge. They're always going to be the product of national discussions and debates, with particular concerns and local circumstances. So, I think, therefore, there is, as you identify in your document, the risk of fragmentation at the level of platforms and services, which may be brought about by these divergent national regulatory approaches.
But I think there is an evolution at the same time, and that is away from regimes that focus on notice and takedown of certain types of content with enforcement around timelines, time frames, and towards perhaps a more sophisticated regulatory approach which is more focused on the systems and the processes of platforms. So, in the UK, we've called this a Duty of Care. And I think the DSA in the EU takes a similar approach. And so, it's about having regulatory objectives that are in line with the objectives that platforms should have for themselves and about regulation, making sure that platforms are designing their products, their systems, and their processes to achieve these shared goals. So, the idea is for platforms to be taking responsibility themselves with regulatory oversight of that. And this new type of regulation will require a new regulatory toolkit, which includes things such as risk assessments, provisions for transparency, audit, and reporting by platforms.
And I think it is here that there is considerable scope for collaboration amongst regulators, because even if the national regimes differ, the underlying regulatory toolkit can be developed jointly with other regulators and be common across different regimes. And this is why OFCOM has jointly founded the Global Online Safety Regulators Network with our counterparts in Australia, Fiji, and Ireland. We held a session earlier today that maybe some of you were able to join, and we're happy to talk more about that, if you want. We really are keen, through that network, to engage with a broad range of global stakeholders. So, I think this kind of collaboration at the global level can help avoid some of the fragmentation at the national regulatory level.
A final point I'd make is that there is a need for dialogue as well, I think, between regulators engaged in different, adjacent regimes in the digital space: for example, between online safety regulators, data protection authorities, and competition authorities. And in the UK, we've pioneered this through the DRCF, the Digital Regulation Cooperation Forum, founded by those three types of authorities in the UK. We've seen, for example, a recent joint statement between OFCOM and the UK's ICO, the data protection regulator.
So, I think in sum, these are exciting times for collaborating on platform governance. Regulators are new to some of these conversations, but we're really keen to engage and work for the best to minimize risks of fragmentation. That's me. Thank you.
>> LUCA BELLI: Excellent. Thank you very much, Oli. And now we go straight to the second speaker, our good friend, Nicolas Suzor, who is a member of the Oversight Board and also a Professor at Queensland University of Technology, and he has been with us in this coalition for many, many years. I think he is one of the few who was probably at the first meeting of this coalition, so he has the very hipster title of being one of the co‑founders of this coalition. And Nicolas, you have very good firsthand experience of how this type of regulatory proposal may play out and also of how self‑regulation plays out, thanks to your role at the Oversight Board. So, really, we look forward to hearing from you. Please, the floor is yours, Nicolas. And I'd like to ask technical assistance to unmute Mr. Nicolas Suzor.
>> NICOLAS SUZOR: I was just wondering if you might indulge me for a moment ‑‑ assuming Emma is okay with this. I was going to talk a little bit about methods for analyzing transparency data, and I think it might actually be best if Emma doesn't mind going next, because I think she's going to talk a little bit about how we actually get data from platforms, and then I'll jump into where we might go in terms of methodologies for analyzing it.
>> LUCA BELLI: So, you want to switch, having Emma first? Yes, okay. Okay. Wonderful. So, we can go with Emma. Of course, we are very happy to have her. Emma is also a very good friend and one of the key participants in this coalition, and not only in this coalition: I also have the pleasure of being with Emma on the steering committee of the Action Coalition on Meaningful Transparency, another entity that has provided a lot of very interesting inputs and comments on this year's work. And of course, Emma is the co‑author of a fantastic paper on meaningful transparency, which has also been a very interesting source of inspiration for our work. So, please, Emma, the floor is yours.
>> EMMA LLANSO: Great! Thank you so much, Luca. And hello to everyone. I'm so sorry I can't be there in person with you, but it's wonderful to see so many faces also on the Zoom. So, as Luca mentioned, my organization, the Center for Democracy and Technology, has done a lot of work over the years looking at transparency and how to make it more meaningful and useful in the overall project of trying to hold tech companies accountable for how their decisions, policies, practices, and products affect our information environment and our human rights.
And so, in the Making Transparency Meaningful framework that we published this time last year, we talked about four different areas that policymakers should really understand and consider around transparency. We often put a lot of things under the umbrella of transparency, but there are at least four different ways you could think about trying to get more information from companies and analyze it and use it to understand the impact they're having on users' rights. The first is transparency reporting: the kind of regular, standardized, periodic reporting that tech companies make about both government demands for access to user data or content restriction and, more recently, their own content moderation enforcement.
There is also the really important kind of transparency that is notice to users: the information provided to users, whether in the form of written policies and practices, or especially the information given directly to users when action is being taken against their content, or when they have flagged something and are waiting to know whether the problem they identified has been resolved at all. That user notice is a really key kind of transparency that directly affects the day‑to‑day interaction between the user and the online service, and it can often get a little bit overlooked in conversations about transparency.
As I think others have mentioned, and as discussed in the framework that the Dynamic Coalition has put together, auditing is also really important: this idea of being able to go into the information that is published and not just take it at face value, but have procedures in place for understanding, where does this information come from? How accurate is it, and how reflective of the company's practice and of the experience on the platform is this information, really? That is going to be a crucial underpinning of all the regulation that is coming, because we need to know whether the information being provided really aligns with the actual practice and experience within these companies.
But then the piece I really wanted to talk about today, which Nic previewed, is this whole question of researcher access to data. Enabling independent researchers to have access to data is, I really think, a crucial form of transparency. We need independent research by scholars, by journalists, by civil society organizations, in order to inform our public policy‑making and to make sure that we actually understand what's happening on online services and what effects different interventions have on addressing abuse or shaping our information environment. And right now, as I'm sure Nic will tell you in even more detail, it can be really hard to conduct that research. Companies obviously have the data; they're the ones running the services that generate it. Making that data available is not always obvious or easy or financially beneficial for companies, and it can actually create a lot of different kinds of risks for the companies that hold it. But it is really, really crucial for all of us to have this better‑informed discussion about what public policy we actually want to see happen around platform regulation.
So, one of the projects CDT worked on was holding a workshop with researchers from around the world, asking them some pretty fundamental questions: What kind of data do you actually want? What format for accessing it could you actually use? There's a lot to explore around the difference between publishing datasets versus having ongoing access through an API, where researchers can set parameters for different kinds of studies or conduct ongoing studies or realtime analysis of information. And then a crucial question, something that a lot of regulators have already been starting to grapple with: who should get this data? Who counts as a researcher? What kind of vetting of researchers needs to happen, if any? Should they have to be affiliated with some kind of accredited institution, like a university or a research institution? Is it possible to make data available completely broadly, no strings attached, to the general public?
One of the big and probably pretty obvious trade‑offs in the conversation about researcher access to data concerns privacy. There are a lot of concerns around privacy, in the Cambridge Analytica style: the idea that a researcher could gain access to a lot of information and use it for their own ends, or potentially sell it to other individuals, or use it in ways that are manipulative, not expected by users, and that invade users' privacy in different ways. So the threat from that private actor space is a pretty significant and known concern.
But we at CDT also think there is a real concern to be considered around how researcher access might enable greater law enforcement access and government surveillance of individuals. We have this concern in part because we have already seen, with U.S. law enforcement and law enforcement in some other countries around the world, that tools third parties have created to enable different sorts of analysis of what is going on on social media have, in fact, been used by law enforcement to track people's social media activity and to try to identify which organizations are involved in planning different sorts of activism or protest. It can be an incredibly invasive kind of surveillance, and it is something that any framework for thinking about researcher access really needs to take into account. So, as we're thinking about moving forward in all of this, I did want to flag a couple of different initiatives that are working on assessing these trade‑offs and coming up with ideas for how to bridge these gaps.
One is the initiative that Luca mentioned, the Action Coalition on Meaningful Transparency. This came out of the Danish Summit on Tech and Democracy that was held last November, I believe, which launched a whole series of action coalitions trying to bring governments, civil society, researchers, and companies together from around the world to think critically about issues around technology and democracy and how the two can be mutually reinforcing, rather than at odds and in tension with each other. So, the Action Coalition on Meaningful Transparency is a gathering place and hub for the work, research, and efforts on transparency that are happening around the world: a central place where people can go, find that information, and think through some of these questions about how different regulatory environments are pursuing things like requiring or enabling researchers to have better access to data.
Then there's also an initiative that just had a public launch and released a kind of concept paper a few months ago, called the Institute for Research on the Information Environment. And very briefly, that is looking at how we could model something after CERN, the nuclear research laboratory and gigantic multinational initiative housed in Switzerland that is home to the Large Hadron Collider and other amazing tools for enabling research into theoretical physics. That was a very particular style of collaboration, identified as necessary to enable a real uptick in research in physics: there were both resource needs and information‑sharing needs that were really crucial to help the whole field doing research in this area level up. So, anyway: a CERN for the information environment.
And then the idea that there really does probably need to be some kind of coordinated and international effort thinking about these questions of research around online platforms and the information environment, because the resource constraints of everybody trying to roll their own version and do their own kind of approach to this could be so high as to be sort of prohibitive.
So, the last thing that I'll flag is that there are some really kind of exciting opportunities around researcher access to data and other forms of transparency that are already getting under way. These, I think, really come from the Digital Services Act in the European Union. So, the DSA, which people have probably heard about or probably are somewhat familiar with, has a lot of different kinds of transparency in that regulation, including requiring different online services to produce transparency reports, to improve the notice that they give to users, to engage in auditing, and to require platforms to provide researchers with certain data in order to do research.
So, I would just flag for everyone that as we're thinking through how to kind of maneuver in this space and how to avoid fragmentation in this space, there will be a very active conversation in the EU around how to actually implement all of these provisions, and we'll be getting a lot of data out of companies as a result of the DSA. All of that, I think, can be incredibly useful to informing ongoing policy conversations, really kind of testing out what works and what doesn't, what approaches to auditing and approaches to researcher access will actually work well and what lessons we might learn about what doesn't work as well. And so, I would kind of point to everyone to think about what is going on with the DSA implementation and really encourage policymakers in Europe in particular to be thinking ‑‑ obviously, it's a big task to get going just for the EU ‑‑ but to be thinking about how the DSA potentially can serve as a model and a set of lessons to learn for the rest of the world so that we're not all having siloed conversations about how to implement these different kinds of transparency in regulation. Thank you.
>> YASMIN CURZI: Thank you so much, Emma. I would just like to make a few remarks about access to information and access to data from the Global South perspective. There are several power imbalances that create obstacles for Global South researchers seeking to access data from platforms and to get involved with regulation proposals. So, I think the work of transnational coalitions is super necessary, and I'm really excited to hear more about the Action Coalition on Meaningful Transparency.
Without further ado, I would like to call to the floor Professor Nic Suzor. Thanks, Nic.
>> NICOLAS SUZOR: Thank you.
>> YASMIN CURZI: And sorry to interrupt, but the streaming on the YouTube is not working. People are complaining here. So, if the technical assistance could fix that, it would be good.
>> NICOLAS SUZOR: As far as I can tell, they've just been staring at a very bad still photo of me for the last ten minutes, which can't be pleasant for anyone. So, we apologize for those technical problems.
I really want to follow up on the really important work that Emma was talking about here. I think it's important to note that we have made really large inroads in transparency over the last decade or so, and Emma, in particular, has been leading a lot of that fight to convince platforms to make available regular transparency reports and a lot more data than we've historically had.
Where we are now, I think, is quite interesting. We're approaching the point where we need a lot more information than is available from the heavily curated transparency reports platforms are willing to give us. There are big challenges, both on the side of platforms and on the side of civil society and academia, to moving beyond where we are now, to get to the point Emma's talking about, where we can have secure, privacy‑respecting access to data at a granular enough level that it allows us to hold the actions of platforms to account and to understand how other actors ‑‑ malicious actors, in many cases ‑‑ are using platforms for nefarious ends. In many cases, platforms are quite reluctant to give us the information that is required to hold their actions to account. That's the first stumbling block.
The second is that platforms do have a lot of data, but they're still making improvements in the way they capture information to be able to answer some of the questions we have. It's only relatively recently that a platform can even tell you, if your content is removed or you are suspended, which rule you have broken on the platform. For a very long time, that sort of information was not captured. A moderator would make a decision according to one rule or another, which might be different from the reason someone originally flagged a piece of content, or from the reason a particular machine learning classifier thought there might be a problem. So most of the major platforms weren't recording the information that is required to report to us which rules they are enforcing and what levels of accuracy they have against particular rules. Now, that is starting to improve, but there's still a bit of engineering work on the platforms' side to make sure they can collect and report data at a granular enough level.
What concerns me next is what we're going to do with it. The questions Emma is asking, and the point Yasmin makes about making sure we can get equitable access to data in a form that is useful to help civil society hold platforms to account, pose, I think, a huge resource and methodological challenge. As most major platforms move more towards machine learning for the initial detection of potential content breaches and the eventual enforcement, there's a real problem with where we are now with the data that we've got.
At the moment, the main metric for success amongst platforms is accuracy for their machine learning classifiers, right? We have seen over the last few years with the global pandemic a lot of major platforms have turned more towards machine learning classifiers to directly act on content. And the way that we are told how well these systems are working is with a flat number on accuracy, which, when you start looking into it, actually means consistency. It is a measure of how well the content classifiers agree with the human moderators who used to do that job or who are doing content‑level appeals of that job. That's not accuracy. That is a measure that necessarily reflects existing biases in the system.
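[The distinction drawn here, between agreement with prior human decisions and genuine accuracy, can be sketched in a few lines of Python. All labels below are invented purely for illustration: the point is that a classifier trained on moderator decisions can agree with those moderators perfectly while still repeating their mistakes against ground truth.]

```python
# Hypothetical illustration: reported "accuracy" as agreement with human
# moderators measures consistency, not correctness.

def agreement(a, b):
    """Fraction of items on which two sets of labels coincide."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# 1 = "violates policy", 0 = "allowed" -- all values invented
ground_truth = [0, 0, 1, 1, 0, 1, 0, 0]   # what the content actually is
moderators   = [0, 1, 1, 1, 0, 1, 0, 1]   # human decisions, with shared errors
classifier   = [0, 1, 1, 1, 0, 1, 0, 1]   # model trained on those decisions

print(agreement(classifier, moderators))    # reported "accuracy": 1.0
print(agreement(classifier, ground_truth))  # true accuracy: 0.75
```

[The classifier matches the moderators on every item, so the headline metric reads 100%, even though a quarter of its decisions are wrong against ground truth.]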
Any machine learning system trained in the unequal world that we live in will learn the biases of that world. What that means is that we can expect ‑‑ and we do, in fact, see ‑‑ that the outputs of the classifiers platforms are using tend to perform much worse for marginalized populations and vulnerable groups: groups for which, by definition, there is less training data, there are fewer appeals, and fewer resources are spent to identify mistakes and correct for them. What that means in real terms is more silencing of counterspeech, of people who are speaking back to power, and, on the other hand, more permitting, or failing to catch, false flagging and abuse directed disproportionately towards minority and marginalized groups.
We also know that people from marginalized groups are likely to have more trouble dealing with, adapting to, fighting, and trying to correct incorrect decisions when they are made. Now, here we get to a challenge. We know all of this, and we know we should expect this from the current iterations of the classification systems that platforms have put into place. We don't have a good way of measuring, quantifying, let alone correcting for those sorts of challenges.
For academics like me, there's a really big set of problems about how we would actually go about analyzing the outputs of classifiers and content moderation systems if we did have access to the data that we've been asking for. For a long time, it's been so hard to get the data. Now that we are starting to get a little bit of data, there's still a gap amongst researchers about how we can understand that information in a way that's useful, meaningful, in the words we've been using. And that's not aggregated statistics. We know that.
The problem is that the state of the art in the machine learning community at the moment is really about equalizing error rates within subgroups, which means you divide up your populations into demographics and you try to measure the rate of false positives and false negatives amongst those different groups. Now, this gets really tough. One, it puts you immediately in an ethically quite difficult position if you're a researcher trying to guess or infer someone's characteristics from their content or their name or their pictures, and we've seen lots of examples of researchers getting that horribly wrong. And I think a consensus is starting to emerge that that sort of research is fraught at best and probably shouldn't happen in the vast majority of cases where people are using it.
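The subgroup error-rate comparison described here can be given as a minimal sketch. The data, group labels, and resulting rates below are invented for illustration; as the speaker notes, real audits face the much harder problem of inferring group membership at all.

```python
# A minimal sketch of comparing false positive / false negative rates across
# demographic subgroups. All records below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label), labels 0/1.
    Returns {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, truth, pred in records:
        c = counts[group]
        if truth == 0:                 # actually allowed content
            c["neg"] += 1
            c["fp"] += (pred == 1)     # wrongly flagged
        else:                          # actually violating content
            c["pos"] += 1
            c["fn"] += (pred == 0)     # wrongly let through
    return {g: (c["fp"] / c["neg"] if c["neg"] else 0.0,
                c["fn"] / c["pos"] if c["pos"] else 0.0)
            for g, c in counts.items()}

data = [
    ("majority", 0, 0), ("majority", 0, 0), ("majority", 1, 1), ("majority", 1, 1),
    ("minority", 0, 1), ("minority", 0, 0), ("minority", 1, 0), ("minority", 1, 1),
]
print(error_rates_by_group(data))
# {'majority': (0.0, 0.0), 'minority': (0.5, 0.5)}
```

Even this toy version shows why the approach is fraught in practice: the computation is trivial, but it only works once every item already carries a group label, which is exactly the ethically difficult inference step the speaker warns against.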
So, then we need to think about how we go about working with communities to empower them to understand how platforms are operating. This is the big challenge. I think for academics, in particular, this is where we need to really spend the time understanding the qualitative issues that are different for different communities, that different groups and intersectional groups are experiencing content moderation problems in quite different ways.
For me, I think some of the promising ideas are around creating test sets, for example, of known counterspeech or known common false positive or false negative detection examples, and starting to curate those in collaboration with the people on the ground to then be able to assess accuracy rates for the particular sets of content that we're talking about. And I think that's the key: we're not able to do a lot of the evaluation we need to do at a generic level; race‑blind, gender‑blind approaches really don't help. You need to be able to work at a much more fine‑grained level in order to be able to understand properly the information that you're looking at, and we don't have the methods. We have the methods to be able to do very fine‑grained qualitative work. What we don't have, so far, is the ability to move between the very fine‑grained qualitative work and the large‑scale statistical work that platforms need in order to be able to make changes to their systems.
I've got plenty more to talk about in terms of methodological challenges, but I'm going to stop with one final point, which is, I haven't even mentioned the human challenge that Oli spoke about at the start. Now we're not even talking about leave‑up and takedown decisions, which we can see and measure as binary decisions, but we are looking at decisions to amplify or hide or make less visible ‑‑ there are a lot of different terms for this ‑‑ decisions about upranking and downranking of content.
Now, I've got to be honest, I don't even know how to measure that. It's not like there is a neutral existing baseline for how content is distributed on the Internet that you can measure up and down from there. Everything is always mediated. Platforms by definition amplify content by making it available to more people.
Lots of people are concerned about upranking and downranking for many different reasons, and Oli, if you've got time, I want to hear about what happened this week and where we are now with the debate in the UK about what platforms are expected to do, but we don't have consensus. We don't have social consensus about what we expect from platforms, but we know that we expect them to be doing something, and we don't know how to measure what that something is. That is a massive challenge for academics and civil society groups, once we do get the data that we're talking about, that we're all going to have to work together and particularly work with people on the ground who have firsthand experience in order to understand.
>> LUCA BELLI: Thank you very much, Nic. Very eloquent, as usual, and I think there are a couple of points you are mentioning that are quite well reflected in our outcome document. First, the importance of observability: it is not only about having information; it's about having information that can be meaningfully assessed and analyzed to reach some meaningful conclusion. And again, as you were mentioning, the possibility to analyze how algorithmic moderation functions ‑‑ when content is prioritized or downgraded, the so‑called shadow ban ‑‑ is really challenging for academics. Even if academics or regulators had the possibility to access this kind of data, which is not the case so far, it would be tremendously difficult for them to make sense of it. So, while it is essential to have access, in a safe environment, to meaningful data to understand the functioning of moderation, it is also essential to invest in research on how to make sense of those data, because so far, we have neither that access nor the capability to make sense of it. And we are speaking about calls for transparency when, honestly, pretty much all regulators are walking in the darkness: even if they had those data, they would not be able to understand properly how to regulate, how to make sure their duty to have oversight on platforms, on regulated entities, is fulfilled. So, that is a major problem that we really need to solve.
Now, with this big question, I would like to open the first very quick round of questions. We have ten minutes for a couple of questions for the first set of panelists before we go to the second set. And we already have a question from Guy Berger, who is also very kindly promoting this session, so thanks very much, Guy, for this. His question is directed to Oli Bird, about the news that recently emerged about OFCOM now being obliged, or at least planning, to consult the victims commissioner and the domestic abuse commissioner when drawing up the codes that will regulate platforms. So, the question is how this consultation will be organized, and what the relation will be with the elections management authority. This is a very long and detailed question. Please, Oli, you have the honor and pleasure of replying to it. Yes, I think you can speak now. You've been unmuted.
>> OLI BIRD: Thanks, Luca. Just waiting for that. Thank you, Guy, for the question, and Nic, as well, for your point, which I think probably have the same answer, which I'm afraid is a slightly boring one: these are details of the latest government proposals here that have been made to the UK Parliament, and OFCOM, similarly, is just starting to understand and digest them ourselves. So it's probably too soon to give you any helpful or detailed comment about how those things might work, because we would want the parliamentary process to conclude here, and then we'll be able to take a view on what the regime would look like in reality.
But I think just while I've got the microphone, maybe I could just pick up on Emma's point, and Nic's, as well, about research access to data, which I think we think is really important. It seems likely to be something that can play a pretty critical role in a future world where there is meaningful transparency and accountability with a sort of regulated backstop for that. And we will be thinking more about this at OFCOM because the Online Safety Bill in the UK has a proposal that OFCOM should make a report on research access to data within two years of the new bill becoming law. So, yeah, this is a very live issue for us, and we think it's a really important one. And we will continue to talk to you guys and others about that. Thanks, Luca.
>> LUCA BELLI: Okay. Excellent. We also have a request to share the document. Everyone can access the document that we were mentioning, about meaningful transparency, on the IGF website. And as we know perfectly well that the IGF website can be a little bit arcane and not really easy to navigate, we have also created a short URL, which is bit.ly/IGF22plot. Everyone is very welcome to access the document and feel free to share it or to provide feedback. We would be very happy to hear your feedback. Now, we have ‑‑
>> YASMIN CURZI: I put it in the chat also.
>> LUCA BELLI: Excellent. Wonderful.
>> YASMIN CURZI: I put it in the chat also, the link.
>> LUCA BELLI: Wonderful. Thank you very much, Yasmin. And do we have ‑‑ I think there might be some questions from the floor? I know that some of you have put questions in the chat. But as we are in a hybrid format ‑‑ and unfortunately, now we cannot see the room at the IGF venue because it is still frozen, the video. So, if there is anyone there willing to share, to ask a question, please take the mic and introduce yourself. Do we have any questions? I hear some sounds, but unfortunately, we cannot see anything. So, if there is anyone, please introduce yourself and ask the question.
>> YASMIN CURZI: I don't think there is anyone there.
>> LUCA BELLI: It looks like there is not anyone asking questions from the floor, so I think we can go on with the second segment of the session with our first speaker of the second segment, which is Vittorio Bertola, Head of Policy and Innovation at Open‑Xchange, who is going to talk about some of the latest European developments, especially as far as interoperability and the recent EU policies that have been adopted or are in the pipeline. So, please, Vittorio, the floor is yours. And I would like to ask to unmute Vittorio Bertola, please.
>> VITTORIO BERTOLA: Thank you. And you can also open my video. So, yeah, thank you for inviting me. And yes, I wanted to address a little the developments in Europe, but first of all, I also wanted to broaden the discussion a bit, going beyond content and expanding into Internet fragmentation, which is one of the main themes of this IGF. Because there's been a lot of talk on how states fragment the Internet with Internet shutdowns and censorship, especially in authoritarian regimes, and that is an important thing, but there is less talk of how platform regulation is really also an element of the struggle against Internet fragmentation, and sometimes also in favor of Internet fragmentation, because there are dimensions of fragmentation that are actually positive in this regard.
So, of course, there are multiple ways through which the big Internet platforms ‑‑ I mean, the private sector in general, but mostly the big Internet platforms ‑‑ tend to create barriers, fragmenting the Internet. And a lot of this happens at the content level, in two dimensions. Sometimes the platforms make agreements with governments to block things ‑‑ to make specific content, specific websites, unavailable in specific countries ‑‑ just to be allowed to continue doing business there. At the same time, there is also the opposite: sometimes the platforms tend to circumvent national regulations on content, and this is a problem especially for what we have been discussing up to now. If we want to introduce some degree of transparency or some constraints on the algorithms, on the mechanisms that the platforms use to control and apply the removal of content, then there must be a way for jurisdiction to apply.
What that means is that, increasingly, the platforms tend to build their own internal infrastructure; they even build encrypted channels from the devices, from the applications, straight to the cloud service, sometimes in a different jurisdiction, and this way they bypass any kind of national content filter or regulation that might be imposed. In the end, this leads to applying the content values of the home country of the platforms, or sometimes even the values of the single owner alone, as we are seeing now with Twitter, basically to the entire world. And so, this creates a fragmentation which is no longer along the lines of national states or national sovereignty, but along the lines of who is the owner of the individual platform.
But then, we have what has maybe been more concerning at the European level, especially not for the Digital Services Act but for the Digital Markets Act, which is what I've been following more directly: fragmentation at the user experience level. In the end, the very idea of an Internet platform is becoming a mechanism for Internet fragmentation, because these platforms tend to be closed ecosystems in the form of walled gardens, so the business model is based on bringing people into a separate subset of the Internet.
And sometimes, we do not even realize anymore how this thing is basically creating barriers for people. But for example, if you have always used an Android phone and you talk to an iPhone user, sometimes they talk about things that you don't know what they are, because the interface really shapes the way you use the Internet and the language you use to describe maybe even the same functions, but they appear to be completely different, so this creates really a separation, a fragmentation of user experiences, in a way even of the conception of the Internet. And this is especially visible often in terms of messaging apps. So, this is what I also wanted to mention.
In Europe, we are busy discussing, at least, the implementation of the Digital Markets Act, which finally became law a few weeks ago. And this is an important discussion, because there's been a lot of talk on which rules we should impose on the platforms, but there's not a lot of talk on how we can actually make sure that they are respected and implemented. And this is not an easy thing, especially for regulators. Sometimes there's just a lack of people who can speak the dual languages, who understand the technical issues and the legal issues at the same time, because this is necessary if you want to write the rules, or even read the rules and be able to turn them into practical, implementable, technical measures.
And again, the current situation of messaging apps is basically another nice example of fragmentation: apps that all do the same thing, with very similar interfaces where only the coloring of the backgrounds is different, because of commercial considerations that tend to keep people closed into their walled gardens, their specific app. But this approach could become more and more of a problem if these platforms started to become more controlling. Especially some of these big platforms tend to have a really controlling attitude, in the sense of: we will decide which applications, which third‑party software, are allowed to run on our devices, and we will charge you even for that, and we will charge you for payments. So, there is really a need to bring this under control if we want to prevent further fragmentation.
And then, there is the third dimension, which is often forgotten, especially by policy people, which is fragmentation of the network itself at the technical level. Because it's not just a matter of these, let's say, encrypted overlay networks that the platforms have created by adopting encryption everywhere. It is really a matter of networks. We like to think of the global Internet, but in a way there is no global Internet anymore: in terms of intercontinental traffic, the private networks of the big platforms now carry up to 75% of traffic, depending on the study.
So, Google, Meta, all these companies own their own cables and fibers and satellites and whatever else to connect their data centers across the globe, and most of the traffic is just routed within these private networks and not on the public Internet anymore. So, in a way, there's less and less of a public Internet. And the risk is that, in the long term, the Internet is no more because of this kind of approach, because we end up not with one Internet, but with, like, Internet flavors, or internetworking services like we had in the '90s. We had CompuServe and competing global private networks, and we could end up having the Microsoft Internet, the Google Internet, and the Meta Internet, and you have to choose which one you want to subscribe to. And they might have the same content to a certain extent, but they may also decide to show you certain websites or not show you others, as happens today with streaming services: there might be some content that is only available exclusively on a specific Internet provider. So, it is really a troubling direction that could realistically happen if this trend is not stopped.
So, in a way, I'd say that platform regulation is really a part of keeping the Internet global and open and unique, so it's really a part of the Internet fragmentation debate. But the final point I want to make is that, as other speakers were also mentioning, platform regulation is indeed national or at most regional in nature, and so it introduces Internet fragmentation. This means that fragmentation is not necessarily a negative thing. In this case, I think it's actually desirable to have national or regional rules for content moderation by platforms, and it's the least‑bad solution: in the past, when we didn't have this kind of regulatory fragmentation, the only result was this kind of global oligopoly. So I think we understood that we need some competition and antitrust rules on the Internet, which are affecting platforms. And even if this brings a certain fragmentation, then it's fine.
So, I'm sorry, but I'm a bit worried about all this rhetoric of avoiding Internet fragmentation, because sometimes it's pushed for business reasons by some of these players. Of course, there is this narrative that the Internet must be borderless and open, which, at the technical level, is absolutely valid and agreeable; but if brought up at the content level, then this really means: we are the platforms, and we don't want to have rules anywhere in the world, and we don't want to have to deal with national jurisdictions. So, the final message is that, in the end, some kind of fragmentation is good, and especially when it comes to content regulation, it's partly unavoidable due to the differences in values that we have across the world. The point is: how do we make sure that these rules are decided in democratic ways, and that they are accountable and transparent and implemented well? But there's no way we can avoid fragmentation, and we should not be avoiding it. Thank you.
>> YASMIN CURZI: Thank you so much, Vittorio. Vittorio gave us a great overview of how Internet fragmentation deriving from private regulation, from private interests, also relates to technical infrastructures. Here in Brazil, we are seeing Starlink projects entering the country without transparency in the contracts. Meta's project in Ethiopia is also really concerning. So, we see that power imbalances between countries and the lack of development at the infrastructure level can also have effects on several levels. But I don't want to extend myself here, because we have little time.
So, our next speaker is Professor Rolf Weber, from the University of Zurich. I can unmute him, but I can't turn his webcam on. I don't know if he is still here.
>> ROLF WEBER: Well, thank you, Yasmin.
>> YASMIN CURZI: Thank you.
>> ROLF WEBER: I guess you can hear me.
>> YASMIN CURZI: Yes, but I can't see you.
>> ROLF WEBER: Like Vittorio, I have to report that the host has stopped my video, so you only see my photo and you don't see my face. Maybe my photo is nicer than my face anyhow.
Being a relatively late presenter confronts me with the problem that many things which I wanted to mention have already been said. And secondly, my presentation, coming from an academic angle, looks more at the structural elements and maybe a little bit less at the practical elements, so I have to come back to a couple of aspects which, indeed, have been discussed before.
Let me start with a remark on the title of the document, "Meaningful and Interoperable Transparency for Digital Platforms." "Meaningful" is relatively clear. "Interoperable" is, of course, a term that can be used in very different meanings. We have technical interoperability, we have legal interoperability, and we see some progress, at least in the European Union, as far as technical interoperability is concerned. Emma has mentioned one such act; Vittorio mentioned other policies. I would only like to add that we also have interoperability and transparency rules in the adopted Digital Markets Act as well as in the new proposal for the Data Act. But in particular, we do have interoperability rules in the regulation on promoting fairness and transparency for business users of online intermediary services, the so‑called platform‑to‑business regulation. And I think we can come to the conclusion that, as far as EU regulations are concerned, we are certainly moving ahead, step by step.
But what I would like to add in this context is the following. If we talk about technical interoperability, we have to refer to standards. And standards are something that should be developed based on a consensus of a group of interested parties. In this respect, I would assume that more could be done. I also have the impression that we have not yet deeply discussed the aspect of open standards: if we moved away from silo approaches and went more in the direction of open standards, we would certainly improve the interoperability situation and better the interoperability measures. Only as a side remark, at least as far as the academic part is concerned ‑‑ this goes to Nic ‑‑ we do have many more open access rules now, so we do have access to more data than in the past.
Let me now close the interoperability discussion from my side. I would like to say a few words about transparency from the academic side. I would distinguish between three forms of transparency, namely procedural transparency, decision‑making transparency, and substantive transparency. Looking at them backwards, substantive transparency is obviously the most difficult type of transparency, because consensus needs to be reached on such issues as hate speech or moderation of content. And here we do have cultural differences, insofar as we don't have something like cultural globalization.
Then we do have decision‑making transparency. In this context, I would like to refer to the multi‑stakeholder approach. Obviously, the IGF tries to go the way of multi‑stakeholderism, but it is difficult, and consensus among interested participants in a certain sector is not easily reached. For example, the Brazilian NETmundial meeting in Sao Paulo in 2014 has shown how difficult it is to really implement the multi‑stakeholder approach, with everybody at the same level.
And finally, the first type of transparency which I mentioned is probably the most legal one, namely procedural transparency. How can we make sure that due process is, in fact, complied with? If there is data access, how can it be avoided that different people are treated in unequal ways, that we have discriminatory behaviors, et cetera? And I would also think that it would be worthwhile to be a bit more precise and to invest more efforts into this part of transparency when further elaborating on the underlying document of this session.
And finally, since I would like to leave some time for discussion, I would like to bring up one additional issue, namely: how do we interpret transparency in general as far as information and data are concerned? We do have a paradigm which has been introduced and implemented, in particular in financial markets law, but also partly in consumer law, namely the mandated disclosure paradigm. And at least from the academic side ‑‑ maybe mainly from U.S. academics, but also from some European academics ‑‑ this mandated disclosure paradigm is contested as a viable instrument, because the critical voices argue that a lot of information leads to overinformation, and then people run into confusion effects or into Cassandra effects. In addition, it could also lead to overconfidence, with the result of decisions that are not really reflected upon.
So, in a nutshell, more information, more transparency is not necessarily better, and I would somehow like to close the circle by referring to the introductory remark of Luca: we need more quality of information; we need auditing processes; we need, somehow, supervisory measures which try to foster the quality of information. It's probably a long way to go, but I think it's worth going. Thank you very much.
>> LUCA BELLI: Thank you very much, Rolf. And yes, indeed, I think your last point, especially, is essential for us: the need to focus not only on quantity but also on quality of information. And also, to this extent, what you mentioned about the relevance of a multi‑stakeholder approach ‑‑ again, not to merely pay lip service to multi‑stakeholderism, but to indeed receive the diverse feedback and standpoints that are necessary to increase the quality of the process, right?
And also, something that Yasmin was mentioning before, and Nic was somehow reinforcing: the fact that most of these discussions take place in very Global North settings. So, we really need to try to expand this conversation to Global South actors. The Action Coalition on Meaningful Transparency has also been making some efforts in this regard, together with this Dynamic Coalition of the IGF. So, we really have to take this into consideration. And then, to keep the discussion on this point going, I would like to introduce our last, but of course not least, speaker, Dr. Monika Zalnieriute, who is with the Australian Research Council and also a Senior Fellow at the University of Sydney. Monika, it is a great pleasure to have you with us, and please, the floor is yours. And I would like to ask the technical assistance to unmute Dr. Monika Zalnieriute. I'm not seeing Monika here anymore. Perhaps we have lost her.
>> YASMIN CURZI: No, I think she's here. I'm going to ask her to unmute, but I can't turn her camera on. I depend on the host of the session.
>> LUCA BELLI: Is Monika here? Have we unmuted her? Because I'm not seeing her. I see another Monika, but I am not seeing Monika Zalnieriute. Okay. So, as we try to find Monika, I think we can take advantage of the fact that we have ten minutes left to start with some questions. I see that there are also some questions in the chat regarding any potential feedback on the recent European Digital Media Observatory working group report on platform‑to‑researcher data access. If any of the speakers ‑‑ all the speakers, of course, both in the first and second segment ‑‑ have any comments on this, you are very welcome. Do we have any others? Before we open the mic to all our speakers for replies, do we have any questions from the floor at this point? Or from any participant in the chat? Again, I am sorry we cannot see you, so I have to ask you to manifest yourself by taking the mic in the room and saying your name, your affiliation, and your question. Do we have any questions from the floor? I don't hear anything so far, but I see a new question.
So, Rachel Pollack, you had a question. You still have the question in the chat. Let me find... okay, yes. I found your question. So, UNESCO is organizing a global conference on ‑‑ well, maybe we can open Rachel's mic so you can ask the question yourself. I'm opening your mic. So, Rachel, you are now free to ask your question.
>> RACHEL POLLACK: Yes. Hi, Luca.
>> LUCA BELLI: Hello.
>> RACHEL POLLACK: Yes, hello. Thank you for this session. Very, very interesting. So, my comment and then question is to alert everyone that UNESCO is organizing a global conference on regulating digital platforms. It will take place in Paris from February 21st through 23rd, and also virtually in a hybrid format. The goal of this conference is to develop a model regulatory framework for securing information as a public good while respecting freedom of expression. And so, transparency is a key element of this framework and also of the conference.
I have put in the chat some of the goals of the regulatory framework: transparency, content management policies consistent with human rights, user empowerment mechanisms, accountability, and independent oversight. We also list ten issues on which platforms should report to regulators, and it's very much in the vein of the Online Safety Bill in the UK and the Digital Services Act in terms of focusing on processes and structures.
This is a draft document that is open for consultation; it will be posted online on December 9th, so we would welcome any comments. And one question that we have faced ‑‑ and I think this session this morning has also approached it ‑‑ is: what are the hazards in trying to set out a global policy on regulation, given how varied political systems are around the world, the way that content moderation affects different groups differently, and, connected with this, the idea of interoperability and being nuanced to local contexts as well? So, I would be very curious for the thoughts of the panelists on this question. Thank you.
>> LUCA BELLI: We have a hand raised. Sorry, I'm not seeing who is raising it. And now I think it's Rolf?
>> YASMIN CURZI: It's Vittorio and then ‑‑
>> LUCA BELLI: Sorry, Vittorio. Please, go ahead, Vittorio, and then Emma.
>> VITTORIO BERTOLA: Okay. Yes. I think this is an important question. I think the problem is whether we are talking about hard regulation or soft regulation, in a way. Because most of the platform regulation we are talking about today is hard regulation and will necessarily be done at the national level. And indeed, there are risks that in certain regimes this would be bad for human rights, but I don't see how, at the global level, you can make any kind of binding hard rules ‑‑ maybe you could promote some kind of international treaty, but I see that as really hard.
So, what could happen at the global level is rather setting principles, setting suggested frameworks. At the same time, if we want to do that, I think we have to get rid of this rhetoric that there should never be any kind of fragmentation or national regulation or so on. Because the problem in having this debate now is that, at least in the technical community, where I come from, there's a significant part of the community which is refusing to have this debate, just saying there should be no national rules on content or on platforms, and this still has to go away before we can have any kind of meaningful dialogue.
>> LUCA BELLI: All right. So, we have Emma and then Rolf.
>> EMMA LLANSO: Great, thank you. Rachel, it's a great question, and I'm really glad that UNESCO is going to be hosting this discussion in full in February. For me, the main hazard in setting a global framework, or a global approach to policy, is how different the underlying conditions in different jurisdictions and legal systems are. The same regulation that works in a country with strong rule of law and clear commitments to human rights ‑‑ where, in practice, those rights are respected through the operation of the court system and the legal system ‑‑ can look very different operating in a different environment where, whatever the regulation might say on its face, there is not that kind of fundamental protection for people's rights baked into the overall system.
So, take even just the question of researcher access to data. CDT is working on research right now comparing the U.S. and the EU on the question of whether exposing data to third‑party researchers changes the way that law enforcement might get access to that data under legal standards. The answer is very different across the Atlantic because of how the two jurisdictions approach the question of reasonable expectation of privacy and what privacy rights you have in your communications data. So, I think that kind of comparative overall legal environment question is really key to understanding the impact that any kind of framework idea could have in practice.
>> YASMIN CURZI: Thanks, Emma. Rolf?
>> ROLF WEBER: Okay. Thank you very much. I would like to very quickly reply to two or three questions. First of all, as far as the UNESCO conference and the earlier question are concerned, I think we have to overcome siloed solutions: we should somehow include the media silo, include the technical community, include policymakers in general, also include businesses, and try to bring the different streams together in order to be successful.
And since I'm in the process of speaking, I would also like to quickly answer Natalia's questions. The Ruggie principles are not as soft as some people think, because they were taken up in the OECD Guidelines for Multinational Enterprises as early as 2011, and these guidelines are now under revision and being strengthened, so they are becoming relatively hard. In many countries, and in particular in the European Union, the Ruggie principles are implemented in regulations or in national laws. So, we are in the process of making them hard. Obviously, this is a Western hemisphere approach, and maybe also an approach in New Zealand, Australia, and Japan, but I am relatively confident that we are on the way to making them harder.
>> YASMIN CURZI: Thank you so much, Rolf. Oli?
>> OLI BIRD: Yeah, thank you, Yasmin. And thank you, Rachel, for the question. We've been corresponding in the chat, but we'd love to follow up on this. I think it's a really important and timely conversation to have, and your conference sounds like a great way to take that forward.
On some of the other points made, I would echo and agree with both Vittorio and Emma. I think we are inevitably seeing a divergence of different national regimes. I don't necessarily think there's anything we can do about that, and I think the challenge is how to move forward, given that. And some of my ideas from the start of the session, about the underlying regulatory tools being common across different regimes, might be part of the solution there. But definitely, Emma, your point is a really important and valid one: different countries have different legal frameworks and different starting points in terms of how embedded human rights are in the legal system and in the culture, and those are really important things to consider at the outset as well. We shouldn't make any naive assumptions about all being in the same place on those sorts of points. So, yeah, thank you.
>> LUCA BELLI: Thank you very much, Oli, for this. And I think with this final word from the regulator, we may move towards the end of this session, as we are already three minutes over time. I think we had a very good discussion with a lot of very interesting elements that will be shared. I hope the video recording will be accessible soon, and ‑‑ although I really like Nic Suzor ‑‑ I hope the recording will not show only his picture, because it may be more entertaining to watch the entire video recording of the session.
Some other next steps that we might find very useful include engaging in the UNESCO Conference mentioned by Rachel. UNESCO has been doing great work on these issues. And please feel free ‑‑ Rachel, Guy, and other friends from UNESCO ‑‑ to use the outcome document we have elaborated this year, because it was created exactly for this kind of purpose: to be read and used as much as possible by partners, friends, and whoever may be interested. So, I would say that our next meeting, hopefully, might be in the context of the UNESCO Conference in February.
I thank everyone, especially my dear colleague, Yasmin, for her excellent co‑moderation, and the great panelists that we have had today. It has been a really fantastic discussion, and I wish you an excellent continuation of IGF 2022. Bye‑bye.
>> YASMIN CURZI: Thank you all. Bye‑bye.