The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> EVELYNE TAUCHNITZ: Wonderful. Good morning, everybody, here from Addis Ababa. Thank you for joining us today for this session. We're going to explore basically four topics. The first is rights and obligations. That means the right to access the Internet, but also the question: Do we also have a right not to access the Internet? In the sense of: is it only a right to access, or is it even an obligation nowadays in our information society? Are we obliged to be connected, or are we obliged to do certain services online ‑‑ for example, filling out an enrollment form for our son's school, or health insurance, and so on? So, is there a right to the Internet? That is also going to be discussed, by the way, at the main session on Connectivity and Human Rights on Thursday morning.
We're also going to discuss whether there is an emerging human right to access a free and universal Internet, and whether, in the broader IGF context, that right is not already turning into an obligation. Because we have to be aware that if it is turned into an obligation, we might have new forms of inclusion and exclusion: some people might still not be able to join the Internet and be excluded, with the digital divide steadily growing. Some people may even choose for themselves: I simply don't want to do these services online; I don't want to be part of the information society. So, that's about free choice and how we handle it. Disconnection and connection is also a topic. If we don't have connection, we don't even need to discuss whether there is a right to a free and universal Internet or not.
And finally, it's really about meaningful access. Do we really have access, or are there still issues of inaccessibility? So, it's really these Internet contrasts that we are talking about today: on the one hand, the possibilities, but on the other hand, asking whether these rights are turning into an obligation, and whether there is also a right not to be connected, not to be part of this information society. And how do we handle topics of inclusion and exclusion and the digital divide?
So, I'm going to invite our first speaker, who is on site. That is Dariusz Kloza from Ghent University in Belgium. Welcome, Dariusz.
>> DARIUSZ KLOZA: Thank you. Do you hear me? Yes. May I ask for my slides to be put up? Yes, thank you. Thank you, Evelyne. Ladies and gentlemen, what was more or less ten years ago a right, a freedom, an entitlement to access the Internet has become, probably, an obligation. This is particularly visible in areas such as e-public administration and e-governance, and recently in the public health crisis. These developments touch on matters that are important to individuals, on the values they cherish and the values that democratic society has been built on. Therefore, they can also be seen as a matter of human rights importance.
This raises the question: What can human rights do to protect one from an obligation to access the Internet? There are at least two answers. One is to introduce a completely new human right not to access the Internet, provided it is sufficiently defined, realistic to implement and enforce, provided it touches upon matters important for society and for the individual, and provided that it is consistent with existing human rights. This is, perhaps, a Herculean task, knowing how the system of human rights functions.
The other option is to interpret a freedom not to use the Internet ‑‑ a right, a freedom, an entitlement, a choice ‑‑ from existing human rights. Human rights have been conceived as a flexible tool, responsive to changes in society. For example, we can derive a freedom not to use the Internet from certain existing freedoms, like freedom of speech, freedom of assembly and association, and freedom of religion.
In European Human Rights Law, which I am most familiar with, freedom of expression can be used, for example, to safeguard someone's choice not to express themselves at all. You can remain silent. You may not have an opinion, and you don't have to express it in certain situations. European Human Rights Law shields people from an obligation to join, for example, a trade union in order to practice a regulated profession in certain circumstances, such as taxi drivers. European Human Rights Law equally protects individuals from having to reveal their religious beliefs, or a lack thereof.
The human rights to privacy and personal data protection ‑‑ the area I am most familiar with ‑‑ are there to shield one from having to reveal their personal data and to let them control their personal data. European Human Rights Law protects private choices: what to do with one's body, one's identity, one's name.
Last but not least ‑‑ and I underline that the catalog is open ‑‑ freedom from discrimination on grounds such as literacy, age, income, and, especially these days, computer literacy can equally be invoked to protect one from an obligation to use the Internet. Obviously, the catalog is long and probably not exhaustive.
One of the aims of this session is to look at what other human rights can be invoked to that end. However, this is not without limitations. Assuming that there is a new human right, or that existing human rights are interpreted in a way that actually gives us the choice not to use the Internet, only a few human rights ‑‑ especially the protection from torture and from inhuman or degrading treatment ‑‑ are absolute rights. Other fundamental rights are subject to certain limitations in order to accommodate other interests. This is done by the technique called proportionality, where one human right is balanced against another.
What I'm trying to say is that a right not to use the Internet might sometimes be balanced against other human rights or other important concerns. So, overall, it is not an absolute right but a right subject to certain limitations. But even from the broader perspective, this is not to say that the right to access the Internet ‑‑ the positive right to access the Internet ‑‑ is something bad. The right not to use the Internet was conceived from the idea that we have to do something to protect people from these types of obligations. At the end of the day, I believe both of them ‑‑ the right to access the Internet and the right not to access the Internet ‑‑ should work together. Thank you.
>> EVELYNE TAUCHNITZ: Thank you very much, Dariusz, for this first presentation of our session. Next will be Anriette Esterhuysen from the Association for Progressive Communications, please. Thank you.
>> ANRIETTE ESTERHUYSEN: Thanks. I'm actually speaking on the second topic, on inclusion and exclusion. Let me stand here; otherwise, you can't see me.
Well, first, I want to pick up a little bit on what Dariusz said about the right not to use the Internet. I think that is a very legitimate right, but we also have, in Africa and in many parts of the Global South, people who don't have the Internet. What we are sometimes overlooking is that this emphasis on digital inclusion, which is why we are all here, always puts the emphasis on the digital rather than on the inclusion. What is overlooked is that to really have a meaningful choice about digital tools, whether you use them or not, you need inclusion. And in fact, if the inclusion isn't there, then that capacity to use the Internet ‑‑ to enjoy more human rights, to access public services, to interact with the state, for political participation, to watch soccer online, as our techies were doing earlier ‑‑ you don't have it. And the emphasis on Internet-related human rights and on digital inclusion overlooks the fact that the more we embed or require access to digital tools and the Internet to exercise our human rights, the more we actually exclude people who don't have access to them.
And what we see in developing countries ‑‑ and there's quite a lot of literature on this ‑‑ is what one of my colleagues, Alison Gillwald, calls the Digital Inequality Paradox: the more high-end digital services become available online, the more high-end a device you need, the higher the bandwidth you need, the faster the access you need. So, you might be in a country like Ethiopia, which is busy at the moment really expanding its Internet infrastructure ‑‑ there will be more than one operator soon. But if you are in a rural area with a very poor connection and you need a smartphone to register the death certificate of one of your parents, you can't actually do it, because even though it's digitally possible, you don't have the device; you might not have the skills; and you might just not be able to afford the bandwidth.
So, when we look at human rights, at inclusion and exclusion, and the role of the Internet ‑‑ the choice to enhance the experience of rights, and also the choice of states to exercise their duties in promoting human rights through digital means ‑‑ we always have to consider inclusion, and we always have to consider the most excluded. If the most excluded people in our societies are going to need Internet access, smartphones or computers, electricity, and a reliable connection to enjoy their human rights, are we really making progress in creating more rights-oriented societies and cultures?
>> EVELYNE TAUCHNITZ: Thank you very much, Anriette. This is a really important point: we don't want to leave anybody behind, and we need to be really careful about how we actually design these services. You also mentioned the digital inequality paradox, which is a really interesting point of view that I think merits more discussion.
We're going to have our next speaker, Paolo Passaglia from the University of Pisa, who should actually have been the first speaker, because he also talks about the right to access the Internet ‑‑ though I think he's more in favor of it, but let's see. He's joining us online. I don't know if that's working. Paolo, are you there?
>> ANRIETTE ESTERHUYSEN: He's muted.
>> EVELYNE TAUCHNITZ: You're muted. Yes, you're muted. And camera's also off. Paolo, are you there? Maybe you can unmute yourself? Yes.
>> PAOLO PASSAGLIA: Good morning. I had problems with connection.
>> EVELYNE TAUCHNITZ: Yeah, we can hear you now, I think.
>> PAOLO PASSAGLIA: Good. We tend more and more to define access to the Internet as a right, but I think the main problem regards the kind of right we are talking about. I see three possibilities. The first is the right to access the Internet as a freedom, as in the Reno case or the judgment by the French Constitutional Council. The second is the position according to which access to the Internet is a means to enable other rights. And the third, I think the most important one, is access to the Internet as a social right. This definition is the most important since it obliges governments to take action to allow everybody to access the Internet ‑‑ if they want to, of course.
The problem is that these actions require considerable funding. Plans to develop broadband, for instance, can therefore have a negative impact on other social rights. As a result, while implementing access to the Internet, governments must make sure that the Internet really serves the purposes for which it has become crucial; otherwise, one could suggest not spending the money on broadband and saving it for hospitals or schools. For example, the Internet must be a means to acquire knowledge, to freely develop one's personality, and to have access to the highest number of sources of information.
Now, against this backdrop, my question is whether the existence of gatekeepers is consistent with these purposes of freedom. Ultimately, are we sure that the Internet, as it is currently shaped, deserves the implementation of access as a social right ‑‑ that it is better to implement access to the Internet rather than to protect other social rights? I think it is a matter of how the Internet is structured and how freedom inside the net is protected. Thank you very much.
>> EVELYNE TAUCHNITZ: Thank you very much, Paolo. As we are already running a bit behind time, we'll go directly to our next speaker, Rosanna Fanni from the Centre for European Policy Studies, also in Brussels, I believe.
>> ROSANNA FANNI: A mic, thanks. Thank you. I will be speaking about the contrasts in the use of digital identification systems.
To kick us off, just a general definition. Digital IDs are a collection of electronically captured and stored identity attributes that uniquely describe a person within a given context and are used for electronic transactions ‑‑ attributes such as name, age, and gender, but also biometric data, such as fingerprints or iris scans.
We know from the World Bank that approximately 159 countries already use some form of digitized ID system, and these systems are claimed to offer opportunities for businesses, the state, and also for citizens. Naturally, when more citizens have access to one nationwide ID, online public and private services become more easily accessible, providing access to banking, health care, remote work, education, and so on.
For example, in India, the system has provided access to social welfare benefits to over 1 billion citizens who did not have a passport before. In Estonia, the digital ID system can be used to vote, and the country even established an e-residency program, which attracts people from abroad; numerous non-EU businesses have established themselves in Estonia with an e-residency.
Of the countries that have implemented some form of digital identification system, approximately two-thirds of the digital ID systems already in place ‑‑ around 105 countries ‑‑ use biometric data, the fingerprints and iris scans I mentioned. And this poses certain challenges.
Even countries with advanced data protection legislation, like the EU member states, Canada, and the UK, grapple with the enforcement and implementation of that legislation. Meaningful data protection legislation is really crucial for the processing of such data, in particular biometric data. It's also a key issue, for example, in the upcoming EU Artificial Intelligence Act, where the question of banning the processing of biometric data is widely debated.
Next to protecting the privacy of citizens, it is also important to note that biometric surveillance in public spaces is a threat to freedom as we know it, and it is already used by authoritarian governments to seize power and control. Risks of fraud, cybercrime, and identity theft also arise due to the large value of the data produced.
In addition, governments increasingly have disproportionate control over these digital identification systems. Limited public accountability and volatile political agendas after elections also make digital IDs a system prone to abuse.
So, as we now know, digital ID systems are gatekeepers to an increasingly large amount of goods and services that are becoming essential for citizens. But especially when a person's behavior or body doesn't fit the predetermined notion of identity held by the government or built into the systems, digital IDs, unfortunately, structurally exclude individuals and communities, putting human rights starkly at risk.
Additionally, the lack of knowledge and tools to really implement rights-protecting policies and principles disproportionately affects children, the elderly, and less-literate citizens. In at least five countries ‑‑ Tunisia, Kenya, Dominica, Rwanda, and India ‑‑ we already know that digital ID systems have been invalidated by courts because they failed to protect data appropriately, and there are numerous concerns over privacy and security, as well as the structural exclusion of minorities.
So, to summarize: what do we do with that? There is growing enthusiasm for digital identification systems in Africa, but also across the world, and there is a need to examine more closely their impact on human rights, on the rule of law, and on the people who will be included in or excluded from those systems. If implemented well, everybody would benefit; if not, digital identification systems could continue to create structural power imbalances between the state, businesses, and citizens. Thanks.
>> EVELYNE TAUCHNITZ: Thank you very much for bringing in this new aspect of digital identities and discussing it from a really critical perspective. You raised some really important points, for example on data protection, but also, as you mentioned, on the government's control over these ID systems ‑‑ and the question of how we assure accountability in that case.
So, now we will move on to a bit of a different topic, which will also touch upon Internet shutdowns, something we have not discussed yet. I invite our next speaker, who is online, Giovanni De Gregorio from the University.
>> GIOVANNI DE GREGORIO: Hello. Can you hear me? Fantastic. Thank you so much. And of course, good morning, everyone. It is a pleasure to see you. I will be really brief, just to introduce the topic and the dual relationship between connection and disconnection. One important topic to address is the question of connection, which I think most of you are quite aware of.
But one of the big problems is also disconnection. It is about the challenge posed not only by the now-famous problem of Internet shutdowns, but also by problems that are not just about switching off the Internet ‑‑ problems related to network disruption. Network disruptions mean, for example, interference with Internet traffic that does not prevent you from entering the digital space or using the Internet altogether, but prevents you from actually using services on the Internet.
One important point is that attention nowadays focuses a lot on the problem of Internet shutdowns, and that is absolutely relevant, but we should not forget that there are different tactics and techniques to disconnect. It is not just about the switch-off. There is also the possibility to discriminate traffic, for example, or to censor online spaces, like social media platforms or other spaces.
So, what is particularly important to understand is that the strategies and tactics of these disconnections are becoming more complex than before. What we are seeing is a proliferation of these tactics in many places in the world ‑‑ not only in Latin America and Africa, but also in Europe, where we have different experiences. And the problem is not only the impact of these measures on human rights, but also understanding why these measures are implemented. Sometimes they are used, supposedly, to protect rights ‑‑ for example, to stop the spread of disinformation or hate speech ‑‑ but the removal of this problematic, harmful content is also used, of course, as a justification to achieve other purposes.
There have been plenty of cases. You don't need me to tell you about them; you can just go online and search. There are many out there, and there are NGOs that have been working very hard on mapping these attempts. So, the problem of connection and disconnection is, paradoxically, one of the big issues we are still addressing in the digital age. Again, this is a call not to think of the discussion of Internet shutdowns as a black-and-white discussion, because it is much more complicated. There are a lot of nuances in the tactics and techniques that can be used to perform a shutdown, and it is also important to understand their consequences.
Also, at the moment, there is not much data on whether a shutdown is effective ‑‑ for example, at tackling hate speech. If I claim that a shutdown is meant to tackle hate speech or disinformation, the problem is that when the Internet is switched back on, no one has data on whether this shutdown, this network disruption, has been effective or not. So, this is the situation. It's important to understand that the discussion about connection is not just a discussion about freedom of expression; it is a discussion about all human rights, and also about how the Internet architecture itself can be used to restrict network access.
So, this is the way we need to frame the discussion. It is very complicated. It's very important to protect freedom of expression, but it is also important not to forget the big picture and to understand that the switch-off is not the only way to perform a network disruption, because there are so many other technical possibilities, which are increasingly invisible. This is just to start the discussion on this topic and to connect with all the others. Thank you so much.
>> EVELYNE TAUCHNITZ: Thank you very much, Giovanni, for discussing all these nuances. It's certainly not a black-and-white picture. We will now go on to our next speaker ‑‑ Olga, let's see if I can pronounce it correctly ‑‑ Olga Gkotsopoulou from the University of Brussels, who is going to talk about accessibility and inaccessibility. Olga.
>> OLGA GKOTSOPOULOU: Hello. I hope you can hear me. Thank you very much, Evelyne. I'm from the Research Group on Law, Science, Technology and Society and the Health and Ageing Law Lab in Brussels, and my work mainly focuses on accessibility, but in another context. When we speak about access, we very often have in mind access to the Internet, to information, to knowledge, or to literacy. But sometimes we tend to forget ‑‑ and it has also become obvious in the comments in the online chat, at least ‑‑ the question of access for whom. My research, for instance, is in the context of data protection law, when we speak about the protection of our personal data. How do persons with disabilities actually receive this information? If the information about how one can exercise one's data subject rights ‑‑ in the European Union context or internationally, depending on the applicable data protection or privacy law ‑‑ is not readable by a screen reader, how can a person with a visual impairment exercise those rights? Or an elderly person who does not have the necessary media literacy?
And of course, we are all aware of dark patterns and other difficulties that can create hurdles to accessing this information. So, this is what I would like to quickly add: we always need to contextualize when we think of access and accessibility in the Internet context. Thank you.
>> EVELYNE TAUCHNITZ: Thank you very much, Olga, for pointing out the issue of inaccessibility and accessibility. We will go on now to our next speaker, Quito Tsui from the Engine Room in London. Quito, you're also online, I believe.
>> QUITO TSUI: Yes, hi. Can everyone see and hear me okay? I assume people will let me know if they can't. I'm Quito. Thank you for having me. I want to look at the duality of accessibility and inaccessibility through the lens of biometric use in the humanitarian sector.
A quick background for those of you who aren't familiar. Many humanitarian organizations are increasingly employing biometric information ‑‑ iris scans, fingerprints, facial recognition, and so on ‑‑ in the process of registration and service distribution for refugees. In that process, they have also created vast databases of highly personal information. There is a deep irony, or twist, in the use of biometrics in the humanitarian sector, because its deployment is a promise of access belied by new inaccessibilities. It can seem initially simple: there is a desire to know who is receiving aid and to be able to track this. But who is gaining access in this process, and what are they gaining access to?
In many ways, it is not truly the impacted individuals, sometimes called beneficiaries or recipients. All too frequently, it is aid organizations, funders, and in certain cases host states and even malicious actors. By accessing biometrics, they are also able to access highly sensitive physical information that is immutable and utterly unique ‑‑ information that marks you out from everybody else. The argument for biometrics is in part about increasing access: faster and more efficient registration systems, and facilitating the use of cash transfer, which is a more unburdened way of accessing (inaudible). It's easy to accept these claims wholesale, but they need to be examined more closely to consider what is lost, what is no longer accessible, and in what ways these systems themselves are fundamentally inaccessible.
There are three main ways I want to highlight today, though there are many others. I think these help us understand the different manners in which inaccessibility arises, and the complexity of thinking through access when it is mediated through digital technologies.
The first is physical inaccessibility, which another speaker touched on briefly earlier. These systems rely on everyone possessing the same physical characteristics from which data points can be extracted. In the case of fingerprints, for instance, those who have engaged in hard labor, elderly individuals, and those who have cooked extensively ‑‑ frequently women ‑‑ do not have fingerprints, or do not have sufficiently legible fingerprints to be recorded. That means they cannot access these systems, or must access them in a different way. And if the point of these systems is faster, more efficient registration, or facilitating things like cash transfer, these individuals no longer have the ability to access those particular mechanisms.
The second is individuals who are swept up in the intricacies of the system. In Kenya, Somali Kenyans who were registered as children as refugees within the UNHCR system, so that their families could access aid, find as adults that they are unable to access their rights as citizens due to this dual registration. Correcting this has proven immensely challenging. The point of the system was to give certain individuals access, and it has fundamentally denied them access in other regards. Obviously, the loss of citizenship, or the inability to access it, is a huge trauma and carries immense consequences for those facing it.
The last is that biometric systems are frequently inscrutable to those who are subject to them. The technical literacy required to understand not only the biometric technology itself but also the surrounding web ‑‑ the protections, the possible risks, and therefore the ability to give meaningful consent in practice ‑‑ remains elusive. The inaccessibility of this knowledge, and the limits this places on consent, calls into question whether we should really be using technologies whose understanding is premised upon access ‑‑ access to time, to resources, and to the full extent of information. It is highly unlikely that this will be the case in many instances, not just in the use of biometrics but across the different technologies we use. We need to understand, and complicate, our notion of what it therefore means to agree to their use.
Fundamentally, claims of accessibility have to be mapped against the more insidious realities of inaccessibility. In the case of biometrics, where the consequences of inaccessibility are so acute ‑‑ the loss of citizenship, possible exclusion from aid systems or immense challenges in accessing them, an asymmetry of information preventing true consent ‑‑ however we understand accessibility, it has to be tempered by the sobering reality that technology raises the barrier to meaningful access and places immense conditionality on some of the most vulnerable individuals attempting to access their fundamental rights. Thank you.
>> EVELYNE TAUCHNITZ: Thank you again for sharing these wonderful insights with us on accessibility and inaccessibility. We have now heard several speakers on this topic, which really shows that it is a key topic to be discussed.
And we now go on to our next speaker, Georgios Terzis, who is going to talk about a bit of a different topic, or at least from a different perspective: we're going to bring ethics into the bigger picture. Our two last speakers are going to look at these topics from an ethical and human rights perspective as well. So, Georgios.
>> GEORGIOS TERZIS: Hi. Hello, everybody. I'm sorry for the slides ‑‑ no one told me that you're not supposed to use slides. What I will present is an ethical issue, a bit philosophical, so it might be useful. I understand I only have five minutes, so if anyone wants the presentation, I can send it to you later.
What I will present very briefly today is a literature and policy review through ethical lenses. I understand the issue of inclusion and exclusion is sometimes a first-world problem, but if one considers the opportunity costs, as they were referred to before, it becomes a problem for the rest of the world as well. So, I'll just see if this thing works. And it doesn't. Can you move the slides with this? Can I use this to move the slides? No? Anyway. Can you hear me? Yes.
So, there are six different lenses through which I looked at these approaches. First, when you look at the literature and policy from the utilitarian perspective, there are three rather false premises. The first is, you know, the greatest happiness for the greatest number of people. But this approach doesn't take into consideration the misery, if you like, of the minority who don't want to be part of the information society. My mom, who is 88 and lives in Greece, does not want to be part of the information society, and she's forced to. It also doesn't take into consideration long-term consequences; a lot of these policies are rather short-term.
The second, of course, is the contractualist ethics lens ‑‑ and there are slides for this, by the way. Can you try to move on? Next slide, yeah.
So, from the contractualist point of view, many policies assume that life is necessarily better in the information society, or that life outside the information society does not and cannot exist ‑‑ that information inclusion is for life. But if you take this ethical approach, consent is necessary, and here we don't really have consent. My mom never consented, actually. And you know, I dedicate this to my mom; this is where the inspiration came from. My mom never signed a social contract to be included, and she was very happy. The quality of life before the information society was quite okay, to be honest with you, for her and for me as well. Next slide.
Well, this is the approach where, as we know, you cannot always get what you want. But the problem here is, if you take the deontological approach, then even when you think that the information society can be very harmful ‑‑ so, basically, the case of surveillance, the way that people lived, especially during COVID, as was expressed before ‑‑ you cannot escape, because it is basically a deontological approach in which everyone, no matter what the consequences, has to participate. Next slide.
From the discourse ethics lens, the approach would be that, as long as everyone accepts this and it is not under coercion, it is okay. But can we really, decently sit here and claim that even the IGF is inclusive in the sense that everyone's opinion has been included, including the opinions of those who don't want to be part of the information society? Because we are in here preaching to the converted, right? The people who don't want to be part of the information society are not represented here. So, even with this approach, I would say that the right to be excluded should be there.
So, we think ‑‑ and this is part of my colleagues' approach as well ‑‑ that we should actually approach this issue of inclusion/exclusion through virtue ethics, which is the middle ground. So, you have the deficiency, the digital divide, and you have the excess, the obligation of inclusion, and the virtue must be somewhere in the middle.
And of course, care ethics ‑‑ and my time is up ‑‑ which is the last approach: we should actually approach inclusion/exclusion through care ethics. Care ethics means inclusion only if you want it, and that we should take into consideration the negative side effects, whether those are physical harm, cyberbullying, surveillance, et cetera. I will be very happy to share my slides, and I can send you a brief, but it will be in the report as well. So, I'll stop here because my time is up, and I think the next speaker will complement the ethical approach. Thank you very much.
>> EVELYNE TAUCHNITZ: Thank you very much for adding this new perspective. Very interesting. Something that stays in our mind: we always talk about inclusion, but do we also include the people who do not want to be part of this information society? That's a great question to ask. And now I would like to go on to our last speaker for today, Peter Kirchschlager from the University of Lucerne, Institute of Social Ethics. Peter, please.
>> PETER KIRCHSCHLAGER: Hi, everyone. Good to see you all, and thank you so much for having me today. I would like to share my slides, if that's possible. If the online host ‑‑ it would be wonderful if that could be done, that I could share my slides. Perfect. Thank you so much, Anriette.
>> EVELYNE TAUCHNITZ: Peter, I think you should be co‑host already. The technical staff ‑‑
>> PETER KIRCHSCHLAGER: Yeah, but unfortunately, it doesn't work so far. Well, then I'll do it without the slides. I think it's better not to waste your time. So, I would like to start from an ethics of human rights perspective; that will be the perspective I take in my short presentation. We have to face the fact that, on the one hand, digital transformation and Internet access can provide us with positive potential, so that we can better fulfill the human rights of all humans. But on the other hand, we also have to be aware that we are struggling with ongoing digital human rights violations. We can see that the human rights standards which are in place and enforced offline are not well enforced online. And how could we tackle this challenge?
I would invite us to consider the idea of a human rights‑based digital transformation and human rights‑based data‑based systems. In order to have the same respect for and protection of human rights offline and online, we should make sure, already in creating, designing, producing, and using data‑based systems and digital means, including the Internet, that human rights are respected and in place. I just got the information that I should try again, to see if it's working now. Unfortunately not, but that's okay. I'll do it just this way, and then I will share the slides with you if you are interested.
So, we have to engage with that reality and, from my point of view, more rigorously. I think it's not enough to have just a beautiful declaration; it's not enough to have beautiful recommendations. We preach too much on Sunday and don't do enough during the week. So, what I would suggest is that we respond to these digital human rights violations with stronger regulatory mechanisms, and I would call for the creation and establishment of an International Data‑Based Systems Agency at the UN ‑‑ an agency that would play the role of a regulatory authority in the field of digital transformation, the Internet, and data‑based systems, in order to supervise and monitor what is happening in the digital sphere. And this is so that we don't think, well, the ethics professor is now drifting into some naive illusions or utopia.
I think we can draw an analogy to nuclear technology. Simply put: with nuclear technology, we did the research; we created the bomb; we dropped the bomb several times; and then we understood as humanity that we had to do something in order to avoid disaster. And we created the International Atomic Energy Agency at the UN.
Of course, I am aware that this is not a perfect regime, not a perfect solution. I'm also aware of the geopolitical implications of that agency, but we have to acknowledge as well that we have been able to avoid the worst. And I would argue in a similar sense for an agency in the field of digital transformation, data‑based systems, and the Internet, because it's not just me; even tech people like Elon Musk say that artificial intelligence and digital possibilities are more dangerous than nuclear weapons. So, I think it's time to act, and to act on a global level, in order to address the ethical challenges in this area, but also so that we can benefit better from the ethical opportunities. Thank you so much for your attention. Looking forward to the discussion.
>> EVELYNE TAUCHNITZ: Thank you very much, Peter, for bringing in, again, this new perspective, a bit of a mix between ethics but also governance, like the question of what we could be doing to manage these risks and regulate these technologies.
We are now at the end of our speakers' presentations and would like to invite both our online audience and our onsite participants to join the discussion. If you have questions for the speakers, now would be the time to pose them. We will start with the online participants and collect some questions there. Our online moderator, Caitlyn, will take the online questions. Caitlyn, please.
>> CAITLYN McGEER: Yes. So, we're actually just still collecting questions, so it might be better to move to ones in the room first.
>> EVELYNE TAUCHNITZ: Okay, sure, we can do that. So, please, any questions here in the room, any points of discussion that participants would like to add? Of course.
>> ANRIETTE ESTERHUYSEN: I speak as a human rights activist and a non‑academic. I think the speakers all really highlighted the nuanced way of looking at this and the risks of assuming, as we've been saying, that digital inclusion is a synonym for inclusion, that online rights give us human rights. But if you take a big‑picture perspective, looking at the Global North and the Global South ‑‑ and you can respond differently for each ‑‑ has increased digitalization and Internet access taken us further towards more rights‑based societies and governance, or not?
>> EVELYNE TAUCHNITZ: I would actually have had quite a similar question. I'm really glad that you posed it. Who wants to answer that, or who has input on it? Any of the speakers? Peter? Yes, please.
>> PETER KIRCHSCHLAGER: Well, thank you so much for this contribution. I think you're absolutely right to ask the question, because it's not that obvious. We have to be very precise in identifying what exactly the gains are from a human rights perspective thanks to digitalization. I think about possibilities in the area of e‑governance, possibilities in e‑voting, access to information; you can name a few in the political sphere. But at the same time, we have to recognize that in certain economic spheres and domains, we have to tackle business models which have human rights‑violating practices at their core. We are not talking about negative collateral effects; we are talking about business models built upon human rights violations.
Take, for example, the Meta ‑‑ formerly Facebook ‑‑ case, where you have racist hate speech happening on social media leading to people really killing each other on the street, and the company not doing anything against it, even firing up that hate discourse, that racist hate speech, in order to keep people on its platforms, and therefore being complicit in killings on the streets. Things like that in the economic domain need to be tackled as severely as they usually are in the offline space. We cannot just accept or be indifferent to human rights violations happening in the digital sphere. We have to tackle them as severely when they happen online as we do when they happen offline.
>> ANRIETTE ESTERHUYSEN: Can I just give a reaction to that? Thank you. It's Anriette again. Peter, thank you for that response. And I think Giovanni also talks about this. I would just add one reflection, having worked in development and social justice. Probably, yes, there are more human rights in some respects. But I think what this digital inclusion and human rights debate has done is actually make civil and political rights hegemonic; it has shifted the whole discussion of human rights towards being concentrated on civil and political rights. Whereas in the 1980s and 1990s, when there was a strong pushback from developing countries, in the post‑structural‑adjustment phase, for more social and economic equality, the human rights‑based approach was accepted by the UN and economic, social, and cultural rights were emphasized, even though some states have never signed that treaty.
But I think since we started talking about the Internet and human rights, we have really lost that focus on all human rights being interconnected. At the IGF and in many other spaces, we really just talk about civil and political rights and not social and economic rights.
>> EVELYNE TAUCHNITZ: Yes, we have another question from the audience.
>> AUDIENCE: Thank you. I come from Ethiopia, where only 25% of the population has access to the Internet, and it has always been a luxury for us. So, when I heard that the right not to access the Internet is entertained as a human rights issue, I was surprised! What a contrast, I say. So, it's an interesting point. Would you like to comment? Is it something reflective of the North, or do you think the right not to access is also relevant in the developing world?
My other question relates to, you know, Internet shutdowns. In countries like Ethiopia, it has been customary to shut the Internet off during (?), and that's understandable, because otherwise people would share answers via Telegram and all that. In fact, recently, students were taken, as hostages, to universities where they had no access to the Internet for about a week, totally disconnected. In a pragmatic sense, I understand that. But what do you think of this in light of human rights violations and all that? Thank you.
>> EVELYNE TAUCHNITZ: Thank you very much. As time is pressing, please keep your comments and reactions to the questions short. Does anybody have an urgent question or comment? Maybe one last question or comment from the online participants that we can take as well? Okay. So, does anybody want to react?
>> I can make it super quick, concerning your question. It's super interesting. I see the right to access the Internet and the right not to access the Internet as complementary. One does not exclude the other; they should actually work together everywhere around the world. That is the shortest answer, due to the time constraints.
>> ANRIETTE ESTERHUYSEN: I'll give you a short answer as well. I mean, your question ‑‑ absolutely, I understand it. We are still fighting for rights. But I think the principle is that we really need people to have human rights, you know? 25% of Ethiopians have access to the Internet, but how many Ethiopians have full access to other human rights? The right to clean water, to public health, to education. It's a conceptual point, but I think it's an important one: having human rights, and really having them be meaningful in people's daily lives, requires more than Internet access. That doesn't mean that Internet access shouldn't be there for everyone in Ethiopia. It should be.
>> EVELYNE TAUCHNITZ: Okay. Thank you very much. We have to close our session because the next session is going to start in a couple of minutes. Thank you very much to our speakers and to the audience, here in Addis and online, of course. I think these discussions will be ongoing: we don't only need more Internet, we also need a free and universal Internet that promotes civil and political rights but also, of course, as Anriette alluded, economic and social rights, because human rights are indivisible and universal. So, let's continue working in that spirit. Thank you very much.