The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> SHOHINI BANERJEE: I will start by sharing my screen. Is it possible to do that?
Can the host enable the screen sharing?
>> SMITA V: Can you enable the screen sharing for the participants?
>> SHOHINI BANERJEE: Thank you so much.
A brief background on the Dynamic Coalition. The Dynamic Coalition on Gender and Internet Governance seeks to ensure and improve gender equality and inclusion within Internet Governance spaces and dialogues. Among other focuses, the coalition wants to promote the visibility of women and marginalized genders at the IGF and related fora, and to ensure that there are effective linkages between local, regional, and global initiatives on gender and the Information Society.
As part of that, every year we collate data to monitor and assess the level of gender parity and inclusion in the IGF sessions, both in the representation of participants and in topics that reflect on gender within Internet Governance. This data is then presented in the gender report cards, and I will share with you the data from the Internet Governance Forum 2021.
This was hosted in a hybrid format with the overarching theme of Internet United. Looking at the participants, 48% of the attendees identified as female and 1% identified as other.
51% of the attendees were male.
For IGF 2021, while there was no data available on session organizers and responses to the question of whether a session reflected specifically on gender issues, we did infer the data presented here from the session descriptions in the session reports that organizers submit after the session.
From the session descriptions, it seems that 10 sessions (5.8%) had a direct engagement with the topic of gender and the Internet or reflected on gender issues as a whole.
17 sessions (10%) had a partial focus, and most of the sessions (84%) had no engagement or reflection on gender issues. Another thing we collate is the gender diversity among the speakers and participants in each session. For IGF 2021, we were not able to do that.
There was no distinct data available on the question of reflection on gender issues in each session. So that is a quick overview of the Dynamic Coalition and the gender report cards.
>> SMITA V: Thank you so much. The reason we started with this presentation on the report cards is that it is a key intersessional activity that the Dynamic Coalition on Gender does; it is an important, very visible measure of how gender is spoken about, how many women are in the room, and what more we need to do to improve this.
The data recorded is in the binary, and we want to improve on this to be more inclusive in understanding gender in the IGF and Internet Governance spaces.
With that, we will dive into the session.
So I'm Smita V, I work with the Association for Progressive Communications in the government rights program.
I have fantastic speakers with me and others joining us remotely from Rising Flame.
Our context to this session itself, why are we doing a session on who is watching the machines at this point? Right. Why this session? Why now? Why here?
The overarching theme for the IGF is building a resilient, shared, sustainable Internet. I'm really happy that some of the themes looked at addressing emerging technology and AI. When we do Internet Governance and tech work, sometimes we're stuck in the current moment, and with technology that's dangerous. Technology moves so fast that if you're not already talking about emerging technology and how it is going to impact communities and countries, sometimes you get left behind from the very beginning.
Why now? This is a very pivotal moment: in many countries, governments are suddenly paying a lot of attention to technology. They're also paying a lot of attention to emerging technology to see how they can use it, particularly surveillance technology, digital systems, facial recognition, artificial intelligence. As people look at Internet Governance and digital rights, it is essential that we start looking at this more carefully and bring our experiences, our knowledge, and the learnings from our communities right now.
Why here? Yesterday, the Opening Ceremony went on for about 90 minutes. In those 90 minutes, women and LGBTQ persons were mentioned once. Right. If, in building a resilient, shared, sustainable Internet, you mention women and LGBTQ persons once in the Opening Ceremony, what kind of resilient Internet are you building, and who is it sustainable for? This is why it is important to speak about it here.
With that ‑‑ one last thing. The reason we've brought together speakers from different spaces and backgrounds is that when talking about emerging technology, more often than not, even now, the onus of making it perfect is on individuals, which is not the way it should be. We need to shift the onus of doing better regarding technology and its intersection with rights onto systemic change, structural change. Right.
With that, let me ask the first question to the great speakers we have here.
What are some emerging technologies that you see impacting your communities in your region or country in the near future? Liz Orembo, can I start with you?
>> LIZ OREMBO: Thank you so much for that great introduction.
I'll talk about our work at KICTANet and our research work.
I'm a Research Fellow as well. Our work at KICTANet started with training women on digital security, and then we started realizing that they don't engage: women politicians, women, are not represented in governance, especially when they try to seek political seats, and more of them have been trying to seek political seats. One thing that's been described is the algorithms. Women are mostly attacked online, also because of patriarchal norms that exist in society, and so as much as individuals are the ones who post on social media, social media amplifies these kinds of posts. When women are attacked, the attacks are also amplified online.
Why so? I think it is because of the business model, the advertising model: social media companies create algorithms showing what people like and engage with. It just happens that attacks on women online, especially women trying to seek political seats, are among the content that is actually amplified. This really affects how women and other genders seek political office, because they're impeded by these attacks.
So what we're trying to seek is a change to this: discouraging such business models and exploring other business models where there is no harm, especially to minority populations, populations that are actually disadvantaged. What we're trying to do is more research and more campaigns about this. It is really just a drop in the ocean; it is work that is just starting.
As you said, this is emerging technology, and we just noticed it, especially in the recently concluded general elections.
>> SMITA V: Thank you.
>> CHENAI CHAIR: Thank you for the question. When I think about the future from where I come from, locating myself in Southern Africa and also mostly interacting in East Africa as well, it is a realization of potential increased bias based on more and more technologies, specifically surveillance cameras, in the name of improved personal security and state security. We see a lot of surveillance cameras deployed in countries, but the question is who owns the data. This is a question that was actually raised in the previous panel. The main conversations are around where the technology is being procured from. There are concerns around transparency, because it seems as if we are getting technology in our regions for the sake of training the technology to be able to better recognize black faces. If it comes back working better for someone, that's great, but it doesn't deal with the bias.
Another point of concern, increasingly, comes from research projects I have done personally looking at how African feminists are engaging with data. There is serious concern about how my gadget knows where I am. I love the convenience, but why is it that I consistently feel like someone is watching me? If I have a conversation about something, the next thing, it comes up on my Instagram page.
These are the realities of people dealing with the fact that there is increased surveillance as they navigate spaces. We know that surveillance is always a tool of control, particularly from a patriarchal perspective: if you don't fit the box, there is increased watching of what you're doing, and then policy and regulation may actually be put in place for more control and more moderation.
An example of this is that oftentimes what you're saying is considered in line with the issues at hand. Another thing, from work done by Mozilla with *Privacy Not Included: some of the apps that are put out there as a way for people to respond to, say, mental health issues, and then you find out that the app you're trusting with your most personal information does not include privacy. You are unaware, because as much as we're all experts in this room, people with very good knowledge of information, how many of us actually read through the terms and conditions and really understand how they address these issues?
That's my intervention in terms of the concerns coming up. Thank you.
>> SMITA V: Thank you.
Srinidhi Raghavan, can we come to you?
>> SRINIDHI RAGHAVAN: Sure. Thank you.
Really excited to be here and to listen to all of the other conversations.
I think, coming from the context of India, and thinking about this specifically with regard to disabled people in the country, we are seeing a lot of conversations around the systematic creation of ID cards allowing easier, streamlined access to health services, disability pensions, and other things, like accessibility within the school system, et cetera. These are conversations around what's seen as the universal disability ID card, which is being contested at present, or being introduced into this conversation.
One of the big things around this has been disability organizations really asking questions about where this data is being kept, and how the system is wired against, or able to detect or create spaces for, disabled people to even access these systems. Access to all of these services, telecommunications included, is often linked to the extremely contested biometric ID card that exists in India, and that system was built keeping the non‑disabled body in mind: it doesn't necessarily take fingerprints of folks with cerebral palsy, who have retinal detachment, or who have perhaps one or two fewer fingers, or amputated limbs. So the system itself was wired to see the person receiving the ID card as a non‑disabled body.
Then, when the disabled body interacts with this system, we see that there is an immediate hindrance of access to services. So in many ways, the technology that is being presented as a "savior" for disabled populations is the technology that we're pushing against, that we're fighting against, because we're not able to access the things that are needed.
It sometimes takes over a year for somebody to access a disability card, which means that they don't get a pension for that year, or they don't have access to the educational system, which means they get further delayed in comparison to non‑disabled people. In many ways, when it comes to disability, technologies are offered as a solution, as a way to bridge the gap between disabled and non‑disabled people, but what we're seeing in the interaction is further hurdles, because the technology was created envisioning the non‑disabled body and mind and not really putting disabled people front and centre in the system.
I think some of these are emerging conversations about how this will impact disabled people's access to health or health insurance, a system that automatically removes disabled people from it as non‑eligible because of our existing conditions and lived experiences.
I think these are some of the things emerging from within this context and I'm sure it is similar in other contexts as well.
>> SMITA V: Thank you. Over to you.
>> SHEENA MAGENYA: Good morning. I'm Sheena Magenya, and I work with the Association for Progressive Communications. Thank you.
Mine is to say something small after everybody has said a lot of profound things. It is important in a space like this to talk about the fourth industrial revolution, a narrative pushed in many policy spaces that presents technology as this big equalizing moment, especially for Africa: that the biggest solution to all our problems is more technology, but also less oversight, less accountability, and that the more Africans have access to technology, the more we develop.
Many women's rights and feminist scholars have already disrupted this language, these ideas: development for whom? So when we push new technologies in contexts that are void of accountability, void of equity, void of representation, we're widening the gap, widening the lack of access.
For instance, in the previous panel, there was an intervention from the floor: somebody said that it's kind of pointless to come to these spaces and have conversations about AI in Africa when we do not have enough AI engineers, but without investigating who the AI engineers are, which bodies they hold, which biases keep getting built into these technologies. So mine is to bring it down to interrogating further some of the language that we bring into these spaces, where we think that technology is this equalizing thing in a deeply unequal society.
>> SMITA V: Thank you. Thank you for bringing up the comment from the previous session.
I also think that a lot of times, when we talk about technology, if we wait for everyone to understand it before having a conversation about it, a hierarchy forms in which only those who understand it in a technical sense are allowed to talk about it.
Right. It will never be the case that everybody understands it completely before you can have an open conversation that is multistakeholder, across different countries and communities.
My next question is actually related to this one: what are some of the lived experiences of marginalized communities when engaging with new tech and AI, and how does this shape our rights, freedoms, and voice? I know that in the current context it may seem like I'm leaning towards "it is always bad", but it is not; sometimes it can also be good, and we should recognize that too. Right.
The reason I'm asking this is because sometimes, when we talk about technology, the lived experiences of the people being impacted by it are often the last to be acknowledged and addressed. For example, in India right now there is a lot of movement, with a lot of money put in by state and central governments, towards facial recognition software, and this is, of course, for the protection of women and children, because, you know, they both have to be protected together. They never asked the women if they want to be protected and if this makes them feel safe. In fact, research shows that when a camera is present in a locality, women feel further unsafe; they feel like their free space has been taken over by surveillance. When you're presenting cameras and facial recognition as a method for safety, who are you leaving out, who are you throwing under the bus? Facial recognition inherently has a risk of bias, and in our country colour has different contexts; colour has contexts of safety, because of the patriarchal structures that exist in our countries.
So when you use a flawed technology, which has bias baked into it, saying this is for safety, and it is used for judicial processes, who are you throwing under the bus? Increasing research shows that facial recognition mostly misidentifies black women.
Even more than that, recent research shows that nonbinary and trans identities are misidentified almost 100% of the time. A nonbinary person who is dark skinned is just always a criminal according to the same system. This intersection is important to observe, given that this technology is being used "for your safety", as the government tells us all the time.
Similarly, what are some of the experiences that you have encountered when working with communities?
>> CHENAI CHAIR: For me, what is quite significant is the work we do at the Mozilla Foundation around opensource technology, responding to the reality of voice technology.
I'll use the example of the bias that exists within current voice technologies. The lived reality is, if I was to speak in my home language, in the way that I actually speak, my device would not be able to recognize me. Oftentimes we find that most, if not all, of the current popular voice technologies do not recognize accents that are not American or refined European, and they are also really biased against women. Even if you do have an American or European accent, if you speak to your device, it won't recognize you, but if you have a partner with a deeper voice, who most likely is male, it will respond better. At the same time, voice technology is seen as a solution to disrupt the technological space and to increase interaction with devices so that people can have access to information.
We're seeing a lot of solutions in the space that I exist in where voice is being pushed forward as a layer through which to access information.
Then, thinking about the positive aspect, a way of being reflective of the biases and existent discrimination in current voice technologies is through our Common Voice project, where we create opensource datasets. We track for representation, for gender, as inferred from the male and female people who sign up for the project, and we are also currently working on building up East African languages. One thing we have really been intentional about, because whenever you're building a new technology, particularly emerging technologies, it is really important to be intentional, is having people from diverse backgrounds in the room, so that it is not just the machine learning engineer, not just the person writing policy, but also people who are able to work in communities and come back with points of concern.
Privacy is often raised by people, particularly African women, with the question: what are you going to do with my voice? We really rely on people contributing their voices to the open datasets, so that technologies can be built that are responsive to the different accents and ways of speaking that people have.
So we have really been working to address the question around privacy by being transparent about where the data goes, who is going to make use of it, and by continuously thinking about how communities who contribute to the open voice datasets can benefit in the long run, in addition to answering the question of privacy.
Just to sum up: when you think about emerging technologies, there really is a concern around the harms of bias and discrimination emerging. When we think of using solutions through an opensource perspective, there needs to be intentionality about addressing concerns of transparency and privacy, but also long‑term thinking about how marginalized communities can actually benefit from the data, or use the data in the way that they see best fit.
>> SMITA V: Thank you.
Liz Orembo, you wanted to respond to this?
>> LIZ OREMBO: Yes. Thank you.
A good thing about artificial intelligence technologies is that the more you use them, the more they learn your preferences and how you work with them, and they adjust to your patterns of usage.
The other thing is access. Access also affects us along gender lines, and we have disadvantaged communities with challenges of access. When women and minority genders do not use the technologies, maybe because of cost and affordability, maybe because of societal norms, it means that there is no data to learn what their preferences are and how they use technology. That means future development of technology is not going to take care of how they would like to use the Internet. So really, we're leaving them behind.
So that's one point about usage that is coming up as really important, even as we talk about the biases. There is also a bias that will come about because we really don't have data about this other population.
>> SMITA V: Thank you, Liz. It becomes like a continuous loop: you don't have data, so it doesn't work, and then it's biased.
Thank you. Over to you. Maybe I can focus the question a little bit more, because I know that you work on freedom of expression and civic spaces. Do you have some lived experiences of communities that you have seen in relation to this, particularly or broadly?
>> SHEENA MAGENYA: I don't know if this is the right place for this intervention.
I think I'm jumping on what Liz just said about how the more you use certain kinds of technology, the more the machine learns: there isn't enough room to refuse to use it. When we think about COVID and how you had to move around during COVID, you had to fill out digital documents that tracked your movement, say where you were coming from, where you got the COVID test, and there was no choice. There is something about that that I think matters fundamentally for very many communities who have criminalized identities. For example, if you are a trans identifying person, an LGBTI person, in a political, national context where your sexual or gender identity and expression is criminalized, and this is information that they try to get out of you, this limits your ability to participate in certain spaces, or you have to lie: you have to say, yes, I am a cis, heterosexual woman, even though that's not how you identify, that's not how you live your reality.
There is also something about how technology is forced on communities without understanding the complexity and nuance of our lived experiences that is quite problematic, and therefore you could argue that some people choosing not to help the machines learn is an act of resistance, because you would be lying anyway in a political context where you cannot be yourself.
>> SMITA V: Definitely. Especially if, like you said, the effect of helping the machines is going to be long‑term. For example, in India, I know that persons who are HIV positive refused to sign up for the digital ID system, because there was news that a health ID system would be brought in and merged with the others, which would mean their identity was linked to their HIV status, something they could not risk getting out to their families. This meant that they could not vote, because the government tries to force you to have that ID card, and they could not access vaccines when the COVID pandemic happened, because you are asked to submit this ID, and it is made intentionally more difficult when you don't.
Over to you.
>> SRINIDHI RAGHAVAN: Really taking from what was being said around what this means in terms of what we disclose and don't disclose: this is very pertinent when it comes to disability. A lot of the emerging technologies and AI that are there right now are meant to be able to, quote unquote, "detect" disability in others. We know for a fact that there is widespread disability discrimination everywhere, and that automatically detecting disability is going to have harmful impacts.
It comes back to this conversation of what it means for a person. Like you were talking about, the universal health ID is being pushed in the Indian context. One of the things is that they want to streamline our health services, but this also means that, say, people living with psychosocial disabilities or mental illnesses are often denied medical insurance because they're not deemed fit or able to access that service.
Now, because of the streamlining of these systems, being able to detect when somebody has a disability, using standard ways of understanding how disability presents, like if I'm not able to make eye contact or I have difficulty phrasing certain sentences, et cetera, results in the system providing a diagnosis for me; and this is becoming popular not just for governments but for private organizations to do as well. What does that mean in terms of me maintaining privacy and deciding when I would like to disclose my diagnosis, but also the other implications for accessing services, where I'm automatically deemed unworthy because the system has a prior set of classifications of who is fit to access these things?
In many ways, it falls out because having to disclose disability, it is at the core of being able to access reasonable accommodation and accessibility rules of the community, but when that disclosure is recognized or when it is against us by excluding us from these pieces but also what ‑‑ who gets to make that disclosure and when is shifting rapidly because of the presence of technologies and creation of new technologies that's able to "detect" disability or find who is disabled, right, and what harms that would cause. It reminds us that it is going to help us create a more inclusive world and we do know that as a society there is heavy stigma against disabled people, is it going to create more inclusion, create systematic exclusion?
>> SMITA V: Thank you.
I'm going to pause for a minute and open up to the room.
Are there any questions from people in the room?
>> AUDIENCE: Thank you so much. Apologies for that.
So I'm coming from the observant, a responsible technology institution, and we're trying to look at the issues that you have mentioned.
I would like to ask: what do you think are the challenges and issues you face when it comes to holding technology companies accountable? These are companies with a lot of revenue and manpower; they could just hire thousands of smart people to address those issues. What do you think the stance is, and what can we do for a better Internet? Thank you.
>> SMITA V: Thank you.
Any other questions? We can take a couple of questions and then ask the speakers to respond.
No other questions. Who would like to respond? All of you. Okay.
>> LIZ OREMBO: Thank you for the question.
Challenges in holding the technology companies accountable: money. We're civil society; we act out of passion, and sometimes we're limited in funds, while on the other side we have someone who is making profits and can do whatever they plan to keep making those profits. That means they can also hire a big workforce, like you said, to do whatever they want.
On the other hand, lobbying: we can lobby the government from the citizen point of view, saying that these technologies are harmful and asking for this kind of legislation, but the companies also use money. Especially in some of the countries, the politicians love money, and those legislations just go.
Then there is the understanding among the public. We try to make the companies accountable by empowering the citizenry, and I can talk about some of my experiences campaigning for privacy-respecting technologies. We started this work as early as around 2007, talking about privacy. When you start campaigning for it, people start asking you: if I'm a good citizen, what am I hiding? Things like that. It took some time for people to understand what privacy is and what it means for them. When people don't understand it, it also becomes hard for civil society to actually achieve this accountability. Yeah.
Others can add.
>> SMITA V: Thank you.
>> CHENAI CHAIR: Adding on to what Liz said, an important one is geolocation dynamics.
The South African government summoned WhatsApp last year and this year, when they rolled out the new privacy policy, and they didn't come to the table. They do have offices in the country, but they took their time, because given where they are located, they're like: do we really need to go? Is South Africa a big enough market to listen to when called? And again, when you frame it from that geolocation perspective, what people get under the GDPR when traveling to Europe is completely different from what you get even if your country has a data protection law. There are 33 countries in Africa with data protection laws, and you recognize that structurally we exist in a world where there is still a Global North and global majority dynamic, with companies responding to where the money is, where the market is, even if there is political will. Oftentimes, so many conversations in these spaces are around capacity building for policymakers and regulators, but the reality is that they do have the political will to hold companies accountable. The companies, though, will go and respond to a political representative from Europe, because the fines there are quite high; if they're fined by an African government, they do the quick conversion to dollars: you know, small change, we don't care.
For me, the accountability challenges are around geolocation: as long as they feel like they don't have to respond to the laws of a country because they're located in a different place, it will be a long journey, and so collaborative work is the only way we can get to hold these companies accountable.
>> SMITA V: In relation to that, another thing: when we're trying to hold companies accountable, it is often civil society doing it alone. As more and more technologies are being used to influence elections in different countries, politicians and governments don't have the incentive to support citizens. They have more of an incentive to support the tech companies and go along with the lobbying rather than challenge them. It almost feels like civil society is challenging the tech companies and the governments simultaneously; they blend into one symbiotic entity, and that's a challenge today, because the tech companies can give governments something that we cannot: votes.
Any other questions?
>> AUDIENCE: I'm sitting here with two hats on: one question is from our UNECA Youth social media pages, and the second one comes from me directly.
The question from the member is: is Africa, as a multistakeholder community, strong enough in approaching these issues? As we all know, we come from different countries, our approaches are different, and we have quite a siloed approach when it comes to dealing with tech companies.
The second question, as I said, comes from me. I come from Africa, coincidentally South Africa as well, and we have been robust in engaging the tech companies that were mentioned. One of the things that happened in our recent elections is that we actually got the tech companies to come forward on combating misinformation during elections; government, civil society, and tech were working together. For me it was a first step, with obviously a very long way to go. My realization was that when we all come together to engage on such a matter, we come from different perspectives: decolonizing the Internet when looking at race, gender, et cetera; data protection, my privacy, my data as an individual; what's happening in terms of simplification, for example. Those have been the discussions happening around this.
My interest is in our strategic thinking as the different stakeholders. Civil Society obviously comes from an activism, Human Rights-approach point of view; government may come from a different perspective; with tech, obviously it is about revenue — everyone sitting here is almost like a moneybag, you could say, as much as we are on a platform that is there to exercise our Freedom of Expression. However, at the end of the day we do need platforms; we need them to be able to reach out on a global scale and to interact with one another.
Yeah. I just wondered about your thoughts on that.
>> CHENAI CHAIR: In terms of thinking about multistakeholderism in the African continent, the reality is that it is a slow-paced change. I have been part of the Internet Governance space since 2015. A lot of what I have seen emerge in terms of policy cohesion and of trying to hold tech companies accountable has been that long, slow journey to then be able to have tech companies in these spaces to have a conversation on these issues.
Now putting on my Mozilla Foundation hat, my corporation hat: as a technical company working towards the good of the Internet and trustworthy AI, we recognize the importance of community right from the beginning. So yes, multistakeholderism comes with power issues, with people being able to access spaces, but there are companies such as Mozilla where we try to facilitate spaces for collaborative Working Groups.
One of the things we have every year is the Mozilla Festival. We're excited to actually have a festival next year, and in that space we try to bring in builders, the tech community, policymakers and Civil Society to have a supported, collaborative conversation about what innovations and interventions we can put together.
To respond to the question that emerged: there are multiple spaces where multistakeholderism has actually been able to shift the conversation. The important thing is to be able to weigh: at what point are you, and where are you trying to get with the conversation? It is a slow journey.
Then, on the last point that you reflected on, I think it is really important to use the multistakeholder platform for what it is designed for: recognizing that people have different starting points. At the end of the day, this year's topic around ensuring a resilient Internet is an end point. As we decide that we're going to put interventions around a resilient Internet, how do we recognize what the government's position is, how do we hold them accountable and support them, and how do we locate that in a political context where stakeholders sometimes have their own agendas? At the end of the day, it is really about what we get from each other, and not trying to turn a government entity into a feminist organization, because they never will be that — except for the Canadians, who decided they'll have a feminist policy. They have been able to influence other government stakeholders to think about gender from a feminist perspective. It is really about having a strategy of who can help on the different agendas that we all hold.
>> SMITA V: Thank you.
You wanted to respond?
>> SRINIDHI RAGHAVAN: I think also, with regards to large tech companies and disability specifically, one of the hurdles is that as a community we're often stuck at access, because accessibility needs are often not met. We are not able to go beyond that in terms of conversation and the deeper concerns that may be present — how we envision consent when it comes to disabled people's access to these spaces, et cetera.
You know, one of the things — and it is true for all of our communities, but within disability, because of the diverse set of needs and disabilities, context plays a huge role in access to many things. In the Indian context, we struggle to talk about disability in technology because widespread poverty results in limited access to technology, and especially to the Internet.
I think it would be important for us to move beyond that conversation, to use not just accessibility as an entry point but to think of other ways in which we can imagine how disability is impacted in these situations.
I was reading a study recently, done in the U.S., around using AI to decide whether somebody is right for a job or not. A big problem was that the existing workforce had so few disabled people that the system deprioritized the hiring of disabled people, or used metrics of productivity and other things that work against disabled people — like how long it takes somebody to navigate a platform, which may be very slow for somebody with cerebral palsy or high-support needs. These types of markers defined in the system and within technology companies are often going to work against disabled people, because of the lack of conversations around these things.
I think moving away from thinking about it just through the lens of accessibility is essential for us right now. Technology is developing further and beginning to work against the community in very harmful ways.
>> SMITA V: Thank you. Let's get back to the panel.
In terms of Freedom of Expression, and civic spaces where discourse is taking place: how is this affected by emerging technologies? You had mentioned some work around this; could you expand more on that? And others who want to add in on this particular topic are welcome to.
I think one of the key things that's often said about technology is that this is where everyone is equal, everyone can speak. This is a very easy idiom to throw around. Is it actually true? With technology progressing at breakneck speed, how is this being affected — how are rights online, especially in terms of expression and association?
Who would like to start?
>> SHEENA MAGENYA: Thank you for that question.
It ties in closely to the work that we do at the Association for Progressive Communications in the women's rights program.
Earlier this year I had the privilege of attending a gathering in Zambia that was looking at decolonizing the Internet. What was profound about this meeting was meeting and interacting with a Civil Society organization that looks at whose knowledge exists online. I came across an interesting factoid that ties into this question: Google estimated in 2010 that there are about 130 million books published, in at least 480 languages that we know of.
In a world of 7 billion — well, now we're 8 billion; we have been busy — speaking nearly 7,000 languages and dialects, we estimate that only 7% of languages are captured in published material. When you think about just that — the fact that a large amount of the information that exists online excludes a considerable amount of the world's population, primarily the Global South — it presents a very skewed understanding of who holds the power to shape these narratives online, and it plays out in digital spaces in more local contexts.
Look at, say, Kenya. In Kenya you find that the majority of people online are likely to be cis, heterosexual men who have the resources — resources in terms of the technology, and resources in terms of their ability to earn money that can buy enough data for them to go online and insult you. Then you have a situation where a woman, or a visibly out gender non-conforming person, takes space online, expresses themselves, says just about anything — and this attracts violence in the digital space, very much like the offline space; the violence that non-conforming bodies experience offline plays out online. So when you look at Freedom of Expression online, it is severely limited.
When we speak about the community standards on certain platforms: who is the community? Whose standards? By and large, it is going to be whatever the decided conservative norm is. That's the standard. The community enforcing the standards is going to be people who conform to whatever that norm is. This creates an endless cycle of the continuation of violence online.
And I would be remiss if I didn't take this opportunity to point to the Special Rapporteur's report on the promotion and protection of Freedom of Opinion and Expression online, which is a groundbreaking document. It brings into sharp focus the particular violence that non-conforming, queer women's identities and bodies experience online. It is an important piece of thinking and coming together — multistakeholderism in practice — when space is created for Civil Society, and for people with lived experiences of violence online, to come together and say that the violence my black, Kenyan, queer, tattooed, short-haired body would experience online is very different from what somebody who isn't me experiences in those spaces. It is important to recognize that.
>> SMITA V: Thank you for that.
In fact, yesterday after the Opening Ceremony, there were comments that came into my DMs saying, you know, you are in our country as a guest, you should not complain. I was like, why is this in my DMs? Why not tweet this at me?
Would you like to respond?
>> CHENAI CHAIR: I guess the DM is safer.
Building on what Sheena Magenya said, really thinking about the reality: one of the strategies people have employed, once we set out the community standards — the question you raised — is to use those community standards to silence people who are gender non-conforming, people who do not fit into the narrative of women, and LGBTQI people. A lot of times you find mass reporting being used as a tool to silence people on a platform. In the American context, someone recently discovered a WhatsApp or Telegram group where people collected what they called democratic Twitter accounts and were planning to mass report them. The Telegram group had 2,000 people in it, and the plan was: once we have them, we shut them off the platform. That raises the issue of who is actually doing the content moderation at the back end, and whether they are looking into the context and nuance — why is this particular page getting this much reporting? — in the context of what's happening.
Oftentimes the algorithm simply reacts to the reports: if we get 2,000 triggers of a mass report — I'm not sure, I may stand corrected — we take the page down, put you in the sin bin until you're okay, and then bring you out.
What we have seen is mass reporting affecting a lot of people who identify as feminist, people who go against the grain, who then have their pages taken down. That also raises a challenge: a lot of us, because of affordability issues, engage on social media platforms where our discourse happens. We don't have the resources to maintain a website where discourse could happen, and maybe the structure of a website doesn't allow for the conversation anyway. Where do we go to find community? Social media. So the social media community standards meant to protect us actually end up harming us.
That's my thinking around Freedom of Expression and the current platform dynamics that exist.
>> SMITA V: Thank you.
I want to add two short things.
One: because this session is also streaming on YouTube, I know for a fact that if I post about this session on Instagram and say, hey, this was a session where they mentioned Palestine, where they mentioned Internet shutdowns, the reach of that post would immediately be affected. As more and more people rely on online spaces and social media for livelihood and income, are they allowed to speak about these things without economic repercussions on themselves? Right.
Another of the most recent examples of how technology was used to silence people is what happened with Muslim women in India. A lot of vocal Muslim women and online journalists had their photos and identities taken without their consent, put on an app hosted on GitHub, and "auctioned". This was used as a way to shut them up, because they were speaking up against the government and against extremist ideologies that are more and more prominent in society. The app was shut down the first time, and then within barely two months another version came up on GitHub, because GitHub didn't think it was important enough to keep down. This is why talking about intersectionality is important: just because something is open source doesn't mean people understand who an open source app is affecting. After this — and this is the telling thing — most of the writing around it came from Hindu upper-caste women, not from Muslim women themselves. The conversations among Muslim women were happening in closed spaces. This means that after this invasion of their Freedom of Expression — the experience of going to the police and getting a complaint filed, how they dealt with this in terms of mental health, how it led to further self-censorship and silencing online to avoid such an incident again — all of this was not recorded by Muslim women themselves. That's a problem, because then who is speaking about whom? This is a very big element of Freedom of Expression that is not spoken about as much.
Hopefully now there will be more research coming out around this by Muslim women themselves, which I think adds an important voice to any sort of work and knowledge produced around this.
When we look at Big Data — any conversation around emerging technology is incomplete without talking about Big Data. Right. Do you think that binary code, the data right now, is able to hold the multitude of identities that we hold, the intersectional identities we hold? How do we avoid this binarization that happens when data is collected? On one hand, there has to be horizontal data collection to make sure that the policies, laws and schemes influenced by the data collected are inclusive. On the other hand, data wants to put you in boxes. How do you deal with this dichotomy?
>> LIZ OREMBO: This is a very important question.
Sometimes you want to be represented, but sometimes the process of that representation becomes challenging for minority populations. For example, in some countries — for lack of a better example, the UK — most of the time you're reminded that you're a black African when you are filling out a questionnaire. It is important for them to get the data. On the other hand, when you seek medical services — services where you don't understand why your identity comes in — it makes you question: why should I reveal this identity, and would it affect me if I give this kind of data?
Yeah, there is that balance. I think we need to think critically about how we implement this without actually hindering or affecting what you really want to get out of such processes. Yeah.
>> SMITA V: Thank you.
>> SRINIDHI RAGHAVAN: I think it is also a complicated question. As activists working in this space, a big thing we realize about disability is that we don't have enough data around it.
Like we were saying, when the data that is collected is not dealt with in terms of privacy, and doesn't take our perspectives into account, one of the problems becomes that it is weaponized against the very groups the data is collected about. Right. For instance, there was a study done in the U.S. which found that a company collecting user data around disability ended up deprioritizing disabled people's applications during hiring processes, because they didn't fit its ideas about who gets to be hired. Right. Who fits the roles that are most "productive", most valuable.
All of these implications mean that the data being collected is often not in the best interest of the community; and if it is not in the best interest of the community, how do we perceive the larger question of how this data gets used?
In addition, we often don't know that data is being collected about us. And when we don't know that data is being collected about us, there is a power imbalance at play in being able to challenge whether this data is actually there. Right.
For instance, the AI system at this particular company — I think it is called HireVue — was challenged; but the user here is the employer, so the employer has all of this data about disabled people: who works, who doesn't, what characteristics make them a better fit for the organization, et cetera. The disabled person doesn't know, one, that this information is being collected within the system. Two, how do you challenge it when you don't know that the data is being collected, or what data is being held against you? Right. The only thing that's clear to us is that the data is of course built on stigmas that exist in society. We already know the biases — that disabled people are burdens, that we're not going to be productive in workplaces. By default, you know this is being built into the system.
So that's primarily the struggle with data. We would like data to be able to build an inclusive society, to have better systems that support and help disabled people in workplaces; but in reality we're aware that the data being collected about disabled people is going to be used against them in these spaces. It complicates our relationship to data.
Who is collecting it? What are the implications of that?
>> SMITA V: Thank you so much.
Next, let's look at policies. How do we think about policies around emerging technologies? Do we want to reject certain technologies? For example, if I'm not mistaken, there is a city that has banned cameras with facial recognition in public spaces. I also want to bring in a question from an online participant, Caroline, who asked: do you think there is a need for a transnational Human Rights instrument that would be legally binding for multinational corporations such as tech corporations? What other policy solutions do you think would be needed to make sure that the development of AI is inclusive and non-oppressive?
>> CHENAI CHAIR: Yes. Thank you for that question.
I think when we think about policy — especially as I have been reflecting on the policy space through the work mentioned around digital ID, and thinking about the AI strategies the previous panel talked about, focused on developing strategies for selected African countries — the starting point should be the why. Right. The why then determines the kind of policies we'll get. A lot of the time, especially in the conversation about the fourth industrial revolution, the why starts from an economic growth perspective. The point is that you have a strategy, you respond to AI, you develop the necessary policies to regulate emerging technologies, so that you can be seen as converting the famous World Bank quote that 1% of investment is 10% growth, or vice versa — that's a popular statement: if we invest in technology, this is what will happen from a GDP perspective.
Secondly, a lot of the time policies and strategies are put in place from a research and development perspective, pushed by academia and the computer science departments — fantastic, without a doubt. A lot of the time it is: how do we innovate, how do we build these robots walking around the streets — love them, just don't give them guns and grenades; I actually read a tweet this morning about those cute dancing robots from M.I.T. There is that innovation of research and development, usually connected to state security and military aims. Policies are also put in place for the sake of participation in global spaces. Right.
A lot of the work that's been done around data protection on the African continent was in response to the General Data Protection Regulation from the European bloc. It wasn't necessarily driven by the contextual need to protect and respect privacy and data within African states; it was really: if you want to hold our people's information, show us your data protection policy, and then we can talk.
Clearly, when we think about policies for emerging technology, there is a need to locate the why. If we are going to have policies and regulations in place that respond to harm and to the issues that need to be addressed, they really need to take on that multistakeholder approach, pushed by Civil Society — as we have seen with online gender-based violence policies put into current law. That happened because there was a need to recognize that the online space is not an equalizer, and that all of the current policies we have around gender-based violence also need to recognize the online platform. That's how we saw non-consensual image sharing put into policies connected to cybersecurity and data protection.
Regulation is important; the question becomes where it comes from, and how to ensure that it does not replicate harms, as we have seen with the facial recognition challenge.
>> LIZ OREMBO: Apologies for that. Now I'm becoming the Chair.
So, responding to why we don't have some kind of international treaty to bind technology companies to respect Human Rights: I think there are some initiatives where tech companies actually come together under some kind of guidelines that they follow.
As for how they assess one another, from some examples I think these are kind of inadequate. This brings me to the question of who then holds them accountable, and if some of them, let's say, don't respect Human Rights, what kind of action should be taken, and who should take it? Again, these multinationals are registered in countries and follow country laws, and a country might not have a rights-respecting law, or might put the company in a position where it has to, let's say, enforce an Internet shutdown or other rights-abusing actions. Then what kind of balance should they follow? Because if they don't follow, let's say, a government request for an Internet shutdown, they will be deregistered; on the other hand, if they do follow it, they go against these treaties. Which comes first: the country they're registered under, or these other treaties?
I think the best approach is actually to work with intergovernmental organizations to come up with these laws, so that nationally there is a way to hold companies accountable as these treaties are domesticated. I think that's the right approach. There is no single solution.
>> CHENAI CHAIR: That's inspired.
>> SMITA V: We're playing musical chairs.
Thank you for that.
I'm really glad you mentioned the "who" — when asking for a multinational body, who are the people then? I live in India, and I'm always very scared of asking for more policies and laws. It becomes a chance to say, oh, we can sneak another surveillance measure in here, use this to control how people speak about us online.
I personally believe asking for legal reforms based on what exists right now is also a good place to start when working with governments. Also because in India, what they do — they're very lazy about it — is take offline regulation, copy-paste it into an online context, put it under the Information Technology Act and say this is the law for technology. It means that, you know, anti-sexuality laws about obscenity and such things are replicated in the legislation on Freedom of Expression online. It is a big problem.
We should hold on to the European words for that.
The last question to the wonderful speakers is this: when we talk about advanced technology, sometimes we forget that a lot of work has already been done — not everything has to be reinvented; the wheel doesn't have to be reinvented. Based on that, what are some thoughts, learnings and principles we can take forward from ancestral technology, and from the work we're currently doing on rights and technologies? How do we embed pleasure, care and rights into how we're reimagining and thinking about emerging technologies? I think sometimes, particularly for women, queer people and Persons with Disability, technology is seen as only for practical use: you use it for examinations, for applying for jobs, for doing your job. We forget that the Internet is not only for practical purposes; it is also for pleasure, for joy, for connecting. Right. It is for talking. How do we make sure that we hold on to these values and thoughts? What are some of the things we want to carry forward when working on emerging technologies?
Can I come to you, first?
>> SRINIDHI RAGHAVAN: Something I was smiling and thinking about is the whole thing happening with Twitter. We could see a lot of the disability community on Twitter — large groups of people who are disabled and organize online — feeling really upset and scared about what it implies for community for many of us, especially because other forms of organizing often may not be accessible; we may not necessarily be able to have in-person meetings and engage in things that are more possible online.
One of the things that really stands out is that, while the Internet and emerging spaces do provide for different forms of community and organizing, often when we imagine the conjunction of disability and technology, the premise is almost always to fix the disabled person — to make them as non-disabled as possible so that they can exist in society. The attempt is always to use technology to make that possible. Right.
One of the things that is helpful for us to think about is where that premise comes from and what it means — how we use technology to meet certain ends while remembering that at the core there are still people, and asking how we enhance community and care. Just to give a small example, there is a lot of AI now being deployed on images, especially on Meta platforms like Facebook and Instagram, that automatically creates an image description for blind and low-vision screen reader users. It is a way of making that space accessible, and it has its own benefits and learnings.
But it is also about the fact that we are in many ways trying to replace the human aspects of that — of making spaces accessible, of creating spaces where we are all welcome and present. I think it is important for us to think about why we would like diverse societies where everybody feels happy and welcomed. It is not just so that they can participate in society better, which is a great end, but also so that we can all be together, learn together, enhance our understanding of each other and grow together. It is about thinking of technology not as a replacement, or as a fix for a problem of some kind, but as a way to build more collective spaces for us.
>> SHOHINI BANERJEE: Thank you.
>> SHEENA MAGENYA: I like that we're ending this conversation by shifting the focus from what's wrong — the terrible things, the ways we're suffering — towards what we want more of. I think in Civil Society spaces, especially feminist ones, it is something we do a lot: we try to shift the focus away from "this is all that's horrible" towards "this is what we want more of" in the different spaces.
I think it is also important to remember that technology and innovation are not new to the human experience.
For as long as human beings have existed, we have been innovating; we have been developing technologies. I think that the speed with which current tech shifts and changes, and the uptake — a new thing comes out and everybody is on it — is testament to this: there is an inherent need to want to do more, with more people, better. But the issue then becomes how this inherent interest in participating in the development and use of technologies is manipulated by corporates and states towards limiting people's access and their ability to participate actively, as who they are, in these different spaces. I guess that's what brings us to this space, to ask: where is the choice? Where is the option to participate only so far in a particular technological advancement? Yes, we love tech, but what if I only want to participate so far? Where is the right to choose how far I want to participate in particular spaces, and where is the agency of my being in these spaces to be able to do that?
Also, I would be remiss if I didn't take the opportunity to talk about young women, especially young women and young queer, gender non-conforming people — because this language is not going to be used in many spaces, we're taking every opportunity to use it on this panel. They take up space, and face incredible amounts of violence online simply by posting a TikTok video of themselves dancing, posting a joke, posting ideas, posting just about anything on the Internet that doesn't align with what a very patriarchal, old-fashioned society decides is how we're supposed to live as good people, good women, or however people are supposed to identify online.
Young people are truly at the battlefront online, yeah. And sometimes they are fodder: the attacks are fast and furious, there is no solidarity, no recourse, nowhere to go to find voice and support. It is important to say: we see you. And it is important for the rest of Civil Society to recognize that they are defending an inherent freedom for all of us. Whether or not you agree with what a lot of young people are doing online, there is an inherent freedom being protected in these spaces that it is important to talk about.
>> SMITA V: Thank you so much.
>> LIZ OREMBO: Thank you.
Thank you for making me Chair again.
A lot has been said about tech for public good, and I would like to emphasize that. We have talked about the drivers, things like that, and I think we should often start by interrogating the drivers of bad tech as well. Of course the economy — the political economy — has to be researched more widely, because profit-making is a big motive.
There is another driver: tech as excitement, innovation as excitement. There was a time when we had IoT, the Internet of Things, technically connecting basically everything, including pencils; if it had been practical, you could have connected food too. Why don't we have these technologies? I think it is because the market didn't receive them well. Who is the market? It is us. If we understand our rights better, I think we'll be able to drive what we want through what we buy and what we use.
>> CHENAI CHAIR: Adding on to everything that's been said — it is exciting to emphasize feminism, queerness, patriarchy; all of this is important. Building on the drumming, it is really that importance of looking at innovation and technology as a space where we can have fun.
Two examples on the fun. When people looked at how people of the global majority were making use of the Internet, they found that they were using the Internet to date. They had access to the Internet, going on websites, finding love. From the academic space there was a lot of, oh, wow, why are you not using the Internet to get a job, because of how the Internet's utility was framed. We found people have innovated and made Tinder and others because they recognized the pleasure aspect of the platforms and created and innovated in response.
So that's one example.
Another example: when TikTok came, a lot of Facebook folks, a lot of existing technical companies, thought TikTok wouldn't last. A, you didn't need to sign up on the platform, and you still don't, to see the content that was there. And you could create content from the beginning, where you could do anything; I know there is a problem with the content creation algorithms now. They thought it wouldn't last with this business model where we don't capture you on the platform. What did we see? Instagram tried to recreate TikTok. At the end of the day, it is really important for us, especially in the global majority, where a large amount of funding and innovation drive responds to the Sustainable Development Goals, where you think of how you're going to solve issues for the grandmother in rural areas: my grandmother just wants to have fun also, in the words of Cyndi Lauper. At the end of the day, I'm happy we're ending on the note of pleasure, thinking of how to ensure that there is increased participation of people who are aware of their digital rights and doing it from a stance of, I don't want you to steal my joy; how do I make sure that the innovation you're giving me allows me to have as much fun as I can and, at the same time, call out my member of parliament about a loan I don't agree with?
Those are my words.
>> SMITA V: A friend of mine says any app can be a dating app if you try hard enough. It is on that note.
On TikTok, I'm glad you mentioned that. India has banned TikTok right now.
What is important is that when TikTok was still allowed, it really supported whoever was creating the videos. You would suddenly see people from lower economic classes, people who don't speak English, creating content. They're just enjoying, having fun, right? When it was banned, there was no uproar about it; it was funny. If Twitter had been banned, there would have been an uproar in the country. That tells you how we think about Freedom of Expression and the intersections it has with class and caste and language. Right.
Why was TikTok not seen as a space of Freedom of Expression when Twitter is seen as that? That's an important question for us to reflect on. And this is what I would take away from the work that has been happening so far and from ancestral technology: I think we have to be really careful that we don't create a hierarchy of what is good, essential Freedom of Expression and what is not. If we create this hierarchy, political Freedom of Expression, everybody says, is important, we must protect this; religious Freedom of Expression must be protected; but sexual Freedom of Expression, how do you send nude photos and call it Freedom of Expression? But it is. That's what I carry forward in whatever technology we are bringing forward: you know, you need bread and roses. Don't just ask, does this technology help you live better? Is it allowing me to live happier? Helping me to connect with people? That's what I would carry forward.
Thank you for being here in this room. I know that we are about 6 minutes over time. We started late.
Thank you to the panelists and speakers.
I know we're very close to lunch time. If there is any burning question, comments that anyone wants to share, we can take 2 minutes for that.
>> AUDIENCE: Thank you.
My name is Florence. I'm a fellow for Persons with Disabilities in an action network. My comment relates to the disability issues discussed: Persons with Disabilities are willing to be involved in the designing of new tech from the design stages, so that the output is something that works for us. Otherwise, we risk once again losing our independence and going back to social exclusion, which is something we have worked so hard against.
>> SMITA V: Thank you so much.
>> AUDIENCE: Good afternoon.
I'm with the South African Human Rights Commission. I'm joined behind me by colleagues from the Uganda Human Rights Commission. Thank you for the discussion, we stepped in a few minutes late as we ran over from another discussion.
Just to highlight the role that national Human Rights institutions play: there is a lot of discussion around Civil Society, and NHRIs are a key interlocutor between Civil Society, the public, and the government. There are 86, rather 83, A-status NHRIs, which means they have speaking rights at the UN Human Rights Council. I would really encourage tech companies, policymakers, and government representatives to ensure that there is a seat at the table for NHRIs. We do engage on the ground; many of us have an equality portfolio dealing with issues around Persons with Disabilities, older persons, Human Rights, LGBTQI+ categories. In relation also to the comment made about a treaty, a possible tech treaty: these kinds of mechanisms take so long, at least a minimum of five to eight years, if not a decade or longer, to get a treaty off the ground. We need short-term solutions.
What I can say is, perhaps there are ongoing discussions around looking at Human Rights, and perhaps that is an entry point. There is currently a forum happening; I found it bizarre that it is happening at the same time as this, because it was a pull of which forum to go to. But we have to look at how to do this with technology and at the same time bring in the NHRI voices who are engaging and can give a lot of insight from a different perspective. And they can use the platform if we're looking for some kind of traction at a UN, international level; the NHRIs have a voice to speak on the agenda items at the Council level. That could be a very good route to elevate your country's issues, the issues that you want to bring to the attention of the Council, or any of the thematic mandate areas.
>> SMITA V: Thank you so much.
Thank you for your comments. I'm really glad we could hear you.
Okay. Let's close the session and go to eat. I hear rumbling stomachs. Thank you very much for being here. See you around.