IGF 2024-Day 0-Workshop Room 9-Event 174 Human Rights Impacts of AI on Marginalized Populations

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: All right, good afternoon, good evening, everyone.  We're going to get started.  Forgive us for some technical difficulties.  Can everyone hear us?  Hopefully, everyone can hear us, as well.

Thank you all for joining us both in-person and online.  I'm going to try to talk as loudly as I possibly can, because I know it's been a bit challenging for folks online today to hear every session.  It's my pleasure to be here on behalf of the United States government where I serve as Deputy Assistant Secretary of State for our Bureau of Democracy, Human Rights and Labor in the State Department.

Before we get started in today's session, we have the esteemed honour of welcoming virtually a number of our special envoys and representatives from the U.S. government, representing various different marginalized populations and they wanted to send their greetings, as well.

>> AI offers incredible potential to advance equity by increasing access to healthcare, education, and economic opportunity for those who need them the most.

However, too often, marginalized populations bear the worst harms of AI.  There's vast evidence showing how AI systems can reinforce historical patterns of discrimination that disproportionately impact people of African descent, indigenous people, Roma people and other marginalized racial identities.  And the risks of harms are the most pronounced for the people who experience multiple and intersecting forms of discrimination.

>> That's right.  AI tools are aiding the creation and dissemination of technology-facilitated gender-based violence, especially against women and children.  This especially pernicious form of harassment and abuse is already threatening the ability of women and girls to participate in all spaces, online and offline, and has grave consequences for democracy.

>> Yes, computers might be binary, but people are not.  Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically.  However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals' lives.

Nuanced data about LGBTQIA+ people, with appropriate privacy protections, can help ensure that recommendation algorithms governing, for example, our shopping habits or the content we consume on social media don't entrench harmful social stereotypes or censor this beautiful diversity of humanity.  AI is an exciting set of technologies that has the potential, across all sectors, to help us consider and integrate diverse perspectives.

Humans are the future of work, and freedom of association and collective bargaining are central to safeguarding workers' rights and standards amid the rapid expansion of AI technologies, including and in particular for marginalized populations.

Unions play an essential role in advocating for practices that can increase the meaningful representation of women and diverse groups of marginalized populations in AI.

They advocate for safe work environments, limiting invasive and unsafe workplace monitoring.  They ensure fair employment practices, secure equitable compensation and ensure the benefits are shared.

>> AI is a reality of our present and future.  But what is also a reality is that, a lot of times, AI is built in a way that leaves us behind, and when I say us, I mean the disability community.  We need to ensure that AI, in its development, design, testing and implementation, is accessible for everyone, including the disability community, and not just on the specific technology side of things, but for all technology.

>> Thank you very much to all of our special representatives and envoys.  I think as you heard here at the start of our session, our shared goal in the United States government is really to harness the opportunities of artificial intelligence, whether that be on economic growth, increased access to quality education and advancement in medical care while mitigating the risks.

And we know all too often, some of AI's most egregious harms fall on marginalized populations and those experiencing multiple and intersecting forms of discrimination, including algorithmic biases, increased surveillance and online harassment.

We're witnessing, around the globe, an unfortunate trend of governments misusing artificial intelligence in ways that significantly impact marginalized populations, such as through social media monitoring and other forms of surveillance, censorship, harassment, and information manipulation.

To counter these abuses over the last four years, the United States government has taken several steps to encourage safeguards at the national level.

We've introduced a number of executive orders and memos into our government's systems to safeguard the development, deployment, and use of artificial intelligence for human rights.

And at the international level, we're working closely with the Freedom Online Coalition and other key like-minded partners through the U.N. and other multilateral systems to lay the groundwork for continued international and multistakeholder collaboration for years to come.

But there's more work to be done, and that's where today's discussion really comes in.  The only way that governments can work to ensure that marginalized populations aren't disproportionately harmed by technology advancements is in partnership with you all around the room, around IGF and those that are online.

We're focused heavily on creating safeguards in our systems so that we do not see the dissemination of disinformation and harmful synthetic imagery that can hurt populations, and on addressing how AI systems can exacerbate existing real-world divides and reinforce stereotypes that further stigmatization, especially when these systems are not accessible to all their users.

So, we're quite fortunate today to be joined by an esteemed panel of experts who have gathered to fight back against these worrying threats and trends.

We have a number of our panellists online and we're fortunate to be joined here in the room by two of them, as well.

First, we'll hear from Dr. Nicol Turner Lee, a leading voice on the intersection of race, technology and the digital divide, and a recognized expert on issues of digital equality and inclusion.

Her work ensures that all communities, particularly marginalized ones, benefit from technology advancements.

We're also joined by Amy Colando, a lawyer with deep expertise on the intersection of technology and human rights.  As the head of Microsoft's responsible business practice, she leads a team dedicated to advancing Microsoft's commitment to human rights norms and a responsible value chain that respects and advances human rights.

Our friend and partner is a globally recognized lawyer and advocate for women's rights and digital privacy.  She's the founder and executive director of the Digital Rights Foundation, which focuses on issues of online harassment, data protection and digital security for women and marginalized populations in Pakistan, and she is a member of Meta's Oversight Board.

And we have a prominent human rights advocate and researcher at -- her work has highlighted the systemic discrimination faced by LGBTQIA+ individuals and her efforts have been instrumental in bringing international attention to these issues and pushing for legal reforms.

So first, I wanted to start out by setting the scene in terms of the risks and opportunities that come from AI and the threats to marginalized populations.  Let's kick things off with Nicol.  I'll turn to you first.  Some of these issues have benefited from extensive international conversations, from the recognition in the engineering community over the past decade that it is critical to address harmful biases in AI, to efforts to curb the misuse of artificial intelligence and generative AI tools for image-based sexual abuse.

Help us set the stage.  Where do you think important progress has been made over the past several years?  And what are current challenges that you think need to be addressed or elevated on the agenda, particularly as we're all gathered here this week at IGF to address critical internet governance discussions?

I think it's really important that you help us think a little bit thoughtfully about where the current gaps and opportunities exist that we can leverage.

>> Thank you so much for the kind introduction and also thank you to the IGF for hosting this conversation.

Before we start, though, I do want to say -- (Audio echoing) Disconnect between opportunities of technology and those who are marginalized or impacted by it.

So, I'll lean into this conversation on where we have seen some opportunities and where we've seen challenges.

And in particular in my few short moments answering this question, I do want to point out that one of the opportunities that has become most prominent is our ability to engage in artificial intelligence given the distributed compute power that we have.

I think it's really important to have that in this conversation, because it also lends itself to -- what we are seeing is the ability to distribute networks, because we are building compute power that has capacity -- 30 years in terms of technology and accessibility by people of colour in particular, and we have not seen this kind of distributed network evolve as it has today with chips and power.

The other thing that has been an opportunity of AI has been the way it's been integrated into a variety of verticals.  At the Brookings Institution, we started an AI Equity Lab that allows us to workshop journalism in AI, healthcare in AI, criminal justice in AI.

And while we do that, by putting the name of the sector and then the AI, what we've seen is an incredible influence of technology tools on these verticals that in essence determine quality of life, on the social welfare side as well as the economic opportunity side.

And so, I think we've come a long way, for example, in healthcare.  We're actually seeing personalized medicine.  We're seeing more efficiency among doctors when it comes to personalized medicine and the management of healthcare.

We're seeing a lot more contemporaneous action and quick reaction -- we saw that during the COVID vaccine development -- where discoveries that would have taken a very long time are now happening through AI.

I think another area where we've seen a lot of promise has been in climate, where we're able to use drone-enabled surveillance to look at where we have thermal outputs that pose a potential danger of natural disasters or wildfires, while also seeing agriculture -- many of these are very intersectional -- and the ability to look at climate as it relates to planting times or crop development.

So, I want to put that out there, because I often sound like a pessimist, which I will sound like now when it comes to AI and marginalized communities.

So, while we've seen the efficiency growth first, one of the areas where we're seeing a lot of bias, as has already been indicated, is when it comes to -- (Audio echoing) I'll close with a couple of thoughts.

Obviously, there's demographic bias.  In the United States, that demographic bias is defined by race, ethnicity and gender, which have become more of a human rights concern.

In other countries outside the United States, class has found its way into the demographic biases, and both inside and outside the United States, geography has become a bias.  Where you live, who you are and what you do matter, because they are reflected in what we would call, at the Brookings Institution, the traumatized nature of the data training these models.  It comes with those historical biases, and those historical biases are often traumatized, meaning that if there are systemic inequalities that point to unequal access to education, for example, they will show up in the training data and, as a result, have a consequential outcome of either greater surveillance or less utility for students in the impacted category.

The other area where we actually have challenges is not just who's commoditized by AI, but who's creating it.  The lack of representation of those who sit at the table to design models, compared to the people who are impacted by them, creates, I think, an imbalance of power that has consequences that can foreclose on the economic and social opportunities that AI models can provide, the ones that I just spoke about.

For example, when we think about who is developing models for the health of Black women, people may not understand that the lack of participation of Black women in clinical trials may mean that they do not show up in training models, particularly when it comes to breast cancer diagnosis.  This was actually recently put out by the Journal of the American Medical Association: Black women are disproportionately impacted by breast cancer, and their data is not represented.

That actually shows up in AI because AI is not divorced of the market-based data that is actually training these systems.

The other thing when it comes to the challenges that we have with AI is the fact that as has been mentioned and as my book suggests, we have a digital divide.  We're creating AI systems, and, in many respects, we haven't closed the accessibility divide.  That creates its own set of challenges as to who will be able to benefit.

And when we think about the global majority, we do a lot of work at the Brookings Institution on how these systems show up not only for marginalized populations in the U.S., but all over the world.  In the African Union, for example, we know there's a digital language divide: generative AI is primarily English-based, and it is not necessarily trained on the plethora of languages and dialects that come out of a variety of global majority countries.

As a result of that, we see challenges when it comes to representation not only in training data, but whether or not populations actually see themselves in these tools, particularly generative AI that is meant and designed to be a lever for economic and social mobility in those areas.

The rights of workers -- who is taking those jobs to annotate the data -- I could go on and on, but there are so many structural, behavioural, as well as output or consequential outcomes that occur when you don't have the right people at the table, and when we continue to commoditize marginalized populations as the subjects who fuel the AI models that we're developing.  We don't interrogate these models.

And I'll just say this, we don't interrogate them for bias, or whether or not they should be used at all, or decisions should be automated in the first place.

I will stop here and look forward to this conversation.  Hopefully, I gave you enough to talk about as we go into the next speakers and thank you so much for having me.

>> MODERATOR: Thank you so much, Nicol.  I think you did a phenomenal job first and foremost plugging your book, which I encourage everyone to buy, but also, both laying out the real tangible opportunities that we see from AI, everything from journalism, healthcare, addressing the impacts of climate change, and then laying out in detail some of the tremendous risks that we see for marginalized populations.

You addressed issues around the accessibility divide exacerbating -- in our societies through use of big data.

You talked about who gets a seat at the table in the design, deployment and use of these technologies and beyond.

So I next wanted to turn to you.  Your organization has really been on the front lines of documenting some of the risks that Nicol just laid out -- the exact risks to marginalized populations, whether that be women and girls -- and you've done a lot of work on impacts to human rights defenders and religious minorities.

I'm hoping you can build off the broader risks laid out and give us some tangible examples of where you've seen both the benefits and risks of AI tools for marginalized populations, and then, because we do have many different stakeholders at the table this week in the IGF sessions, whether from government or the private sector, where you think there are gaps that require more attention in our international discussion.

>> Okay.  So at Digital Rights Foundation, we have been doing a lot of work around addressing tech-facilitated gender-based violence, and I feel that talking about AI or AI tools is an extension of what we have been talking about for years around digital tools and digital rights.  All the harms that we are now connecting with AI are an extension of those harms; with the usage of AI, they have become more sophisticated and advanced.  That's the same case with tech-facilitated gender-based violence, where we are now seeing how deepfake images of women and young girls are actually creating more risks for them, specifically when they are from regions and cultures which are more conservative, where the honour of families or the society is connected to women's bodies.

And one challenge that we are witnessing is determining whether these deepfakes are actually real or not -- that was not the case before AI-generated content when it comes to images and videos.

I think another challenge is that, in regulating this space, tech companies really have to do a lot.  I'm sitting on Meta's Oversight Board, and we actually framed our own experience as a board in terms of what companies like Meta can do to deal with automation on their platforms.

When it comes to governments, there's a huge gap in governing AI; even while sitting on the U.N. Secretary-General's high-level body, these conversations are very much concentrated in some global north countries.

And in the past, we have seen how technology that is developed, designed and built elsewhere is mostly just dumped in our regions, and we have no say in how these technologies are designed for the marginalized groups in our regions.

That is exactly the case with AI tools, as well.  There are some benefits -- AI is also being used in healthcare and in monitoring climate change, and AI-powered translation tools are breaking down language barriers for marginalized groups -- but I feel that all these opportunities are still connected to the entire cycle of how AI is developed, processed and deployed.

I think there are a lot of things to say, but there is a huge responsibility on AI companies, on tech platforms where all these harms are being increased.

How can we bring more accountability and oversight when they are framing these regulations without including civil society voices, and without having a conversation on human rights violations when it comes to AI tools?

>> MODERATOR: Thanks so much.  I think you raised a really important point that I suspect we will have a lot of additional conversations this week at IGF about, which is if we don't protect this multistakeholder model of internet governance, a multistakeholder model of conversations around the regulation and governance of AI and emerging technologies, then we will be missing an entire part of the conversation, which is how are these tools being deployed and used in ways that are impacting the whole of society, not just the governments and the people representing them?

I think that's a good pivot over to you, Rasha, as you've done a lot of work looking at the impact of AI tools and government misuse of these technologies, and I know you've done an incredible amount of work documenting the ways in which autocratic governments have used technologies to repress marginalized populations, particularly LGBTQIA+ persons.

I'm hoping you could share a little bit of insight on how policymakers and AI developers should be thinking about these issues in relation to the governance and regulation of artificial intelligence, reflecting on the years of research that you've done.

>> Thank you so much.  Thank you for having me today.  In 2023, we published a report on the digital -- (Audio echoing) -- particularly in Iraq, Lebanon, Egypt, Jordan and Tunisia.  What we found is that governments are using monitoring tools, usually manual monitoring rather than sophisticated tools, to target and harass LGBTQIA+ people.  The significant finding that we had is that these abuses do not end with the instance of online harm; they are not transient, but reverberate through individual lives in ways that often ruin them entirely.

In our report and in our follow-up campaign, which we published in 2024, we urged technology platforms in particular, such as Meta's platforms, Grindr and other same-sex dating apps, to address the issues related to content moderation and to the biases that facilitate and allow these abuses to take place, especially when these tools are in the wrong hands.

So especially when they are exploited for malicious purposes, such as government-targeting of LGBTQIA+ people in contexts where they already face criminalization, whether it's the direct criminalization of same-sex relations or other laws, such as cyber crime and indecency and debauchery laws that are used to target LGBTQIA+ people, simply for expressing themselves online.

In developing this work, I also want to acknowledge that we are building off of work that Article 19 has done for many years on this specific issue, as well as the framework that was introduced there, Design From the Margins.

Specifically, in technology and AI systems, that means being able to design technologies with the interests and rights of the most marginalized in mind.

In some of the recommendations that we aim for, we really want to strengthen protections, while acknowledging that technology can also be used for malicious purposes.  There are many ways that regulations and addressing biases in algorithms, for example, can help mitigate some of these abuses that take place offline as a result of online targeting.

For example, AI systems often amplify historical biases embedded in the data that they are trained on, which leads to discriminatory outcomes for LGBTQIA+ individuals.  To mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when they are deployed in public-facing tools.
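To make the idea of a bias audit concrete, here is a minimal illustrative sketch of one common check, comparing a classifier's false positive rate across demographic groups.  The group labels, records and disparity measure are hypothetical, invented for this example, and do not describe any specific system discussed on this panel.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false positive rate of a binary classifier.

    Each record is (group, true_label, predicted_label) with labels 0 or 1.
    A false positive is a prediction of 1 when the true label is 0.
    """
    false_pos = defaultdict(int)   # false positives per group
    negatives = defaultdict(int)   # actual negatives per group
    for group, truth, pred in records:
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical audit sample: (group, true_label, predicted_label)
audit_sample = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
rates = false_positive_rates(audit_sample)
disparity = max(rates.values()) - min(rates.values())
print(rates, "disparity:", round(disparity, 2))
```

A large disparity between groups does not by itself prove discrimination, but it is exactly the kind of signal an independent audit would flag for deeper human review.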

Incentives for inclusive algorithm design that incorporates the input of LGBTQIA+ advocates and civil society experts should be central to requiring and enhancing systems that better protect the most vulnerable users.

When it comes to content moderation systems, we saw and documented that automated systems frequently misidentify LGBTQIA+ content as harmful or inappropriate, especially in languages other than English, such as the many dialects of Arabic, as we found in our reporting.  This inadvertently silences advocacy around LGBTQIA+ rights, especially in contexts where advocates, activists and community organizers resort to technology in order to empower, connect and build community around their rights, when public discourse and any offline organizing around gender and sexuality is either prohibited or could lead to criminalization and arbitrary harassment of these activists.

So, particularly in content moderation, moderation algorithms must be trained on inclusive data sets that recognize the diversity of LGBTQIA+ discourse; human oversight must be incorporated, particularly for sensitive content, to ensure a nuanced understanding of that content; and, finally, appeal mechanisms must be established that give users an effective remedy to challenge automated moderation decisions that unfairly remove LGBTQIA+ content, or that leave content online that could be harmful and lead to the arbitrary arrest, harassment, torture, detention and other abuses of LGBTQIA+ individuals that, as I said before, reverberate throughout their lives.

Finally, I definitely think that this should happen with the privacy and data security of individuals in mind, enforcing robust data protections that allow for penalties for the misuse of sensitive data, especially when it comes to the outing of LGBTQIA+ individuals on public platforms, online harassment, doxing, and the resulting discrimination and violence that people face offline in their daily lives.

As I said earlier, centring voices in the design of AI tools is extremely important to understanding the unique needs and challenges of LGBTQIA+ individuals, and platforms must prioritize the creation of inclusive digital spaces that actively counter the discrimination and harassment that can happen in tandem.

Human rights impact assessments are extremely important.  We already know that comprehensive evaluation of the risks associated with content moderation, government surveillance and other issues is incredibly important in informing the changes to and upgrading of these tools to be able to safeguard the human rights of those most impacted by these technology-facilitated harms.  Establishing accountability, both for governments and for developers, and grievance mechanisms for individuals and groups affected by AI-driven decisions, is central to beginning to address these harms and their offline consequences across the globe.

Thank you.

>> MODERATOR: I think you gave us some really tangible recommendations -- bias audits, human rights impact assessments, grievance mechanisms -- and a number of the recommendations you raised are actually expectations set out in the U.N. Guiding Principles on Business and Human Rights.  Earlier this year, the United States government led, and the full U.N. General Assembly agreed to, a resolution on safe, secure and trustworthy artificial intelligence, which encourages and calls for increased implementation of the Guiding Principles.

So certainly -- in terms of expectations both for governments and for private industry, the private sector.  And I think that's a good pivot over to you, Amy.  We've heard some really tangible recommendations that Rasha has laid out, building off of some of the risks that both Nicol and Nighat outlined, and I'm hoping you can share a little bit of self-reflection from Microsoft's perspective: what do you think companies should be doing more of to mitigate the harms that have just been laid out by our speakers?  And are there particular steps that we should be taking as an industry to help promote these steps or actions?  I think that would be quite helpful, as well.  Over to you, Amy, and thank you for joining us.

>> Thank you so much and thank you so much for having me -- (Audio echoing) I'm going to keep on talking and we'll see if it works out.  Thank you so much for inviting me; I'm learning a lot already from our engagement.  These multistakeholder conversations are incredibly important to shine a light on our practices and to help us think of additional steps we can and should be taking to deliver on the promise of AI.

So let me start by sharing some examples from Microsoft, with the understanding that these are just examples, and that this kind of stakeholder process is incredibly important in terms of getting feedback and scrutiny on areas where we can do better.

My team coordinates Microsoft's human rights due diligence, including human rights impact assessments, under our commitment to respecting human rights and providing remedies under the U.N. Guiding Principles.

That process includes, and is very intentional about, interviewing marginalized populations, and allows us to understand the needs of diverse groups among our users, our supply chain and our employees so we can enhance our respect for the rights of marginalized populations.

Turning to AI, we recognize particular areas of promise and potential, as well as particular areas that might exacerbate existing divides.

AI at its foundation requires infrastructure and connectivity and we have established our global data centre community pledge, which commits to building and operating infrastructure that addresses challenges and creates opportunities for communities.

This forms the basis of how we engage with stakeholders during all steps of the data centre process including -- (Audio echoing) In Australia, this meant weekly meetings over an -- practices into our design process.

Through that engagement, we introduced the project and gathered insights to help inform our data centre design, respecting our neighbours and the environmental resources around them.

Next, for the development and deployment of AI, Microsoft's Office of Responsible AI has partnered -- to bring a diversity of voices from the global majority to the conversation on responsible AI through our global fellowship program.

The fellowship program convenes a multidisciplinary group of AI fellows from around the world, including Africa, Latin America, Asia and Eastern Europe across a series of facilitated activities.

These activities, and the fellows that take part, are intended to foster a deeper understanding of the impact of AI in the global majority, exchange best practices on the responsible use and development of AI, and inform an approach to responsible AI.

To combat the societal biases in AI systems, we employ a variety of approaches and are constantly learning from dialogues exactly like the ones we're having here.

In 2018, we identified our six responsible AI principles, including fairness.  Our policies are designed to clarify how fairness issues may arise and who could be harmed by them, and we take active steps to implement them through controls and a code of conduct.

For generative AI systems, we've leveraged the U.S. National Institute of Standards and Technology's risk management framework to develop tools and practices to map, measure and manage bias issues, which include the risk of generating stereotyping and demeaning outputs.  We have made significant investments in red teaming to identify areas of harm across different demographic groups, in manual and automated measurement to understand the prevalence of stereotyping and demeaning outputs, and in mitigations to flag and block those outputs.

We look forward to working with governments, multilateral institutions and multistakeholder processes to continue to develop these frameworks, including through due diligence conversations, to help build a consistent and aligned approach to improving AI offerings and their potential to serve marginalized populations.

For our own generative AI services, we've established a customer code of conduct, which prohibits the use of Microsoft generative AI services for processing, generating, classifying or filtering content in ways that could inflict harm on individuals or society.  Customers must register for these services, a process that includes defining proposed use cases.  They may not use the services for other use cases, and we institute technical controls for abuse monitoring and detection.

The classifier model detects harmful text and/or images in user prompts (inputs) and outputs.  The abuse monitoring system also looks at usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse.  Detected patterns consider, for example, the frequency and severity with which harmful content is detected.

Prompts and completions that are flagged through content classification and/or identified as part of a potentially abusive pattern are subject to additional review processes to help confirm the system's analysis and determine actioning decisions.  We have a feedback loop with customers, and that in turn drives improvements to our own systems.
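As a rough illustration of how that kind of pattern-based scoring can work in principle, the sketch below combines the frequency and severity of flagged content into a single score.  The weights, severity scale and threshold are invented for the example and do not describe Microsoft's actual abuse monitoring system.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    severity: int  # e.g. 1 = low, 2 = medium, 3 = high, as judged by a content classifier

def abuse_score(flags, total_requests, frequency_weight=1.0, severity_weight=2.0):
    """Toy heuristic: combine how often content is flagged (frequency)
    with how serious those flags are (average severity, normalized to 0-1)."""
    if total_requests == 0 or not flags:
        return 0.0
    frequency = len(flags) / total_requests
    avg_severity = sum(f.severity for f in flags) / len(flags)
    return frequency_weight * frequency + severity_weight * (avg_severity / 3.0)

# Hypothetical usage pattern: 5 flagged completions out of 200 requests
flags = [Flag(3), Flag(2), Flag(3), Flag(1), Flag(3)]
score = abuse_score(flags, total_requests=200)
REVIEW_THRESHOLD = 1.0  # policy-chosen cut-off, hypothetical
print(round(score, 3), "needs human review:", score > REVIEW_THRESHOLD)
```

In a sketch like this, a pattern scoring above the threshold would be routed to additional human review, as described above, rather than acted on automatically.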

Finally, I would like to close on a theme that has been identified by my fellow panellists in terms of the need for more representative data, and to ensure that we are bringing forward marginalized populations to be able to see themselves in the promise of AI.

Recently, we identified that our generative AI services could be improved in terms of the representation of people with disabilities, a population of 1 billion around the world.  We then partnered with Be My Eyes, a service and app that generates -- to actually visualize items that they're looking at.

This license to the Be My Eyes content allows us to ensure and advance the representation of people with disabilities in our service.  I appreciate the opportunity to hear from others on the panel on how we can improve our services and continue to work with government and civil society to advance AI.  Thank you.

>> MODERATOR: Thanks so much, Amy.  I know I have a bunch more questions for you all.  We just heard from you about the work that Microsoft is doing to develop effective -- (Audio stopped) -- amongst industry in this space, and what we have heard is that there are real challenges in terms of developing effective safeguards.

You talked about the need to also ensure that we're not concentrating a lot of these discussions in specific regions or specific countries, or specific companies.

And I think all of you, across the board, talked a bit about ensuring that we have more representative data and recognizing that AI is exacerbating biases and discrimination that already occur in our society -- or, in the case of online harms like tech-facilitated gender-based violence, exacerbating the gender-based violence that already exists in our society.

I do want to go to the audience for questions, and, if we have time, it would be helpful for us to hear a little bit more about your recommendations on how we can overcome some of the challenges that we're seeing in developing effective safeguards.

But let me go over to the audience.  I know we have folks online right now; if I could just ask our IT friends to pull up any questions -- please put them in the chat.  And if there are any questions in the audience -- I see folks are having problems hearing as well, so hopefully you can hear us.

But if you have any questions, please put it in the chat and any questions in the room.

>> Thank you so much.  Amy just mentioned infrastructure and data centres, and I have a question.  As the U.S. government is integrating AI more and more into public systems, what is the government doing to ensure that patterns of environmental racism and issues with pollution and things that have affected marginalized communities in the U.S. will not be replicated with more and more AI use?

>> I can jump in.  I think that's a great question.  The type of power generation that's going to be required for data centres is definitely going to, in many respects, lead us into areas where there's either more land or less respect for the dignity of the land that some people have.

I like the way Amy talked about it with Microsoft -- and some values on where we decide the data centre goes -- because in the United States, the type of gigawatt power that is required not just to keep these systems operating, but to keep them cool, will have a disproportionate effect on communities that are communities of colour or indigenous communities, or communities in which -- we used to have this term a long time ago in economic development -- there are brownfields, where there's the possibility to go in and exploit the land for the type of potential nuclear power generation that would be needed for these data centres.

I urge more conversations on this, because it is an area that is becoming increasingly important as compute power becomes more distributed, and I hope we can find the same type of reputational accountability, as well as harm reduction, that we've spoken about today in terms of the models themselves and how we deal with physical infrastructure.

>> That was such an excellent comment.  Just recognizing the kind of continuing trends that we see.  In other words, it's not like AI is a brand-new issue.  There are many new aspects to it, but the trends in terms of power and discrimination continue.

Like many aspects of AI, I would say there's advantages and disadvantages.  We are using AI to develop new types of concrete that are less impactful on the environment.  We have our own sustainability pledge.  Other companies do as well, of course.

We are continuing to uphold our pledge on carbon outputs that we made prior to the advances of AI in the last couple of years and we'll continue to uphold that as we move forward and look forward to carbon-free sources of power.

>> And under the Biden administration, we have rolled out a number of new policies, executive orders and memos from the White House that are really focused on ensuring that as our own government is purchasing artificial intelligence systems, using automated systems, deploying AI in different ways, and also providing AI to other governments, human rights is a core element of the risk assessment that we're doing, and that is a component in a lot of the new actions and regulations that we have rolled out.

One of the things that I will note is that we are currently working in the Council of Europe, as governments, on a new convention on artificial intelligence, human rights, rule of law and democracy.  This is the first-ever legally binding treaty on artificial intelligence, and one of the key things that that process is doing is building out a risk assessment framework that has human rights at its core.  So, as governments, we have a framework that we can actually look to that helps us assess what the risks are, whether that relates to environmental rights, environmental defenders, or other fundamental freedoms such as freedom of expression -- that is core to everything that we're working on.

This is a key piece of a lot of the work we're doing as it relates to safe, secure and trustworthy AI in the U.S.

And I want to make sure we're not missing those.

>> Thank you very much for an insightful discussion.  My question is specifically around AI use in the military and in war.  Around the world, we've seen increasing use of AI and facial recognition technologies in conflict and in war, but we're seeing that a lot of these conversations totally skip over the use of military AI, which has acute human rights impacts.

I'm wondering what can governments and companies do to have more conversations around military use, and what safeguards they can put in place?  Because currently in conflicts, we're seeing bad consequences for civilian populations.

>> Thank you.  I'm with the Meta Oversight Board.  I bet you we will get the answer that you do all these checks, human rights impact assessments.  Our challenge here is transparency.  So what is preventing you from publishing at least a portion of these reports so people who are affected by AI technologies, especially either clients of Microsoft -- (Audio cutting out)

>> MODERATOR: Maybe I'll turn it over to the panellists first.  But we had two questions.  One is how do we better address AI use in military settings with recognition that quite often as we're having conversations around safeguards, around automated systems, we're excluding the defence sector from those discussions.

So, what more could we be doing there?  And second question in terms of transparency reporting.

And I saw another question back here, maybe I'll turn over to you online first, colleagues, if anyone wants to jump in.

>> Sure, I can jump in a little bit.  And this is an area on which I welcome feedback, because one of the cornerstones of how I think my team operates is commitments to accountability and transparency in terms of how we uphold Microsoft's responsibility to respect human rights.

At the same time, of course, there are confidentiality commitments to our customers, and those commitments are the same regardless of the customer.  Let me put that out there as part of how we operate.

I mentioned briefly during my opening remarks that we designate certain of our AI services as potentially sensitive AI services, including facial recognition and voice.

For those services we do require defined use cases, regardless of customer.  And we review those defined use cases against our own responsible AI commitments which are grounded in respect for human rights.  We are endeavouring to increase transparency.  So, for example, during the last year my team worked directly on updating some of our transparency around data centre operations and the types of services we offer in data centres.

But there's more we can do; more we can do as an industry and more we can do in terms of the kind of industry-accepted level of due diligence.  I think that could be enormously helpful so there's this floor rather than a race to the bottom.  It's a race to the top in terms of how private sectors can work with government and with civil society to ensure that we're upholding universal human rights.

>> With regard to that question, one of the challenges that we have with AI is that we have a militarization problem when it comes to human rights and civil rights.

I like the way that the audience member talked about this: the integration of a variety of technologies embedded for the -- we're seeing facial recognition embedded into other AI-enabled technologies that are being used for force.  We're seeing less accountability and transparency around that integration in many respects.  And I think for the United States in particular, and other countries that have an ongoing AI race with China, this creates certain vulnerabilities and national security concerns that we have to pay attention to.  That's the first thing I want to say.

The other thing I think is really important -- and I love the way we're talking, particularly at the United States government, about integrating diplomacy with human rights and AI security -- is something I once heard someone say and will share with this group: that in the absence of data privacy or an international data governance strategy, we actually also contribute to a national security concern.

So, not handling data privacy in the ways Rasha spoke about really lends itself to greater militarization, because it allows governments to obstruct the type of transparency and accountability we need when it comes to these types of systems.

So, I think we probably are going to see a shift to more national security conversations in the United States.  The national security memo is an example of that.

And I think across the world -- I was just in Barcelona at the Smart City Expo -- there are a lot of conversations about the embedded militarization of everyday tools and how that can be reversed.  It's a conversation we need to have, and one the U.N. needs to commit to.

>> MODERATOR: I will just say on the really important question in terms of how we address use of automated systems in our military apparatuses and not just use, but also development in design.

There are two things that we're working on, at least in the U.S. government context.  First, we completely agree with you that we can't -- (Audio frozen) (No audio) Test, test.

Apologies, all.  First and foremost, I think we agree with you on the importance of these conversations.  It's why we started a political-military declaration to actually start a global conversation on the use of AI in the military, and we would encourage governments that have not yet joined that declaration to do so -- not just because the declaration itself is important, but because the policy conversations are -- and we're happy to talk to any governments that are here at IGF and beyond.

The second piece is our national security memorandum on AI use in our national security systems.  We fully recognize that we have to look at everything, from the human rights impacts of AI to how our own government is designing and deploying AI itself.