IGF 2022 WS #240 Pathways to equitable and safe development of AGI

Time
Wednesday, 30th November, 2022 (12:00 UTC) - Wednesday, 30th November, 2022 (13:30 UTC)
Room
CR2

Organizer 1: Veronica Piccolo, Internet Society Youth SG
Organizer 2: Bruce Tsai, Internet Society
Organizer 3: Theorose Elikplim Dzineku, Ghana Institute of Journalism
Organizer 4: Nicolas Fiumarelli, Youth IGF Uruguay
Organizer 5: Puteri Ameena Hishammuddin, Malaysia Youth IGF, Internet Society Malaysia Chapter

Speaker 1: Umut Pajaro Velasquez, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Bear Häon, Technical Community, Asia-Pacific Group
Speaker 3: Joanna Mazur, Civil Society, Eastern European Group
Speaker 4: Oarabile Mudongo, Civil Society, African Group

Moderator

Theorose Elikplim Dzineku, Civil Society, African Group

Online Moderator

Bruce Tsai, Civil Society, Asia-Pacific Group

Rapporteur

Stella Anne Teoh Ming Hui, Civil Society, Asia-Pacific Group

Format

Panel - Auditorium - 90 Min

Policy Question(s)

Q1 - Promise and perils of Artificial Intelligence. AI is seen by some as one of the most strategically important parts of the third-millennium economy, possibly warranting massive private investment and state support. How can ethically responsible AI developers and policymakers ensure that their innovation and regulations capture the needs of all stakeholders globally, and contribute to shared prosperity rather than exacerbate existing inequalities?
Q2 - Uncontrolled pursuit of Artificial (General) Intelligence. How will socio-economic, geopolitical dynamics, and historical factors affect the design and deployment of AI technology?
Q3 - Redesigning AI governance. There have been calls from multiple parties to adopt a multistakeholder model of governance, but its adoption is still far from reality. What could help enable representation, as well as transparent and fair policymaking processes which mediate between the interests of all stakeholders, especially those of the Global South, other minorities, youth and future generations?

Connection with previous Messages: Artificial Intelligence (AI) needs to be developed and deployed in manners that allow it to be as inclusive as possible, non-discriminatory, auditable and rooted in democratic principles, the rule of law and human rights. This requires a combination of agile self-, soft and hard regulatory mechanisms, along with the tools to implement them.

Other: Adequate enabling environments (e.g. policies, legislation, institutions) need to be put in place at the national, regional and global levels to foster inclusive, just, safe, resilient and sustainable digital societies and economies. Stakeholders have a joint responsibility in ensuring that digital transformation processes are diverse, inclusive, democratic and sustainable. Commitment and strong leadership from public institutions need to be complemented with accountability and responsibility on the part of private actors. Agile regulatory frameworks – at the national, regional and, where possible, global levels – need to be put in place to outline rules, responsibilities and boundaries for how public and private actors behave in the digital space. There is a necessity to strengthen the multistakeholder approach, in order to be truly inclusive and to develop effective policies that respond to the needs of citizens, build trust and meet the demands of the rapidly changing global digital environment. The most powerful stakeholders - governments and private companies - are responsible for ensuring that civil society actors are able to meaningfully contribute to these processes. Inequalities are multi-layered, nuanced areas and require dedicated assessments and tailored solutions. Women, girls and gender-expansive people are especially affected. The inclusion process should be designed and implemented in a multistakeholder manner through capacity development, empowerment and awareness raising, and by building common understanding across stakeholder groups. Digital cooperation requires trust, and the IGF can help build that. To adapt to the future, the IGF has to boldly embrace the policy controversies that face the Internet. A responsible use of AI algorithms ensures the preservation of human rights and avoids biases that intensify inequality. Policies to deal with misuses should be developed where needed.

2021 Youth Summit key messages: Existential Risk: Stakeholders should collaborate, implement and pursue a research agenda on existential risk to humanity from AI and recommend solutions to mitigate those risks. Equity Within Nations: To tackle internal inequalities, job displacement, and financial reallocation, national governments should develop robust strategies that promote equitable access to AI, the shared benefits of AI, and a shared standard of AI literacy. The strategies should be part of the structure of national legislation and not contradict international treaties.

SDGs

4. Quality Education
5. Gender Equality
8. Decent Work and Economic Growth
9. Industry, Innovation and Infrastructure
10. Reduced Inequalities
16. Peace, Justice and Strong Institutions
17. Partnerships for the Goals

Targets: This proposal links to: SDG 4 by setting the direction on AI literacy; SDGs 8, 9, 10 by exploring the potential of AI to provide access to decent jobs, improve the efficiency of supply chains, and drive innovation for the common good, reducing inequities and addressing pitfalls before it is too late; SDG 16 by addressing the biases of AI, which have the potential to exacerbate inequalities and weaken institutions; SDG 5 by giving space to underrepresented voices, such as women and gender-diverse people; SDG 17 by favoring cross-border dialogue and cooperation, the exchange of good practices, and public/private partnerships to develop AI solutions for sustainable development.

Description:

Globally, highly intelligent AI has the potential to be remarkably effective at lifting people out of poverty, finding solutions to existing problems, doing dangerous and risky work, and increasing living standards for millions or even billions of people. On the other hand, it also has the potential to exacerbate existing inequities, contribute to economic disasters, and increase the gap between the haves and have-nots. Some AI researchers believe that Artificial General Intelligence (AGI) could emerge within the next 30 years. Depending on the takeoff scenario, whether this first AGI is “aligned” or not may be a matter of existential risk for humanity; the geopolitical considerations implicated in the race to advance artificial intelligence are likely to widen the gap between the global North and global South. The road to AGI is in danger of being shaped by the first-mover-advantage logic that characterizes market-driven dynamics rather than by genuine consideration of how artificial intelligence can benefit humanity at large. Market forces driving the AI race work against the weighing of all the interests at stake, which makes the multistakeholder model especially important. Multistakeholder processes and policymaking venues like the IGF can go a long way toward ensuring that the future of AI represents the interests of all of humanity, and not just those who develop AGI first. How the world navigates the development of superintelligent AI will likely be one of the most important developments of this century. Against this background, young people are the ones who will live with the outcome of the current AI alignment problem, and they claim a greater interest in shaping the way forward for their own generation and those who will come after. How AI governance is shaped today can be hugely influential for other emerging technologies tomorrow.

Expected Outcomes

The primary outcome of this session will be capacity building in the area of AI safety as a governance issue. Ideally, AI safety will become better understood as a global governance issue, and the session will initiate or catalyze grassroots as well as broader political interest in AI governance and AI safety (as opposed to AI development alone). Other outcomes would be to prompt additional interest in facilitating information sharing about AI accidents or near misses, investing in AI safety research, developing AI safety standards together with the associated evaluation and monitoring capacity, and building international networks, partnerships and alliances on AI safety governance or AI safety research. This session also has the potential to steer academic research and methodology towards unexplored paths. A longer-term outcome would be to have AI safety incorporated into the upcoming 2030-2045 UN development agenda: highly intelligent AI has the potential to be highly effective at reducing poverty and finding solutions to existing problems, but also to exacerbate existing inequities and increase the gap between the haves and have-nots. Policy and IGO processes like this one can go a long way toward ensuring that the future of AI represents the interests of all of humanity, and not just those who develop AGI first.

Hybrid Format: The session will facilitate a panel discussion in which participants can ask questions and leave comments both online and onsite, using the Q&A and comment features of the online application hosting the session (e.g. Zoom). Both online and onsite moderators will ensure that questions and comments are not overlooked but play an important role throughout the session; to this end, the organizing team will maintain stable and effective communication between the two moderators to keep participants equally engaged. While the onsite moderator takes questions from participants physically attending the session, the online moderator will monitor the questions and comments shared online and bring them into the discussion by relaying them to the onsite moderator. Should the panel take place fully online, onsite participants will still be invited to take the floor, and the onsite moderator will ensure visuals of the onsite audience.

Online Participation

 

Usage of IGF Official Tool.

 

Key Takeaways (* deadline 2 hours after session)
Current policies and regulatory frameworks for Artificial (General) Intelligence are inadequate. The technology infrastructure is also dominated by the private sector, usually leaving states dependent on the solutions it provides. There is also a lack of representation of minority groups and youth in policymaking discussions.
Call to Action (* deadline 2 hours after session)
Global standards should continue to be used, since AGI is not limited regionally and global solutions are needed to address the deployment of AGI by global companies; these standards can then be adapted at the local level. Efforts should be made to encourage transparency and maintain a human rights-based focus in policymaking, while working towards a multistakeholder approach for more inclusive solutions.
Session Report (* deadline 16 December)
REPORT

Question 1

Promise and perils of Artificial Intelligence. AI is seen by some as one of the most strategically important parts of the third-millennium economy, possibly warranting massive private investment and state support. 

How can ethically responsible AI developers and policymakers ensure that their innovation and regulations capture the needs of all stakeholders globally, and contribute to shared prosperity rather than exacerbate existing inequalities?

  • Ethics will always pose a challenge for any technological development, and AI does not escape from that. That said, before drafting any new regulation or ethical framework for AI, we need to determine, at the national or global level, whether the rules of the game must be changed entirely or whether a set of adaptations suffices, turning the new challenges that appear into solutions that meet both the human rights demands of society and the economic conditions that enable social progress. This is where the translation of ethical aspects or principles into laws or regulatory frameworks becomes so intricate. One possible solution is regulatory sandboxes and/or dashboards covering the different aspects and principles of AI ethics and governance. Best practices of this kind allow policies to be developed from a human-centered perspective while adapting regulations to the digital and economic rights we need to guarantee, so as to avoid deepening already existing inequalities and to fulfil a minimum set of ethical principles for AI. All of this should operate under the rules of fairness, accountability and transparency.
  • Ensuring that AI will be ethical is the responsibility of policymakers and regulators. It is unwise to trust that business will follow any rules other than those that facilitate gains and profits. Thus, it is regulation that can – and should – support the development of ethical AI. Firstly, we should develop approaches that take into account the fact that AI-based solutions have to comply with already existing laws. In the context of their ethical dimension, we should consider how a human rights-based approach can be used to form the requirements regarding AI. Secondly, we need more consideration for solutions that support transparency. This is important in light of, e.g., the inclusion in international economic agreements of provisions that limit transparency (provisions restricting the possibility of demanding access to source code). Transparency, as a precondition for accountability, is necessary for developing mechanisms that would allow the broader public to scrutinise the solutions being implemented. Thirdly, we have to ensure that newly introduced rights are easily enforceable, both by individuals and by organisations that focus on human rights and, more specifically, on the protection of digital rights. Last but not least, states should be more actively involved in developing solutions that could actually improve the quality of life of their citizens and support the achievement of goals, e.g., in the area of environmental protection. AI development is not an achievement in itself: what matters is what we can achieve using AI and what kinds of improvements it can support.

Question 2

Uncontrolled pursuit of Artificial (General) Intelligence. 

How will socio-economic, geopolitical dynamics, and historical factors affect the design and deployment of AI technology?

  • As a person from the Majority World, also known as the Global South, I think we can see the effects and disproportions of AI through the following aspects. First, data colonialism: the datasets of the technologies we are implementing carry the biases of a population that does not represent the Majority World, yet are presented as universal, replicating the era of European colonialism. This leads to problems that affect the quality of life of those in the Majority World and/or forces them to adapt to the implementation of these technologies, when the technologies should be widely inclusive by design. Second, the lack of tools to develop on our own, or the belief that we do not have enough resources to create our own AI designs and developments that include not only what the Global North is doing but also our own perspectives and solutions. For this to become a reality we need more support from our local and regional governments; we have the people to make the changes, and we just need to believe in them and break the structure in which we are only consumers, or implementers of something designed and developed in a context far from ours that sees us as the other. Third, and most importantly, we ideally need to reach agreements in which everyone's perspectives and principles about AI are embedded by design, so that when a model is deployed, the biases related to historical, geopolitical and socio-economic factors are minimal or zero.
  • Firstly, if we do not have requirements that steer the development of AI to take into account historical (as well as contemporary) discrimination, my fear is that the use of AI can only strengthen these mechanisms. Thus, developing requirements concerning the representative character of the data used to build algorithms seems to be a good solution. Factors broadly recognised as protected characteristics, on the basis of which discrimination is prohibited, should be taken into account when testing such solutions. Additionally, we can think about the potential of AI-based tools not only from the perspective of avoiding discrimination, but also from a perspective that actually focuses on promoting fairer and more just solutions. However, this will not happen by itself; we need to ensure that the steps leading to the development of such requirements are taken. Secondly, there is the question of the resources needed for the development of AI-based innovations. When these solutions are developed mostly or solely by the private sector, states become dependent on the solutions provided by private companies. Thus, we come back to the issue of the investments needed to develop useful solutions that states could implement in order to achieve societal benefits (e.g., a higher level of energy efficiency, better health services). Thirdly, to develop inclusive solutions, we need representation among the people who actually develop AI. There is therefore a need to develop, e.g., programs that promote and support women who work as developers.

Question 3

Redesigning AI governance. There have been calls from multiple parties to adopt a multistakeholder model of governance, but its adoption is still far from reality. 

What could help enable representation, as well as transparent and fair policymaking processes which mediate between the interests of all stakeholders, especially those of the Global South, minorities, youth and future generations?

  • Right now, around the world, there are efforts to strengthen AI governance from the perspective that we can all adopt a set of economic and ethical principles from the OECD and UNESCO and implement them in our regulation, at the local level, as several countries in the Majority World have done, or at the regional level, as in the case of the EU or the African Union. These efforts show that a multistakeholder approach to AI governance is the most transparent and fair way to arrive at policies that actually represent a country's vision for AI, now and in the future. As I mentioned before, one of the exercises that caught my attention while studying implementations of the OECD and UNESCO principles is how some countries, not only in the Global North but also in the Majority World, are implementing sandboxes and dashboards as strategies open to the public, allowing it to monitor and demand that the design, development and deployment of AI (not only policies, but other aspects as well) are actually in line with what society needs and with human rights. And I say human rights because that is what this is about: the last pandemic made clear that our digital world and the so-called real world are two sides of the same coin, which means it is important to defend the rights of the whole population in both in the same way. In conclusion, if you ask me for a good practice that could serve as a mediator between policymaking and the rest of the stakeholders, my answer is AI regulatory sandboxes and/or dashboards.
  • Firstly, I think that one of the problems in this regard is the tendency to treat the digital environment as somehow separate from the offline world. This kind of approach makes it more difficult for many stakeholders who deal with, e.g., human rights more broadly, to become involved in these issues. Thus, I believe it is important to show that the digital is becoming inseparable from the material and, therefore, that if one cares about, e.g., human rights or the environment, digital solutions can become helpful in the fights concerning these issues. On the other hand, digital solutions can also make it more difficult to protect human rights (e.g., when they are not transparent); thus, those who care about the offline world's issues also need to fight for digital rights. This can be illustrated by the role that short-term rental platforms play: the fact that they are digital does not change the fact that they cause problems regarding issues such as housing, which is very material at its core. Secondly, we need more transparency in law-making. The case of the Uber Files, as well as other investigative journalism and reports by non-governmental organisations, shows the scale of lobbying resulting from big-tech companies' activities. Considering the difference between the resources these companies possess and those available to, e.g., non-governmental organisations or unions, it is impossible to match such efforts to influence the adoption of legislation. Thus, institutional and procedural solutions are needed that would, on the one hand, more effectively protect the inclusion of representatives of society in the law-making process and, on the other hand, limit the influence that the companies have.