Session
Assessing Internet governance approaches and mechanisms and fostering inclusiveness: What are the main strengths and weaknesses of existing Internet governance approaches and mechanisms? What can be done, and by whom, to foster more inclusive Internet governance at the national, regional and international levels?
Governance and cooperation for an evolving Internet: How does Internet governance need to change in order to meet the changing nature and role of the Internet? What tools, mechanisms, and capacity building instruments are needed for stakeholders to effectively cooperate and engage in Internet governance?
Panel - Auditorium - 60 Min
The Freedom Online Coalition (FOC) is a partnership of 32 countries that work together to promote and protect human rights online worldwide. As Chair of the FOC in 2021, the year of the FOC’s 10th anniversary, Finland will host this panel to bring together a multistakeholder community of global North and South actors to explore the role of government in addressing the opportunities and challenges around human rights in the digital context in the next decade, and to discuss the future of multistakeholder approaches to Internet governance. A key priority of the FOC is the shaping of global norms through joint action, in particular through developing joint statements and leveraging their language and key messages globally. This panel will aim to highlight a broader perspective on FOC key priorities - disinformation, digital inclusion and artificial intelligence - as well as provide a detailed understanding of how specific governments translate and operationalise the recommendations of the FOC Joint Statements on disinformation, digital inclusion, and artificial intelligence into concrete actions with tangible outcomes. The panel will also address the importance of developing multistakeholder approaches to Internet governance and protecting human rights in the digital environment, and include perspectives of the FOC Advisory Network Members.
1) How will you design the session to ensure the best possible experience for the online and on-site participants? We will set rules of engagement for attendees: inform them about the different ways they can interact with the speakers and fellow participants during the event, explain how to use the chat feature, help them understand when to stay muted or unmuted, advise them on how and when to ask questions, and tell them whom to contact in case any technical issues arise. We will also assign designated moderators for chat and tech support, and prepare compelling content and a well-structured agenda while being mindful of meeting length. The FOC Support Unit will also hold preparatory calls with the speakers before the event.
2) If the speakers and organizers will all be online, how will you ensure interactions between them and the participants (including with on-site participants)? We would aim to give attendees the opportunity to make comments and ask questions in real-time, as well as through the Q&A and chat functions, ensuring that the interaction is not one-way. We would encourage speakers to keep their cameras on.
3) Are you planning to use complementary tools/platforms to increase participation and interaction during the session? We are not planning to use any complementary tools or platforms at this time.
Global Partners Digital
Ministry for Foreign Affairs of Finland: Ambassador for Human Rights, Rauno Merisaari
Rauno Merisaari, Ambassador for Human Rights, Ministry for Foreign Affairs of Finland; Philippe-Andre Rodriguez, Deputy Director of the Digital Inclusion Lab, Global Affairs Canada; Swantje Maecker, Cyber Policy Coordination Staff, Federal Foreign Office of Germany; Emmanuella Darkwah, International Cooperation Officer at the National Cyber Security Centre in Ghana; Emma Llanso, Director of the Center for Democracy and Technology’s Free Expression Project; Edetaen Ojo, Executive Director of Media Rights Agenda in Nigeria
Lea Kaspar, Global Partners Digital
Zora Gouhary, Global Partners Digital
Zora Gouhary, Global Partners Digital
5. Gender Equality
5.5
5.b
9. Industry, Innovation and Infrastructure
9.c
10. Reduced Inequalities
10.2
10.3
16. Peace, Justice and Strong Institutions
16.10
16.6
16.8
16.a
Targets: Building on the FOC Joint Statements on Digital Inclusion, Spread of Disinformation Online, and Artificial Intelligence and Human Rights, the proposed session will discuss opportunities and challenges relating to Internet governance that governments will face in the coming decade. The session will present the FOC’s call to action to address these challenges and to advance the common goal of promoting an open, free, and interoperable Internet. The Joint Statements reflect the Sustainable Development Goals of reducing inequality within and among countries, as well as achieving gender equality and empowering vulnerable communities, including women and girls, while also promoting peaceful and inclusive societies for sustainable development, and building effective, accountable and inclusive institutions at all levels.
In the Joint Statements on the Spread of Disinformation Online, and on COVID-19 and Internet Freedom, the FOC calls on governments to:
- Abstain from conducting and sponsoring disinformation campaigns, and condemn such acts.
- Address disinformation while ensuring a free, open, interoperable, reliable and secure Internet, and fully respecting human rights.
- Improve coordination and multi-stakeholder cooperation, including with the private sector and civil society, to address disinformation in a manner that respects human rights, democracy and the rule of law.
- Implement any measures, including legislation introduced to address disinformation, in a manner that complies with international human rights law and does not lead to restrictions on freedom of opinion and expression inconsistent with Article 19 of the International Covenant on Civil and Political Rights.
- Respect, protect and fulfill the right to freedom of expression, including the freedom to seek, receive and impart information regardless of frontiers, taking into account the important and valuable guidance of human rights treaty bodies.
- Refrain from discrediting criticism of their policies and stifling freedom of opinion and expression under the guise of countering disinformation, including blocking access to the Internet, intimidating journalists and interfering with their ability to operate freely.
- Support initiatives to empower individuals, through online media and digital literacy education, to think critically about the information they are consuming and sharing, and to take steps to keep themselves and others safe online.
- Take active steps to address disinformation targeted at vulnerable groups, acknowledging, in particular, the specific targeting of and impact on women and persons belonging to minorities.
- Support international cooperation and partnerships to promote digital inclusion, including universal and affordable access to the Internet for all.
- Promote an enabling environment for free expression and access to information online, protect privacy, and refrain from content restrictions that violate international human rights law.
- Commit that any actions taken pursuant to emergency measures or laws be subject to effective transparency and accountability measures, and be lifted when the pandemic has passed.

The FOC also urges social media platforms and the private sector to:
- Address disinformation in a manner that is guided by respect for human rights and the UN Guiding Principles on Business and Human Rights.
- Increase transparency into the factors considered by algorithms to curate content feeds and search query results, formulate targeted advertising, and establish policies around political advertising, so that researchers and civil society can identify related implications.
- Increase transparency around measures taken to address the problems algorithms can cause in the context of disinformation, including content takedown, account deactivation and other restrictions and algorithmic alterations. This may include building appropriate mechanisms for reporting, designed in a multi-stakeholder process and without compromising effectiveness or trade secrets.
- Promote users’ access to meaningful and timely appeal processes for any decisions taken regarding the removal of accounts or content.
- Respect the rule of law across the societies in which they operate, while ensuring they do not contribute to violations or abuses of human rights.
- Use independent and impartial fact-checking services to help identify and highlight disinformation, and take measures to strengthen the provision of independent news sources and content on their platforms.
- Support research by working with governments, civil society and academia and, where appropriate, enabling access to relevant data on reporting, appeal and approval processes, while ensuring respect for international human rights law.
The FOC urges civil society and academia to:
- Continue research into the nature, scale and impact of online disinformation, as well as strategic-level analysis to inform public debate and government action.
- Adequately consider, in this research, the impact of disinformation on women and marginalized groups who are targeted by disinformation campaigns.
- Engage with the private sector and governments to share findings and collaborate on research, whilst ensuring appropriate privacy protections are in place.
- Actively participate in public debate and in multi-stakeholder initiatives looking to address disinformation, and emphasize the necessity of evidence-based discussion.

To promote respect for human rights, democracy, and the rule of law in the design, development, procurement, and use of AI systems, in the Joint Statement on Artificial Intelligence and Human Rights the FOC calls on states to work towards the following actions in collaboration with the private sector, civil society, academia, and all other relevant stakeholders:
- States should take action to oppose and refrain from the use of AI systems for repressive and authoritarian purposes, including the targeting of or discrimination against persons and communities in vulnerable and marginalized positions and human rights defenders, in violation of international human rights law.
- States should refrain from arbitrary or unlawful interference in the operations of online platforms, including those using AI systems.
- States have a responsibility to ensure that any measures affecting online platforms, including counter-terrorism and national security legislation, are consistent with international law, including international human rights law.
- States should refrain from restrictions on the right to freedom of opinion and expression, including in relation to political dissent and the work of journalists, civil society, and human rights defenders, except when such restrictions are in accordance with international law, particularly international human rights law.
- States should promote international multi-stakeholder engagement in the development of relevant norms, rules, and standards for the development, procurement, use, certification, and governance of AI systems that, at a minimum, are consistent with international human rights law. States should welcome input from a broad and geographically representative group of states and stakeholders.
- States need to ensure that the design, development and use of AI systems in the public sector are conducted in accordance with their international human rights obligations. States should respect their commitments and ensure that any interference with human rights is consistent with international law.
- States, and any private sector or civil society actors working with them or on their behalf, should protect human rights when procuring, developing and using AI systems in the public sector, through the adoption of processes such as due diligence and impact assessments that are made transparent wherever possible. These processes should provide an opportunity for all stakeholders, particularly those who face disproportionate negative impacts, to provide input. AI impact assessments should, at a minimum, consider the risks to human rights posed by the use of AI systems, and be continuously evaluated before deployment and throughout the system’s lifecycle to account for unintended and/or unforeseen outcomes with respect to human rights. States need to provide an effective remedy against alleged human rights violations.
- States should encourage the private sector to observe principles and practices of responsible business conduct (RBC) in the use of AI systems throughout their operations and supply and value chains, in a consistent manner and across all contexts. By incorporating RBC, companies are better equipped to manage risks, identify and resolve issues proactively, and adapt operations accordingly for long-term success. RBC activities of both states and the private sector should be in line with international frameworks such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises.
- States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination. National measures should take into consideration guidance provided by human rights treaty bodies and international initiatives, such as the human-centered values identified in the OECD Recommendation of the Council on Artificial Intelligence, which was also endorsed by the G20 AI Principles.
- States should promote the meaningful inclusion of persons or groups who can be disproportionately and negatively impacted, as well as civil society and academia, in determining if and how AI systems should be used in different contexts (weighing potential benefits against potential human rights impacts and developing adequate safeguards).
- States should promote, and where appropriate support, efforts by the private sector, civil society, and all other relevant stakeholders to increase transparency and accountability related to the use of AI systems, including through approaches that strongly encourage the sharing of information between stakeholders.
- States, as well as the private sector, should work towards increased transparency, which could include providing access to appropriate data and information for the benefit of civil society and academia, while safeguarding privacy and intellectual property, in order to facilitate collaborative and independent research into AI systems and their potential impacts on human rights, such as identifying, preventing, and mitigating bias in the development and use of AI systems.
- States should foster education about AI systems and their possible impacts on human rights among the public and stakeholders, including product developers and policy-makers. States should work to promote access to basic knowledge of AI systems for all.
In the Joint Statement on Digital Inclusion, the FOC suggests the following in order to advance the common goal of promoting digital inclusion:
- The conduct and support of good-quality, independent research on supply- and demand-side challenges affecting digital inclusion and digital divides. Research activities should investigate existing and emerging issues related to digital access that may negatively affect digital inclusion by deterring Internet use, such as human rights violations and abuses relating to privacy, online abuse, censorship, surveillance and other cybersecurity methods that limit individuals’ ability to exercise their human rights and fundamental freedoms. Governments should also encourage more efforts by the private sector to publish independent, research-based reviews of their data sets, conducted within an ethical, privacy-protective framework.
- Supporting civil society organizations in their efforts to address barriers and bottlenecks to digital access and cybersecurity risks, and to develop policy that drives positive outcomes related to the improved access and use of digital technologies. Moreover, all stakeholders should be encouraged to share best practices on issues pertaining to bridging digital divides, especially in support of community networks, and enabling digital inclusion, and governments should play a supportive role in facilitating this.
- Welcoming contributions, and leadership, by the private sector and civil society to promote digital inclusion, and encouraging the private sector to ensure that resources accrued for the purpose of overcoming digital divides are used transparently for their intended purpose, in line with the UN Guiding Principles on Business and Human Rights.
- Encouraging the availability of free Internet access points in public spaces, especially in schools and libraries in economically underprivileged communities.
- Promoting open source software, open access technologies, open data, and open learning towards enabling meaningful access, as well as supporting the people who develop these resources.
- Enacting digital policies which give special consideration to those who face particular difficulties in reaping the benefits of digital inclusion. Governments should build safeguards into their programs and policies to make sure these persons are able to benefit fully from the push for digital inclusion. These may consist of, inter alia, creating safe and accessible spaces, childcare facilities and specially trained support staff.
- Advancing, with the help of public-private partnerships, digital literacy and other technology training in trusted and comfortable locations (libraries, community centers, places of worship, schools, recreation centers, senior centers, etc.), tailored to different levels of education and specific needs.
- Facilitating, reinforcing, and developing multi-stakeholder models of Internet governance, including growing the capacity of civil society to participate in fora like the Internet Governance Forum, expanding the availability of independent Internet exchange points, ensuring the ability of private sector providers to connect and exchange data traffic directly with one another, and similar inclusive models.
- Addressing the underlying causes of digital exclusion (economic, social, political and cultural contexts), because technical solutions alone will not bridge digital divides; and supporting initiatives in intergovernmental spaces that further digital inclusion.