Session
Panel - Auditorium - 60 Min
Over the last few years, we have deliberated on and developed numerous AI ethics principles and frameworks. How do we translate these articulated values and principles into action?
There is growing awareness that our existing regulatory frameworks are not evolving fast enough to keep pace with the rapid progress of emerging technologies, including AI. With the growing list of incidents involving emerging technologies, there is widespread distrust that developers can self-regulate effectively. Nation states and the EU have started to formulate regulatory systems, policies and other legal instruments for AI.
Previously, technologies were deployed more like tools, but as autonomy and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools. There is now a significant difference, because machine learning systems have the ability to 'learn', adapt their performance and 'make decisions' from data and 'life experiences'.
The session provides insights on the ethical practice of AI and upcoming regulatory initiatives, highlights from the developing-country perspective, emerging technology discourses, recommendations on human rights, and a call to action to address the risks and challenges arising from the use and actions of AI, autonomous and intelligent systems.
Panel
1. Anthony Wong, IFIP President, Lawyer and CIO, AGW Legal & Advisory (and Panel Moderator)
2. Shamika Sirimanne, Director of UNCTAD's Division on Technology & Logistics, Head of the secretariat to the United Nations Commission on Science and Technology for Development (CSTD)
3. Edward Santow, former Human Rights Commissioner of Australia, Professor and Co-Director of the Human Technology Institute, UTS
4. Sofera Amanuel
5. Simon Kwan, IFIP
6. Oliver Burmeister, IFIP Technical Committee 9 - ICT and Society, Professor, Charles Sturt University, School of Computing, Mathematics and Engineering, Presiding Officer, Human Research Ethics Committee
8. Decent Work and Economic Growth
8.2
8.3
8.8
9. Industry, Innovation and Infrastructure
10. Reduced Inequalities
11. Sustainable Cities and Communities
11.4
16. Peace, Justice and Strong Institutions
16.3
16.6
16.7
16.b
17. Partnerships for the Goals
17.6
17.7
17.9
17.16
17.17
How much autonomy should AI and robots have to make decisions on our behalf and about us in our life, work and play? How do we ensure they can be trusted, and that they are transparent, reliable, accountable and well designed? While technological advances hold tremendous promise for humankind, they also raise difficult questions in disparate areas including ethics and morality, bias and discrimination, human rights and dignity, privacy and data protection, data ownership, intellectual property, safety, liability, consumer protection, accountability and transparency, competition law, employment and the future of work, and legal personhood.

In a world that is increasingly connected, where machine-based algorithms use available data to make decisions that affect our lives, how do we ensure these automated decisions are appropriate and transparent rather than opaque? And what recourse do we have when these decisions intrude on our rights, freedoms, safety and legitimate interests? What legal and social responsibilities should we give to algorithms shielded behind statistically derived 'impartiality'? Who is liable when robots and AI get it wrong?

Historically, new technologies have always affected the structure of the labour market, with a significant impact on employment, especially lower-skilled and manual jobs. But autonomous and intelligent technologies are now spreading at a pace that sees them outperform humans in many tasks, radically challenging the basic tenets of our labour markets and laws. These developments raise many questions. Where are the policies, strategies and regulatory frameworks to transition workers in the jobs that will be most transformed, or that will disappear altogether due to automation, robotics and AI? Our current labour and employment laws, covering sick leave, hours of work, tax, minimum wage and overtime pay requirements, were not designed for robots. What is the legal relationship of robots to human employees in the workplace? How will AI's disruption of traditional business models impact society?