Session
APAC GATES and Digital Governance Asia
Seth Hays, Managing Director of APAC GATES, & Director at Digital Governance Asia, Asia region.
Seth Hays
Targets: In particular, applying existing privacy rules to AI strengthens the rule of law in AI regulation, supporting SDG 16.3. Additionally, interoperable rules drawn from the best practices of privacy regulators around Asia, as shared in this presentation, will build policies that enhance and improve this emerging technology, supporting SDG 9.c.
Lightning talk covering practical best practices identified by five Asia-Pacific privacy regulators regarding AI: a 15-minute substantive presentation, followed by 5 minutes of audience Q&A and discussion.
The Asia-Pacific region does not yet have a dedicated act regulating AI, and global discourse revolves largely around activity in the US, EU, and China. But Asia's privacy regulators are addressing AI on the ground.
This session reviews actions taken by privacy regulators in South Korea, Singapore, Australia, New Zealand, and Hong Kong to address AI harms. We identify best practices and recommend policies to promote across the Asia-Pacific region and the globe, in particular in Global South jurisdictions with weaker traditions of privacy regulation. These recommendations should help countries leapfrog on AI regulation and prevent a digital divide in AI governance.
For further reading on how Asia's privacy regulators are using policy levers to take power in the AI policy debate, please see the article "Asia's Privacy Regulators Shape AI Policy On The Ground."
We will focus on how liability under privacy law is holding the AI industry accountable at early stages of development, in particular in smaller states and the Global Majority. We will share civil society efforts to check the power of AI tools and to prevent and remedy AI harms, such as Digital Governance Asia's AI Harm Remedy Network and AI Harm Remedy Tracker, which documents privacy violations by AI services in the Asia-Pacific region. Additionally, we highlight the Asia AI Policy Monitor newsletter, which tracks how AI policy is developing across other legal domains, such as trust and safety, intellectual property, healthcare, and finance, in Global Majority countries in Asia, in order to pluralize and democratize a global AI policy discussion currently centered on a few large jurisdictions.
Report
Asia's privacy regulators are taking on AI through various tactics, including proactive guidelines with common-sense recommendations and ex-post enforcement actions against companies misusing AI tools.
Further sharing of best practices should occur to enhance the global AI governance discourse, in particular in the Global South, the Global Majority, and smaller states.
Tools such as the Asia AI Policy Monitor newsletter and the Asia-Pacific AI Harm Remedy Network, provided by Digital Governance Asia, are recommended for governments and CSOs.
Join efforts to track AI harm mitigation such as the Asia-Pacific AI Harm Remedy Network provided by Digital Governance Asia at http://digitalgovernance.asia
Monitor AI Policy and identify best practices by signing up for the Asia AI Policy Monitor newsletter at https://asiaaipolicymonitor.substack.com
Privacy, Policy, and Power in Asia’s AI Regulations
Presented at the Internet Governance Forum, Riyadh | December 17, 2024
Seth Hays, Director of Digital Governance Asia and co-founder of the human rights consultancy APAC GATES, delivered a call to action on AI governance in the Asia-Pacific region during the 2024 Internet Governance Forum in Riyadh. Mr. Hays underscored the need to amplify the voices of smaller states and the Global Majority in shaping AI governance, a conversation currently dominated by the European Union, the United States, and China.
Digital Governance Asia, a non-profit dedicated to promoting innovation and human rights in emerging technology policy in the Asia-Pacific region, has a unique perspective on AI policy, with staff presence in Taiwan, home to the production of the most advanced AI chips, and in Seattle, where AI commercialization thrives. This footprint positions Digital Governance Asia to identify and advocate for best practices from a holistic supply-chain and cross-border policy perspective.
Mr. Hays presented a detailed analysis of five case studies, drawn from Australia, Hong Kong, Singapore, South Korea, and New Zealand, to highlight how privacy regulators are tackling AI-related risks. These jurisdictions offer lessons for countries without robust privacy protections, demonstrating how AI harms can be mitigated through well-crafted policy and proactive oversight.
The Office of the Australian Information Commissioner (OAIC) offers advice for deployers of AI in its "Guidance on privacy and the use of commercially available AI products." Recommendations range from sensible precautions, such as not entering personal information into publicly available AI tools, to underscoring that privacy regulations are technology neutral and apply to AI services. In its "Guidance on privacy and developing and training generative AI models," the OAIC also advises developers of AI tools to ensure that data used in training models was lawfully used, even if publicly available.
Hong Kong’s Office of the Privacy Commissioner for Personal Data (PCPD) has taken a lead on AI policy in the city, providing documents such as the “AI Model Personal Data Protection Framework.” This guidance goes beyond privacy rules to contemplate a general conception of AI risk: it ranges from low-risk AI settings, where humans need not be involved, to high-risk uses, such as real-time biometric monitoring, human resources screening of job applications, medical imaging analysis, public provision of welfare, criminal sentencing, and financial services and credit access, where humans need to be in the loop.
Singapore’s Personal Data Protection Commission (PDPC), working with public organizations such as the AI Verify Foundation, has formed the basis of an industry-friendly but rights-promoting policy environment. For example, its “Advisory Guidelines on the use of personal data in AI recommendations and decision systems” provide research and business-improvement exceptions while promoting meaningful consent and privacy-protecting tactics such as pseudonymization and anonymization.
South Korea’s Personal Information Protection Commission (PIPC) likewise supports an industry-friendly but rights-protecting business environment with tools such as the “Policy Direction for safe usage of personal data in the Age of AI.” This policy provides guidelines for the use of publicly available data in AI development, the use of visual data from mobile devices, AI transparency, and the use of synthetic data, as well as regulations for the use of biometric data.
New Zealand’s Office of the Privacy Commissioner provides tools for privacy practitioners such as the “AI and Information Privacy Principles.” Framed as a series of questions, this guidance prompts organizations using AI to understand the ethical and reliability issues of the data used in their tools; to make sure data collected for AI is fit for purpose; to track data for auditing and impact assessments; to consider the impact on marginalized communities, including indigenous communities; and to ensure that organizations deploying AI are accountable, transparent, and explainable.
The absence of robust privacy legislation in some jurisdictions presents an opportunity to leapfrog on AI governance by adopting the tactics of regulators in the data privacy sector elsewhere. However, as Mr. Hays noted, the current role of privacy regulators focuses heavily on ex ante prevention of harms and ex post enforcement against individual actors. While this approach is valuable, it does little to address broader risks posed by AI, such as misinformation, nonconsensual deepfake imagery, AI-driven fraud by organized criminal groups, and systemic bias. These challenges demand regulatory structures capable of sustaining resilient democratic systems, human rights, and the rule of law.
Mr. Hays concluded by emphasizing the importance of proactive engagement and knowledge-sharing, recognizing that in the near term, concepts from the responsible business and human rights space, such as the UN Guiding Principles on Business and Human Rights, need to be robustly applied to AI systems and adapted for the AI context. The Guiding Principles' "Protect, Respect, Remedy" framework should become "Respect, Promote, Enhance" for human rights in the AI context. Projects such as Digital Governance Asia's newsletter, the Asia AI Policy Monitor, and the Asia AI Harm Remedy Network are critical for identifying emerging policy risks and fostering regional collaboration in policymaking. By addressing AI harms before they occur, policymakers can shape effective regulations that promote innovation while protecting democracy, human rights, and the rule of law.