The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Just to confirm it is channel Number 3 on your devices.
>> We are joined for the networking session on assessing AI risks and impacts, safeguarding human rights and democracy in the digital age. We will be moderated by Professor David Leslie, who is the Director of Ethics and Responsible Innovation Research at the Alan Turing Institute. He'll be introducing the rest of the panel. But welcome to everyone joining here today and online. My name is (?) and I am very proud to say I have supported in helping publish and develop this human rights impact assessment framework that we've done with the Council of Europe. I turn to David to introduce us to the panel.
>> DAVID LESLIE: Great. Can you hear me? Give me an acknowledgment that you can hear me and I'll keep going.
Okay. So thank you so much, Smara. I'm very thrilled to be here with you. Our team at the Turing has been involved dating back to the feasibility study that would come to inform what is now the Framework Convention, the treaty that is aligning human rights, democracy and the rule of law with AI. And I'll just also say that the adoption of the methodology, which has just happened this past month, is really a historic moment in a time of change, where so much of the activity in the international AI governance ecosystem is yet to be decided. And so this is really a kind of path-breaking outcome, I would say.
And just thinking about it, over the years the Council of Europe plenary is where we've really talked through some of these governance measures. It was early 2021, I want to say, when we first took up a question about foundation models and frontier AI.
So you can just imagine that that rich conversation about governance challenges has been going on at the Council of Europe's venue in Strasbourg for a number of years now.
I'll also quickly say that the Huderia itself is really a unique anticipatory approach to the governance of the design, development and deployment of AI systems, one that anchors itself in basically four fundamental elements. We've got a context-based risk analysis, which provides a kind of structured approach initially to collecting and mapping the information that is needed to understand the risks of AI systems.
In particular, the risks they pose to human rights, democracy and the rule of law. It really focuses in on what we call the sociotechnical context, so the environments, the social environments, in which the technology is embedded.
It also allows for an initial determination of whether the system is the right approach at all. And it provides a mechanism for triaging more or less involved governance processes in light of the risk of the systems.
There is also a stakeholder engagement process, which proposes an approach to enable engagement, as appropriate, with relevant stakeholders, so impacted communities, in order to amplify the voices of those who are affected, and to gain information regarding how they might view the impacts, and in particular to contextualize and corroborate potential harms.
And then the third sort of module, if you will, or the third element, is the risk and impact assessment, which is a more full-blown process to assess the risks and impacts that are related to human rights, democracy and the rule of law, in ways that both integrate stakeholder consultation and ask the 'how' questions, trying to think through downstream effects in a more full-blown way.
And finally, a mitigation planning element, which provides steps for mitigation and remedial measures, and allows for access to remedy and review. And as a whole, the Huderia also stresses that there is a need for iterative revisitation of all of these processes and all of the elements of the Huderia, insofar as the innovation environment, so the way the systems are designed, developed and deployed, is very dynamic and changes.
But also the broader social, legal, economic and political contexts are always changing. And those changes mean that we need to be flexible and continually revisit how we're looking at the governance process for any given system.
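To make the shape of that four-element process easier to picture, here is a minimal sketch in Python. It is only an illustration of the structure described above, not the official Huderia methodology; the class names, risk tiers, scoring heuristic and triage thresholds are all hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tiers; the Huderia itself does not prescribe these labels.
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class ContextBasedRiskAnalysis:
    """Element 1: collect and map information about the sociotechnical context."""
    purpose: str
    deployment_context: str
    affected_groups: list[str] = field(default_factory=list)

    def initial_risk(self) -> RiskTier:
        # Placeholder heuristic: more affected groups means more involved governance.
        if len(self.affected_groups) >= 3:
            return RiskTier.HIGH
        return RiskTier.MODERATE if self.affected_groups else RiskTier.LOW


def triage(analysis: ContextBasedRiskAnalysis) -> list[str]:
    """Select more or less involved governance steps in light of the initial risk."""
    steps = ["context-based risk analysis"]
    if analysis.initial_risk() is not RiskTier.LOW:
        steps += ["stakeholder engagement process", "risk and impact assessment"]
    steps.append("mitigation planning (remedy and review)")
    steps.append("iterative revisitation as the context changes")
    return steps


if __name__ == "__main__":
    example = ContextBasedRiskAnalysis(
        purpose="benefits eligibility support",        # hypothetical example system
        deployment_context="public administration",
        affected_groups=["applicants", "caseworkers", "appellants"],
    )
    for step in triage(example):
        print(step)
```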
So with that, let me now introduce our first panel speaker, and that is Mr. Wael William Diab, Chair of ISO/IEC JTC 1/SC 42, the joint ISO and IEC committee on artificial intelligence, a wonderful set of standards groups doing great work on AI standards.
And he'll address the role of AI standardisation in safeguarding human rights and democracy, as well as cover upcoming standards on these issues.
>> WAEL DIAB: Thank you, David, and thank you for the warm introduction. I'd like to thank you also for the invitation to present on this panel. My name is Wael. And as David mentioned, I chair the joint committee of ISO and IEC on artificial intelligence.
So I'm going to give you a brief flavour of what we do. Just to quickly acknowledge, it is not just me that does this. We have a pretty large management team. And we'll make all of these slides available. But in the interest of time I'm going to just jump into just what it is that we do.
And so we take a look at the full ecosystem when it comes to AI. We start by looking at some of these non-technical trends and requirements: application domains, regulatory and policy, or, what is perhaps most relevant here, emerging societal requirements.
Through that we consider the context of use of the technologies we cover. And we provide horizontal and foundational projects on artificial intelligence. And I'll talk more about examples.
But I want to point out that the story doesn't stop there. We have lots of sister committees in IEC and ISO that focus on the application domains themselves and that leverage our standards. We work with the open source community and others. So we are part of the ISO and IEC families. Our scope is that we are the focal point for the IT standardisation of AI.
And we help other sister committees in terms of looking at the application side.
We've been growing quite a bit. So we've published about 30 standards and have about 50 projects that are active. We have 68 member countries, and the way we develop our standards is by a one country, one vote principle. And there are about 800 unique experts in our system.
I would also like to note that we collaborate extensively with others. We have about 80 liaison relationships, both internal and external, and I'll show a slide at the end. We also run a biannual workshop that is complementary.
The way we're structured is we currently have 10 major subgroups. 5 of which are joint with other committees. And I'll show what we do.
So the first thing that's important about understanding AI, and being able to work with different stakeholders that have different needs, is to have some foundational standards. And this area covers everything from common terminology and concepts (and by the way, that is a freely available standard that we provide) to a framework for AI systems using machine learning.
A lot of the work in this area has also been around enabling what we call certification and third-party audit of AI systems. We believe that it is important to enable this to ensure broad, responsible adoption of AI.
Another big area for us is around data. Data, as many people know, is the cornerstone of having responsible, quality AI systems. The original work started by looking at big data. We completed all of those projects and then we expanded the scope to look at anything related to data and AI.
So we're in the process of publishing a six-part series around data quality for analytics in the AI space. The first three parts have been published and the next three should be published this coming year.
Some of the more recent work is around synthetic data and data profiles for AI. Trustworthiness, which is very relevant to the topic at hand, as well as enabling responsible systems, is probably our largest area of work.
The slide is a bit of an eye chart to try and read. And the reason is that the way we look at AI systems is that we start from the fact that they are IT systems themselves, and yet with some differences from a traditional IT system, for example in terms of the learning.
So what this allows us to do is to build on the large portfolio of standards that IEC and ISO have developed.
And then extend that for the areas that are specific to AI.
So one example of the work here is our AI risk management framework. This was built on the ISO 31000 series as an implementation specific to AI.
Other things you might see bolded on this chart are things that you might hear every day, so making something controllable, explainable, transparent. And what we do is then take those concepts and translate them into technical requirements.
A colleague of mine had put this together to indicate where societal and ethical issues lie in terms of the direct impact versus, you know, things that are further away. And I thought it was a great slide. Because everything in the yellow really maps into what we are doing today.
So when it comes to societal issues and concerns, we deal with them in two ways. The first is through dedicated projects that are directly around this area, again using use cases to translate from some of these non-technical requirements down to technical requirements, and prescriptive guidance on how to address them.
As well as integrating it across our entire portfolio. So, for instance, when we look at use cases. We ask what some of the ethical and societal issues are. We don't do this alone. We do this in collaboration with a number of international organisations.
In terms of use cases and applications, it is important for us to be able to provide horizontal standards. And as I mentioned, we've collected over 185 use cases, and we're constantly updating this document. But we also take a look at the work from the point of view of an application developer, whether it is at the technical development side or at the deployment side. And we have standards in this area.
We've also started to look at the environmental sustainability aspects as well as the beneficial aspects of AI. And a big portion of new work is around human-machine teaming.
Computational methods are at the heart of AI systems, and we have a large portfolio of work here. Our more recent work has been focussed around having more efficient training and modelling mechanisms.
Governance implications of AI. So this is looking from the point of view of a decision maker. Whether it be a board or organisation. And answering some of the questions that might come up.
We do a lot of work around testing of AI-based systems. This is another joint effort for us. We have a multi-part series focussed on testing, verification and validation. In addition to existing work, we're looking at new ideas around things like red teaming.
Health informatics is a joint effort with ISO/TC 215, really taking us into the health care space, trying to assist them in building out their road map. In addition to the foundational project that we've got, we're also looking at extending the terminology and concepts for the sector, which may serve as a model for other sectors as well.
As well as looking at enabling certification for the health care space.
In terms of functional safety and AI systems, this is the work around enabling functional safety, which is essential for sectors that consider safety important. This is being done jointly with IEC SC 65A.
Natural language processing covers everything to do with language, and it goes beyond just text. This is becoming increasingly important in new deployments.
Last but not least, we have started a new joint working group with the ISO CASCO group, which does certification and conformity assessment, to look at conformity assessment schemes.
Sustainability is a big area, in terms of looking at the sustainability of AI and how AI can be applied to sustainability.
One point is to allow third-party certification and audit in order to ensure broad, responsible adoption. This picture shows how a lot of our standards come together in order to enable this. So ISO/IEC 42001, for example, is built around the same concepts and allows us to do this.
And just quickly wrapping up, to allow time for my co-speakers. Just to sum up, we're looking at the entire ecosystem, we're growing very rapidly, we work with a lot of other organisations, and it is an excellent time to join.
We also run a biannual workshop that typically looks at four tracks. Applications, so one of our recent ones was looking at transportation. We look at beneficial AI. We look at emerging standards. And also what some of the emerging technologies and requirements are.
With that, I hand it back over to the moderator. Thank you very much.
>> DAVID LESLIE: Thank you so much. That was a brilliant presentation, and it shows how much work there is on the concrete side. The devil is in the details and we need to really work on the details.
Also just to say that the Huderia we've just adopted is the methodology. And as we move on, in the next year or so, we'll be working on what we call the "model", which really gets into the trenches and explores some of these areas that you just presented, thinking also about the importance of alignment, and how the kinds of standards you described are aligning with the way that we're approaching this on the international governance level.
So our next speaker is Tetsushi Hirano, the Deputy Director of the Global Digital Policy Office of the Japanese Ministry of Internal Affairs and Communications.
And Hirano sensei will offer us his perspective on AI and its impacts on human rights and governance, both in Japan and internationally.
Tetsushi, the floor is yours.
>> TETSUSHI HIRANO: Thank you, David. I'm very pleased to participate in this important session following the successful adoption of the Huderia methodology. And I sincerely hope this pioneering work will promote this new type of risk and impact assessment and facilitate accession of interested countries to the AI convention.
Speaking of Japan, Japan has been developing its own AI risk management framework since 2016. And this year we released the AI Guidelines for Business, which took into account the results of the Hiroshima Process as well.
There are some similarities and differences between the Japanese guidelines and the Huderia. Starting with the similarities, both are based on common human-centred values and also pay attention to the different contexts of AI life cycles.
While the Huderia provides a model of risk analysis of the application, design and development context, the Japanese guidelines differentiate these aspects from the perspective of AI actors. Namely, the guidelines provide detailed lists of what developers, deployers and users are recommended to do, respectively.
According to our analysis, this is one of the features of our guidelines compared to other frameworks. But despite this difference, the Huderia and the Japanese guidelines go in the same direction in their analysis. So we are hoping to contribute to the further development of the Huderia technical document in 2024 and 2025, which can be deemed as a threshold, and which also provides a step-by-step analysis of stakeholder involvement. And I have to admit the stakeholder involvement process is demanding if some of the steps are to be implemented precisely. But this can serve as a kind of benchmark for continuous development.
The Japanese government is currently working on a future framework for domestic AI regulation, and I'm sure that the Huderia will be one of the important documents to look at, especially when developing public procurement rules, for example, where the protection of citizens is at the core of the issues.
I would also like to mention interoperability, a document on which is also planned for 2025. As we all know, there are many AI risk management frameworks.
There is the reporting framework for the Hiroshima Process Code of Conduct, the EU AI Act, or the (?) risk management framework, to name but a few.
But the interoperability document may highlight the commonalities of these frameworks, as well as their respective strengths, which can facilitate mutual learning between them.
In particular, there are documents that only address advanced AI systems. And we will have to think about what kind of impact, for example, synthetic contents created by generative AI can have on democracy, also in the meetings of the future AI convention. And finally, how to directly address the future role of the Conference of the Parties to the AI convention.
As a pioneering work in this field, the Huderia is expected to become a benchmark. However, it is also important to share knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This, together with the interoperability document, will help interested countries around the world to join this convention.
Thank you.
>> DAVID LESLIE: Thank you so much Tetsushi. And I'll just say that the support of the Japanese government across this process has been absolutely essential to the innovative nature and the success of the instrument. So just a real deep thank you there.
Speaking of which, I now have the pleasure of introducing Matt O'Shaughnessy. The past few years have really marked major strides, one might even say quantum leaps, in the approaches that the U.S. has developed, for instance in AI risk management and governance, with key initiatives like the NIST AI Risk Management Framework and the recent White House memorandum on advancing governance, innovation and risk management for agency use of artificial intelligence. I want to ask you, Matt, if you can talk more about these national initiatives and speak a bit about how they reflect and contribute to emerging global frameworks and shared principles for AI development and use.
>> MATTHEW O'SHAUGHNESSY: Thank you so much, David. And it is great to be here. Even just virtually.
So you asked about the NIST AI Risk Management Framework and the White House Office of Management and Budget memorandum on government use of AI. I'll say a few words giving an overview of each and talk about how they interact with and inform our international approach to AI.
So both of these documents take a similar approach. They are both flexible. They are both very context-aware, directed specifically at how particular AI systems are designed and used in particular contexts. And both aim to promote innovation while setting out concrete steps to help effectively manage risks.
Let me start with the NIST AI Risk Management Framework. This is a general risk management framework that sets out steps applicable to all organisations, whether private companies or government agencies, developing or using AI.
So the AI Risk Management Framework describes different actions organisations can take to manage the risks of all of their AI activities. A lot of those are relevant for respect for human rights, for instance describing more technical and governance steps that can help manage harmful bias and discrimination and mitigate risks to privacy.
It also describes more general actions, things like how to establish processes for documenting the outcomes of AI systems, processes for deciding whether an AI system should be commissioned or deployed in the first place, or policies and procedures that improve accountability or increase knowledge about the risks and impacts the application of that AI system has.
So a lot of these governance-oriented actions address many of the concepts that are set out in the Council of Europe AI convention. And they help lay the groundwork for organisations to better consider the risks to human rights that their AI activities pose, and also to address and mitigate them.
As I mentioned before, the risk management framework is really designed to be applied in a flexible and context-aware manner. That is really important. It helps ensure the risk management steps are well tailored and proportionate to the specific context of use, but also that they are effective and effectively target the most salient risks posed by a particular system and context.
And David, in your introduction you mentioned the importance of the Huderia taking a sociotechnical approach, considering the social context that an AI system is developed in and deployed in. That is really core to the NIST risk management framework, and I think it is really important to making sure that AI risk management more generally is effective and effectively targets the most important risks.
It sets out steps, but it is most effective when deployed in a context-aware manner. And this also supports the development of what it calls "profiles" that describe how it can be used in specific sectors, for specific AI technologies, or for specific types of end-use organisations, whether it is a government agency or a specific private sector entity.
So one example of that is the risk management profile for AI and human rights that the Department of State has developed, which describes specific potential human rights impacts of AI systems.
That can help developers of AI systems better anticipate the specific human rights impacts that their AI systems could have, and helps them tailor the actions described in the risk management framework to the specific impacts that their AI system could have.
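As a rough sketch of how such a profile layers context-specific considerations on top of a general framework, here is a small Python illustration. The four function names are the NIST AI RMF's actual functions, but the human rights considerations attached to them below are hypothetical placeholders, not text from the Department of State profile.

```python
# Sketch of the "profile" idea: generic framework functions combined with
# context-specific considerations. The considerations listed here are
# illustrative assumptions, not quotations from any published profile.
RMF_FUNCTIONS = ["GOVERN", "MAP", "MEASURE", "MANAGE"]

hypothetical_human_rights_profile = {
    "GOVERN": ["assign accountability for rights-impacting decisions"],
    "MAP": ["identify groups whose rights the system could affect"],
    "MEASURE": ["test for disparate error rates across affected groups"],
    "MANAGE": ["provide notice, appeal and remedy mechanisms"],
}


def tailored_actions(functions=RMF_FUNCTIONS, profile=hypothetical_human_rights_profile):
    """Combine generic framework functions with profile-specific considerations."""
    for fn in functions:
        for consideration in profile.get(fn, []):
            yield f"{fn}: {consideration}"


if __name__ == "__main__":
    for action in tailored_actions():
        print(action)
```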
And this is also where tools like the Council of Europe's Huderia, the human rights, democracy and rule of law impact assessment tool, come in. A lot of key steps the Huderia sets out are similar to those in the NIST framework, but the Huderia provides far more detail on actions particularly relevant to human rights and democracy, things like engaging stakeholders to make sure organisations are aware of the human rights impacts their technologies could have, or establishing mechanisms for remedy.
So as Tetsushi mentioned, next year it will be particularly helpful in offering insights for organisations applying risk management tools that already exist but looking for a more detailed reference or resources to help specifically look at human rights impacts in contexts where those are particularly salient.
Okay, so that is our NIST AI Risk Management Framework, which again applies to all organisations and is a very flexible, context-oriented tool.
You also asked about our White House Office of Management and Budget memorandum on advancing governance, innovation and risk management for agency use of AI. This is a set of rules, binding rules, for covered government agencies that use AI. And it similarly sets out key risk management actions that government agencies who are developing or using AI systems must follow in their AI activities.
So the memo was released in March 2024; you can look it up, M-24-10. It was in fulfilment of the AI in Government Act of 2020. And even though it was developed by this administration, it builds on work started in the previous administration, such as a December 2020 executive order called "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government", so it sets out a lot of bipartisan priorities.
This memo again reflects a broader approach in United States AI governance. It is meant to be tailored to advance innovation and make sure we're using AI in ways that benefit citizens and the public at large.
But also makes sure that we as the federal government are leading by example in managing and addressing the risks of AI.
This has a lot of alignment with provisions that were set out in the Council of Europe's AI convention. I'll just give you a quick overview of some key aspects.
It establishes AI governance structures in federal agencies, like chief AI officers or governance boards, that promote accountability, documentation and transparency. It sets out key risk management practices, especially for AI systems that are determined to be what we call "safety-impacting" or "rights-impacting", including steps like risk evaluation, assessment of the quality of AI datasets used for training or testing, ongoing testing and monitoring, training and oversight for human operators, assessments and mitigations of harmful bias, and engagement with affected communities for rights-impacting AI systems.
Again just some key risk management steps that are mandated for government AI systems.
And we see those as really instrumental for managing impacts on human rights. Things like AI systems that are used in law enforcement contexts. Or related to critical government services. You know, determining whether someone is eligible for benefits or not.
All of those things we would label as rights-impacting, and we apply these kinds of key risk management steps that are set out in this memorandum.
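To illustrate the pattern being described, here is a simplified Python sketch of classifying a use case and attaching minimum practices. The categories "safety-impacting" and "rights-impacting" are the memo's, as described above, but the keyword heuristic and the practice list in this sketch are illustrative simplifications, not the memo's actual definitions or requirements.

```python
# Illustrative only: a toy classifier for government AI use cases and the
# minimum practices that would attach to rights-impacting ones. The hint
# keywords and practice wording below are assumptions for demonstration.
RIGHTS_IMPACTING_HINTS = ("law enforcement", "benefits eligibility", "immigration")

MINIMUM_PRACTICES = [
    "complete an AI impact and risk assessment",
    "assess the quality of training and testing data",
    "test and monitor the system on an ongoing basis",
    "train and oversee human operators",
    "assess and mitigate harmful bias",
    "engage affected communities",
]


def classify_use_case(description: str) -> str:
    """Flag a use case as rights-impacting based on simple keyword hints."""
    description = description.lower()
    if any(hint in description for hint in RIGHTS_IMPACTING_HINTS):
        return "rights-impacting"
    return "other"


def required_practices(description: str) -> list[str]:
    if classify_use_case(description) == "rights-impacting":
        return MINIMUM_PRACTICES
    return ["standard agency risk management"]


if __name__ == "__main__":
    print(required_practices("AI triage for benefits eligibility decisions"))
```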
So those are kind of our two key domestic policies that set out AI risk management practises.
In terms of the international implications of these, both were informed by international best practices, looking to work done in other countries and international organisations. The NIST AI Risk Management Framework had extensive international multi-stakeholder consultations.
And it is version 1.0 right now and intended to be updated over the years. So there will be, you know, kind of continuing conversations between these domestic efforts and best practises set out and developed internationally.
And in turn, both of these domestic products inform our international work. Both the Council of Europe's Huderia and recent OECD projects have drawn from the AI Risk Management Framework, as have standards developed by standards organisations. And several countries are continuing to work with NIST to develop crosswalks with their own domestic guidelines.
Both of these lay the groundwork for international work on safe, secure and trustworthy AI, whether in the Council of Europe's AI convention, the U.N. General Assembly resolution on AI, or the Freedom Online Coalition joint statement on responsible government practices for AI. And we're looking forward, over the next couple of years, to continuing to see how this conversation evolves as the conversation on AI risk management continues to develop.
I'll end there and turn it back to you David. Thanks again.
>> DAVID LESLIE: Thanks, Matt. And also just to say that Matt's presence in Strasbourg has been a huge boon as we've tried to develop the Huderia over the months and years. So also a thank you for that continuing commitment to that process. I think it's been really important to have everybody speak and share insights in the room at the Council of Europe.
So I'd like to now introduce Clara Neppel, Senior Director at IEEE Europe. At the forefront of emerging technologies, IEEE is one of the world's largest technical organisations. It has been instrumental for a number of years now, and has always had a strong focus on risk management. The IEEE's work on risk management provides practical tools and methodologies to ensure that the AI systems being developed are robust, fair and aligned with societal values.
So Clara will share with us insights into that work, and into how it is contributing to the broader AI governance ecosystem.
And I think you are there Clara in person. So go ahead.
>> CLARA NEPPEL: Thank you David. Also for the kind introduction.
Yes, we were also very active in the Council of Europe, as well as in the OECD and other international organisations. And maybe one critical aspect here is that IEEE is not only a standard-setting association but also an association of technologists all over the world. That permits us to be quite early in identifying risks. And maybe this is also the reason why we were among the first to start working on what we called Ethically Aligned Design in 2016, which permitted us to come up with some concrete instruments, like standards and certifications, quite early.
And what I would like to share with you now is really some practical lessons learned, which I think is important when we are discussing how to implement human rights in technical systems, in AI systems.
So the first lesson learned is really that we need time and we need the stakeholders. Even if we think that some of the concepts like transparency or fairness are already quite defined, you might be surprised. I'm also co-chair of the OECD expert group on AI and privacy, and both ecosystems, let's say, have a very clear understanding of what transparency means, or what fairness means.
But this is very different for the privacy professionals, for instance: transparency is about transparency of data collection. And on the AI expert side it is really about how, let's say, the decisions of the systems are made understandable. So this is just one example.
And so, let's say, one of our most used standards right now, IEEE 7000, took this time. It took five years to be developed, and in 2021 we had the standard published. And since then there are a lot of lessons learned that we would like to share, because it has really been deployed worldwide.
So the second lesson I would like to share with you is that we need skills. The skills that we need are not only related to the technology but also to ethics.
And we were investing in this right from the beginning. We have not only systems certification but also a personal certification programme, a certification of assessors. And we can say now that we have more than 200 assessors worldwide that are certified by IEEE.
We have a training programme which reaches from Dubai, as I just heard today, to South Korea, and obviously across Europe. So we have this worldwide network of assessors that also, let's say, have a certified understanding of what human rights and ethics are.
And third, and I think this is the most important one, is that once we have these standards and instruments, and we have the skills and the people that can implement them, we can build very strong ecosystems. And I think that without that you are still working in isolation. You need the ecosystems.
In Austria, because the European office is based in Vienna, we have, starting from the city of Vienna, everything from public services to data hubs in (?), for instance, that are based on IEEE 7000, which means that already the data governance, let's say, is according to ethical principles.
And then all the applications that are running on this data hub are also required to fulfil the same requirements. And this permits us to have the ecosystems which are, in the end, let's say, the foundation of what we want to achieve with human rights. As far as the Huderia methodology is concerned, the standard was a human-rights-first approach. And this was also acknowledged by the Joint Research Centre of the European Commission, which made an analysis of existing standards against the AI Act and acknowledged that IEEE standards are very close to what is required with respect to human rights.
It is about stakeholder engagement, if you want. So it is about the recipe, about how to engage stakeholders, how to understand the values of the stakeholders. And I would like maybe to bring in here also an aspect which I think is very often not seen. Very often we are focusing on transparency, on fairness and so on.
But there are human rights that are not in existing frameworks, like dignity. And we have in IEEE 7000 all these aspects, all these values, being analysed. And once they are at risk, because it is a risk-based approach, then there is a clear methodology on how to mitigate those risks by translating them into concrete system requirements or organisational measures.
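A minimal Python sketch of that value-to-requirement flow is below. It mirrors the sequence just described (elicit values, analyse where they are at risk, translate risks into requirements), but the value names, risk descriptions and requirement wording are hypothetical illustrations, not content from IEEE 7000 itself.

```python
from dataclasses import dataclass


@dataclass
class ValueConcern:
    value: str        # e.g. "dignity" or "transparency", from stakeholder elicitation
    risk: str         # how the system could put that value at risk
    at_risk: bool     # outcome of the risk analysis


def translate_to_requirements(concerns: list[ValueConcern]) -> list[str]:
    """Turn value risks into concrete system or organisational requirements.

    Hypothetical illustration of the value -> risk -> requirement flow
    described above; not the content of the standard itself.
    """
    requirements = []
    for c in concerns:
        if c.at_risk:
            requirements.append(
                f"Requirement addressing '{c.value}': mitigate '{c.risk}'"
            )
    return requirements


if __name__ == "__main__":
    concerns = [
        ValueConcern("dignity", "automated rejection without human review", True),
        ValueConcern("transparency", "decision logic not explainable to users", True),
        ValueConcern("fairness", "no disparity detected in pilot data", False),
    ]
    for req in translate_to_requirements(concerns):
        print(req)
```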
So this is about the design phase. And this is complemented by a certification method, which also looks at existing systems and assesses them along the different aspects of transparency, accountability, bias and privacy.
Last but not least, I would like to mention that we are now also in the process of scaling, let's say, the certification system. We are working with VDE from Germany and Positive AI from France to develop a trust label, an AI trust label, which would include the seven aspects of human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, and societal and environmental well-being.
Just on the last one, environmental well-being: we just started a working group on the environmental impact of AI, to clearly define the metrics that are being used for environmental impact, including also inference cost, and not only energy but also, for instance, data usage.
We are doing this together with the OECD as well. So that is a first overview of what we are doing. Thank you.
>> DAVID LESLIE: Thanks, Clara. It is really important to note here as well that making these approaches usable for people is such a priority. And one of the things that lies ahead of us is really making the range of human rights that are of concern in risk management accessible to people, and being able to translate them so that people can actually pick up the various approaches to risk management and really, if you will, operationalise a concrete approach to understanding and assessing the impacts on those rights.
So I'll now introduce Mr. Myoung Shin Kim. LG AI Research really focuses on innovation in AI that is responsible and that is developed and deployed safely and ethically. And an important dimension of that is risk governance, addressing bias mitigation, and ensuring transparency and accountability.
So, Mr. Kim, I'm wondering if you could share LG AI Research's perspective specifically on AI risk governance. How does your organisation approach managing these risks? And what do you believe an ideal framework for AI risk governance should look like?
>> MYOUNG SHIN KIM: Right. Thank you very much for inviting me to this meaningful discussion.
Today I'll share how LG AI Research is translating our AI ethics principles into practice, focusing on AI risk governance.
Let me begin with a brief introduction about LG AI Research. Established four years ago, our mission is to provide (?) and capabilities to (?). One of our landmark achievements is the development of a generative AI model capable of understanding and creating content in both Korean and English, on par with global benchmarks, demonstrating its competitive edge in the AI landscape. Just last year we released an open source language model, contributing to the development of the AI research ecosystem. LG AI Research places a strong emphasis on adhering to AI ethics throughout the entire life cycle of AI systems.
We have five core values: humanity, fairness, safety, accountability and transparency.
But more important than principles is putting them into practice. So we leverage different strategic pillars to ensure adherence to our AI ethics principles, namely governance, research and engagement.
Let me explain each in detail.
First of all we conduct an AI ethical impact assessment for every project to identify and address potential risks across the AI life cycle.
It consists of three steps: first, analysing the project characteristics; then setting problem-solving practices; and finally verifying the research and documentation.
When risks or problems are identified, we establish specific solutions, assign responsibilities to designated personnel, and set deadlines for resolving the issues. The entire AI ethical impact assessment process and its outcome are automatically documented and attached to the final report when the project closes in our project management system.
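As a rough illustration of that kind of workflow, here is a small Python sketch: a three-step assessment record with issues assigned to designated owners and deadlines, and a report produced at project close-out. The field names and structure are hypothetical, not LG AI Research's actual project management system.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class IdentifiedIssue:
    description: str
    solution: str
    owner: str            # designated personnel responsible for the fix
    deadline: date


@dataclass
class EthicalImpactAssessment:
    project: str
    characteristics: dict = field(default_factory=dict)          # step 1: analyse the project
    issues: list[IdentifiedIssue] = field(default_factory=list)  # step 2: solve problems
    verified: bool = False                                        # step 3: verify and document

    def report(self) -> dict:
        """Produce the record attached to the final project report at close-out."""
        return {
            "project": self.project,
            "characteristics": self.characteristics,
            "issues": [vars(i) for i in self.issues],
            "verified": self.verified,
        }


if __name__ == "__main__":
    eia = EthicalImpactAssessment(
        project="example-generative-model",     # hypothetical project name
        characteristics={"data_sources": "licensed corpora", "users": "general public"},
    )
    eia.issues.append(
        IdentifiedIssue(
            description="possible misuse for impersonation",
            solution="add output provenance markers",
            owner="safety team lead",
            deadline=date(2025, 1, 31),
        )
    )
    eia.verified = True
    print(eia.report())
```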
A unique aspect of our approach is the involvement of a cross-functional task force. This brings together researchers in charge of technology, business and AI ethics, each contributing their specialized knowledge and diverse perspectives.
From that perspective we pay special attention to (?). We check which groups are included among the stakeholders affected, and whether there is any possibility of intentional or unintentional misuse of the AI system by users.
Additionally, we educate data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals, providing guidelines to respect, protect and promote human rights during the data production process.
Generative AI models sometimes produce inaccurate information, which might lead to damage to someone's reputation due to misinformation. To address this issue, we have developed AI models that generate answers based on factual information and evidence. Additionally, we're constantly researching unlearning techniques to (?) personal information that was unintentionally used during the training process.
And considering that AI is ultimately created by humans, I think it is also important to assess the level of AI ethics awareness and human (?). We conduct an AI ethics survey to assess and improve adherence to AI ethics principles.
I am personally pleased to see that the gap between awareness and practice has narrowed this spring compared to last year. Additionally, we hold biweekly meetings to boost interest and participation in AI ethics among researchers.
For AI ethics to take root in our society, I believe citizens' interest in AI must improve.
Additionally, if high-quality AI education is not provided, existing economic and social gaps may widen. We provide customized AI programmes to over 40,000 youth, college students and workers annually, for free. And all curricula include AI ethics, to help citizens grow into users who are also critical watchdogs in the AI market. And our efforts are expanding beyond Korea to the global level. We're collaborating with UNESCO, targeting researchers, developers and policymakers, and the final contents will hopefully be available worldwide between 2024 and 2026.
Finally, we published a report on lessons learned from our AI principles. It illustrates how we are implementing AI ethics principles and South Korea's AI ethics guidelines. We hope this can serve to help others in their approach.
The next report is scheduled to be published at the end of January, next month, and will be available on our home page. So if you are interested, please check it out.
Thank you for your attention.
>> DAVID LESLIE: Thank you so much for all that great information, Dr. Kim.
Now, in the interest of time, I'm going to go right to introducing Heramb Podar, a member of the policy research group at the Center for AI and Digital Policy, CAIDP. He is also Executive Director of Encode India, and an advocate for frameworks that prioritise transparency, accountability and fairness in the production and use of AI systems. CAIDP is an organisation that is also deeply engaged in policy analysis and stakeholder collaboration.
It emphasises the need to safeguard human rights and democratic principles in the face of rapid technological transformation.
So, Heramb, given your work with CAIDP, could you share some thoughts on how NGOs can contribute to creating good governance guardrails for AI?
In particular, what do you see as the critical steps for ensuring that AI systems are designed and deployed in ways that uphold societal values and human rights? And you are there in the room, if I'm not mistaken.
>> HERAMB PODAR: Yes I am. I hope you can hear me.
>> DAVID LESLIE: Yes.
>> HERAMB PODAR: Thank you for the opportunity to speak. CAIDP has been a very vocal advocate. All the work we do is grounded in policies to uphold human rights, democracy and the rule of law.
Ultimately for NGOs, it is all about advocacy, with engagement through due process and public consultation opportunities which might come up, and bringing in as much of a public voice as possible. Just a few minutes ago, my co-speaker was speaking about how all rights are not often covered. Sometimes there are contexts which are overlooked, unfortunately. So really, CSOs and NGOs can be that bridge between the on-the-ground implementation, or the on-the-ground risks and how the public is feeling, and the policies being developed, whether at the Council of Europe or the NIST frameworks and so on.
In terms of specific actions CAIDP has taken, we have been vocal in support of the Council of Europe AI treaty. We think it prevents fragmentation and aligns everyone's national policies to global standards. And we recently released statements to the South African presidency of the G20 to ratify the treaty, and to the U.S. Senate to ratify the treaty.
We also bring in voices: one of our key members in our global academic network is a youth organisation focused on AI risks, making sure that AI works for everyone and that AI is safe for future generations, so that they do not inherit any kind of malicious AI that might impact human rights.
Quickly jumping on to specific actions in terms of design and development, that was a very interesting question, by the way. At CAIDP we have something called the Universal Guidelines for AI; we just recently celebrated the 6th anniversary of the UGAI principles, as we like to call them. And what we would like most is clear guidelines in whatever policies governments put out, in terms of use cases not based on scientific validity, and in terms of use cases that might be adversely impacting certain groups or impacting human rights, democracy and the rule of law.
We see early examples of high-risk use cases, for example in the EU AI Act or (?) and so on. What would be exciting to see is having impact assessments, having proper transparency and (?) across the AI life cycle, from design to decommissioning.
Ultimately, having protections. We see an increasing kind of risk from developing better and better AI systems, and (?) for certain guardrails and certain protections so people can speak their mind. And in specific use cases, like autonomous systems, having termination obligations, which is another one of the cornerstones. So having (?).
We constantly track a lot of states: we release the Artificial Intelligence and Democratic Values report on an annual basis, the world's most comprehensive coverage of national AI policies, and we rank countries according to these metrics.
Something we saw, interestingly, was that almost all countries have adopted the UNESCO Recommendation on the Ethics of AI, but countries really lag in implementing it. And there is a global digital divide, with some countries particularly playing catch-up. Countries are not getting to the (?) methodologies of UNESCO, which is our key indicator for implementation.
So again, coming back to the original question, NGOs have a role to play in making sure that countries, companies and other sectors not only make these commitments but also follow through with action, action that is meaningful and not just rooted in words which might have differences of interpretation, and in actually having some sort of grounded principles or grounded metrics.
Yeah and I'll end this here.
>> DAVID LESLIE: Thank you so much, Heramb. It is really great to hear that this needs to be a multilateral effort, and that NGOs need to play a central role as we sense-check and develop the governance instruments.
I'll just say it has been amazing to hear about all of this innovative work that's been done in the standards development organisations and at the state level. The work of the Council of Europe, I think, has been out ahead on many things. And hearing about all this innovative work really just reminds me that we talk a lot about 'move fast and break things'. Right?
But on our end of things, hearing about all this work, we need to think about 'move fast and save things'. We need to be out in front of some of the ways these technologies are developing.
So to close here, I want to just maybe turn back to Smara. And ask if you had any closing observations.
>> Yes. All I would say is it is so fantastic to hear from everyone who has joined us here today. I think so many excellent points about stakeholder engagement, the role of civil society being a part of it. Being ahead of the curve. And identifying some of those risks. Skills development as well was mentioned.
I think all of this develops a really good and strong ecosystem. And using tools like the Huderia methodology in this space to identify risks and introduce impact mitigation measures, then we can, as you said, David, move fast and save things.
So I think on that note, I'll send it back to you.
(Scheduled captioning ends in three minutes)
>> DAVID LESLIE: Wonderful. Just again, one more thank you to all of our speakers. We are striving to finish on time, and thank you so much for all of the important comments and information that were shared today.
So I wish you well from the southeast of England, and I hope those of you who are physically there in Riyadh have a nice time at the rest of the IGF.
Take care.