The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: For people who are here take one of those microphones, headsets. And choose channel number 5. I think it works, right. I can hear you. Can everybody hear me? You need to choose channel number 5 and check the volume on the side. So to have the volume on.
(Speaking Non-English Language).
Dominic.
(Speaking Non-English Language).
Permission to Dominic Regester. Okay. Can you hear me now, yeah? Steven? Okay. Is it okay? No? Can you hear me now? Good morning, everybody. My name is Vicky Charisi. I'm here to moderate this session on Governance for Children's Global Citizenship and Education. It seems there are not too many people in the room, so probably we can make it a little bit more interactive. We can have discussions, et cetera. Before that -- okay, Dominic is also here.
I will wait for Steven also. Just one moment to check his headphones. Okay. Yeah. So the topic is Global Citizenship Education and Artificial Intelligence.
We know it's a powerful tool for children all over the world to develop skills for cross-cultural understanding and collaboration, in order to solve global challenges in the future.
At the moment education is focused mainly on local challenges, so children are prepared to understand their own cultures, et cetera. With our project and with this workshop we are trying to investigate how global citizenship might help future societies.
Of course, to do that you need AI applications that support this in an appropriate way, and to consider how AI is governed, especially when it comes to topics for children.
So I would like to introduce our online moderator, Dominic Regester. He is director of education at Salzburg Global. Good morning, Dominic. Can you hear us?
>> REMOTE MODERATOR: I can.
>> MODERATOR: I would like to introduce the organizers: Steven Vosloo from UNICEF and Roy Saurabh from UNESCO. Later we will have interventions and also some representatives from youth, in order to hear their own opinions, the work that they are doing at the moment, and how they see global citizenship education and Artificial Intelligence when it comes to children.
So first I would like to introduce our first keynote speaker, Satoshi Shigemi. He is the president of the Honda Research Institute in Japan, one of the leading figures globally when it comes to robotics, and one of the first to introduce robots, many decades ago, in Japan. He will present a project on a platform that is currently under development by HRI, and the children's collaboration and engagement around it. Satoshi Shigemi, the floor is yours.
>> SATOSHI SHIGEMI: Thank you very much for the kind introduction. Slide. That's okay. Good morning, everyone.
I'm Satoshi Shigemi, from the Honda Research Institute in Japan. Today I will talk about our research on robotics technology with a focus on cross-cultural understanding. First, a note on terms: throughout my talk, HRI means Honda Research Institute. It's not the same as human-robot interaction.
And then I will give an overview. HRI is a research center within the Honda group, and I will describe its role in Japan and globally.
First of all, with this diagram: HRI has three research institutes, in Japan, Germany and the United States. The HRI global organization was established to connect companies worldwide. Each institute focuses its research on customers and the unique characteristics of its region.
In Germany, the research is focused on optimizing changing commuting. In the United States, we are focused on research on behaviour for automated driving. In Japan, our research focuses on human understanding and system observation. Prototypes are developed by combining these three elements.
This is the HRI policy. HRI explores new and challenging technology areas and brings them into Honda technology as fast as possible. Our mission is grounded in the concept of innovation through science, with the goal of a society whose benefits are shared by humanity. We believe that working at the cutting edge of technology requires a strong foundation of scientific knowledge, and we are convinced that this is the area where it leads to science innovation.
I believe that the next generation of AI is AI for society, and we focus on the society loop to make it more accessible. We propose to target the intelligence of a group rather than the individual intelligence of a robot, car or device. This integration operates not individually but in a continuous loop. We believe that integrated machines are capable of functioning not only on complex issues but also of making decisions whose benefit goes not just to one person but to the groups and the people that surround them.
Let me explain the Honda vision. We are conducting research toward the 2030 Honda vision. This vision centers around the idea of serving people with the joy of expanding their life's potential. It is desirable to have a society in which relationships between individuals, groups and countries are well balanced: a harmonious hybrid society. To achieve this, it is essential to support human development and growth, with tolerance and respect for humanity and for diverse individuals, groups and different cultures. This includes systems for welcoming new members of society. Understanding is essential. Let me introduce a brief video.
(Inaudible)
As you saw in the video, to be in a harmonious hybrid society, the next big step for AI systems is to move from being a tool to becoming a partner. To achieve that, we aim to develop features and provide long-term support in a cycle of care. COVID-19 affected human relationships in the community. To address this issue we need a system that helps humans 24 hours a day, 365 days a year, and is able to be a social enabler. In other words, its role is to help give people freedom in their lives. Currently, a Honda product is used only a few hours in the day. I want to change that, and I would like to use new technical research so that Honda's products can support people 24 hours a day.
On the next slide, let me introduce the Haru robot, which is exciting, as one of the designs for a harmonious society. Haru is a mediator: it provides psychological support through AI technology for group interaction and relationships. Its purpose is to understand social and group dynamics, work towards intergenerational and intercultural harmony, support diversity, and avoid social conflict and division. The aim is to understand group dynamics and provide a nuanced understanding of relationship building.
Now we want to use the robot for harmonious fulfillment 24 hours a day. To achieve this transformation into a partner with cooperative AI, we need exploratory activities: to treat the human being in a social way, and to foster active trust between humans and society; to promote good communication and interaction across countries. As a first step, we have started experiments in a high school in Japan.
The second focus is in a hospital in Spain. In particular, in scenario 2 I want to focus on the introduction of Haru to children.
Let me briefly explain how Haru is utilized in scenario 1. As a social robot, Haru serves to bridge the cultural gap between students from different countries by providing information that is unique to each country. For example, Haru takes part in school events such as a school sports day, and in cultural activities as well. These activities help children understand and respect diversity. Please show the video of this activity.
(Inaudible)
Next I will show how we are developing the social robot for cross-cultural mediation. We conducted various steps not just to ensure the technology is meaningful, but also to responsibly safeguard children in the process.
I will just show the three most important steps. The first step: around three years ago we collaborated with UNICEF on a pilot study on AI for children, adopting the UNICEF guidance in designing our systems and the technologies used. For example, following guidance number 4, we built a two-week protocol ensuring that data and processes are handled within the overall systems. The kind of research we are doing right now raises questions, hence we need to work within some kind of framework, and these are on the way.
Step number 2: we conducted various activities with students on the kind of interaction and the content they want the robot to provide. For example, we let the children programme the robot's behaviour and choose the kind of language they want it to use, and we put this into our systems. In this way we ensure that children participate in every technology we design.
And in the last step, we gathered experts in the fields of social science, humanities and education. This provides cultural guidance for children and their well-being, and useful input as we look toward a cross-cultural system for education with diverse cultures for children, using robotic systems. I will show one more video of this activity.
(Inaudible)
Thank you for your kind attention. Thank you very much.
>> MODERATOR: Thank you, Satoshi, for the presentation. We understand this is an ongoing project and that it's not on the market yet. But we see how a company, while developing their product, engages with current policy guidance for children and includes children in their design process, et cetera, which is something that we appreciate. And of course I understand the rest will come later.
The connection with global citizenship in this project, and why we invited this work to be presented in this workshop, is that we consider this cross-cultural interaction for children quite important for global citizenship education. Often teachers work in their local environment, so we want to explore things that are in different settings, different cultures, different socioeconomic settings, et cetera. Thank you for this.
I think we have time for one question, if there are any from the audience. Now we need to give you the microphone. There. You can keep your headphones if you want. And if you want, introduce yourself first.
>> Hello. My name is Tony. I'm from the small island nation of Samur. I see you are touching a lot of bases in the big nations; I was wondering, is there any interest or any way that small island nations could be involved in this project? Thank you.
>> SATOSHI SHIGEMI: Thank you very much for your comment. I think we will expand to different countries and to bridge the cultural divide. Thank you.
>> MODERATOR: Thank you. We will go over to Steven, I think. Steven can you introduce yourself.
>> STEVEN VOSLOO: Thank you. Can you hear me okay? Good morning, everyone. Thank you for joining us. I'm Steven Vosloo from UNICEF, in research and foresight. Thank you for the opportunity to be here.
I will speak a bit later, but now I'm introducing Athanasios Mitrou. He is a student preparing for university studies; he just finished high school. He is interested in studying digital technology and engineering. Athanasios uses AI as a student and is also interested in the ethical aspects. So we are looking forward to hearing more.
>> ATHANASIOS MITROU GEORGIOU: Thank you very much, Steven. I really appreciate this workshop and the discussions, and the chance to share my thoughts on AI for young people. As some of us know, AI is changing the world in ways that are impossible to ignore. For young people, AI brings exciting opportunities but also big challenges. First I will talk about how AI can shape our education choices and career paths by providing access to information, and secondly how social media keeps us connected with teenagers from all over the world.
On the bright side, for those of us who have just finished school like me, AI is opening up new areas of study and careers in fields such as data science, machine learning and AI ethics. It provides us tools that make our work faster and more efficient, whether analyzing and solving problems or creating and exploring our own ideas.
AI even empowers young entrepreneurs to build their own businesses with fewer resources and more innovation than ever. But we have to keep learning and adapting. For example, now that I'm preparing for university studies I often use ChatGPT to help me code better and give me feedback on my coding, and that would be way more difficult without it. However, AI raises important ethical questions, like how to avoid bias in the systems and how to protect privacy and jobs in society as a whole. It's up to us to face these issues, and that means developing skills AI can't replace, such as creativity, emotional intelligence and critical thinking.
And by staying curious and resilient we can turn AI into a tool that works for us and not against us.
Social media, another huge part of our lives, also shapes the world, especially with platforms like Instagram and TikTok that connect us to movements we are interested in. It makes it easier than ever to advocate for change and be part of important conversations. But they have their downsides too. Algorithms often trap us in echo chambers, reinforcing what we already believe and making it harder to see other points of view. On top of that, misinformation spreads fast, and the pressure to react quickly can lead to shallow thinking.
To make the most of social media we have to be critical of what we consume and use it responsibly to amplify positive change, in combination with measures taken by governments and big tech to keep social media safe for all of us. AI and social media need to do more to meet the needs of young people. Imagine if AI had features designed especially for us, like learning tools adapted to our age and interests, or maybe safety modes that block harmful content while encouraging creativity and curiosity.
These tools could teach us to evaluate information critically, keeping us safe and informed while we explore and grow. Finally, many of us are privileged to learn how
(Audio Difficulties)
In combination, AI could be very useful to prepare young people to shape better societies. AI is a powerful tool, and it's up to us to decide how we use it. With curiosity, creativity and a commitment to making a difference, we can ensure that AI shapes a brighter future for all of us. Thank you.
(Applause).
>> STEVEN VOSLOO: Thank you very much, Athanasios.
So let me get the clicker. At UNICEF we are very interested in children's rights -- how do we protect and empower children?
This is an area I lead. A few years ago I led a project called AI and children, or AI for children, and we developed guidance, which I will touch on in a moment, on how AI can be developed in a way that upholds children's rights and protects and empowers them.
So we started by engaging young people around the world -- this was a workshop in São Paulo in Brazil -- on how they use AI, like Athanasios was saying. Children -- anyone under 18 -- and youth are the biggest online group out of any age group, and they use technology probably more than anyone else. But technologies aren't really designed with them in mind, and that needs to change.
So we developed -- there we go -- this guidance on, like I said, how AI can be more protective and empowering for children. And there's the link; I really encourage you to use it. It's in English, Spanish, French and Arabic, and there are resources for parents, teachers and caregivers even.
Let me go to the next slide. These are the key points in the guidance. You will recognize these in the AI world: things like fairness and non-discrimination, or children's data and privacy. These are not new issues in the world of AI, but we really wanted to focus on what it means when we talk about a child's data, which is different to an adult's data, or how we provide transparency, explainability and accountability for children, different from adults.
And of course, as we know, children's data is different. Children's understanding as they develop -- their cognitive development -- is different to that of adults.
So things like AI explainability: even for adults it's difficult; for children it has to be much simpler. We need much simpler ways to provide AI interactions and experiences that are at the level of children and their caregivers.
So we worked closely with eight organizations around the world, including the Honda Research Institute and the commission working with Vicky. We learned from that and we appreciate that collaboration. But we also work with companies and with governments. And all of it sits on top of children's rights, which are basically rights to protection, to provision and to participation.
So we published that in 2021, and we are at the end of 2024. What has changed since then? I raise this for two reasons: one, we need to constantly be aware of a changing technological landscape and the social landscape around it.
And secondly, we are thinking: if we had to write this guidance today, or if we had to release the guidance now, what would we do differently? When you do something like this ethical guidance, you make it as future-proof as possible. In many ways the principles have not changed: your data still needs to be protected, you still need your privacy, and you still need to be included in the process. But how has that changed? What are the issues we should be thinking about? I will list a few today, and I would love for you to think about what you think UNICEF or others should focus on in guidance coming into 2025.
Quickly, what has happened since 2021: Generative AI, ChatGPT -- we know about that. Huge developments by governments and companies. The minister yesterday gave a great opening speech about the global divides between those who have AI and power and those who don't.
Saudi Arabia itself, I was reading, invested in a $100 billion AI centre. AI advances from creating podcasts to clinical diagnosis to climate modeling: things are moving quickly. And we also see a focus from governments, with summits in the U.K., in Seoul, Korea, and in February in Paris, focusing on responsible AI.
So let me quickly include some statistics -- sorry, some findings -- from a recent research survey done in the U.S., at the top.
This was with 1,000 teenagers aged 13 to 17, and this is interesting: "I fear I won't have a job when I'm old enough to work," at age 17. When we consulted children around the world in 2020, none of them spoke about jobs except those in South Africa, where I'm from, where there's high youth unemployment. Those in the U.S., those in Sweden, those in Chile didn't talk about jobs.
Now it's coming up in the U.S. But the right-hand one is interesting: "I never know if a pic I'm looking at is AI or not." So the issue of trust is something that is changing for all of us as we experience more AI media.
Misinformation comes up: 59% of teens are concerned about this. And almost half of teens use AI tools several times a week or more. This is the U.S., so this is not the same for all countries. But some of the data that we are getting is that children, even in global south countries -- and I will talk about that in a moment -- are also using AI systems, 40% or so once a week. So it's not just a rich country or developed world phenomenon. That's what we see.
The last point -- the bottom stat -- is interesting. It's from a study done by FOSI last year with teenagers in the U.S., Germany and Japan. They asked: what are the top two ways you would use AI in the future? And half of the teenagers in Japan said for emotional support.
So this is very interesting. You may have different views on that, but it's a very interesting stat. This is how some teenagers will look to AI.
So I will just quickly run through some of the issues that we see. This is where guidance is needed and engagement is needed with young people on how we shape AI for children. Skills have come up again and again. This is not new -- we covered this in our policy guidance -- but the world is changing in terms of what kind of skills you need today and in the future: life skills, skills for work. We don't know what the future looks like, so how do we better prepare for or anticipate what those skills are and therefore change education systems today?
How do we teach responsible use of AI? We can't debate anymore whether children should use AI or not -- it is happening. How do we teach responsible use and provide protections and empowerment as needed?
And how do we use AI to support education?
So the second one, AI generated child sexual abuse material. This was something that was not on the radar at all in 2020.
But it is on the rise, and deepfakes -- non-consensual intimate images and videos -- are being created and shared. The numbers are still quite small but rising quickly. It is a real problem for the victim, but it is also a real problem for law enforcement as they try to identify real victims who are now being mixed in with the manipulated images.
AI relationships are something interesting, and they have been coming up more and more. I'm not saying there is anything wrong with AI relationships, but we are seeing news stories of AI relationships gone wrong, in a sense. There are two cases in the U.S. now where families are suing tech companies, alleging that these AI interactions either suggested the children do harm or caused the children to do harm to themselves. So it's something to really watch in terms of what kind of protections we need.
Environmental impacts of AI: this is something, again, that was not on our radar. When we wrote the guidance in 2020, it was really just to say AI has an environmental impact, but really AI can help combat climate change. But we have seen the data centers that consume a lot of energy to build and maintain, the minerals that go into AI systems and servers, and the e-waste that gets produced. This is something that we really need to watch in the future.
I just raise this because several works on climate change and children really show that climate change impacts children more than it does adults. So we really need to watch this: children also have a right to a clean, sustainable environment.
And lastly, this mis/disinformation point. Again, we did not look at this just three years ago; it came up in the quote earlier. But the use of AI for mis/disinformation -- AI being used to mislead or create distrust -- is on the rise. I will just do two more quickly.
The AI supply chain: again, this is not something we focused on in a big way, but it's something that keeps coming up -- that we need to improve the working conditions. These are digital products, and like all products that children use, we need to look at the supply chain and the labour practices. And there are stories of children potentially being used for content labelling in poor conditions. So that obviously has to change. Sorry, let me stop. Basically, just to say thank you for this.
I would love to hear from you about how we can create a space where AI is more child-centered. We have some data coming out in the next few months from a project we are doing called Disrupting Harm, where we asked children in 12 countries -- not the usual U.S. and U.K., but in Morocco and Colombia and Mexico -- how they use AI and what they are worried about.
We are looking forward to sharing that with you. Thank you.
>> MODERATOR: Thank you, Steven. Amazing. In fact, on what you mentioned about the climate change issue: I was at a conference last week where a foundation announced work on climate change in global citizenship education. So I think there is a call for action on this topic, and I'm very much looking forward to seeing the next steps.
But I think we have time also for one question for Steven. Yes, please.
>> It's not a question, just a comment. First of all, let me introduce myself: I'm from Myanmar. I want to comment that when we are talking about AI in education, especially for children, it's important not to forget children from the developing countries and from underrepresented groups as well.
My comment is that it's better to include them, and also to try to reach out to schools in the developing countries and include them in the project. That would move forward inclusion and diversity, and also consider the future of AI for the whole world.
>> MODERATOR: Thank you so much for raising this so clearly. Steven, I don't know if you want to comment on this at all but -- totally.
>> STEVEN VOSLOO: Thank you for that comment. It's really well appreciated. And you know, as we --
>> MODERATOR: I can't hear Steven.
>> STEVEN VOSLOO: Can you hear? Okay. As we know, the challenge with AI is that it's concentrated in a few countries and a few companies, and we really need to get those opportunities to the developing world and the global south.
In Africa, there was an IMF projection that by 2030 there will be 230 million jobs that will need digital skills, which would include AI skills, I would think, the way the world is going. So this really is an issue of how you skill up children in the global south and also use AI to improve education in an already challenging world. Thank you for that point. That's really well taken.
>> MODERATOR: Thank you. And this gives me the opportunity to give the floor to Dominic, who will introduce our next speaker. Dominic, the floor is yours, and she can come on camera as well.
>> REMOTE MODERATOR: Thank you, Vicky. It's a pleasure to introduce Amisa Rashid. Her work is on resilience, mental health and coherence. She is the founder and director of a foundation dedicated to fostering resilience through community-based mental health innovations and approaches.
At Salzburg Global we had the chance to work with Amisa on a project, which is one of the papers that fed into the design of this session. So I'm very excited to welcome Amisa to the stage and to hear what she will say. After she talks, there will be time for Q&A.
But Amisa, thank you very much for doing this.
>> MODERATOR: Just to mention, Amisa is based in Kenya.
So thank you, Amisa.
>> AMISA RASHID AHMED: Thank you, everyone. I hope you can hear me.
>> MODERATOR: Yes, very clearly.
>> AMISA RASHID AHMED: My name is Amisa. I will start with a story about my involvement with AI. I started in an organization as a board member, and this organization was handling a big case: a big international corporate organization was being sued in Kenya. I don't want to say the name.
The content moderators actually felt -- apologies for that. So the content moderators were suing this company because of exploitation. What happened is they worked with this company, but there were no guiding protocols for the young people who were workers, content moderators and creators, and some of them were building content for some AI tools.
So it was easy for them to be laid off, and some of them developed mental health conditions, because as content moderators they were exposed to very brutal images as they were working
(Audio Difficulties)
This company, these young people are suing this big company
(Audio Difficulties)
Losing the battle at that moment because
(Audio Difficulties)
And these are just young Africans who have -- so the exploitation was there. So my work involved
(Audio Difficulties)
To support their mental health, but also guiding them on how we can come up with better policies in regards to AI and the transition we are looking into. So that is really what got me interested in AI, and especially with children, because the case is still continuing for these young people.
And it is also disadvantageous, because a lot of governments are not supporting the young people. But these are some of the challenges
(Audio Difficulties)
on the continent, right. So when you are talking about AI within the African continent, one thing that
(Audio Difficulties)
In the organization that we are working with, we have been using generative AI and chatbots to provide therapy, so that people can access mental health resources and support, or just an online chatbot where they can have a conversation with a virtual therapist. So it has been good.
One of the challenges we are working on is
(Audio Difficulties)
reflected in the training data, which actually excludes African languages, contexts and perspectives, because if
(Audio Difficulties)
you ask for something generic, like an image of a young professional working at a multinational corporation, it won't bring the image of somebody like me; it will bring the image of somebody else, right. So that means that the people who have been able to
(Audio Difficulties)
build the algorithms and train the data should remember there are other demographics that use AI and should be included. And the last point is how AI does not involve children. Apart from not involving children, it does not involve individuals from marginalized communities or individuals from underrepresented communities, because if I'm not represented, how will my context and languages be known?
And also how do we make sure that it reaches everybody, even in places
(Audio Difficulties)
the accessibility of all of these things. So that is one aspect we face when we are talking about AI. But now look at it from the lens of children. If there are no policies to support these young people who are suing this big multinational corporation, in regards to protocols and policy, then who safeguards the children
(Audio Difficulties)
using AI in whichever capacity it is. And then who also looks into whether the safeguarding policies are in existence, and makes sure there is mental health support that takes care.
The AI and the consumer.
So you know, these are the repercussions when a multinational company comes up with AI: in case this happens, these are the repercussions for people's health and their well-being. Another issue is data privacy and security. If the guiding policy does not take care of data privacy and security, how do we make sure that AI tools are used safely in education? Because now AI is the main thing, and we are happy that everybody is using it. How do we make sure that it is actually used and not exploited, with data harvested for profit?
And as we know we may not be able to get
(Audio Difficulties)
from the data coming out of AI.
(Audio Difficulties)
So this is AI and how we are viewing it on the continent. But what are the citizenship
(Audio Difficulties)
in Africa. Number one: localize AI and create resources in African languages and countries. In Kenya we have 48 ethnic languages; if you go to neighbouring countries there are 100 or 200 ethnic languages, and social norms that people can relate to.
So when I'm using generative AI, how can it give me information and resources for my surroundings, and not give me a Euro-American example that I am not able to relate with? That would actually promote inclusivity.
The other aspect of AI, and the opportunities around it, is equitable access to quality education. We at our organization are using AI, and we know of amazing innovations, just to make sure we are creating equitable access to education and other opportunities.
One thing that I am appreciative of is, let's say, Generative AI and how, as able-bodied people, we are not able to see this, but people with a disability actually use AI a lot to ease their work. The same with people who have ADHD and have issues with study tasks: AI is the best tool for them to use. Whether it is speech to speech, like a person who has visual issues and is not able to actually use tech, voice to voice or speech to speech is something they can use. So as much as we may say, okay, it is not working, the people with disabilities that we work with can attest that AI has really worked and has really helped them in whatever capacity they are working.
Also, AI can be used to encourage a global perspective, exposing children to global citizenship skills and understanding different contexts. As much as we are saying that AI should be contextualized, we are not saying we should give up the aspect of learning more about other people's cultures and information, which can actually encourage a global perspective and all of these kinds of things.
And just to finalize my point in regards to ethical AI and governance, we really need child-centred design around AI. AI tools must prioritize -- and that is my call -- developmental needs, particularly within contexts with very different social dynamics, and that goes back to research. How much is actually invested in research for AI within the continent? And when I'm talking about the continent, Africa is not homogeneous -- if you go to South Africa it is different. So how can we invest in each country?
Not just investing in Africa, but being intentional about going locally and doing research around AI so we can have child-centred design and come up with children's needs, despite the socio-cultural barriers and dynamics. So that is a call to action for most of us, because we know the statistics: Africa has the largest population of youth and young children. So if we are not actually working with them to make sure we are making the future better for them, we cannot say that we are actually creating a better society for them.
How can we also have transparency and accountability when it comes to governance, with clear guidelines on how AI tools function and their impact on children's learning? We need transparency. There is not a lot of transparency because it is for profit, and a lot of people profit from the data around AI. And how can our governments -- this is also a call for
(Audio Difficulties)
And monitor AI in education. You cannot say that AI is not there. So us and governments not embracing AI and not prioritizing it means we are leaving children and young people exploited and exposed to harm around AI.
And how can we make sure, finally, that we have policies and PPPs with regulatory frameworks, where there is funding and investment around AI research, finding local challenges and coming up with frameworks like data protection Acts? In Kenya we have one from 2019, but how do we keep making it better? Because every day, when you are talking about AI, it is changing. So how can we make sure that the data protection Acts are actually keeping up with the times?
And finally, if you go to my LinkedIn platform you will see I am working on decolonization of most of these things; mental health in AI is one of the intersectionalities that we keep talking about. And one of the things that I am keen on is, how do we center African narratives in AI development? Be it pushing for languages and indigenous knowledge within the AI systems so we can have access to these.
And how do we make sure children from marginalized communities -- I come from one -- have access? Currently we work with refugees from Sudan, and Generative AI has been amazing as a tool of engagement with them. But how do we make sure we highlight their needs and make AI work for them? And finally, how do we include mental health and integrate it into whatever AI education we offer, supporting children in an emotional and psychological way, and having bots available to students? How can we be able to do that?
So let me say I am happy to take any questions around it. But my call to action remains the same: how do we also decolonize AI? Because if, let's say, the languages that are used and all of these things are not from us, that means that our culture and those of indigenous and marginalized communities will never be seen. So how do you make sure, as you are building AI, that it is inclusive -- and I'm not just saying inclusive, but intentional inclusivity that can be seen. So thank you.
>> REMOTE MODERATOR: Thank you. That was fantastic. We have time for questions. So, questions from the audience: feel free to add them in the chat, or raise them if you are in the room. Vicky?
>> MODERATOR: If there is any question from the audience for Amisa, or comments?
>> That was really, really good. I'm curious what you do with the refugees from Sudan and why?
>> AMISA RASHID AHMED: Okay, what we do is run a fellowship where we localize and contextualize and use cultural sensitivity to educate young people around mental health. It is in its fourth cohort. And in the fourth cohort we had youth who said they wanted to be involved, because a lot of humanitarian support is going to Sudan, but nobody is talking about the mental trauma inflicted by the war. So that is how we came to set up a fellowship specifically for them.
But since most of them are displaced in different countries -- the fellowship already had its own curriculum that was aimed at Kenyan youth specifically. But now, because of AI, we have been able to translate the conversations and the curriculum so it can suit the Sudanese languages. Of course we have the Sudanese youth advisory board to guide us because of that.
But also we have a chat bot. We are training it to actually speak or understand the Sudanese language and context, so that in whatever place they are, if they are not able to access mental health support or therapy, they can use a chat bot that is culturally sensitive and uses their language. So we are looking at the aspect of voice to voice and speech to speech, and feeding it with the Sudanese language, that is, Arabic.
So they can be able to access it. While they are doing their fellowship and learning more about mental health, there are also resources available as a tool. Since we are not able to offer mental health support directly, as most of them are displaced, a majority within the continent, how can they use the existing tool to make sure they are accessing the necessary mental health support that they need?
>> MODERATOR: Thank you, Amisa. Yeah, it was great to have you with us today, Amisa. And I apologize to the audience, the connection sometimes was not very good, but I didn't want to interrupt her because I think all of us understood. It was a great contribution. Thank you very much, Amisa.
And we are going to move on to our last session. This is an informal conversation I had with another student. For this workshop we thought that inclusivity, not only in terms of geography or culture but also inclusivity of youth, is really important. So that's why we have Athanasios and one more young person.
She was not able to be with us online because she has school obligations, so we videotaped that conversation and we are going to watch the video recording now. Can we have the video on the screen? And then we will have about 10 minutes for a discussion among us to hear also about your work. Yeah.
>> MODERATOR: Hi, Ariadni, thank you for being with us today. This workshop is about AI and young people's global citizenship education, and we would like to hear your opinion based on your experience.
In most countries, teenagers grow up in societies that are using more and more AI applications. Can you tell us what opportunities and risks you see in the use of AI by young people, and give us your opinion about possible future directions?
>> ARIADNI GKLOTSOU: Thank you for the invitation to be part of this IGF 2024. I'm glad to share some of my thoughts on AI and global citizenship education. Living first in Australia and now in Greece has shown me how important it is for young people to understand each other when they grow up in different places.
I believe that a global citizen is a person that takes action to make their local communities and global societies a better place for all.
For example, people decide on solar panels to support our environment, and create alliances and treaties for peaceful collaboration.
More recently, developers have created AI technologies that can help teenagers connect with other teenagers all over the world.
AI has quickly become a component of our everyday activities, even if we don't realize it. It ranges from the curation of the Instagram feed you scroll through, and extends to tools that help you do your projects at school. We are all aware of what a life-saving tool ChatGPT is when you realize your essay is due the next morning.
AI benefits us by creating recommendations, such as for music, and acts as a more direct search engine. Moreover, the use of AI in social media acts as a tool for young people to stay connected while enriching their understanding of different cultures all around the world.
However, AI comes with several ethical challenges. When it is employed to carry out a task, it can help us learn, but it can also be critical for us when we are developing other skills. An example of this is when ChatGPT is used as a generator rather than a feedback tool.
When we give it authority to change our words or structure our sentences, this is where AI becomes more complex. Sometimes it even makes us believe that the ideas generated by ChatGPT are our own. It is important for all students to understand that in different contexts the same tool might have positive or negative impacts for us.
>> MODERATOR: Indeed, that is so true. Thanks for sharing your thoughts about the situation. Now can you tell us if you have any suggestions for future directions?
>> ARIADNI GKLOTSOU: I think we all need to be safe in the online environment, just as in our physical environment. For this we need rules. I would like to talk about the IB organization, or the IBO, which I am part of. For this international high school diploma programme, the IB has issued an academic integrity policy document that includes the use of AI.
I agree with it, since they have made it clear that plagiarism is a serious action that can result in a student being expelled from the IB or losing their IB diploma. The document summarizes that AI cannot be used as a writing tool, but only to help us correct grammatical errors in texts. This way the IB creates boundaries on the use of AI without restricting it completely. For these reasons it is important to set certain rules and legislation to control the use of AI, but not to the extent that it prevents it completely.
I'm very curious, for example, how the Australian government's decision to ban social media for children under 16 will be applied. AI is a development of modern society that cannot be ignored, and we hope this workshop at the IGF will help make decisions about AI and its use.
>> MODERATOR: Thank you so much, Ariadni. That is very helpful for all of us. We are glad to hear your thoughts about AI for young people, and we hope more young people like you will advise us on how to create a better online environment for all of you. Thank you very much.
Right, so we saw some worries, especially -- I mean, as Ariadni mentioned, "life-saving" ChatGPT. So we see the attitude that children have. Can we stop the video now? I think we can still hear the video. This is what I hear in my headset -- is it only me? No. Okay.
I would like now to open the floor probably to the audience. If you have any comments or if you want to share something from your work that is relevant to this workshop we would love to hear. Yeah? Sure.
>> Thank you. Can you hear me?
>> MODERATOR: Yes. If you can put the microphone closer. Better.
>> So Amisa has answered my questions, but -- I'm from (?), where children often are not able to access infrastructure or education. So how can AI policies and tools be adapted to address the unique needs of children in these areas, ensuring they are not left behind? And how can we ensure that they benefit from AI-driven global citizenship education?
>> MODERATOR: Do you want -- yeah.
>> STEVEN VOSLOO: Thank you. That's a great question. What we see overall is -- what I said in the beginning, and I'll say it again because it keeps coming up -- that children use AI, but they are not involved in how AI is designed or how the policies are made. So it was really great to hear very honest reflections from Ariadni on how AI is used, but also how guidance is useful rather than banning it, and what some of those boundaries are.
So the simple answer to the question is that policy makers and AI companies should involve children more in the process. Because if children's voices are heard, then the AI systems will speak to their unique needs, their developmental stages, their different contexts.
You know, as was said earlier, a child in Nairobi, Kenya, is very different to a child in Tokyo or in Sydney. So these are all valid points, but very often we get a one-size-fits-all approach. So we need to all work harder and push this point of including children in the process.
So hopefully we can count on the young people among us to help us on this journey.
>> MODERATOR: Thank you. We have one more question. You have the microphone?
>> A question, actually, not a comment. I'm originally from Myanmar, and we are facing a crisis at this stage -- lots of things because of the laws and the political timeline. And also in the refugee camps, lots of things are happening there.
We cannot communicate in person directly with those inside refugee camps, because they are also restricted from communicating with outsiders. But when Amisa talked about the mental health fellowship, that is something similar to what we are doing right now for young people to be mentor liaisons. The purpose of that programme is to be a community-driven programme.
Because we cannot go in person and train them to be aware of the materials and how to
(Audio Difficulties)
What we can provide at this stage -- and we cannot see their mental health and well-being. So what we are trying to do is engage with them virtually throughout the programme, and also try to provide temporary
(Audio Difficulties)
Like what they can use in their daily life for their emotional well-being and coping in some way. But on the other hand, our concern is the sustainability of the programme, because it is a volunteer-based programme.
So the thing is, we have to look for someone from the relevant community to get more involved in the programme to contribute back to the community. I believe that in this Internet Governance community, we are looking at this concept and trying to progress the localization programme. That is what I talked about with the materials, and we also need to consider the sustainability of the programme without having any grants or any (?). We can find the passionate young people who would like to contribute back to the community.
>> MODERATOR: Yeah, that's an excellent point. Thank you so much for raising it. I know we have just a few minutes to respond. Just from my part, I think what you said about having funding -- and also, as Steven mentioned beforehand, we need more investment -- these kinds of programs should be sustainable and long-term, especially if you engage with mental health issues. It cannot be for three months, for example. You need to have a long-term plan.
And I hope, from this community and from our side, of course we will report back to the IGF and eventually to the U.N. And with the means that we have, we are going to raise this issue, of course. But thank you very much for commenting on this. I don't know if there are other comments on this topic. No? Okay.
Dominic, online, do you see any -- do we have any questions from online participants? Or are we good to close?
>> REMOTE MODERATOR: Nothing coming through at the moment.
>> MODERATOR: Dominic we can't hear you. Are you muted at the moment?
>> REMOTE MODERATOR: Can you hear me now? I'm not muted at this end.
>> MODERATOR: One moment.
>> REMOTE MODERATOR: There are no questions.
>> MODERATOR: We don't have questions. So I would like to thank you all for being here today with us. And if you want to keep in contact, you have our names, contact emails. Please keep in touch. Thank you very much.
(Applause)