The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ‑‑ let us start with the online (?)
(Audio is cutting in and out)
(Captioner has no audio)
>> ‑‑ and online, whether they share the screen.
>> Yes.
How will the threat landscape for Children in the Digital World develop over the next three or four years?
The first point, it will increase significantly and lead to increased abuse and cybercrime. The threat situation is getting worse.
It will increase significantly, but, at the same time, children's awareness and knowledge of cybersecurity issues and protection against threats will also increase.
It will stay the same.
Better‑developed digital skills will ensure children can operate more securely in the digital world, and the threat situation will improve.
Or I cannot give an estimation.
I will give you a couple of minutes to complete the survey so we can review the results together.
Do we have the results?
>> We're still waiting for the results. Just a second, please.
>> Just a couple of seconds.
>> Just give me one second to share the graph.
>> ANNE MICKLER: Okay. Let me switch the screen, and then I will be able to share them.
>> GLADYS O. YIADOM: There we go. We now have the results. Several of you have responded that it will increase significantly and lead to increased abuse and cybercrime.
30% of you have indicated that it will increase significantly but, at the same time, children ‑‑ my mic is not clear, apparently.
Can you please fix it? Thank you.
It will increase significantly, but, at the same time, children's awareness and knowledge of cybersecurity will increase.
32% of you have answered that it will increase significantly, but better‑developed digital skills will ensure that children can operate more securely in the digital world.
And at least 70% of you cannot give an estimation.
So we'll run this survey again at the end of the session and see if the results have changed.
Thank you very much, Anne.
So let's now start a conversation with our speaker.
I am pleased to have Melodena with us. A brief introduction: Melodena Stephens, Technical Community, Asia‑Pacific Group. She's a professor in Dubai, working on policy issues with organisations such as the United Nations, the Council of Europe, and the Dubai Foundation.
Please, tell us, what are the main risks and dangers for Children in the Digital World? And can you give a few examples?
And I would ask that you share Melodena's slides, please.
>> MELODENA STEPHENS: Thank you so much.
If I show you this picture, can you tell me where the threat comes. This is what children play with. Post pandemic, we went online. Right now, we're using gaming as an educational medium.
I will give you figures. Minecraft has 173 million active users. How many are less than 15 years old? We don't know exactly ‑‑ the number may be 43%. Let's take another online game, Roblox: 71.5 million users, and two‑thirds are children. Right?
And this raises interesting questions, because when I look at this figure, I don't know who is a stranger, who is a bot, or what kind of information is being collected. If children online are using VR headsets ‑‑ and they are, because sometimes parents let them play unsupervised ‑‑ then when they play a game, there are 2 million biometric data points. This is information that could have security implications.
So I think the challenge we have right now is the threats we don't see, because we are not having enough discussions. We're taking a one‑sided view: oh, it's an online application, it develops digital skills ‑‑ but we're not looking at what the security concerns might be. We're not asking what happens if children are online too long.
This is not working.
Next slide, please.
There are a lot of age‑appropriate design codes. A recent global online safety survey showed that 18‑to‑19‑year‑olds scored very high on addiction ‑‑ about one‑fourth of them ‑‑ as did 18% of 13‑to‑17‑year‑olds.
I see babies and three‑ or four‑year‑olds using devices because people want to keep them occupied.
Many children have profiles put up for them. If a parent puts up a profile on Instagram or TikTok, there's data being collected. We see a problem in this gap.
An interesting thing is that when crimes happen, they are most likely associated with people you know ‑‑ friends and family.
Imagine a child playing online with a friend they don't know physically, whom the parents have never met. You see how much the threat has increased.
Then you add the non‑player characters, the character AI bots. We recently saw a 14‑year‑old in the U.S. die by suicide because he fell in love with his AI bot. The bot did not ask him to ‑‑ but the child took a gun and shot himself.
Another big challenge that I think is really important: when you look at the standards on how they decide what children are allowed to play, there is no alignment.
So I took this picture. It's the same game, and you can see it rated for six‑year‑olds, seven‑year‑olds, 10‑year‑olds, 12‑year‑olds. So we don't have standards, and there's not enough education for parents on this.
So if I look at all the online harms ‑‑ and I'm just going to leave that slide up ‑‑ you see there are quite a lot. I want to talk about things like self‑harm. The WHO says suicide is the fourth‑largest cause of death among 15‑to‑29‑year‑olds, and bullying often happens online as cyberbullying. We may not recognise it because it does not result in physical harm, but it results in deep harm.
I will leave it for more questions afterwards.
>> GLADYS O. YIADOM: Thank you. What you highlighted is key. There's a need to undertake actions to protect children.
This leads us to our next speaker, Elizaveta Belyakova. Very good to have you with us.
Elizaveta Belyakova, Technical Community, Eastern European Group, from the Alliance for the Protection of Children in the Digital Environment.
What were the motives for founding the Alliance for the Protection of Children in the Digital Environment?
>> ELIZAVETA BELYAKOVA: Thank you. I will speak in Russian because it's better than English.
(Speaking Russian)
>> GLADYS O. YIADOM: I will ask you to speak in English because we do not have interpreter services here. So I will ask you to speak in English.
>> ELIZAVETA BELYAKOVA: Okay. My English is not good, but okay.
There has been together with Russian ‑‑ to create this digital ‑‑ for our children.
The alliance is a platform that is ‑‑ one of the most important creation ‑‑ it's an education portal. This portal provides the children, parents, and ‑‑ helps develop skills and protection.
We have developed a digital environment ‑‑ we also pay special attention to cooperation ‑‑ and many other organisations.
Let me also ‑‑ sorry.
>> GLADYS O. YIADOM: Take your time.
>> ELIZAVETA BELYAKOVA: Okay. Let me also answer a few questions.
First, what is the best way to address children, parents, and teachers? It's important for children to use games and ‑‑ to involve them in the education process.
Second, how to adapt to the learning programmes? We believe the programmes ‑‑ characteristic of children of ‑‑ they simulate it. We see great participation in the organisation in the workshop and around the table ‑‑ business and government.
Thank you for your attention, and I look forward to the continued ‑‑ thank you so much.
>> GLADYS O. YIADOM: Thank you. Thank you very much, Elizaveta, for your work and for the contribution you're making in this space.
And thank you for having shared your experience with us.
Let me turn it over to Elmehdi Erroussafi, Technical Community, African Group.
So my question to you, Elmehdi, is this one. In your opinion and experience, what are the main challenges when it comes to protecting children in the digital space?
>> ELMEHDI ERROUSSAFI: Thank you so much, Gladys. First, I would like to thank you for the invitation and to talk about this very important topic.
I was surprised. This issue is really getting worse year after year.
As you kindly introduced, our experience in Morocco with the (?) helped us see firsthand how difficult it is to implement a national programme to raise awareness around these issues, and I would like to share some of them. I really think the issue is much bigger than that, but we can summarise it into maybe three or four challenges.
The main challenge, from what we see, is the speed at which technology evolves and, unfortunately, its misuse.
So technology advances at speed, but harmful actors will use those advances ‑‑ AI, deepfakes, and those kinds of technologies ‑‑ to create harm.
So this sophistication really poses a difficulty, from a (?) and a technical perspective.
Acting quickly is something we are trying to achieve as well, through the implementation of a helpline to help children, for example, remove content online.
We also try to go beyond children; we talk about vulnerable populations. To act quickly, the target people ‑‑ and here I come to my second challenge ‑‑ need to be aware. They need a minimum of training to be able to detect a fraud scenario, I would say, or a cyberthreat in the Internet realm.
While children are Internet savvy, we know they are not well trained in the main threats of the Internet and digital worlds.
Equally, parents and educators may lack the knowledge, especially the technical knowledge and tools, to guide them effectively in this endeavour.
So this is the gap we see, and the cybercriminals actually exploit that gap.
The third and final challenge would be the inconsistency across borders. And I emphasise across borders.
We act locally, but we very quickly found out we cannot act only locally. We're talking about Internet giants, about international platforms. So we had to expand our reach, hence the collaboration with some of our partners, like Kaspersky, to be in touch with the regulatory bodies across borders. We have addressed these challenges through multiple approaches; I will leave some of that for the next questions.
Thank you.
>> GLADYS O. YIADOM: Thank you, Elmehdi.
Having representatives from civil society and, of course, from government is key.
This leads me to our next speaker, Heng, who will represent industry.
Heng Lee, Private Sector, Asia‑Pacific Group, thank you very much for being with us today. Just a quick intro, and then I will share my question to you.
So Heng Lee is Singaporean. He worked at Singapore's Ministry of Home Affairs and studied issues at the intersection of law enforcement and technology, including crimes targeting children and cyberbullying.
Why is Kaspersky committed to protecting children in the digital space, and what project has the company initiated in that regard?
>> HENG LEE: Thank you. Thank you, Gladys, and my fellow esteemed speakers.
Let me address this in two parts.
Firstly, the why. A lot has been shared about the threats and about protecting children online. I want to add the perspective from the vantage point of a cybersecurity company: why is this online problem particularly relevant to the entire tech industry?
In the physical world, we have child safety seats in cars and warnings about small parts that can be swallowed by a toddler.
Online, there's a disinhibition effect, where individuals behave differently than they would in face‑to‑face interactions. It's because of the anonymity.
The tech industry is well placed to offer solutions; tech practitioners have an understanding of the trends.
Now, the how: what are the projects Kaspersky has initiated in this space? The first and foremost I want to share is parental control. Many cybersecurity companies have come up with parental control solutions, and so has Kaspersky. Ours is called Safe Kids, and it's been around for 20 years. It protects children from harmful content. Once a week, parents receive reports on what their child searched for on the Internet. This helps them better understand their child's interests and remind them what is suitable to search for on the Internet and what is not.
It allows web filtering, enabling parents to block adult content, violent sites, and video games.
Moving on to usage control: with Safe Kids, parents can set time limits for the use of the device, including time off and days off.
Content can be blocked when time is up, and the device can be turned off when the child needs to do their homework.
Of course, we know that such apps are not enough on their own ‑‑ there are guidelines on whether the installation should be discussed with children, and how.
For instance, we suggest that from ages three to six, no discussion is needed; from seven to 10, children need to be informed; from 11 to 13, there has to be a discussion; and from 14 to 17, there should be mutual agreement.
This is where tech intersects with policy.
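[Editor's illustration] To make the controls described above concrete, here is a minimal sketch in Python of how a parental‑control policy might be modelled: content filtering by category, a daily screen‑time budget with days off, and the age‑tiered guidance on discussing installation with the child. This is not Kaspersky's Safe Kids API; the category labels, limits, and function names are hypothetical, and only the age tiers and feature set come from the talk.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical category labels; a real product would use a vendor taxonomy.
BLOCKED_CATEGORIES = {"adult", "violence", "gambling"}

def discussion_guidance(age: int) -> str:
    """Map a child's age to the discussion level suggested in the talk."""
    if age <= 6:
        return "no discussion needed"
    if age <= 10:
        return "inform the child"
    if age <= 13:
        return "discuss it with the child"
    return "mutual agreement required"

@dataclass
class UsagePolicy:
    """Per-child screen-time rules: a daily budget plus full days off."""
    daily_limit_minutes: int = 120
    days_off: set = field(default_factory=lambda: {"Saturday"})

    def is_allowed(self, minutes_used_today: int, now: datetime) -> bool:
        if now.strftime("%A") in self.days_off:
            return False
        return minutes_used_today < self.daily_limit_minutes

def allow_request(category: str, policy: UsagePolicy,
                  minutes_used_today: int, now: datetime) -> bool:
    """A request proceeds only if it passes both content and time rules."""
    if category in BLOCKED_CATEGORIES:
        return False
    return policy.is_allowed(minutes_used_today, now)

if __name__ == "__main__":
    policy = UsagePolicy(daily_limit_minutes=90)
    wednesday = datetime(2024, 12, 18, 16, 30)
    print(discussion_guidance(12))                             # discuss it with the child
    print(allow_request("education", policy, 45, wednesday))   # True
    print(allow_request("adult", policy, 45, wednesday))       # False: blocked category
    print(allow_request("games", policy, 95, wednesday))       # False: time budget spent
```

In a real deployment such checks would run in the device's network stack or a local DNS/VPN filter; the sketch only shows the policy logic.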
So Kaspersky also extends into thought leadership and education programmes, engaging parents and teachers, who need to be equipped with knowledge about cyberthreats to make decisions for their child.
In 2023, 170 events were held, reaching some 700,000 people around the world. What we really sought to do is share some of our findings on threats and make them actionable for parents and teachers ‑‑ like anti‑hacking, protecting children's privacy, identifying indicators of cyberbullying, and notifying parents (?) becomes the norm.
We are teaching an A to Z of cybersecurity: A is for authentication, B is for backup, C is for (?). This is the world we live in today, and it shows how fast things are changing ‑‑ I don't think adults even know all these words. I encourage people to download a copy.
Gladys has also been distributing it at our booth at IGF.
Of course, there are other initiatives, like a joint study with the UAE and a white paper written with the Singapore University of Technology and Design, with real stories to share, backed by our data.
Given the limited time, I won't be able to go over the details of many of these initiatives. I will be happy to share more about these later on in the questions and as well as the interaction section with the audience.
Thank you very much.
>> GLADYS O. YIADOM: Thank you very much, Heng, for this contribution and the actions led by Kaspersky.
As you said, there's a need to have this conversation with children.
I want to have a conversation with all of you panellists.
Elmehdi, there are risks coming up with AI.
AI is often exploited by cybercriminals to (?) text that looks real. What can the different stakeholder groups do to counter this and mitigate the risks?
I will kindly ask Melodena to share her insights first.
>> MELODENA STEPHENS: So I think the first thing is to know how many pictures it takes to make a deepfake. You can do it with one.
How much voice recording? Fifteen minutes.
Then think about children's voices being recorded and pictures being put up. People are putting pictures up online: this is my new class.
We use platforms where we record things, but we never ask what happens to these recordings. Do the recordings stay with the platforms? What are the safeguards?
There's constant training: Zoom is training on the recordings that are there.
The thing is, the large companies may choose ‑‑ I don't have clarity ‑‑ not to share the information, but there are many, many educational apps. Who is vetting them? We know that apps fail very quickly; you need a minimum user base, 50 million users, otherwise you're not going to be successful. When they die, what happens to the data from the apps teachers have used to keep the kids engaged? We don't have answers, because no one is vetting them. No one is asking these questions.
I think the problem with deepfakes is that they're very easy to make, and we need whole‑of‑society responsibility, but we need regulators to get onto this too. We need ministries of education to perhaps vet apps, say which are approved and which are not, and also monitor them ‑‑ and, if they fail, make sure their data is not leaked.
If you look at the deep web, one of the biggest challenges is child pornography. Some of the children being trafficked there are synthesised: they take real pictures and superimpose them on compromising pictures. Imagine the child growing up and being confronted with a picture that is a deepfake. What is the impact years later when they're trying to get a job?
We do not have an idea of how this will evolve in the future, but it is a little bit scary, and I am worried for the children.
>> GLADYS O. YIADOM: Likewise, Melodena.
Heng, can you share your thoughts on that.
>> HENG LEE: Certainly. Let me divide it into a few parts: really, it's the people, the process, the technology. On the people side, we need to provide awareness and education about the kinds of threats children are facing. In terms of policy, we have a very recent example from Australia. Many of you may have read in the news that Australia is going to become the first country in the world that intends to ban children under the age of 16 from using social media.
It is, in fact, the world's most restrictive regime so far, but there are also questions about how it is going to be enforced ‑‑ how to ensure that children under 16 don't have access to social media. There are age limits for alcohol, but that doesn't stop those under 18 or 21 from getting alcohol in different countries. So enforcement can be a problem.
And then, of course, there is technology. Once again, coming from a tech company, I feel the value a tech company can contribute is practitioner experience: understanding what the latest threats are and how to guard against them.
Since we're on the topic of AI: Melodena has shared how easy it is to create a deepfake ‑‑ just 15 seconds of voice and just one picture. AI is actually making it easier to groom children as well.
Just imagine if adults can fall for deep fakes, what more would it be for children?
And the grooming also comes in the form of conversations where the child thinks he or she is talking to a friend while playing an online game, but it could really be a bot programmed to gather personal data.
These have long‑lasting effects. Just one example: a bot collecting information asks a child, what is your blood type? The child takes it as an innocuous question and answers it. This is something that stays on the Internet forever ‑‑ it's unlikely the blood type is going to change.
So this is actually quite a sobering thought, when you think about it: the damage that online gaming models, together with AI, can create.
So tech companies need to guard against these threats and flag them as early as possible when they come across them.
And, wearing my hat as a former regulator from the Singapore Ministry of Home Affairs: there needs to be an enlightened approach to regulation, to ensure the balance between consumer protection and innovation. Try not to jump at every new threat (?). Start with guidance and principles rather than: how do we regulate ChatGPT?
So I think contributions from tech companies are essential to create an ecosystem that's safe for children.
Thank you.
>> GLADYS O. YIADOM: Thank you, Heng. Absolutely. When you mention guidelines, we can see that it's really about an ecosystem. We're talking about children, but there is a need to also address parents and teachers, which leads me to my next question, which I will address to Elizaveta and Elmehdi. What is the best way to address parents, children, and teachers? And how are (?) cybersecurity offerings?
I know you answered the question earlier, but perhaps you want to add some more comments to that?
Elizaveta, are you with us?
Perhaps we'll start with you, Elmehdi.
>> ELMEHDI ERROUSSAFI: Sure. If I got the question right, we're talking about how to address parents, teachers, and children. Different stakeholders, different targets, require different approaches. First of all, let's talk about children.
What we noticed is that effective curricula, actually gamified ones, create interactive experiences while we teach students. We want engagement. Remember, we said one of the challenges is getting children trained, able to spot threats and alarms, so the easiest and most effective way is to create gamified curricula. We tried that through a subsection of EMC, where we created games on the Internet ‑‑ interactive experiences ‑‑ to get as many children engaged as possible.
Now, for parents, the focus should be on practicality. The guidance needs to be ongoing, and it needs to equip parents to protect their children.
This is due to technology. When we talk to parents ‑‑ us included, talking about our generation ‑‑ we start thinking that technology is far ahead of us, so we actually need practical guidance. I understood Roblox from my daughter; it took me time to understand the threats behind it. So it's really very interesting to see this space and how parents sometimes feel lost.
So, again, at the EMC, we created practical guides for parents.
Now, we also need to think about teachers, because peer‑based and structured learning is also important when it comes to cybersecurity.
Teachers play a crucial role, and they are in constant contact with children. We, as an NGO, are not with children as often as teachers and parents are.
So there needs to be a cybersecurity curriculum that aligns with their teaching objectives, depending on what level and what kind of school we're talking about. Again, equip them with foundational cybersecurity skills, and it is essential to make it practical; we don't want only theory.
I think part of your question is how to adapt the curriculum. The module should be part of standard education ‑‑ ideally starting from an early age and evolving into more complexity as the child grows through the different levels.
It can be digital safety weeks, for example ‑‑ this is one of the initiatives we have been doing ‑‑ or online hygiene sessions, which we have also done with some classes.
Again, it involves developing age‑appropriate resources.
So the key is to adapt to each age.
>> GLADYS O. YIADOM: Thank you. I see Elizaveta was with us a couple of seconds ago.
Elizaveta, are you back?
Okay. Let's move to the next question that I will address to Heng and Elizaveta as well, if she is back.
Heng, how can multistakeholder dialogue and cooperation ‑‑
>> HENG LEE: I think the problem cannot be faced or solved by any single stakeholder, because no one has all the answers to a problem of this complexity, involving regulators, tech companies, parents, and teachers. It needs to start with that humility and the understanding that it has to be all hands on deck.
Also, there needs to be a recognition that this problem is not something that's confined to a certain country.
So whenever we see something very alarming happening in another country, it could be on our shores very quickly.
So the urgency of this problem really gives us the impetus to create dialogue specific to it.
The workshop we're having today is about how we come together to influence policy at a national level, because we have regulators with us today, and government representatives who can take these ideas back.
The cross‑pollination comes not just from industry but from people who have done it before ‑‑ practitioners, NGOs, academics like Melodena ‑‑ giving very good ideas on how we can shape a balanced approach to regulation (?) that can protect children.
Examples of thematic discussions in the past include the World Anti‑Bullying Forum, where this has been widely discussed, and the Safer Internet Forum run by the EU. Events like these allow people to sit together and learn what has succeeded or failed ‑‑ especially what has failed, so that we know how to draft laws in a way that avoids those pitfalls.
And coming back to the example I just talked about: it becomes very crucial how blood type data could be taken up from game platforms.
This is something that health authorities from around the world could be interested in ‑‑ especially for children, because adults may understand the importance of keeping health data to themselves, but children may not. A child might share it to make a friend, or see it as no different from sharing their horoscope ‑‑ nothing wrong with it.
So education and awareness, once again, have to be present across different verticals, health care being one of them.
So, on the involvement of different verticals ‑‑ I don't have a comprehensive list of what these may be, but when there are new challenges that target them, they should be involved in this conversation. As a start, that would be a good approach to understanding who we need to bring together for these conversations.
Thank you.
>> GLADYS O. YIADOM: Thank you. Thank you very much, Heng, for this thought.
This gives us the opportunity, also, to open the floor for some comments and questions. I've seen that we have a request from the Ministry of Education. Could we please give an open mic to Mr. Andrey Gorobets.
It's online?
>> ANDREY GOROBETS: Hello.
>> GLADYS O. YIADOM: We can hear you.
>> ANDREY GOROBETS: Can you turn on the translator.
>> GLADYS O. YIADOM: I will ask you to speak in English.
>> ANDREY GOROBETS: I will speak in Russian online, and Marguerite, you can ‑‑
>> GLADYS O. YIADOM: Okay. We have someone that can translate in Russian.
>> I will translate from the audience here.
>> GLADYS O. YIADOM: Go ahead, sir.
>> ANDREY GOROBETS: (Speaking Russian).
>> So, supporting my colleagues: how to deal with digital challenges is one of the key questions, not only for the Russian Federation but for others across the world. It's very important.
>> ANDREY GOROBETS: (Speaking Russian).
>> We believe that our key goal is to focus on new skills development for our kids to adapt to the new digital challenges.
>> ANDREY GOROBETS: (Speaking Russian).
>> And the availability of new technologies, of new technological instruments, needs to be instrumental and helpful ‑‑ not a show‑stopper.
>> ANDREY GOROBETS: (Speaking Russian).
>> And we believe that we need to address the issue holistically: cybersecurity challenges have to go along with psychological and educational goals, all as one.
>> ANDREY GOROBETS: (Speaking Russian).
>> We also need to pay attention to cooperation between governments, because the issues are transborder, and that is important for addressing kids' safety goals.
>> ANDREY GOROBETS: (Speaking Russian).
>> To address kids' safety, we're working on three levels.
>> ANDREY GOROBETS: (Speaking Russian).
>> The first level is the technical level: ensuring that the devices kids are using are protected from harm ‑‑ unwanted content, threats, et cetera.
>> ANDREY GOROBETS: (Speaking Russian).
>> The second level is the software level: ensuring devices are equipped with (?) to check the content.
>> ANDREY GOROBETS: (Speaking Russian).
>> And the third level is the content level, the substance level: ensuring that what kids consume is safe and curated for their development.
>> ANDREY GOROBETS: (Speaking Russian).
>> It was mentioned that ChatGPT, Generative AI also has risks, and we agree with this.
>> ANDREY GOROBETS: (Speaking Russian).
>> But we need to admit that generative AI and ChatGPT are also instrumental in teaching kids, and they can be used for good in the educational process.
>> ANDREY GOROBETS: (Speaking Russian).
>> And the role of the government is to equip teachers with the technologies that can be embedded in a safe way, for the educational process.
>> ANDREY GOROBETS: (Speaking Russian).
>> And we all need to work together on the common goal of equipping schools and universities with good, modern technologies for modern education.
>> ANDREY GOROBETS: Thank you, colleague.
>> (Speaking Russian).
>> GLADYS O. YIADOM: Thank you. I will ask you to share your name and organisation, and to say who the question is addressed to.
>> Thank you very much. My name is Arda (?), working on terrorist content (?) in the Netherlands.
I'm sorry, I don't know the names of the panellists sitting here, but I would like to ask the lady and the gentleman from Morocco. We found that the modus operandi ‑‑ there's more at stake in online environments. I was disappointed; I was hoping the room would be packed today, because this is one of the most threatening subjects we have at the moment, and I don't think we're doing enough. We keep talking about it, but, at the same time, big tech is just looking over our shoulders and not protecting our children enough.
My question would be ‑‑ you mentioned it ‑‑ how do you (?) a regulatory body to put more regulation online to protect our children when you know that, for the perpetrators, it's just another easy crime?
>> GLADYS O. YIADOM: Thank you. Melodena, I will ask you to answer this question first.
>> MELODENA STEPHENS: So your question was what we should do about more regulation, if I'm correct.
There's a literacy gap at the societal level and at regulatory levels ‑‑ and even with engineers. I'm also working with big tech; I work with IEEE. Even engineers don't understand the consequences of the code they design. Start‑up founders mean to change the world for the better, but then the moment comes when you're embedded in 50,000 devices and you don't have the safeguards or protocols, or there's someone saying, where is my money, because I've invested 50 million.
Stockholders do the same thing with the stock market.
It took 68 years for airplanes to reach 50 million customers, but it took Pokémon Go 19 days.
There's no regulator in the world that reacts in 19 days. We're far behind.
We need a public sector that's thinking 20 years ahead of the private sector, which is not the case right now.
So what can we do right now? I think we need to make hard no‑go areas. Does a child who is five years old need to be exposed to the Internet and have a mobile? Should that child have the right to a childhood right now ‑‑ just exploring certain things, learning how to read, looking at books? When you talk to psychologists, reading is a slow process, but it helps more than gamified content.
It's hard to apprehend criminals and hold them accountable if they're in another jurisdiction. We need a lot more coordination on the apprehension of criminals, and a lot more transparency on what counts as a crime.
And I want to come back to cyberbullying for this. A lot of children bully other children in ways that would be considered a crime. They don't know better; they think it's okay to put a face on something else, that it's fine.
So I agree with you. Grooming is the same. With AI, it's easy: you're basically mirroring a child, so if the child smiles, you smile back, and the child develops trust.
Therefore, if you say it's bath time, the child will do whatever needs to be done, and the camera is on.
It's difficult to catch.
And parents may have a rule that no kid can be on the computer unsupervised ‑‑ but we're at home, so it's fine. So it's literacy, I think.
>> GLADYS O. YIADOM: Thank you.
Elmehdi?
>> ELMEHDI ERROUSSAFI: I think that, with (?) regulators, data privacy regulators are actually partners. So it's a common goal: align everyone on a shared objective. This issue has been approached from different perspectives.
We, as technical people and researchers, look into the technicality of it and at the compliance items to be checked, whereas regulators are more into policymaking, more into protecting the consumer.
So we need this shared goal, and we need to understand the pain point of each other. So collaboration is key. We don't need multiple initiatives here and there. We need to have focus, and we need to have collaboration.
The outcome from that alone ‑‑ I think we are months, maybe not years but months, ahead when working in a collaborative manner.
>> GLADYS O. YIADOM: Thank you.
Can you share the perspective from the industry, Heng?
>> HENG LEE: Sure. Maybe I will share from my region, which is Asia Pacific. When it comes to regulation and how to respond: as Melodena pointed out, there's no way to respond to a challenge of this magnitude in 19 days.
So in my home country of Singapore, different outfits are set up to respond to new challenges as they emerge.
On an issue like child safety, public security and the police may come into it, Internet regulators may come into it, and the social affairs people may want a hand in it.
The latest example of such an outfit is for protection from falsehoods.
There's a protection from (?) ‑‑
(Audio interference)
>> HENG LEE: Yes, can I hear you.
>> GLADYS O. YIADOM: There's a problem with your mic.
>> HENG LEE: Can you hear me now? I am not hearing you very well. It's very choppy on your end.
>> GLADYS O. YIADOM: Can you check if it's better.
>> HENG LEE: Can you say something again? Gladys, are you saying anything?
>> GLADYS O. YIADOM: There's an issue. The issue comes from onsite; you can be heard online.
While we're fixing this issue, we'll give the floor to another question, and I will come back to you. Online, they can hear you, but I believe that we're having an issue with the ‑‑
>> HENG LEE: Okay. Do you want me to continue because I kind of hear you.
>> GLADYS O. YIADOM: Ah, now it works. So please go ahead. Then we will take your question afterward, the question from the audience. Please go ahead.
>> HENG LEE: Sure. I was talking about how different outfits are set up for different purposes in Singapore, like the protection from (?). So we may come to a time when we need a child protection authority for online content ‑‑ and how does that overlap with existing outfits that do protection in the physical space?
And what is the qualitative difference, other than being online (?)? These are all problems we think about from a regulatory perspective, and it's something to consider before answering with: let's have a new team of people doing it.
The people staffing such a new outfit need to be equipped with the right skill sets to understand that this is an upcoming trend that needs to be addressed.
This is something that will happen whether you regulate it or not. The acumen is built over time, but the shape and form that outfit could take is probably something we could start thinking about.
Thank you.
>> GLADYS O. YIADOM: Thank you, Heng.
We'll take one question from the onsite audience, and then we'll have a question from the online audience.
Could you please give the mic to the young man over there?
Thank you. Please share your ‑‑ yes, we can hear you. I will kindly ask you to share your name, organisation, and who you address the question to.
>> ETHAN: I'm Ethan from Hong Kong, and I'm a youth ambassador from the (?) Foundation.
I'm asking this question to every one of you.
In Hong Kong specifically, many parents try to protect their children from the Internet in physical ways, like banning their children from using phones or banning them from accessing the Internet before five years old ‑‑ or even two years old (?) ‑‑ so they are not able to use the phone because of a (?) problem.
I was wondering ‑‑ I've only heard the perspective of teachers, parents, and my peers on these ways of protecting children, and I want to hear different perspectives. Is banning children from using mobile phones or from using the Internet an effective or suitable way to keep them from being cyberbullied or unsafe on the Internet?
>> GLADYS O. YIADOM: Melodena, please go ahead.
>> MELODENA STEPHENS: So I think the technology is here to stay. I don't think banning alone is good enough, because we have to teach people how to use the technology safely. So it's a dual problem. We have to educate the 8 billion people of the world ‑‑ 30% are still not online. We need to educate everyone online. We need to educate parents, as their children grow and as new technologies come, in how to use them. And we have to educate children.
I want to give you this one example. I was working for a company, and we were trying to prevent bullying and harassment on an online game. Everything we did ‑‑ and a lot of us were parents, adults ‑‑ we came together, we looked at content moderators, algorithms, AI bots. But the children were still accessing it, because some parent's friend allowed them to use it. Whatever is banned, children want to find a way to use it. So the kids were still finding ways to do that.
Then I decided to interview a little boy who was seven years old. I asked him: How do you know who is a stranger on Minecraft? We had all these answers as parents.
He said: If they're my friend, they would know what I ate for lunch. I ask them, what did I have for lunch today? And if they don't know the answer, I don't play with them.
I never would have come up with this answer myself. I'm very happy that you're here; we need to include the children in the dialogue. They may have answers. I'm not playing on the ground; I don't have (?) with that thing. Thank you for asking that.
>> ELMEHDI ERROUSSAFI: I think everything has just been said. I will just say that this method of forbidding children might seem effective, but what we noticed, talking to children and parents, is that there's a more effective way, which is to build trust with your child, as a parent or as an educator. The child needs to trust you enough to come forward if he is being harassed or bullied on the Internet, without fear of punishment. So open communication is key. Technology moves very fast, and children will get access to it; banning access to the device itself may not be effective in our current world. So open communication is key, and building trust with your child, so that everything can be said in an open manner.
>> GLADYS O. YIADOM: Thank you.
Heng, do you have brief comments from the industry perspective?
>> HENG LEE: Yes. Thank you for the question. I'm happy to answer it. Having lived in Hong Kong myself, I'm going to start with a Cantonese answer.
(Speaking Cantonese).
There are trends in the world that we really can't defy. Like Melodena said ‑‑ and I agree ‑‑ tech is here to stay. Which is why, going back to the Safe Kids guidelines I just mentioned: when we interact with parents on whether and what to discuss with their children before installing Safe Kids on their children's phones, we say that for the younger children, maybe there's no discussion needed.
At a certain age, there needs to be information about: what am I doing this for? It doesn't have to be very prolonged ‑‑ just that there's an app I'm installing to protect you.
But at some point ‑‑ especially by 14 to 17 ‑‑ our guidance changes to: you must get the child's agreement before installing the app.
Because we recognise that you can probably install the app, but the children are also wise enough to understand how to uninstall it from their phone.
They could even be reinstalling it every day before they come home. Or they could have a different phone altogether, one they use and then a dummy one just to show the parents.
So there's no hard‑and‑fast or blunt way of solving the problem itself, but I think that constant communication to maintain trust between parents and children, rather than a blanket ban, will be something that's more effective and sustainable.
Thank you.
>> GLADYS O. YIADOM: Thank you, Heng, for your answer.
Perhaps turning to the online audience to see if there's any question.
>> ANNE MICKLER: Yes. There's one question, and it comes from the head of public affairs Europe at Kaspersky.
This goes to Melodena and Elmehdi.
How can the various competencies of the stakeholders be brought together to provide good and efficient training and proper solutions for children, parents, teachers, and other stakeholders? And should this start at the local level and then be expanded regionally and globally?
>> GLADYS O. YIADOM: Thank you, Anne.
I will kindly ask Elmehdi to answer first.
>> ELMEHDI ERROUSSAFI: Thank you for the question. I think it's a problem for everyone. Again, we're emphasizing cooperation. Every stakeholder brings something to the table. The regulators will state, as I said, common objectives and guidelines. NGOs help touch the ground ‑‑ we believe in NGO work; we think it's very effective. Vendors provide the technical solutions, the technical responses. Educators and academia provide the research, oversight, and early warnings ‑‑ that is actually needed from research, and we need to aim for the best, really looking forward and being those years ahead.
Everybody can collaborate to build a common, I would say, strategy. So acting locally is very important. This is where we touch. This is where we also take into consideration local points, such as the one we just heard.
So locally, it's very important.
We need to open the channel of communications.
One example would be AI regulation. This is a big subject; it's a global issue, and it needs global regulation. So, acting in the spirit of collaboration, let me just share that we work with those big tech companies ‑‑
(Static on the audio connection)
>> ELMEHDI ERROUSSAFI: ‑‑ as a trusted partner, able to report content so that they remove it more quickly than through the regular process. That's the collaboration, and, hopefully, we get there with help.
>> GLADYS O. YIADOM: Thank you. I agree with everything you said. I just want to talk about competencies.
(Audio is distorted) ‑‑ (audio is distorted) ‑‑
>> MELODENA STEPHENS: This ruthlessness that says, I will ensure that the child is safe ‑‑ and what does "safe" mean? From industry, I want alignment on values. What does it mean when I say "for good"? This alignment on values across all of industry.
From society, we want the reflection of culture. I have to say that we have a different source of culture around the world, and I would like that to be there.
For me, for example, a child is not just someone below 14 years of age; I would think a child is 18 or 19 or 20.
(Audio is distorted) ‑‑ (audio is distorted) ‑‑
>> Researchers publish for something or against something. We've evolved enough; I would like them to tell us what is harmful and what some of the goods are.
>> GLADYS O. YIADOM: Checking with the audience onsite. Do you have questions?
We do have questions from the audience. Can someone share a mic, please?
Please, name, organisation, and who you address this question to, please.
>> GRACE: Hi. I'm Grace. I come from Uganda, where awareness about online risk is very low. My question is addressed to anyone who can answer. I would like to run campaigns. How can I collaborate with any of you to set up these campaigns and let people know about online child protection?
>> GLADYS O. YIADOM: Thank you. I would address this question to Melodena who can share experience and some of the best practices.
>> ELMEHDI ERROUSSAFI: I will pivot from this question to also share best practices. I talked about common goals ‑‑ we share common goals ‑‑ but one very effective way to start such campaigns is to work through small, focused work streams. We call them task forces. So we have a team; we contact every stakeholder from the public sector or the private sector; we work in project mode, and we build those task forces.
(Audio is distorted)
>> ELMEHDI ERROUSSAFI: I would suggest mapping the stakeholders and looking at who is (?) actually able to support that project: telecom operators, IT companies, ministries of education.
It also gives you a trusted hat, because you're (?) for nonprofit. With young people, I believe this is very important ‑‑ it's part of your education. A portion of your time should go toward that work.
As I said, (?) we want to build the campaign. We need someone from the ministry, someone from education, to build the content. We need some experts.
From my experience, we know how generous the African people are.
(No discernible speaker)
(Audio is distorted)
(Scheduled captioning will end in three minutes)
>> ANNE MICKLER: The sound quality got really bad. The online audience can't hear.
>> HENG LEE: Is there anyone speaking onsite? We cannot hear anything.
>> ELMEHDI ERROUSSAFI: Local content creators are not really scarce ‑‑
>> ANNE MICKLER: There's no sound in the Zoom room, apparently.
>> HENG LEE: No, there wasn't.
>> ELMEHDI ERROUSSAFI: Another issue ‑‑ part of the issue ‑‑ is on the content creator side: they are not really scarce, but they are not really aware of what educational curricula require. For them, it's just about having the solution, having material at hand that they can share.
Now, what I can suggest from this workshop is to allow us ‑‑ we are coming from such countries ‑‑ to have a proper way of bringing awareness to the public and to those children at schools. Maybe the school is not equipped, but the children are coming across the devices out there. The only way to save those children is to bring a programme such as the one we're talking about, built from those kinds of experiences, so they would be aware and willing to say no to those who come to bully them. Yes, this is the kind of collaboration I can ask for, to whoever ‑‑
>> GLADYS O. YIADOM: Thank you very much for sharing that. I believe Elmehdi is an example of the partnerships we have. Please do not hesitate to talk to him.
We also have you there to discuss further.
We will take the last question, please.
>> Thank you for giving me the floor. I'm a child rights advocate from the (?). I just wanted to mention that what Melodena said about the age of children pretty much resonates with me, because we have these cultural nuances, but we also have the UN Convention on the Rights of the Child, which defines everyone under the age of 18 as a child, and with regard to all the things we've been discussing, (?) the evolving capacities of the child. It's not about the exact age ‑‑ whether the child is 12, 13, or 16; it's about their evolving capacities. I would also like to go back to Grace's question because, in 2021, the United Nations Committee on the Rights of the Child adopted General Comment No. 25 in regard ‑‑
(Captioner has no audio)
>> ‑‑ this is the document on which you can base all these questions, whether about education. The states that have ratified the UN Convention are obliged to (?) the child in the digital environment. So when you are asking for training in digital literacy and for cooperation, you can go to your state government and say: Hey, you've ratified the UN Convention, and here is the basis; you're obliged to do something for the young generation to make their rights come true in the digital environment. So I think it's kind of groundbreaking that we have a document from the United Nations on which you can base child online safety and all their rights to provision and participation.
Thank you so much for listening.
>> GLADYS O. YIADOM: Thank you very much for your comment.
We're entering now the end of our session.
For the survey, the online participants can enter the code and take part. We'll reveal the results afterwards.
Just a couple of minutes so the online participants can complete the survey and the moderators can share the results.
And it will also be the opportunity to see if the answers have changed after the session.
>> ANNE MICKLER: We have received the results. Just give me one second to change the screen and show the results.
Can you see the results now?
>> GLADYS O. YIADOM: Oh, yes. We can see the results have changed quite a bit. Actually, more of you are now saying the threat will increase significantly and lead to increased abuse and cybercrime, and that the situation will get worse, on both the A and B options.
Nonetheless, more of you believe that the threat against ‑‑ actually, fewer of you answered ‑‑ so we're seeing that, after the workshop, the results have evolved a little bit. I would now like to kindly thank our speakers for their contributions.
Thank you, Heng, Melodena, Elizaveta, Elmehdi.
I would also like to thank Anne, our online moderator.
(Audio is cutting in and out)
>> GLADYS O. YIADOM: We will share more about that. Let me now conclude the session.
I would like to thank you all for participating in this session, and, please, let's continue the conversation together.
Thank you.