The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> DANDAN ZHONG: Excellencies, ladies and gentlemen, welcome to the IGF 2022. We are hosting a parallel session of IGF 2022 at the Communication University of China (CUC), Beijing, P.R. China. My name is Dandan Zhong, Director of the office and also the moderator of today's session. It is my honor, on behalf of the Communication University of China, to be with you today and to witness together the presentation of industry standards related to AI and child protection on the Internet. We have another moderator, Afia, at the main venue in Addis Ababa. Hello, Afia! Greetings! Greetings from China. It's our pleasure to work with you.
Please allow me to convey warm greetings to all of those gathered, and to thank the Communication University of China and UNICEF China for their generous support. Welcome to everyone online and offline. Thank you for being here today.
Before we start our session today, I'd like to take a moment to introduce the background of this session. UNICEF released Version 2 of its policy guidance on artificial intelligence for children in 2021, a global policy guide for governments and industry that includes practical recommendations and principles for child-centered artificial intelligence. In order to gain a further understanding of how artificial intelligence protects, provides for, and empowers children's rights, we organized this session to discuss standards related to the online protection of minors.
Our discussion today will mainly focus on the release of Children's Internet Application Guide based on artificial intelligence technology, and this session is going to last about 45 minutes.
First, let us welcome our honored guest, the Secretary-General of the China Federation of Internet Society, to give a speech.
Afia, could you share the slides? He is not a co-host, so he doesn't have the right to share the screen. Please transfer the right to him.
>> On behalf of the organizers, the China Federation of Internet Society (CFIS), I would like to thank you for attending this launch and award event on the topic of child development and protection. Nowadays, the digital economy is flourishing, with AI applied to more and more diverse fields on a daily basis. It has brought children rich information and splendid entertainment, as well as ‑‑ (Speaking off mic). Developing AI responsibly for the next generation is a way to promote their healthy growth. Since 2021, CFIS, UNICEF, and the Communication University of China have jointly carried out a project of collecting and promoting cases of AI for children, and have guided the work of more than ‑‑ including China, to compile the Guide for the Construction of Internet Applications for Minors Based on AI Technology. It was officially released in China as a group standard in June this year. To continue to expand its reach and let it play a greater role in child protection against the backdrop of AI, we hope to take this draft as the standard and, under the guidance of the competent departments, promote the formation of an AI-for-children industry or national standard. I would like to take this opportunity to share three views of mine. First, take technology as the engine for empowerment. As a strategic technology in the new sci-tech revolution and industrial transformation, AI should take the lead in industrial innovation and help promote the protection of children so as to achieve their healthy and all-round development. In the research, design, development, and use of AI, we should pay comprehensive attention to psychology, education, and other fields, provide children with high-quality content to enlighten their minds, expand their knowledge, and improve their quality, and help stimulate children's potential ‑‑ Second, take security as the mainstay. Children are insufficient in their ability to identify risks and hidden dangers.
The Internet of everything puts every piece of information within children's reach, posing threats of infringement. We should pay more attention to children's privacy and safety in the development of AI. The collection of children's information should follow the principle of appropriateness, with clear boundaries for data collection, processing, and retention, ensuring the knowledge and consent of guardians. To this end, it is necessary to improve the system and supervision to avoid potential risks and to protect the legitimate rights of children. Third, put children first and share the governance responsibility. We should continue to focus on children, conduct in-depth research on the impact of AI on children, establish standards, norms, and regulatory rules in line with technological development, and encourage and drive the healthy development of AI towards empowering and promoting children's rights. At the same time, we should enhance global cooperation around AI, build platforms, and share experience and responsibilities, facilitating the common development of AI governance for children. If we want ‑‑ to secure better development for children, we need to work together closely and make lasting progress. CFIS will continue to give full play to its own role, actively advocate for the goal of developing responsible AI, and call for and look forward to working with industry colleagues to improve and upgrade the guidelines for the construction of Internet applications for minors based on AI technology. We will carry out more useful cooperation, pool more wisdom and experience, and build a better digital future for children. In conclusion, I wish this forum a complete success. Thank you for your time.
>> DANDAN: Thank you, Secretary-General of CFIS. We would now like to invite the next speaker to deliver his speech. Welcome, Mr. Zhou.
>> XIANFENG ZHOU: Ladies and gentlemen, good morning. First of all, on behalf of one of the organizers, the China Federation of Internet Society, I would like to thank you for attending this launch. I'm honored to have a discussion with you on the topic of child development and protection in AI. Nowadays, the digital economy is developing in step with AI, which is applied to more and more diverse fields on a daily basis. AI has brought children information resources and entertainment, as well as ‑‑ (no interpretation) ‑‑ digital technology has facilitated human life and played an important role in the medical sphere, manufacturing, services, and governance. During COVID-19, AI played a significant role in helping resume work and production and in promoting the development of the digital economy.
In recent years, with the joint promotion of policy and capital, the commercial application of AI has accelerated, and the technology has been applied in education, finance, and transportation ‑‑ As a strategic technology leading the future, AI is now regarded as central to national competitiveness and to safeguarding national security by the world's major economies through 2030, and countries and regions including the United Nations, China, Europe, Russia, Canada, and the United Arab Emirates have reviewed their strategies ‑‑ In recent years, more and more countries have joined the ranks of AI planning, working on the implementation of AI in terms of talent, capital, technical special training, application, and infrastructure construction. However, in the face of the rapid development of AI ‑‑ in technological innovation ‑‑ how to reach international consensus ‑‑ on AI for children has become a focus for many people and has aroused ‑‑
I would like to take this opportunity to share with you some suggestions on how to improve the governance of AI. First, we need to foster a safe and stable governance environment under the rule of law ‑‑ security governance must be feasible, and relevant laws and industry rules should be improved. Since 2019, China has issued several documents, including those on governance for the next generation of AI, on developing responsible AI, and a code of ethics for the next generation of AI, which clarify the governance framework and action guidelines. In 2021, China's new Law on the Protection of Minors came into force, providing legal protection for minors on the Internet. Second, adhere to the guidance of values at the grassroots and steer the direction of where AI is headed. As for government supervision ‑‑ (Speaking off mic) ‑‑ efforts should strengthen macro legislation, correct possible deviations in algorithms, and strive for the inclusiveness and nondiscrimination of AI systems in terms of technology provision ‑‑ to strengthen emergency support in the AI industry ‑‑ (audio fading in and out) ‑‑ to remain highly vigilant and risk-aware, and to implement technology for the good and for social values.
Common development will enhance the community in Cyberspace. Ensuring the security, reliability, and controllability of AI is an important task facing all countries, which requires the solid effort and coordination of all of us. China is an advocate and pioneer in promoting the global development and governance of digital technology represented by AI.
The effective governance of AI requires the joint and concerted efforts of the government and other sectors. (audio fluctuating).
>> DANDAN: Thank you for your excellent speech. Next, let us welcome the Vice-Principal of the Communication University of China to address this session.
>> Secretary-General, ladies and gentlemen, good morning ‑‑ or perhaps good afternoon or good evening. I am pleased to participate. First of all, please allow me, on behalf of the Communication University of China, one of the organizers, to thank you for organizing this event and to warmly welcome the experts and scholars attending. Thank you for engaging with the topic of AI for children. With the rapid development of the Internet, the wave of AI, information, and networking has swept the world. By December 2022, netizens in China numbered ‑‑ (inaudible) ‑‑, 21% of whom were adolescents and students. The popularity of the Internet has enabled children to reach out to AI technology, which not only brings convenience to children in learning, health, and entertainment with this rapid development, but also raises concerns about privacy protection. CUC has always valued the integration of AI-related discipline construction, technological progress, and social responsibility, and has deep accumulation in intelligent media and networks, for example, our State Key Laboratory of Media Convergence, the Key Laboratory of Intelligent Media of the Ministry of Education, et cetera. These are some of the efforts and attempts that we have made in this area. We also want to cultivate talent for future AI through these specialties. On the strength of its inherent academic and social research advantages in AI technology, and at the invitation of CFIS and UNICEF, one of the scientific research teams from CUC joined the AI for Children project group. They have conducted in-depth research on applications of AI for children and have done some fruitful work. Drawing on typical AI application cases in China, the guide we present today shows the advantages of relevant application scenarios and helps people from different backgrounds and fields correctly understand the advantages and disadvantages of relevant technologies. I hope that through these exchanges, we can all draw inspiration from the application of AI technology for children.
I hope this forum will be great success. Thank you.
>> DANDAN: Thank you. Let's move on to the most important moment of today's session and invite Director Mr. Huang to present the draft industry standards related to AI and child protection on the Internet. Welcome, Mr. Huang.
>> Hi, everyone. I'm Xiaoming Huang. Today I will introduce the Guide for the Construction of Internet Applications for Minors Based on AI Technology, as well as Tencent's technology and application practice in this respect. Hello. Let me share my screen.
(captioner cannot hear interpreter).
Aligned with the United Nations Sustainable Development Goals, Tencent has incorporated sustainable innovation into the company's core strategy, taking root in the consumer Internet and embracing the industrial Internet as the basis of the company's development, covering all core businesses and fully implementing a user-oriented, technology-for-good approach. Tencent always adheres to the mission of "user-oriented, technology for good" and takes social value as the core of its model. With this upgrade of concept and model, focusing on its strengths and guided by the needs of users, supporting the economy and community social value, we will fully implement the strategy of sustainable social value innovation. Based on the company's overall strategy, Tencent Cloud focuses on delivering technology through the Cloud. Tencent's computing brand has served 1 billion users. Tencent provides leading Cloud computing, data, and other technical products and services to enterprises and developers all over the world.
Tencent AI provides world-leading image recognition technology and more than 300 AI technology sharing applications.
(captioner is unable to hear interpreter).
The Internet, in this new era of information development, has shortened the distance between people, but it has also brought new risks and hidden dangers to the education, entertainment, and life of minors in the digital age, and how to develop ‑‑ (?)
First, I would like to report our first action to protect minors. (Speaking off mic).
The flood of information makes harmful information nearly impossible to prevent. For some minors with poor self-control, there is great risk of harm from such information, and the prevention of pornography and illegal publications is also urgent.
This led us to launch the protection action, which covers the following points: filtering objectionable information involving minors, filtering content that is harmful to minors, and protecting the privacy of minors ‑‑ (Speaking off mic).
The second of our actions is anti-addiction restrictions to protect minors. To strengthen the anti-addiction restrictions for minors, Tencent implements time limits and other control measures through face recognition verification at login, comparing against the data of the public security platforms; users who fail to pass the verification enter the anti-addiction supervision system.
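To make the flow just described concrete, the login-then-limit gate could be sketched roughly as follows. This is purely an illustrative sketch, not Tencent's actual implementation: the names, the 60-minute daily limit, and the boolean verification outcome are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical daily limit applied to accounts not verified as adults.
MINOR_DAILY_LIMIT_MINUTES = 60

@dataclass
class Session:
    user_id: str
    verified_adult: bool        # assumed outcome of an identity check at login
    minutes_played_today: int

def may_continue(session: Session) -> bool:
    """Gate a session: verified adults are unrestricted; everyone else
    falls under the minor daily time limit (anti-addiction supervision)."""
    if session.verified_adult:
        return True
    return session.minutes_played_today < MINOR_DAILY_LIMIT_MINUTES
```

The key design point the sketch illustrates is that a failed or absent verification defaults the account into the restricted path, rather than leaving it unrestricted.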
And our third action is the Tencent Light Public Welfare Innovation Challenge. In December 2020, the first Tencent Light public benefit challenge, initiated by the Tencent Foundation, was launched in China around public welfare scenarios: online protection for minors, design for the aging, and life protection. The participants were encouraged to use AI technology to create public welfare mini programs to help solve the pain points behind these different questions. In June, the final of the second Tencent Light charity innovation challenge was held under the guidance of the publicity department of the All China Women's Federation, and this year's challenge received 1,438 entries, doubling the number from the first edition. The All China Women's Federation remarked that the Tencent Light Public Welfare Innovation Challenge is a competition that combines technology and public welfare, a way for Tencent to actively fulfill its social responsibility. Now let's take a look at a work that has won the prize. I would like to share with you one of the winning works of Tencent Light public service: "Bubble," which is mainly designed for left-behind children in poor areas and students who have not fully mastered literacy, providing a peer literacy education platform for them in order to better stimulate their interest in learning. The team decided to use a bubble as a form ‑‑ (Speaking off mic) ‑‑ mastering knowledge of life, child protection, and health while improving their memory.
I would like to share another award-winning work, from AI Labs: rescue for children. This work uses Tencent Cloud AI image recognition technology to achieve 97% accuracy in its recognition model, and at the same time uses Tencent Cloud micro-page tooling to improve development efficiency with regular form pages. In practice, this assists patients and reduces the burden on families and medical resources. Under the guidance of the China Federation ‑‑ Tencent, together with more than 10 institutions ‑‑ standardization (Speaking off mic) ‑‑ formulated and released the group standard for application guidance based on AI technology. This guidance clarifies AI applications, with measures for enterprises to actively participate in industry co-governance, hoping to provide scientific support for industry development and construction through multi-party cooperation and resource synergy.
This standard was formulated with reference to relevant laws and regulations and relevant standards abroad, including the Convention on the Rights of the Child, the opinions on strengthening the protection of minors online, the regulations on the online protection of children, the national standards on information security technology and personal information security, and framework standards for age-appropriate digital services based on the rights of children.
Then I want to talk about the standard's principles. This standard adheres to being conducive to minors, including the following: first, ‑‑ (Speaking off mic) ‑‑ the safety of the Internet environment. Second, support the development and well-being of minors and enhance digital literacy. Third, conform to science and technology ethics: the fairness, transparency, nondiscrimination, and applicability of AI technology in Internet applications. Finally, promote industry governance and encourage Internet enterprises to actively participate in the protection of minors. Then, based on the industry's practical experience in the protection of minors, the standard establishes participation roles to ensure the effectiveness of AI applications: underage users, who are the direct users of AI applications; AI application service providers, the individuals or organizations engaged in the design, development, operation, and maintenance of AI applications for minors; and the service evaluators, the individuals or organizations that assess whether a given AI application meets the expected relevant policies, regulations, and standards ‑‑ (Speaking off mic). In terms of the specific construction of AI applications for minors, the standard covers the whole lifecycle of AI applications and, at the same time, from the aspects of equality, privacy, and data security, clarifies the network environment and pays attention to physical and mental development, with specific protection online and an established data scope ‑‑ (Speaking off mic). We have proposed measures and standards for AI application design and development for minors.
First of all, AI applications for minors may have ‑‑ relevant protection measures include using high-quality data to ensure the objectivity, impartiality, relevance, and appropriateness of the data, drawn from reliable, diverse, and critical sources ‑‑ (Speaking off mic).
Secondly, it is necessary to protect minors' personal privacy and data security ‑‑ measures include separate privacy statements for minors, deleting or masking data in a timely manner, using anonymized data, applying trusted computing environments, federated learning, and transfer learning technologies ‑‑ (Speaking off mic) ‑‑ and introducing professional third parties and institutions for risk assessments. Third, in terms of purifying the Internet environment for minors, relevant protection measures include filtering and preventing addiction, including intelligent reminders for youth. AI technology is used to support time management and other functions of Internet applications used by minors, including using recognition technology on text, images, or people to label and flag information that may affect the physical and mental health of minors. In online education AI applications, do not insert ‑‑ (Speaking off mic) ‑‑ information unrelated to teaching. Fourth, pay attention to the physical and mental development of minors.
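The filter-and-label measure described above can be sketched as a tiny stand-in pipeline. This is only an illustrative sketch: a real system would use a trained (likely multimodal) AI model to identify and label content, not the hypothetical keyword blocklist used here, and the function and category names are invented for illustration.

```python
# Hypothetical keyword blocklist standing in for a trained content classifier.
# A production system would score text/images with an AI model instead.
HARMFUL_KEYWORDS = {"gambling", "violence", "pornography"}

def label_for_minor(text: str) -> str:
    """Label a piece of text for a minor's account.

    Returns 'blocked' if the text matches the blocklist, else 'allowed',
    mirroring the identify-then-label flow the standard describes.
    """
    lowered = text.lower()
    if any(word in lowered for word in HARMFUL_KEYWORDS):
        return "blocked"
    return "allowed"
```

The point of the sketch is the flow, identify then label then act, rather than the matching technique itself.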
Finally, we look forward to working with more partners ‑‑ (Speaking off mic) ‑‑ on AI applications and making science and technology better protect minors. That's the end of my report. Thank you for listening.
>> DANDAN: Thank you. Just now we heard detailed explanations of the Children's Internet Application Guide based on artificial intelligence technology. Today our organizing committee has invited two international experts and scholars to comment on the draft industry standard. First, welcome Eleonore Pauwels from the intergovernmental organization, the United Nations.
>> ELEONORE PAUWELS: Ladies and gentlemen ‑‑ (Speaking off mic).
(audio fading in and out).
(captioner cannot hear interpreter).
>> DANDAN: Thank you, Eleonore. Second, Professor Andre Gygax from the University of Melbourne, a member of the project program and portfolio management committee, where he represents the University of Melbourne on standards. Welcome, Andre.
>> ANDRE GYGAX: Yes. Thank you for the opportunity to participate in this forum. So, what is the context? Children start using online tools such as devices and are exposed to online content at a much earlier stage in their life than they were in the past. When we look at the statistics that we collect in Australia, about 10% of children use electronic devices at an age of only 4 years.
Now, the global pandemic has accelerated this process through the wide use of home and online schooling for age groups as early as preparation grade, so that would be about age 5; it's also called preschool, for example, in other countries.
In Australia specifically, a study of online child exploitation was conducted at the national level by a special organization. Our family participated in this study, as we had a child in the relevant age group, and as we hoped to gain a better understanding of what was unique to our circumstances and what other families would experience with their children when they ventured online.
So, in this short talk, I will relay some of the key findings of the study conducted by the Australian Centre to Counter Child Exploitation, which was published in 2020 and is based on data from 2019. This is one thing that you should keep in mind because, obviously, the global pandemic has had a major impact on how children's online behavior has changed since then.
So, the study has since been widely used by Australian Government bodies to develop relevant policies to protect children from online organizations and individuals who are, importantly, often criminal in nature. This is a particular concern and also needs to be considered when implementing or setting a standard.
So, what are some of the key findings of the research study? My aim is to briefly discuss the study and then to highlight some of the findings that are of particular concern. The sample of the quantitative study was about 2,559 respondents across Australia, and the qualitative study had 159 participants. The sample covered all Australian states and territories, including, and I think that's important, the smaller ones such as the Northern Territory, where the biggest population group is Aboriginal, the native Indigenous people of Australia. Gaining a better understanding of the exploitation of Aboriginal children is particularly important, as this community is very vulnerable, as has been shown in the past: it has experienced problems such as early alcohol consumption and abuse, and also the widely documented problem of sniffing, which is particularly dangerous for the development of children as it potentially damages the functioning of the brain and has severe health implications.
So, the results show strong trends with age. This is somewhat surprising, as the study was conducted before the pandemic, in April and May of 2019, so even at that time we saw a strong trend in how children's behavior changed from the early age of about 4 up to 17 or 18 years of age.
So, children in the study covered 4 to 18 years, and the striking finding is that, by the time the child turns 8, a small majority of parents already feel, or felt, that they were less knowledgeable about online tools and technology than their children. I find it quite astonishing that the parents, who are adults, feel disadvantaged relative to the knowledge their children have about the technology they are using at the age of 8.
Then the age trends also show that younger children predominantly use tablets, whereas older children move to smartphones and increase the time they spend online. The statistics before the pandemic showed that, for the age group of 16 to 18 years old, on average the kids would spend 4 hours per day online, and given that they spend that time predominantly on smartphones, it becomes obvious that it's very hard for parents to see what exactly their children are doing online. It also shows the importance and necessity of industry bodies, the government, and regulatory organizations making sure that the standards are there, and also helping to better monitor what is going on and what activities and exposures these children are experiencing.
It would be interesting to match these statistics with the online app usage data collected by some of the content-delivery network providers, Internet usage tracking companies, and telecommunication providers, such as Tencent, as we have heard before. For privacy reasons, this is a difficult undertaking, but I think ways need to be found to make some of these data more accessible so that the standards and their implementation are more focused and successful.
So, what are some of the implications for standards-setting organizations, such as the International Organization for Standardization or, in Australia, Standards Australia, our country-level body? A common misperception is that a standard dictates how a process must be done. Usually, when we set a standard, we word it so that things are recommended or should be done in a certain way, because that has worked better in practice; it is really a guideline on how to do things in an orderly manner.
Standard setting regarding the protection of children is very difficult, since children often do not understand why certain online exposures could be bad for them. This is accentuated by the fact that children often see themselves as in control of their technological savviness, unaware of the psychological issues they are exposed to and could fall victim to. Obviously, most problematic are things such as the sexual exploitation of children, and it is interesting to watch how children use, for example, online games such as Roblox, how they interact or have an opportunity to interact with people they really don't know; it is problematic when parents do not understand exactly how their children are potentially exposed to people they don't even know in person.
So, what are some of the future directions? Much work lies ahead to translate research insights into meaningful standards that can be effectively implemented in practice. For this, a study like this one is a helpful tool, because it connects some of the empirical statistics with the more theoretical implementation and execution of the standards, to then be rolled out in practice.
So, it is one thing to set standards for an organization and it is an entirely different challenge to develop standards that can serve to protect our children from the great risk of online child exploitation.
This is all I wanted to present, so thank you again for your attention.
>> DANDAN: Thank you, Andre. The last speaker in today's session is Dora Giusti, Chief of Child Protection at UNICEF China, who will give a speech. Welcome, our friend Dora Giusti.
>> DORA GIUSTI: Thank you. We're very glad to be back together with the China Federation of Internet Society, as we have been before. For me and UNICEF China, it is a pleasure to close this important event today, which presented the Draft Industry Standards related to Child Protection and AI.
As mentioned by other speakers, these standards were launched as guidelines of the China Federation of Internet Society, as standards for members, group standards, relating to the protection of children in the construction of Internet apps using AI technology. As UNICEF and as partners of this event, I would like to commend the efforts of all the parties involved in these Draft Standards and Guidelines: the China Federation of Internet Society, Tencent, the Communication University of China, and many other experts and companies. These guidelines focus on how to apply ethical, safe, child-centered principles to the design and implementation of Internet applications based on algorithmic systems.
These guidelines were also inspired by the 2021 UNICEF Policy Guidance on AI for Children, and, as our expert Eleonore outlined, they have many strengths; and, as with everything that is in process, there are also aspects that we can continue working on.
I would like to remind us, as we close this session, of the nine key principles of the UNICEF Policy Guidance on AI for Children. The first is to support children's development and well-being. The second is to ensure inclusion of and for children. Third, prioritize fairness and nondiscrimination for children. Fourth, protect children's data and privacy. Fifth, ensure safety for children. Sixth, provide transparency, explainability, and accountability for children. Seventh, empower governments and businesses with knowledge of AI and children's rights. Eighth, prepare children for present and future developments in AI. And ninth, create an enabling environment.
Although all of these principles are important and interlinked, and they need to be reflected in any efforts that we promote on AI for children, I would like today, as we close this session, to stress the principles that are often more complex to implement and that some of the speakers have pointed out, too. The first is the need to ensure child-centered design across the entire lifecycle of AI, by ensuring the participation of children as well as of other experts and actors.
The second is the need to protect children's data and privacy, looking at the management of sensitive data, anonymity, and group privacy, as well as data ownership, among others.
Third is the need to identify child protection risks. We've just heard a presentation focusing on that, and on how we use innovation and technology to address some of these issues, for example, whether we need to use image-screening systems rather than encryption.
I believe this is a journey, and today is the first step, or one of the key steps, of this ambitious journey that will take us toward transforming the association standards into more ambitious industry and national standards that will apply to all information and communication technology companies. As we embark on this journey, we have to take these elements into consideration.
To do so, we need the participation of even more studies, more experts, and more companies, to analyze other similar efforts, to compare, to learn from each other, and also to include different views and make sure that these standards are strong and a model even for other standards.
UNICEF China will be glad to provide the child rights lens and share good practices, to be a bridge between the practices of other countries and the practices developed in China. The UN Committee on the Rights of the Child, in General Comment No. 25, recognizes that AI is becoming increasingly important across most aspects of children's lives. It offers new opportunities for the realization of children's rights but also poses risks of violations and abuse. To avoid these risks, we need to ensure that the three Ps, as they are called in the UNICEF Policy Guidance, are met. The first is provision: do good. AI has to benefit children in an equal way, respecting everyone's rights and bringing development and equity. The second is protection: do no harm. We do not want to expose children to any risks. The third is participation. We need to include children in all stages of AI, including development, testing, implementation, and assessment of impact, and we need to make sure we include children who are often excluded. Together we can build a safer world for children, and today we're taking a very important step toward this goal. Let me thank again all panelists, participants, CFIS, and IGF for this successful event, and I look forward to working together even more to create a safe digital world for children. Thank you.
>> DANDAN: Thank you, Dora. Before the end, we can take some questions depending on the time. I have seen that many attendees have put questions in the comment area, and I will choose two. The first question is to Andre from the University of Melbourne. You mentioned that the Australian Government has developed policies to better protect children from online exploitation. Could you please briefly introduce to us the specific measures taken by the Australian Government to implement these policies? Can these policies be better implemented by using AI? Thank you, Andre.
>> ANDRE GYGAX: Yes, thank you. I think the policies can potentially be better implemented by using AI, because it is important to understand what children's specific exposure is. For example, at schools during the pandemic, it was shown that the time children spent on the Internet increased dramatically. This obviously had big negative implications for learning progress because of lack of concentration and so on.
And in this context, the working-with-children policies we had have been extended to also cover, to a much greater extent, how we deal and interact with children in the online space rather than just the physical world.
One possible tool is, for example, time limits imposed on devices, so that when children at school are using devices, they can use certain apps only for a limited time.
Now here is a bit of a call to the tech sector. Apple, for example, has a feature that lets you set a time block, but the way these features are presently designed is somewhat problematic because they are not very flexible: you cannot really fine-tune the settings to the actual needs required to protect children. More flexibility would be very much appreciated by government bodies and also by educators, because it would really help to better protect children. I can also say from my own experience as a parent that setting time limits and effectively monitoring behavior was an ongoing struggle.
>> DANDAN: Thank you, Andre. Thank you for the answer. The second question is to Dora from UNICEF China. Dear Dora, what is the role of information and communication technology companies in ensuring child participation in the design and implementation of applications using AI technology? Thank you, Dora.
>> DORA GIUSTI: Thank you for the question. As I said, it's one of the key principles of the policy guidance. Companies that are investing in technology to develop AI for children should first of all conduct research, involving children directly and applying all of the ethical measures, to understand the patterns of behavior on the Internet that Andre was describing: how children are using it, what challenges and risks they are facing, and what they need to know. Then children also have to be involved in the design. Since we need a child-centered approach throughout the design, we need to involve them in what they think, in a way that is obviously child friendly and accessible to them. We also need to understand, together with children, the impact that what we have created is having on them. So companies would need to embed this element into the whole lifecycle of the AI: the thinking, the design, the implementation, and the assessment of impact. To do so, it's important to have the right expertise to make sure the ethical measures and codes of conduct are kept in mind. And also, which is more of a challenge, to consider how we involve children that are most excluded, perhaps even from access to the Internet and digital apps. Thank you.
>> DANDAN: Thank you, Dora. The last question is to Mr. Li from Tencent. Hello, speaker from Tencent, can you briefly explain AI standardization in relation to technology and industry development?
>> I have been working on standardization for more than 10 years. To sum up, the value of AI standardization mainly lies in two aspects: safeguarding the bottom line and navigation. Safeguarding refers to developing AI safety and compliance standards to promote the advancement of AI technology, because it needs to be reliable and controllable. At present, China has already developed compliance ‑‑ (interpretation sound fluctuating) ‑‑
The second aspect, navigation, refers to AI technology standards serving as guidance to promote innovation-driven AI technology, connecting industry, universities, and research institutes, improving quality, and driving the technological innovation and further development of AI.
>> DANDAN: Thank you. Mr. Li is also from the Tencent company and an expert on the standards-writing team. Thank you very much. Thanks to all attendees today, and thank you for your questions and the experts' excellent explanations. The forum develops a common understanding of how we can maximize the opportunities the Internet offers, how we can use it for the benefit of all nations and peoples, and how we can address risks and challenges.
One particular area of hope, but also of concern, is the relationship of children and young people to the Internet. The Internet has opened new doors for technology and culture, yet it can also present threats to their safety. The topic of this year's meeting has a strong focus on the protection of children, and I hope it will contribute to making them safer. In closing, thanks to all of you for your attention and your time. I appreciate it very much. I am sorry to say that today's conference will have to end here. See you next year. Bye-bye.