The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ELLA SERRY: Hello, everybody. Good morning and good afternoon, and good evening. We'll wait a couple more minutes until the onsite moderators give us the green light to commence.
>> Hello, can you hear me? It's Matt in the room. Hello, Matt in the room.
>> MATTHEW NGUYEN: Okay. I think they can. Awesome, excited to kick off. Thanks, everyone, for coming to this session on harmonizing online safety regulation. My name is Matt. I work at the Tony Blair Institute for Global Change, and I will be moderating today. Before we kick off, I would like to pass to Ella, who is online and works at the Australian eSafety Commission, for a brief overview of the new online safety regulators network that they have been involved in.
>> ELLA SERRY: Thanks so much, Matt. Just checking people can hear me loud and clear?
>> MATTHEW NGUYEN: Yep, you're good.
>> ELLA SERRY: Wonderful. Well, thanks, everybody, for joining this session, both online and in person. I truly wish I was able to join you there in Addis; alas, I have to settle for beaming in from Melbourne. My name is Ella Serry and I head up international engagement at the eSafety Commissioner, Australia's national online safety regulator. You will hear from some of the first movers and shakers in online safety regulation, and it's a special session too.
Just a couple of weeks ago, we joined together to launch the world‑first global online safety regulators network at the Family Online Safety Institute conference in Washington, D.C. So we're very pleased to be able to talk to you a little bit about the network: what it means for us as organizations, but also as part of a global community working together to keep everyone safer online.
I will be your online moderator, so please feel free to leave questions and comments in the chat box. I'm also joined by our online panelists: we have Julie Inman Grant, eSafety Commissioner of Australia; Celene Craig, CEO of the Broadcasting Authority of Ireland; Kevin Bakhurst, Group Director of Online Content at the UK regulator OFCOM, who I believe will be joining shortly; and Tajeshwari Devi in Fiji.
So I guess without further ado, I will hand back over to you, Matt, in the room to kick us off for the discussion.
>> MATTHEW NGUYEN: Awesome! Thanks for that intro. I'm keen to get stuck in. While we wait for Kevin, I would love to hear from our panelists: in each of your jurisdictions, what are some of the key focuses within your online safety regimes, and what kinds of trends have you seen emerge over the time you have been stewarding those regimes?
>> JULIE INMAN GRANT: Sure. I'm happy to start, because we have been stewarding our regime for about seven years here. I guess when you are the first online harms regulator on the scene, you kind of have to write the playbook as you go along, and we started small. We were actually created out of a tragedy. A very well‑known TV presenter in Australia, who had struggled with mental health issues, had been brutally trolled, mostly on Twitter, had a nervous breakdown, came back on, was told a range of horrible things, and she tragically ended up taking her life.
This was when I was applying to be one of Twitter's first representatives in Australia, back in 2014. It was referred to as the Twitter suicide, and it kicked off a huge petition to the government at the time. Citizens basically said, you know, you need to get involved and regulate ‑‑ people are being bullied online, people are losing their lives. But what the government decided to do at the time was to start with the Children's eSafety Commissioner.
So we took an online content scheme we had had in place for almost 18 years ‑‑ and as a result of having those strong fundamental laws, very little illegal content is hosted in Australia; most of it is overseas ‑‑ and then we created a youth‑based cyberbullying scheme. We are not proactively monitoring the Internet for harm. It requires a young person who is seriously harassed, intimidated, humiliated or threatened to report to the online platform where the abuse is happening first, and if it doesn't come down, then we serve as that safety net. That's appropriate because it's the most expeditious way to get it done. We know with all forms of online harm, the more quickly you can get the content down, the better.
And we resolve the vast majority of cases that reach that legislative threshold. Every time someone reports to us, it triggers an investigation. It's not like we are just putting our finger up in the air, saying, oh, we don't like this comment ‑‑ there is a legislative threshold. We have resolved about 88% of those cyberbullying cases informally over time, but we do have powers to issue notices to perpetrators and to fine platforms or perpetrators.
And then over time, we had a different range of functions and programs layered on. Traditionally, image‑based abuse has been a very gendered form of online abuse ‑‑ it used to be about 70% women and girls ‑‑ but we have seen a surge of criminally‑backed sexual extortion, such that almost 70% of our image‑based abuse reports are now sexual extortion, and the scales have tipped: the vast majority of reports are from young men between 18 and 24, and younger.
So, you know, we've got powers around violent materials and now serious adult cyber abuse, and we also have some systemic reforms around transparency and accountability: the basic online safety expectations, as well as mandatory industry codes. The first set of codes, which are owned by industry, deal with child sexual abuse material and TVEC ‑‑ terrorist and violent extremist content. And we minimize the threat surface for the future through initiatives like safety by design, and by looking at future tech trends.
So that's ‑‑ that's where we are. One of the interesting discussions, I think, as countries decide what their regulatory schemes should look like, is really should they be doing large systemic reforms with big fines? Should they be providing individual complaint schemes like we do? Should they not? Or should they do both? What I would argue is that we're actually able ‑‑ we were ‑‑ we have been able to help thousands and thousands of people get down very harmful and traumatizing content and to remediate that harm. So we see the impact that we're having every day.
But it also gives us a very rich evidence base to show us precisely where the platforms are falling down in terms of responding to systemic abuse and other challenges.
>> MATTHEW NGUYEN: Thanks, Julie. It's really interesting that you have the dual approach between individual redress ‑‑ notice and action ‑‑ and the beginnings of what seems to be a more systemic approach to platform design and accountability.
I would love to hear from our other panelists, who are, I guess, in the earlier stages of setting up their online safety regimes at the moment: is that reflected in your approach, or are there any key differences?
>> CELENE CRAIG: I'm happy to come in here, Matt, if I may.
So, Ireland. I'm working with the Broadcasting Authority of Ireland currently; we're the traditional media regulator in Ireland, but for the last few years, Ireland has been preparing its new online safety scheme. We are expecting the legislation to be fully enacted before the end of this year. And, of course, I think it's important to point out that Ireland sits within the wider European regulatory and legislative framework in relation to oversight of digital platforms, and there's quite a swathe of legislation that's either enacted already or in the pipeline. So that's the wider context for Ireland.
And Ireland has proposed and is proposing a largely systemic approach to regulation ‑‑ having regard to the fact that it's also the country‑of‑origin regulator for much of Europe, effectively regulating under some very major pieces of legislation for all of Europe, given the scale of content and, indeed, the speed of upload. Albeit it has regard to the kind of concerns that Julie was referring to, where there very often seem to be very significant and harmful pieces of content that are impacting on individuals' lives. The legislation does provide, further down the road, once the regulator is established, the potential for an individual complaints mechanism directly to the regulator.
However, for the moment, it's envisaged that the harms, the online harms have been identified in our legislation, will be addressed by the regulator through an online safety code, a compliance and enforcement regime, which is to be established by the new media regulator, and indeed, with all of the associated powers that will allow the regulator to enforce the legislation.
The kind of harms that we're talking about ‑‑ well, obviously, these touch on issues that are of tremendous fundamental concern and impact fundamental European values, such as, for example, freedom of expression. Hate speech is a major concern, and possibly one of the most important pieces is around the protection of minors.
Advertising is in the mix there as well. Within the Irish legislative regime, we also have a number of other specific harms identified, and these concern very egregious forms of cyberbullying ‑‑ not just limited to children, but for adults as well.
Also the promotion of self‑harm and suicide, and the promotion of eating disorders, are within the scope of the Irish legislation as it's currently formulated. And to provide a degree of future‑proofing ‑‑ recognizing that new trends emerge very quickly in the online space ‑‑ the Irish legislation also allows for proposals to come from the regulator to the legislature that would allow specific new harms to be identified and brought within the scope of the regulatory regime going forward. So I think that gives a certain element of future‑proofing.
I will say, however, that there are other harms that are very much in the public discourse currently, disinformation and misinformation is a very topical subject right throughout Europe, as it is in Ireland. Gender‑based violence is also a major concern.
So all of these are in the mix, both in the immediate future and potentially down the road as well, in terms of the types of harms that will fall within the legislative and regulatory regimes throughout Europe, including Ireland.
>> MATTHEW NGUYEN: Thanks for that, Celene.
And Tajeshwari, are you online? Do you have a perspective from Fiji?
>> TAJESHWARI DEVI: Yes, hello, everyone, from Fiji. In terms of online safety in Fiji: the Online Safety Act was enacted by the Parliament of Fiji in 2018, and the establishment of the Online Safety Commission commenced on the 1st of January, 2019. So basically, the Online Safety Commission has been established for the promotion of online safety, the deterrence of harmful electronic communication, and related matters. We promote responsible online behavior and online safety.
So it's both proactive and reactive, but we lean more towards the proactive measures. We also promote a safe online culture. That is actually the main objective of the establishment of the commission: to promote a safe online culture and environment, addressing issues such as cyberbullying, cyberstalking and harmful content, particularly in respect of children. Once these issues or matters are identified, we try to eliminate the harm and provide efficient measures for such individuals. Those are the objectives of the commission. And for our specific country ‑‑ Fiji being geographically remote and culturally rich ‑‑ we have very diverse communities, which means there are a lot of multiracial citizens living here who come from different cultural backgrounds.
So what I can say is that the culture we have here in Fiji, as compared to other countries, is very diverse, and different people live here with different norms and cultures that they follow. So things that go out on social media platforms concern them in different ways ‑‑ I'm an Indo‑Fijian, and my culture is different compared to that of other Fijians living in Fiji. It's very different. That's why, when we talk about online safety in Fiji, we talk about the local content and the local context in which the content needs to be created. It's very important, because at the community level the culture is very different, and at a city or town level it's very different again. So it's important to understand the cultural aspects of Fiji as well. And considering that Fiji's population is quite small compared to other countries, in terms of what we have experienced in practice since the establishment of the Online Safety Act, most people wouldn't know what online safety is.
If we ask them what the Internet is, they know it's Facebook, it's WhatsApp, you know. So it's really important for people to know that the Internet can also mean going to Google and typing in what your assignment is, what your work is ‑‑ it doesn't only lie in Facebook and the social media platforms.
Basically, the commission is here to promote online safety. We also carry out functions including the provision of online safety materials, and this year we launched online safety booklets. That was great material, because it reached a lot of the schools and the communities: there were approximately 30,000 booklets that we distributed to the schools, contextualized in three different languages.
Fiji has three different languages ‑‑ the Fijian language, Fiji Hindi and English ‑‑ so it's really important to actually have the content available in people's own languages. We also receive complaints where content causes harm; we assess them and provide advice on the queries that are submitted to the commission.
And the most important one is that we try to consult and work with relevant agencies. Considering that the commission was established three years back, we are still small in number, with limited resources and materials, so the Office of the eSafety Commissioner has been important to us in terms of our partnership. Most of the work we are not trying to replicate; rather, we are trying to learn and adapt from what is happening at the eSafety Commission, and then see how the Online Safety Commission Fiji can work accordingly.
So that is how it is and building this relationship is really great, actually.
>> JULIE INMAN GRANT: We learn from you too, Taj.
>> MATTHEW NGUYEN: Awesome. Thanks for that. I'm not sure if OFCOM has joined. So ‑‑ no?
But I wanted to pick up on something. The Musk takeover of Twitter is very much in the news, and we have seen a lot of Meta layoffs over the last month or so. A big hit in both of these companies has been to the trust and safety teams, and those teams are, I guess, the coalface of the kind of work that you do on the government side. So my question, particularly for Celene given how the more systemic reforms of the DSA will shape up, is: what are your thoughts on harder, enforced responsibilities, such as mandating that a certain percentage of the workforce has to occupy a trust and safety role, or that language parity has to be enforced ‑‑ things that go beyond where the conversation is now, around transparency and algorithmic audits?
Do you see that kind of pipeline coming after the first few years of data once the DSA is enforced, and what are your thoughts on greater powers that governments could have? So I will start with Celene, and then if anyone else wants to jump in, I would love to hear your thoughts too.
>> CELENE CRAIG: Thank you, Matt. Look, the DSA will have a very strong enforcement framework coming from Europe, coordinated by the European Commission. And the new Digital Services Board that will be put in place, arising from the enactment of the Digital Services Act, will give a Europe‑wide structure, if you like, to enforcement around the very large online platforms and the very large online search engines.
So that will be a strong and coordinated framework, I would expect. The regulatory framework at national level, I would expect, once established, can actually grow in terms of its ability ‑‑ well, first of all, the ability to enforce is there in the legislation, but I believe a structured approach to compliance and enforcement will be required to give it effect. So it's not just about having an online safety code; it's how you give it effect. That may be through a structured performance‑setting process and annual or cyclical compliance reviews ‑‑ I certainly wouldn't anticipate anything less than an annual review cycle.
And also actions that would be required to be taken coming out of that. I would expect that in Ireland, the regulator will want to see performance improvement year on year, with very focused performance objectives set from one period to the next, with built‑in, independently verified reviews. That's in addition to all the other regulatory tools, if you like, around risk assessments, and all of these pieces need to link up together: once the risks have been identified, that has to feed into performance objective setting and that whole process of review, so that year on year we can expect better performance. Obviously that needs to be matched with, you know, a strong enforcement regime where there have been previous breaches of legislation or where an online platform is not complying with the rules that are set in online codes or within the legislation. That's broadly how I would envisage it.
I think there's been a lot of experience in Europe, certainly, around voluntary monitoring and compliance with voluntary codes, and that has not worked to the standard that regulation would expect. So I do believe that it requires strong regulation, and that Europe will follow with strong compliance action where the rules are not observed or where there isn't that engagement from platforms in relation to compliance with the legislation.
>> JULIE INMAN GRANT: Matt, if you don't mind me weighing in, I do think we again have to look at multiple approaches, and ‑‑ I don't know if you meant that to be controversial ‑‑ I don't know that quotas, requiring certain proportions of people to be part of the trust and safety teams, for instance, will work.
And I say that based on the fact that most of these companies have a very complex operating rhythm and system. So you will recall that when the first culling happened at Twitter, Elon Musk said, oh, it's only 15% of trust and safety. Well, I sat on the public policy and philanthropy team and that team was obliterated. 50% were gone and the people that sat in public policy were the people that sit in the countries and engage with governments; whereas, the trust and the safety teams were behind the scenes, responding to reports.
But you need all of that. You know, the responsible AI, the ethical AI teams were also obliterated, and unfortunately, in my 22 years of experience in the tech sector, safety has always kind of been an afterthought. I recall putting Microsoft's first trust and safety strategy together and being referred to as a cost center. And I tried to bring safety by design to the company, probably 12 years ago, because I was sitting in product reviews and we were looking for security vulnerabilities and privacy breaches, but we weren't looking for personal harms.
So I think we have reached a tipping point now, where all of these governments are saying, hey, we have got to draw the line somewhere. And again, just to bring it back to the whole idea of the regulatory network, we will take slightly different approaches. There will be different issues that are going to be important to our governments of the day, but there are some common themes that are really emerging, and one of them is around safety by design. Fundamentally, if you are a car manufacturer ‑‑ I'm thinking about the Chevy Corvair and the Pinto ‑‑ you can't allow your cars to blow up. You embed antilock brakes, and we almost take that for granted, but that had to be legislated almost 55 years ago.
You know, you are not allowed to make food that makes people sick, or toys that blow up in kids' faces, but there has been a technological exceptionalism. We are pretty much dependent on technology, but there haven't been any rules of the road or requirements. So the fundamental is that you need to do the risk assessment up front and build the protections in at the front end.
That's a pretty consistent theme, and different countries will be taking different approaches. We have taken a cooperative and collaborative approach in the first instance, and we have developed risk assessment tools and enablement mechanisms so that these companies can comply with our upcoming codes and the basic online safety expectations. But as Celene said, other countries will take, you know, much more blunt‑force approaches. What I think we need to ask ourselves is how prescriptive we need to be, and how long it will take for some of the longer systemic reforms to actually percolate through the system. What I suspect we'll see is that we will all have a bunch of different tools that will work well together, depending on the platform or the problem that we are trying to solve collectively for the good of all the world's citizens.
>> MATTHEW NGUYEN: Thanks, Julie. I'm only here to ask controversial questions, so get ready for some more. I know that Kevin has joined us now, and obviously the UK is in the midst of passing its online safety legislation and OFCOM has ramped up its capacity to enforce it. I would love to hear your quick thoughts on that whole approach and where you see it heading for the UK, before I jump back out for questions for the panel.
>> KEVIN BAKHURST: Yes, sorry I'm late. I was locked out by Zoom.
This is a good time. The long‑delayed UK Online Safety Bill was formally reintroduced to Parliament this week, and now should move through Parliament. There have been some slight changes to it. In terms of your question, you know, OFCOM ‑‑ first of all, we have been quite lucky in a way, because the government here has given us funding for the last couple of years to enable us to build up preparations and recruit teams with specialist knowledge and industry knowledge.
So as an organization, we have built up approximately 250 people in the team so far to help us develop our regulatory approach, which includes people from industry and NGOs and so on. So, you know, we are really ready and primed to go as soon as Parliament actually passes the bill and we are formally given the powers, which will probably be around spring next year, and then we'll go out to consult.
A really important part of what we are doing ‑‑ and I will try to keep it brief because you said to make it quick ‑‑ is this international collaboration. We are lucky to be working with Julie and Celene and the Fijian colleagues on launching this network. You know, we are ambitious to expand the network quite soon, because we do believe a global approach, as Julie touched on, is by far the most effective way, A, to regulate; but, B, to align what we can in the regimes so the demands on platforms are aligned as far as possible.
>> MATTHEW NGUYEN: Thanks for that, Kevin, you set me up perfectly. I would like to hear about the international collaboration. With online safety, you have the full spectrum, from what is, I think, quite easy to drive consensus on, such as CSAM and terrorist content, all the way to some of the thorniest issues in tech policy ‑‑ what is disinformation, what is polarization. So, given the four of you have started this really incredible international network, what are your plans for actually embedding consensus on these issues that are a bit more on the thorny side?
>> KEVIN BAKHURST: Is that for me first? I can take it first if you would like me to.
>> MATTHEW NGUYEN: Yes, go first and anyone who wants to pitch in go for it.
>> JULIE INMAN GRANT: Last in gets the hardest questions.
>> MATTHEW NGUYEN: Well, that's you, Julie. You can answer this.
>> KEVIN BAKHURST: Look, I think, Matt, as you rightly say, there are some areas we have already discussed where we are very aligned on our approach and our prioritization, such as CSAM and, you know, really dreadful illegal content, terrorist content and so on. Clearly, those are areas that all the regimes will approach, though the approach may differ, as Julie touched on earlier. These are early days for the network, so we will set out a work program to see exactly what is the most valuable thing to share early on. I mean, the UK's regime is now very much focused on illegal content ‑‑ CSAM and terrorist content ‑‑ and the protection of children; the so‑called "legal but harmful" provisions of the bill have been slightly changed. It still covers this, but it's more focused on what the platforms put in their terms and conditions as it comes back into Parliament.
I'm sure we will be prioritizing the worst harms, seeing where we can make the greatest difference, and aligning as best we can as global regulators as we take up our various powers. But I will hand over to the others for other perspectives and to answer the more difficult questions.
>> JULIE INMAN GRANT: I mean, I guess we have sort of thought about, you know, what other like‑minded independent regulators look like, and is it a big‑D democracy or little‑d democracy ‑‑ whenever you go into the realm of what is considered censorship, things will get thorny. There were a few fundamental principles that we all agreed upon in terms of respect for human rights and freedom of expression. And our view, as we have observed over time ‑‑ and we have a legislative threshold that's relatively high ‑‑ is that misogynistic, racist, homophobic, any kind of targeted online abuse that is designed or intended to cause serious harm, is also intended to silence. If you let targeted online harm go untouched, that has a direct impact on freedom of expression. But as you have seen some of the debates play out in the United States, for instance, a lot of the debate is around either the First Amendment or Section 230, but then you have got bills like the Texas bill, and even some of the debate around Section 230 was about whether or not conservative voices were silenced vis‑a‑vis, say, more liberal voices.
So the tenor and tone can really change and become more political when you ‑‑ I guess you add value judgments to it and you are not looking viewing this in the context of harm and alleviating harm. And we will all define that slightly differently, I expect.
>> CELENE CRAIG: Matt, may I speak?
>> MATTHEW NGUYEN: Yes, please.
>> CELENE CRAIG: I think both Kevin and Julie have identified where the commonality is between the fundamental values. When you are working off a basic set of principles as a regulator in Europe, local or national law can be very different from one Member State to another, yet there's still enormous commonality in some of the regulatory tools that are used, if you like, to give effect to the objectives of media regulation. Our experience in Ireland has really been around the value that collaboration can bring. We can learn from each other in terms of prioritization and in terms of types of regulatory tools, and there's an enormous amount to be gained from the sharing of research as well. There are lots of ways in which we can benefit from this type of collaboration, particularly, as I said, when you are working off some very commonly agreed principles. For us, it has always been extremely valuable to collaborate, and we are seeing common themes in regulation: the requirement to undertake risk assessments, the formulation of online safety codes, and the experience of preparing, implementing and reviewing them. All of that gives us a huge amount of commonality, regardless of some of the specific details that can sometimes be required at a national or local level. But there is a huge amount of common ground, and that has really propelled us, I suppose, or impelled us, to join together and share our experiences on best practices.
>> MATTHEW NGUYEN: Taj, did you have any more thoughts before we move on to questions?
>> TAJESHWARI DEVI: Yes, practically, from Fiji's perspective, I'll just jump to our case, for instance. Fiji is low in population, and our commission has been established for the past three years. In terms of receiving reports from individuals, sometimes the complainant is in Fiji, or the complainant is a Fiji citizen who is overseas, or the perpetrator is overseas, or the perpetrator is here in Fiji ‑‑ it goes both ways. It actually becomes a little difficult for us to work those cases, because there aren't such regulations in other countries. We understand Australia does have one, so that makes our work quite easy there. But for other countries, we have to approach the Fiji Police Force, with whom we work very closely, and they deal with Interpol in order to connect the cross‑border cases.
So we are really, actually excited about this global online safety regulators network, because for a country that's as geographically remote and culturally rich as Fiji, the network offers a great opportunity for members to share information and discuss challenges such as cross‑border complaints, while acknowledging cultural diversity.
Collaboration with our international stakeholders is actually crucial because it is allowing us to achieve the online safety space that does not have any boundaries and we are stronger together than apart.
>> MATTHEW NGUYEN: That was a great sentiment to lead into questions. Thanks.
Yes, I want to open to the room if anyone has any questions or if Ella, if there's any online. I see there's a couple here so I might start here and Ella, if you want to corral the online audience.
>> AUDIENCE MEMBER: Good morning, thank you. My name is Bia and I come from Brazil. I would like to say congrats on the launch of the network. I wonder if you have a regulator from Latin America working with you or being part of the network. And for all of you, could you comment a little bit on the idea of the risk assessment from the DSA ‑‑ whether you think it might work, since some of you have already mentioned that the voluntary approach from the platforms has not been working so far? Even in the European DSA, there's this idea that the risk assessment is something the platforms themselves are going to be responsible for. Are you hoping it will work? And since we are from Brazil, we are looking very carefully at the new European legislation, the DSA and the DMA. As in other experiences, we have a very good opportunity to take advantage of the European legislation to foster national legislation in Brazil, so I would like to hear whether you think the risk assessment might work, and also how we could collaborate with you from Latin America. Thank you very much.
>> JULIE INMAN GRANT: I would just say that the reason we wanted to develop risk assessments is so that, again, companies are being mindful that their technologies, their policies and their processes are being built in ways that don't harm people. And, you know, I think it's fair to say that not many people would have started a tech company with the idea of hurting people.
It always starts with good ideas and good intentions, but we also know that whenever you have humans in the mix, you can't always moderate human behavior or human malfeasance, and that's what the result is.
I was going to say, I am not aware of any independent regulators or statutory authorities in Central or Latin America. In my previous roles in the tech sector, I definitely worked across Latin America with Data Protection Authorities, and, you know, my observation is that many countries in Latin America, in particular South America, do tend to follow the European tradition of the Data Protection Authority. But that's something to look into in terms of how we might try to expand our reach. I mean, we haven't talked about this, but over time we might look at, you know, model laws or doing case studies around what has been effective with risk assessments or specific regulatory tools or industry codes.
And I think in some ways companies and industry will appreciate that approach. They don't want to be negotiating, you know, 190 wildly different sets of codes that they then have to comply with.
>> KEVIN BAKHURST: Can I add to that, if that's all right, Julie.
>> JULIE INMAN GRANT: Yeah.
>> KEVIN BAKHURST: On the risk assessments, the UK's legislation has risk assessments at the heart of it as well, and companies will be required to do risk assessments on illegal harm. We will look at them, and OFCOM in the UK is required to do an overall risk assessment to identify the issues that the larger companies should be looking at. This is the category one, or the biggest, companies. Just to pick up on Julie's point and the question about Latin American members.
I mean, as Julie has mentioned already, we agreed that membership of this global network is based on the fact that it's independent regulators that are aligned on a commitment to freedom of expression and human rights and so on. However, I think, you know, one of the areas we have also discussed, apart from the core membership, is convening other interested parties and NGOs and so on. So we will find other ways to engage with those people in Latin America, and indeed across Europe as well, to bring on other like‑minded regulators and find other ways to engage with regulatory authorities that meet that core criteria.
>> CELENE CRAIG: Matt, if I may come in here just on the subject of risk assessments. I think my colleagues Julie and Kevin have covered the issue around participation in the network, and suffice to say, we will not be standing still in relation to its current formulation. Just in relation to the issue of risk assessments under the DSA, the question was around the structure that would be given to that. Obviously, I think it's also what is facilitated in the Irish legislation.
The risk assessment under the DSA will have to be tailored to the breadth of issues that fall within the scope of the DSA, and these potentially go beyond media content as such. There are other illegal harms that fall within the scope of the DSA. Obviously, our focus in Ireland is very much on, you know, putting a framework around the legislative provisions that will allow a thorough assessment of the risks as they are presented.
As I said earlier, what will be very key in the risk assessment is to look at product design to see where the risks might have been mitigated in the first instance, but also to look at the platforms' policies, practices and procedures in terms of how they address that, how they mitigate the risk, and to try to find ways of minimizing the risk to a much greater extent. There's a broad framework there that can be developed and applied, perhaps, to address local concerns or more specific local or national needs. But as I said, the risk assessment template for the DSA obviously hasn't been given full effect at this stage, but I would expect that that will be a key priority at the European level in 2023.
>> AUDIENCE MEMBER: Thank you. Malcolm from the London Internet Exchange. The topic of this session is about harmonizing the treatment of online harms, and what we have been introduced to is the regulator‑to‑regulator network here, which, if I understand correctly, is about regulators learning from each other and developing, if not best practices, maybe that's a bit strong, then learning opportunities from what each other are doing. I wonder if that works in just one direction? The regulation of allegedly harmful content strikes a balance, in a given country, between concerns about what should not be allowed and concerns about protecting freedom of expression and the rights to participate online and to receive information online. But that balance is often struck, and mostly struck, before the legislation is passed. So for example, in the UK context, if we were talking a couple of weeks ago, OFCOM would talk about content that's harmful to adults as one of the major pillars of the legislation that's coming, and now it's not going to be, because the government has been persuaded that that goes too far against freedom of expression and has taken a big chunk out of the bill.
So if that regulator‑to‑regulator contact had happened two weeks ago, I would expect OFCOM would have been describing how you could learn from all that we are doing on content that's harmful to adults but not illegal, and now it won't in the same way, because that's not included as a result of the change in the balance that was made in the legislative process in the UK. So how would that have been received by the other regulators if OFCOM had been saying that? And do we get a different outcome? Do the other regulators take up the idea of content that's legal but harmful to adults as OFCOM would have presented it two weeks ago, or does it not spread around the world because it gets taken out of the bill before OFCOM comes and engages in this regulatory network?
Or do the regulators challenge each other not only on what we could be doing to suppress harmful content, but also on what we are doing that we shouldn't be doing, or what we should be doing to protect freedom of expression more? I'm only really aware of the Texas legislation as legislation that really assesses suppression by trust and safety teams as a major harm in itself.
For the most part, the legislation that I'm aware of assumes that freedom of expression considerations also underpin harms regulation. But do the regulators in this network seek to fill in that gap by adding new specific things that should be done to protect it? Or is that just assumed, again, as part of the background, so that the information sharing acts as a ratchet of suppression?
>> But we see it also inside of a single country. And ‑‑
>> MATTHEW NGUYEN: I think we have someone who dialed in accidentally.
>> JULIE INMAN GRANT: Yep.
>> MATTHEW NGUYEN: If you want to just push them out.
>> JULIE INMAN GRANT: Can we mute them?
>> Because they are violating intellectual property rights and ‑‑
>> MATTHEW NGUYEN: Can we mute them or ‑‑
>> Regarding gambling ‑‑
>> ELLA SERRY: I'm sorry, I'm not able to mute.
>> We say to the Internet search provider or to the platform ‑‑
>> MATTHEW NGUYEN: Cool.
>> ELLA SERRY: Oh, thank you whoever did that.
>> JULIE INMAN GRANT: Listen, that was a long question, and I will probably turn it over mostly to Kevin, but I just want to say, I mean, he rightly points out, it's legislatures, it's parliaments, it's policymakers that create the laws. And you know, we certainly have that experience. I think we have been working with the ‑‑ I'm going to pronounce this terribly wrong, Celene ‑‑ the Oireachtas, the Irish policymakers, in terms of shaping, not because we had any particular interest in shaping how Ireland developed its online safety regulation, but just to answer questions about how our operating model worked, what worked, what doesn't, so that others can learn from our mistakes. And you know, the idea of a global network of regulators isn't new. The global network of privacy commissioners was established in 1979, and we have so many privacy commissioners around the world. We don't think about a time when there weren't regulators in that area, in much of the world. If you look at the IIC, which is the traditional regulators' network, and Celene and Kevin would know much more about them, they have been around even longer. So we have built flexibility into the framework of the network so that we can have certain countries or governments that are interested and fit the criteria serve as observers, so that they can learn from us too. And sure, we may shape each other's practices.
I may learn from Ireland that the fundamental research they have done on changing ideas around how to mitigate self‑harm, for instance, works really well according to their evidence, and we might want to try it in Australia. So I can foresee a range of different scenarios where we'll be learning from each other so that we are not re‑inventing the wheel.
I don't think we are here to change anyone's particular way of doing things because for most of us, our parliaments or Congress or our legislatures will have defined what our functions are and, you know, what the risk tolerance is around some of these issues.
>> MATTHEW NGUYEN: Does anyone have a quick comment before I close?
>> KEVIN BAKHURST: Matt, may I just pick up the freedom of expression point, which is a valid point made from the floor there. But there are different approaches. You know, it is actually baked into the UK online safety legislation, and OFCOM as the regulator has a statutory responsibility to consider freedom of expression, which we already do in our broadcasting work. So, you know, it will be at the forefront of our minds. And just on the wider point about, you know, regulators and how we are going to work, if you like, I think I would echo what Julie just said. It's about sharing best practice. You know, we will all be bound, as Julie said, by our parliament or legislature. So we will not try to change what each other are doing. We will align as far as we can; we are very aware these will be relatively different regimes, whether in scope or approach, whether it's systems and processes in the UK or some other parts of the world.
>> MATTHEW NGUYEN: Awesome, thanks. I'm an optimist at heart, and I think international collaboration is the only way to be able to solve this. Congrats on launching your network and joining us here in Addis for a really great panel. I'm sure we are all looking forward to everything that you do over the next few years as the network expands and makes some interesting progress on this issue.
So thanks so much for joining us. And good night, good morning, good day.
>> JULIE INMAN GRANT: Thanks for making international networks sexy!