IGF 2023 Lightning Talk #94 The technopolitics of face recognition technology

Wednesday, 11th October, 2023 (01:40 UTC) - Wednesday, 11th October, 2023 (02:40 UTC)
SC – Room H

Human Rights & Freedoms
Non-discrimination in the Digital Space


Latin American Network of Surveillance, Technology and Society Studies - LAVITS
Rodrigo José Firmino, LAVITS and JararacaLab, Brazil, Latin America. Júlia Faustina Abad, LAVITS and JararacaLab, Brazil, Latin America. Fernanda Glória Bruno, LAVITS and MediaLab.UFRJ, Brazil, Latin America. Débora Pio, LAVITS and MediaLab.UFRJ, Brazil, Latin America.


Fernanda Glória Bruno, LAVITS and MediaLab.UFRJ, Brazil, Latin America, and Rodrigo José Firmino, LAVITS and JararacaLab, Brazil, Latin America.

Onsite Moderator

Rodrigo José Firmino

Online Moderator

Débora Pio


Júlia Faustina Abad


10. Reduced Inequalities
16. Peace, Justice and Strong Institutions

Targets: Within the scope of urban planning and local governance, and in the name of public security, surveillance technologies have become mechanisms for the repressive control of violence and crime in public and private spaces. Video surveillance with face recognition technology enables the constant, systematic and indiscriminate location, identification and social classification of individuals and, consequently, turns public spaces into hostile environments that violate fundamental rights. This demonstrates the strong links of this proposal with SDG 16, Peace, Justice and Strong Institutions. What was presented as a form of surveillance for the reduction of urban violence has become a mechanism for violating rights inherent to the subjects, such as equality and non-discrimination, insofar as these technologies reproduce the segregating and racist realities of the criminal system, exposing public spaces of local gatherings and protests to the criminalization of the general population. According to a recent study by the Security Observatory Network (2019), 90.5% of people arrested with the use of face recognition in Brazil are black. This reflects the so-called algorithmic racism of these technologies, mainly due to the inaccuracy of the systems and the databases that constitute them. The use of face recognition has proved to be yet another mechanism of oppression and mass incarceration of the black, poor and peripheral population, which connects the themes of this session to SDG 10, Reduced Inequalities.


The session will consist of an introduction of participants (5 minutes), a short presentation (10 minutes) of an advocacy project calling for a ban on face recognition technologies in public spaces in Brazil, followed by a round of questions and comments (15 minutes).

Duration (minutes)



The use of face recognition as a preventive and repressive crime-control mechanism meets demands for security fueled by fear of urban violence. However, the use of this technology leads to increased vigilantism in public spaces, based on a discriminatory algorithmic bias, and can create hostile spaces, which in turn implies the violation of fundamental rights. This session, titled “Technopolitics of face recognition technologies”, aims to present and discuss a political advocacy project based on a campaign to ban the use of face recognition technologies in public spaces by governmental institutions in Brazilian cities. It constitutes one of the possible strategies of resistance against the use of this technology within an oppressive urban security system, seeking to preserve a free and democratic public space and to create new narratives about mass surveillance in public administration and urban planning. With this, we seek to question the nuances of the so-called right to the city, mainly on the part of stigmatized and minoritized populations, such as residents of peripheral areas, black people, women and transgender people. More information about this theme and this political advocacy project can be found in the following links (in Portuguese):
https://medium.com/jararaca/smart-sampa-e-o-reconhecimento-racial-4a54e…
https://lavits.org/miniguia-reconhecimento-facial-e-quando-a-maquina-er…
https://lavits.org/tjsp-mantem-proibicao-de-coleta-de-dados-pela-via-qu…
https://lavits.org/parlamentares-de-todas-as-regioes-do-brasil-apresent…
https://lavits.org/iniciativas-que-apoiam-o-banimento-do-reconhecimento…
https://www.semcameranaminhacara.meurecife.org.br
https://jararacalab.org/pl_banimento-rf/
https://jararacalab.org/tire-meu-rosto-da-sua-mira/
https://jararacalab.org/espacos-publicos-e-reconhecimento-facial/

According to the programme and following IGF Secretariat instructions, Lightning Talks will not have online participation. There will be no live broadcast or recording of this type of session.

Key Takeaways (* deadline 2 hours after session)

To strengthen the public debate on the potential risks of using face-recognition technologies for public security and education.

Call to Action (* deadline 2 hours after session)

The public sector in general, mainly in the fields of education and safety, as well as lawmakers, are called into action to critically and transparently involve civil society in the debate on the risks and repercussions of implementing high-risk technologies such as face recognition.

Session Report (* deadline 26 October)

This Lightning Talk focused on the project to mobilize researchers, social movements and sectors of civil society for a public debate on the use of face-recognition (FR) by State agents in public spaces in Brazil.

It was acknowledged that the use of FR as a mechanism for the prevention and repression of crime satisfies the demands for safety fed by the fear of urban violence. However, it was argued that the use of this technology leads to expanded vigilantism with a discriminatory algorithmic bias in public spaces and can create hostile spaces that favor the violation of fundamental rights and guarantees such as freedom, privacy, personal data protection, assembly and association. Speakers highlighted how, in the course of this debate, legislation has been developed with members of congress to ban the use of FR in public spaces and how the use of FR has become more common in Brazilian schools, with a focus on the case of the state of Paraná, in the South of Brazil. 

The session covered the following points:

  • The use of face-recognition (FR) technology in Brazil, where speakers gave some context and explained how resistance to FR has been built over the past few years.
  • Arguments for banning FR.
  • The naturalization of FR, and the escalation of its use in the state of Paraná, with massive usage of FR in schools.
  • A conclusion arguing that this battle is still being waged.

The main presentation ended with the statement that the debate on FR in Brazil has been a milestone in the inclusion of racial, socioeconomic and gender issues on the sociotechnical agenda. It also represents, for the same reasons, progress in the debate on the penal system and mass incarceration in a country that has the third largest prison population in the world. With more than 900,000 prisoners, of whom 45% are pre-trial detainees, Brazil lies behind only the USA and China. Today, the debate about these issues can be found in the press, in traditional and independent media and in parliaments, as well as in the, albeit very often hypocritical, discourse of agents of the State and the private sector.

Finally, it was said that this whole debate is not a question of merely banning the use of a technology that has technical failings, but of questioning its technopolitical dimensions and challenging the unfair, racist structure of surveillance, monitoring and repression systems in Brazil.

Therefore, it is fair to say that the main aim of this session was to raise awareness about the debate on the potential risks of using face recognition technologies for public security and education in countries with a history of repression and discrimination against minority populations.

After the case in Brazil was presented by both speakers in 20 minutes, the discussion focused on two main issues. The first centred on the problem of naturalising the use and acceptance of surveillance technologies in schools through low-profile implementations without proper critical assessment or public control. The second focused on the ways in which these projects are socially, politically and economically constructed, with regard to the actors and interests involved in the spread of face-recognition technologies in many sectors of society.

The session closed on time.

Speakers: Fernanda Bruno and Rodrigo Firmino. Moderator: Rodrigo Firmino.