Session
Organizer 1: Diogo Cortiz da Silva, Network Information Center (NIC.br)
Organizer 2: Lucia Santaella, Pontifical Catholic University of São Paulo
Organizer 3: Hartmut Richard Glaser, Brazilian Internet Steering Committee - CGI.br
Speaker 1: Diogo Cortiz da Silva, Technical Community, Latin American and Caribbean Group (GRULAC)
Speaker 2: Lisa Feldman Barrett, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Javier Hernandez, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Jessica Szczuka, Civil Society, Western European and Others Group (WEOG)
Speaker 5: Marina Meira, Civil Society, Latin American and Caribbean Group (GRULAC)
Henrique Xavier, Technical Community, Latin American and Caribbean Group (GRULAC)
Gabriela Nardy, Technical Community, Latin American and Caribbean Group (GRULAC)
Pollyanna Rigon, Technical Community, Latin American and Caribbean Group (GRULAC)
Round Table - U-shape - 60 Min
This workshop aims to establish an initial, interdisciplinary discussion that introduces the topic of Affective Computing to the IGF agenda, so we propose the following policy questions:
- To what extent is Affective Computing a reliable and trustworthy technology for inferring a user's emotions? What are its scientific and technical limits, and how could we use it to improve quality of life as well as monitor global health risks on a large scale?
- Affective Computing solutions are typically designed to be global. How can we ensure that cultural and local criteria are taken into account in the design phase to enable equitable access to the technology?
- How can we make sure that Affective Computing is not developed and used for harmful purposes? What values and norms should guide its development and use?
Connection with previous Messages:
3. Good Health and Well-Being
10. Reduced Inequalities
Targets: This workshop proposal is related to the following SDGs:
3 - Good Health and Well-Being: One topic we will address in this workshop is the potential of Affective Computing, if used properly, to improve quality of life as well as to monitor global health risks on a large scale.
10 - Reduced Inequalities (within and among countries): This SDG is linked to our policy question on how to ensure that cultural and local criteria are taken into account in the design phase, enabling equitable access to affective computing technologies.
Description:
Affective Computing is an emerging field that studies how computers can recognize, interpret, and simulate human affect. Its approaches employ various types of input, such as images of facial expressions, text, voice, and physiological data. With the advancement of Artificial Intelligence (AI), Emotion Recognition is becoming more detailed and fine-grained. A few years ago the most common task was classifying an emotion by its valence (positive, negative, or neutral); today some techniques claim to classify a user's emotional state into as many as 40 different categories. If we take into account the advances in Virtual Reality (VR) as an enabling technology for the Metaverse, we find an even more favorable environment for collecting and mapping data about people's subjectivity, cognition, and emotions. This scenario can have different practical outcomes, such as providing a better user experience or empowering people through a richer understanding of their emotional states, including tools for monitoring global health risks on a large scale. These capabilities, however, come with new privacy and governance challenges. This workshop will discuss the growing adoption of AI models that identify users' emotions and thereby infer sensitive aspects of their subjectivity, for purposes ranging from profiling to delivering better, customized experiences on the Web and, in the future, in the Metaverse. Despite the seemingly convincing results of those systems, there is still no scientific consensus on the technical, social, and ethical feasibility of using AI to infer people's emotions. This workshop will provide an interdisciplinary space that brings together experts in neuroscience, AI, privacy, and regulatory matters to identify the potential uses of those technologies and their possible risks, in order to drive future governance needs.
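As a hedged illustration of the shift from coarse valence classification to fine-grained emotion recognition described above, the minimal sketch below uses the Hugging Face transformers pipeline API; both model checkpoint names are hypothetical placeholders, not systems endorsed or discussed in this proposal.

```python
# Minimal sketch: coarse valence vs. fine-grained emotion classification on text.
# Both model checkpoints below are hypothetical placeholders; any real system
# would face the reliability and cultural-bias questions raised in this workshop.
from transformers import pipeline

# Coarse task: positive / negative / neutral valence.
valence = pipeline("text-classification",
                   model="some-org/valence-3-classes")  # hypothetical checkpoint

# Fine-grained task: dozens of emotion categories (e.g., awe, boredom, relief).
emotions = pipeline("text-classification",
                    model="some-org/emotions-40-classes",  # hypothetical checkpoint
                    top_k=None)  # return scores for every category, not just the top one

text = "I can't believe the results arrived today."
print(valence(text))   # e.g., [{'label': 'neutral', 'score': 0.61}]
print(emotions(text))  # e.g., a ranked list of scores over ~40 emotion labels
```

Note that the fine-grained output is only a distribution over labels chosen by the model's developers; whether those labels correspond to anything a user actually feels is precisely the open scientific question this workshop addresses.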
- Inform audiences and all interested parties about key uses, new trends, and challenges of Affective Computing.
- Provide a list of emerging initiatives to ensure the ethical and safe use of Affective Computing on the Web and in the Metaverse.
- Elaborate key recommendations from speakers and participants to introduce a multistakeholder perspective on Affective Computing and guide the local policy agenda.
Hybrid Format: The session will be divided into three main segments: introduction, debate, and interaction. The first segment will consist of an initial presentation by each participant (5 minutes each) to bring a multistakeholder view to the topic. The second segment will present the speakers' views on the policy questions. The last segment will consist of interaction between the audience and the speakers; for this final part, Q&A questions will be collected by the online moderator and handed to the in-person moderator, who will select and distribute them among the speakers. We plan an online meeting with all the speakers one week before the IGF to coordinate their interventions based on who will participate online or in person. The onsite moderator will then be able to coordinate each participant's speaking time according to the topic and type of presence (online or onsite) to ensure the best possible experience for both onsite and online audiences. We plan to use all tools provided by the IGF organization.
Usage of IGF Official Tool.
Report
Session Report
IGF 2022 WS #354 Affective Computing: The Governance Challenges
Tuesday, 29 November 2022, 12:05 - 13:05 UTC
Speakers: Dr. Diogo Cortiz, Dr. Lisa Feldman Barrett, Dr. Javier Hernandez, Dr. Jessica Szczuka, Mrs. Marina Meira
Moderator: Dr. Henrique Xavier
Rapporteur: Mrs. Pollyanna Rigon Valente
The moderator opened the session by introducing the theme of the discussion: how computers can be used to interpret and simulate human emotions, and the potential, issues, and other challenges this raises.
Dr. Diogo Cortiz, researcher at the Web Technology Study Center (Ceweb.br), a center of the Brazilian Network Information Center (NIC.br), and professor at the Pontifical Catholic University of São Paulo (PUC-SP), opened his initial contribution by presenting some inputs and concepts about Affective Computing (AC) and how it fits on the IGF agenda. Affective Computing is not a specific technology but an area of knowledge: it is possible to develop different types of applications to recognize, detect, simulate, and organize data about human emotions. Dr. Cortiz stated that AC is close to AI when the discussion turns to governance and regulation, because both are not single technologies but broad areas of knowledge that can involve different types of applications. He also made an important point: Affective Computing does not always use AI (it can rely on self-report, for example), but the most important current use cases are based on AI models for emotion recognition. Dr. Cortiz ended his initial talk by presenting two sensitive challenges that need to be addressed:
- Using Affective Computing with AI, how is it possible to be sure that an AI application is right? When inferring subjective states, AC may be wrong yet make us believe it is right.
- Global models: in most cases we use models trained on data from users in the Global North, but those models will have impact and be used across other regions and cultures of the world. How can we ensure they will work? What are the risks?
Dr. Lisa Feldman Barrett, professor at Northeastern University, spoke about one specific aspect of affective computing: automated emotion recognition. Using images from her research, she showed how wrong AI can be when recognizing human emotions. With further examples of facial expressions and the emotions attributed to them, Dr. Barrett argued that facial movements are only expressions and are not necessarily related to an internal emotional state. That is the challenge for affective computing and for AI models that use facial expressions to detect emotions. If we really want to use this technology to our benefit, affective computing must measure many signals, not just one, two, or three. Dr. Barrett ended her initial talk by arguing that for emotional AI to be successful, the entire ensemble of signals must be measured across different situations, different people, and different cultures.
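To make the single-signal versus ensemble argument concrete, here is a minimal toy sketch in Python; every feature, label, and number in it is invented for illustration and does not describe any system presented in the session.

```python
# Toy sketch of Dr. Barrett's argument: inferring emotion from one signal
# (facial movement) vs. an ensemble of signals measured together.
# All logits and labels here are invented for illustration only.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

LABELS = ["anger", "joy", "sadness", "fear"]

def single_signal_guess(face_logits):
    """Face-only inference: a scowl is *treated as* anger, which conflates
    facial movement with internal state (the mistake Barrett warns about)."""
    return LABELS[int(np.argmax(softmax(face_logits)))]

def ensemble_guess(face_logits, voice_logits, physio_logits, context_logits):
    """Late fusion: average the per-signal probability estimates so no single
    channel (e.g., the face) dominates the inference."""
    probs = np.mean([softmax(x) for x in
                     (face_logits, voice_logits, physio_logits, context_logits)],
                    axis=0)
    return LABELS[int(np.argmax(probs))]

# A scowling face suggests anger on its own...
face = np.array([2.0, 0.1, 0.3, 0.2])
# ...but voice, physiology, and context all point elsewhere.
voice   = np.array([0.2, 0.3, 1.5, 0.1])
physio  = np.array([0.1, 0.2, 1.8, 0.3])
context = np.array([0.3, 0.4, 1.6, 0.2])

print(single_signal_guess(face))                     # -> "anger"
print(ensemble_guess(face, voice, physio, context))  # -> "sadness"
```

Even this toy ensemble only averages signals; Barrett's stronger claim is that the whole ensemble must also be validated across situations, people, and cultures before any label can be trusted.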
Dr. Javier Hernandez, researcher at Microsoft, highlighted that we need a discussion across multiple disciplines, because many of us are both excited and worried about the potential applications of this technology. Adding to what Dr. Cortiz had shared, Dr. Hernandez gave more context on affective computing research: the field started around 1995 and is the study and development of systems and devices that can recognize, interpret, and simulate human affects. Speaking about his role at Microsoft, he explained that his team works across different areas, such as comfortable sensing, using wearable devices to find ways to capture information from users, and extensive work on AI to better understand what emotional states really mean and how to sense them in real settings, so that affective interactions and experiences can use that information in unique ways to help users achieve certain goals. Looking at all of this, Dr. Hernandez said that one of the core mission statements is improving emotion regulation, helping users become better at managing their own emotions. Around 2015, affective computing gained prominence as an emerging technology; while it seemed a good research opportunity, companies also began to look at it as a commercial opportunity. He then summarized the main challenges and ways to minimize them:
- Challenges: the theory of human emotions is evolving; human emotions are difficult to describe and label; there is a lack of representative and generalizable data; oversimplified language is used to communicate system capabilities; and there is a blurred boundary between what should be private and public.
- How to minimize these challenges: communication, consent, calibration, and contingency (see the illustrative sketch after this list).
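As a purely illustrative sketch (not something presented by the speaker), these four mitigations could translate into application-level safeguards along the following lines; every class, field, and threshold here is an invented assumption.

```python
# Illustrative sketch of the four mitigations (communication, consent,
# calibration, contingency) as application-level safeguards.
# Every name and default value here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AffectiveSafeguards:
    # Communication: state plainly what the system can and cannot infer.
    capability_notice: str = ("This feature estimates, imperfectly, a coarse "
                              "emotional signal; it does not read your mind.")
    # Consent: no signal is used without an explicit per-signal opt-in.
    consented_signals: set = field(default_factory=set)  # e.g., {"voice"}
    # Calibration: per-user baselines instead of one one-size-fits-all model.
    user_baseline: dict = field(default_factory=dict)
    # Contingency: a confidence floor below which the system must not act.
    min_confidence: float = 0.8

    def may_infer(self, signal: str) -> bool:
        """Consent check: only signals the user opted into may be processed."""
        return signal in self.consented_signals

    def act_on(self, label: str, confidence: float) -> str:
        """Contingency check: fall back to asking the user when unsure."""
        if confidence < self.min_confidence:
            return "fallback: ask the user instead of assuming"
        return f"proceed cautiously with '{label}'"
```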
Dr. Jessica Szczuka presented an aspect of affective computing that some in the audience probably had not thought much about, intimacy and sexuality, and invited us to explore how important emotion can be there. There are three different ways we can come to intimacy and sexuality with technologies: through, via, and with. The last one may look very futuristic or sci-fi, an actual intimate or sexualized interaction with the technology itself, but we are not that far away. One of the challenges she presented was how affective computing relates to short-term and long-term interactions. Presenting part of her research, she highlighted one element of the model that is relevant to this question: sexual arousal, which shifts attention and cognitive resources toward sexual fulfillment, so that in that specific moment the user does not have the full capacity to reflect that the machine may not be reading their emotions correctly. Dr. Szczuka also presented research showing that recurring interactions, and the dynamics that evolve within them, are key to how an artificial entity such as a chatbot compares to our daily contacts; this is very hard to implement, and we need to make sure that companies using this technology are aware of the potential consequences. To give more context about the consequences, she offered examples: affective computing can be a way to nudge users into using a specific technology, since we have a need to rely on others and use our emotions to do so; and when people interact in an emotionally intense state, which affective computing can obviously misread, the interaction will also generate a great deal of very sensitive data. As ways to minimize these challenges, we should stay technology-positive, providing platforms for satisfying needs for intimacy and sexuality, while being responsible, anticipating and addressing possible consequences and vulnerabilities.
Mrs. Marina Meira spoke about the regulation of AI in general, into which emotional AI is inserted. The first thing to consider is why we regulate AI and technology at all: regulation can support the development of technologies while protecting people's rights, individually and collectively. Regulating technology is a big challenge, especially for AI, because there are not many regulations around the world, so regulators are in general learning while the technologies evolve. Looking at the past, when these technologies started evolving, principles and ethical guidelines began to be formulated around the world, but they had no binding effect, which created many challenges for compliance. Those guidelines mostly concerned transparency, explainability, safety, security, fairness, non-discrimination, responsibility, privacy, and human control over technology, and they were not followed by companies: firms established ethics councils or appointed specialists in AI ethics, yet did not change their practices. This scenario revealed the big challenge of hard regulation, meaning laws that can be enforced, with sanctions if they are not followed, and that translate into very specific measures. Today a similar scenario is visible: several laws are being discussed, and most of these regulations follow what is called a risk-based approach, meaning that the more risk a technology presents to the human rights of those who will be affected by it, the more obligations fall on those developing it. Under this approach, risks must be assessed in advance, and a very important instrument in these regulations is the impact assessment, which must be conducted under a strong and scientifically solid methodology to understand the actual risks a technology can present and to think of ways to mitigate them. She also highlighted that these risks should be assessed with broad participation from society. Despite all these challenges, Mrs. Meira concluded her presentation by saying that it is possible to regulate, and that this is positive: we can achieve a better society with regulation as well as with technology, but we must first consider the most vulnerable groups and how emotional computing affects them.