Session
ISOC Gender Standing Group
- Theorose Elikplim, Ghana Institute of Journalism, Academia, Africa
- Nicolas Fiumarelli, YCIG (Youth Coalition on Internet Governance), Civil Society, GRULAC
- Umut Pajaro Velasquez, ISOC Gender Standing Group, Civil Society, GRULAC
Umut Pajaro Velasquez, ISOC Gender Standing Group, Civil Society, GRULAC
Umut Pajaro Velasquez
Theorose Elikplim
Nicolas Fiumarelli
5. Gender Equality
8. Decent Work and Economic Growth
9. Industry, Innovation and Infrastructure
10. Reduced Inequalities
11. Sustainable Cities and Communities
16. Peace, Justice and Strong Institutions
17. Partnerships for the Goals
Targets: This proposal links to SDGs 8, 9, and 10 by exploring the potential of AI to provide access to decent jobs, improve the efficiency of supply chains and innovation for the common good, and reduce inequities, and by addressing pitfalls before it is too late; SDG 16 by addressing the biases of AI, which have the potential to exacerbate inequalities and weaken institutions; SDG 5 by giving space to underrepresented voices, such as women and gender-expansive people; and SDG 17 by favoring cross-border dialogue and cooperation, the exchange of good practices, and public-private partnerships to develop AI solutions for sustainable development.
The talk will consist of a presentation followed by a conversation between the speaker and the onsite and online audience. The presentation will be based on a paper with the same title as this session, which is the continuation and product of an unanswered question from our last year's lightning talk, titled "Queering AI: A queer perspective for AI". The session will guarantee the hybrid format following the proposed agenda below:
- 10 minutes: speaker's presentation
- 15-20 minutes: conversation with the onsite and online audience
- 5-10 minutes: conclusions or new questions to be answered in future talks
To guarantee the full participation of everyone in the session, regardless of location, both moderators will grant the floor, upon request, to people who want to join the discussion with questions and comments related directly or indirectly to the presentation and topic. The conclusion will be an approximation of the summary shared by the rapporteur.
Laws and policies surrounding the collection, storage, and use of data by artificial intelligence need to be analyzed further. Currently, data is being used in advertising, education, and policing to reinforce racism and genderism and to amplify other inequalities. Data is also being used to bolster our current ideologies: filter bubbles surround us with information that aligns with our existing views and deters us from engaging with ideas that conflict with our own. This amplifies the privileging of certain ideas regarding gender and race and can “serve to exploit prejudice and marginalize certain groups”. These uses of data need to be explored further, with special attention to whether current and future regulation is sufficient to protect vulnerable gender-expansive groups by detecting bias and preventing any increase in discrimination within the design, development, deployment, and auditing of these new technologies.
The online and onsite moderators will moderate the conversation between participants, giving equal time (if the session allows it) to both audiences and taking into account mainly the order in which participants requested the floor. Participants can also ask questions or contribute in Spanish, English, and Portuguese.
Report
The speaker introduced a key concept: datasets enable problem-solving by copying a human's decision-making process and making predictions or classifications based on input data, which can introduce bias (gender-related and otherwise).
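To make this concept concrete, here is a minimal sketch, not presented in the session itself: the synthetic data, the hiring scenario, and the use of scikit-learn are all assumptions for illustration. It shows how a classifier trained on past human decisions learns to copy them, including any bias encoded in the historical labels.

```python
# Sketch: a model trained to imitate past human decisions reproduces
# the pattern in those decisions, including a bias against a group.
# All data and the "hiring" scenario are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
qualification = rng.normal(size=n)      # a legitimate feature
group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority
# Historical decisions: qualified people were accepted, but the
# minority group faced a systematically higher bar.
accepted = (qualification - 0.8 * group > 0).astype(int)

X = np.column_stack([qualification, group])
model = LogisticRegression(max_iter=1000).fit(X, accepted)

# Two candidates identical except for group membership:
pair = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(pair)[:, 1])  # the minority candidate scores lower
```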
On the question of the possibility of a Queer AI, it was concluded that:
- We must recognize the systematic and repetitive errors that create unfair results, for example, giving more privileges to a certain group of users over others without a logical reason.
- We must audit our models across demographic groups to ensure that there are no biases in this regard (a sketch of such an audit follows this list).
- It is important to report demographic statistics for the data used to train and test our models.
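As an illustration of such an audit, the following is a minimal sketch under assumed synthetic data: it reports the demographic statistics of an evaluation set and compares a model's accuracy across groups, surfacing exactly the kind of gap these recommendations are meant to catch. The group names, the 80/20 split, and the deliberately biased "model" are all invented for the example.

```python
# Minimal fairness-audit sketch: report demographic statistics of a
# dataset and compare a classifier's accuracy across demographic groups.
# In practice, groups, y_true, and y_pred would come from your own
# test set and model; here they are synthetic.
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)
n = 1_000
groups = rng.choice(["group_a", "group_b"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)
# A deliberately biased "model": more often wrong on the minority group.
noise = np.where(groups == "group_b", 0.3, 0.05)
flip = rng.random(n) < noise
y_pred = np.where(flip, 1 - y_true, y_true)

# 1) Demographic statistics of the evaluation data.
print("sample counts per group:", Counter(groups))

# 2) Per-group accuracy: a large gap flags a potential bias to investigate.
for g in np.unique(groups):
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"accuracy for {g}: {acc:.3f} (n={mask.sum()})")
```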
The possible solutions to address these challenges are:
- Diversifying the datasets (one way this can be operationalized is sketched after this list)
- Privacy by design
- Fairness, Accountability, Transparency, and Ethics (F.A.T.E.)
- Participation must be included across all the 4Ds (Design, Development, Deployment, and Detection of Biases) in order to curate the datasets and obtain results that are closer to a benefit than a harm.
- Involving governments, NGOs, social movements, the technical sector, the private sector, and intergovernmental institutions (multistakeholderism).
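As one concrete way to operationalize dataset diversification, when collecting more diverse data is not yet possible, a common technique is to reweight training examples by inverse group frequency so that an underrepresented group contributes proportionally to the training loss. The sketch below uses invented data and an assumed 80/20 split; this specific technique was not prescribed by the session.

```python
# Sketch: inverse-frequency reweighting so that an underrepresented
# group carries the same total weight during training as the majority.
# The data and the 80/20 split are assumptions for the example.
from collections import Counter

import numpy as np

rng = np.random.default_rng(2)
groups = rng.choice(["group_a", "group_b"], size=1_000, p=[0.8, 0.2])

counts = Counter(groups)
n_groups = len(counts)
# Each group's total weight becomes n / k: weight = n / (k * count_g).
weights = np.array([len(groups) / (n_groups * counts[g]) for g in groups])

for g, c in counts.items():
    print(g, "count:", c, "total weight:", round(weights[groups == g].sum(), 1))
# Both groups now sum to the same total weight (500.0 each), which can
# be passed to most training APIs via a sample_weight argument.
```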
More challenges:
- Ensuring that technical and legal definitions of bias, equality, and fairness match up with what is valued more broadly in society, and especially with what is defined by the minorities concerned, such as queer and gender-diverse people.
- Laws and policies on AI are still at the embryonic stage, which could make the process slightly staggered. However, this could also be an opportunity, especially as many policies are not yet ossified. Such research will need to keep abreast of emerging developments, and work to create access to, and inform, policies in development.
- It will be important for researchers to consider how they will address the trade-offs in moral and ethical guidelines or frameworks in order to preserve the perspective that queer people want and to safeguard their human rights.
Session conclusions:
- Implementing a queer perspective all over the world could be problematic and is considered difficult to achieve.
- This approach could offer a more holistic way of understanding, embodying, and codifying the experiences of queer, trans, and marginalized people in AI and other new data-driven technologies.
- Any AI design, development, deployment, and bias detection framework that aspires to be fair, accountable, transparent and ethical must incorporate queer, decolonial, trans, and other theories into its 4Ds.
- A globally applicable algorithm could be achieved by following the F.A.T.E. approach to AI systems.
- Countries should discuss whether gender, sexuality, and other aspects of queer identity should be used in datasets and AI systems, how risks and harms should be addressed, and which of them should be mitigated, all without forgetting the end users, so that any AI can effectively promote diversity and inclusion and thus credibly develop into a reliable AI in both its practical and regulatory aspects.