Anthill Inside 2019

A conference on AI and Deep Learning

Tickets

Gendered Biases in Artificial Intelligence

Submitted by Radhika Radhakrishnan (@radhika-radhakrishnan) on Thursday, 7 November 2019

Session type: Full talk of 40 mins Section: Full talk Technical level: Intermediate Session type: Lecture

View proposal in schedule

Abstract

My talk will attempt to bust the myth of “objectivity” or “neutrality” of Artificial Intelligence (AI) technologies by highlighting gendered biases in AI and how they arise. To substantiate, I will focus on Virtual Assistants / “Bots” by presenting my primary research on two case studies - first, a comparative analysis of the responses of smartphone-based Virtual Assistants (such as Siri and Alexa) to user queries on gender-based violence in India; and second, a gendered enquiry of Microsoft’s Twitter bot Tay.

Outline

Part 1. Introduction to Gendered Biases
The talk will begin with a brief introduction to fairness and gendered bias concerns in Artificial Intelligence technologies with relevant examples.

Part 2. Are Smart-Device Based Virtual Assistants Capable of Assisting with Gender Based Violence
Concerns in India?
I will present my research which critically examines the responses of five Virtual Assistants in India – Siri, Google Now, Bixby, Cortana, and Alexa – to a standardized set of concerns related to Gender-Based Violence (GBV). A set of concerns regarding Sexual Violence and Cyber Violence were posed in the Virtual Assistant’s natural language, English. Non-crisis concerns were asked to set a baseline. All crisis responses by the Virtual Assistants were characterized based on the ability to (1) recognize the crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline, or other resources. The findings of my study indicate missed opportunities to leverage technology to improve referrals to crisis support services in response to gender-based violence.
Read my paper here: https://itforchange.net/e-vaw/wp-content/uploads/2018/01/Are-Smart-Device-Based-Virtual-Assistants-Capable-of-Assisting-with-Gender-Based-Violence-Concerns-in-India-1.pdf

Part 3. Feminist Perspectives on the Social Media Construction of Artificial Intelligence
I will analyse how Microsoft’s Twitter bot Tay went from tweeting “can i just say that im stoked to meet u? humans are super cool” to “I .... hate feminists and they should all die and burn in hell” and how we can avoid designing such biased AI technologies for the future.
Read my work here: https://gendermediacultureblog.wordpress.com/2018/12/24/feminist-perspectives-on-the-social-media-construction-of-artificial-intelligence/

Requirements

An open mind and an interest in listening to different perspectives.

Speaker bio

I currently work as a Programme Officer at the Centre for Internet and Society (CIS), New Delhi, researching the intersections of gender and emerging technologies such as Artificial Intelligence. Previously, I worked with the Internet Governance Forum of the United Nations as a Consultant on Gender and Access (2018), and with the Association of Progressive Communications (APC) (2017) on gender and technology. I have a Master’s degree in Women’s Studies from the Tata Institute of Social Sciences (TISS), Mumbai, and a Bachelor’s degree in Computer Science Engineering from M.S. Ramaiah Institute of Technology, Bengaluru. Outside of work, you will find me tweeting about feminism, writing on Medium, and engaging with grassroots political activism.
Twitter: @so_radhikal
LinkedIn: https://www.linkedin.com/in/radhika-radhakrishnan/
Medium: https://medium.com/radhika-radhakrishnan

Links

Comments

Login with Twitter or Google to leave a comment