Artificial Intelligence
AI for Social Good
Through this research, I critically question the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in the healthcare industry to solve development challenges by introducing AI applications targeted at the sick-poor. I build upon feminist theory to argue for reframing AI systems away from conceptions of them as neutral products and towards understanding them as complex socio-technical processes embedded with gendered knowledge and labour. Through this framework, I analyse the layers of expropriation and experimentation that come into play when AI technologies become a method of using the diverse bodies and medical records of the sick-poor as data to train proprietary AI algorithms at low cost in the absence of effective state regulatory mechanisms. I also offer social and policy recommendations that would enable us to envision inclusive feminist futures in which we understand and prioritise the needs of underserved populations over capitalist market logics in the development, deployment, and regulation of AI systems.
I carried out this research originally at the Advanced Centre for Women’s Studies, Tata Institute of Social Sciences, Mumbai, India for my thesis during my Master’s in Women’s Studies (2017-2019). I took this research forward at the Centre for Internet and Society, New Delhi, India (2019) as part of the Big Data for Development network, established and supported by the International Development Research Centre, Canada.
Select work based on this project:
- Radhakrishnan, Radhika. (2021, forthcoming). Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India. Special Issue: ‘Probing the System,’ Catalyst: Feminism, Theory, Technoscience.
- Radhakrishnan, Radhika. (2020). Interrogating the AI Hype: A Situated Politics of Machine Learning in Indian Healthcare. Economic & Political Weekly.
- Tutorial: “AI on the Ground Approach: Critical methodological reflections and lessons from the field”, ACM FAccT (Fairness, Accountability, and Transparency) Conference, 2021.
Regulation of AI
This research examines transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence. We propose a framework that seeks to overcome the challenges of preserving transparency when dealing with machine learning algorithms, and suggest approaches for building interpretable models from the design stage onwards.
I carried out this research at the Regulatory Practices Lab at the Centre for Internet and Society, supported by Google and Facebook.
Select work based on this research:
- Radhakrishnan, Radhika, & Sinha, Amber. (2020). Towards Algorithmic Transparency. Centre for Internet and Society.
Gendered Biases in AI
Are smart-device-based virtual assistants capable of assisting with gender-based violence concerns in India? This research critically examines the responses of five virtual assistants in India – Siri, Google Now, Bixby, Cortana, and Alexa – to a standardized set of concerns related to gender-based violence. Concerns regarding sexual violence and cyber violence were posed in the virtual assistants’ natural language, English. Non-crisis concerns were also posed to establish a baseline. All crisis responses by the virtual assistants were characterized by their ability to (1) recognize the crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline or other resources. The findings of this research indicate missed opportunities to leverage technology to improve referrals to crisis support services in response to gender-based violence.
I carried out this research at the Advanced Centre for Women’s Studies, Tata Institute of Social Sciences, Mumbai, India.
Select work based on this research:
- Radhakrishnan, Radhika. (2018). Are Smart-Device Based Virtual Assistants Capable of Assisting with Gender Based Violence Concerns in India? National Conference on Gender-Based Cyber Violence, Mumbai.
- Invited speaker: “Gendered Biases in Artificial Intelligence”, Anthill Inside, HasGeek, Bengaluru, 2019.