Two women show how communities can unite to shape and guide AI's advance
Studies suggest women tend to be less tolerant of unethical business strategies than men. Unsurprisingly, ethical concerns are voiced less often in workforces where men outnumber women.
Last Updated: Sep 19, 2025 | 11:10 PM IST
Two fascinating characters make Aranya Sahay’s Humans in the Loop an accomplished debut feature. The first is the protagonist, a recently separated Adivasi woman (Sonal Madhushankar), who moves back to her village in Jharkhand and takes up work labelling data for an artificial intelligence (AI) company. The new role is a source of purpose and confidence until she discovers a problem in the system. More on the second character later.
The film, based partly on a 2022 article by journalist Karishma Mehrotra called Human Touch, draws on a metaphor popularly used for AI: that it is a baby in diapers, still learning concepts. Women workers in rural India often find themselves in the position of nannies to this fledgling technology. For a small wage, they clean and tag thousands of foreign images to help it identify things, while staying mostly invisible. It is an important injustice to highlight.
AI companies usually pitch the technology as an equaliser and democratiser, for its ability to let people access knowledge and skills that would otherwise take years and money to acquire. It is supposed to be a silver bullet for screening job applicants, diagnosing health symptoms, assessing loan seekers, and more. But how well will AI do these tasks if it is developed, controlled, and applied unevenly?
Women have been raising such concerns regularly. Back in 2020, in a paper titled “On the Dangers of Stochastic Parrots”, four women researchers cautioned that much of the text used to train large language models (LLMs) under-represents marginalised groups, which could bake biases into tools built on top of them. The paper’s co-author Timnit Gebru, an AI ethics researcher and then a Google employee, was fired by the company for making a key technology look bad.
I recently picked up Empire of AI, in which journalist Karen Hao thrillingly unpacks the race among Silicon Valley billionaires to deploy the technology, and the particular mythmaking that surrounds Sam Altman’s OpenAI. The entrepreneurs’ priorities, fuelling market valuations and preventing human extinction among them, are miles apart from the anxieties expressed by the technology’s critics.
In another book this year, bluntly titled The AI Con, researchers Emily Bender (who also co-wrote the Stochastic Parrots paper) and Alex Hanna critique many aspects of AI: the bubble-like valuations, the risk of dulled critical thinking among users, and the threat it poses to livelihoods.
Women tend to be less tolerant of unethical business strategies than men, according to studies. Naturally, ethical dilemmas are voiced less often in workforces where men outnumber women. Of the 1.6 million AI-related professionals worldwide, women make up just 22 per cent, according to the European think tank Interface. Even fewer have a say in decision-making: at senior levels, the share is just 14 per cent.
There is a user gap too. A Harvard study found women adopting AI at a 25 per cent lower rate than men because they tend to question the technology’s ethics more. Consider the divide in phone access in India: about half of rural women aged 15 and above own a mobile phone, against 80.7 per cent of rural men; in urban areas, 71.8 per cent of women own one, versus 90 per cent of men.
In one area, women are overrepresented: AI chatbots frequently have female voices. Whether or not women are heard, they are cast in listening roles.
“Fix the system, not the women” has long been a slogan. Cues can be taken from Ms Gebru, the researcher Google let go, who has been running the organisation “Black in AI” to grow the presence of Black people in the field. Rather than being integrated into flawed processes, women, and every marginalised group, need to be part of better systems.
That is what makes the data labelling manager (Gita Guha) the other interesting character in Humans in the Loop. Simply carrying out orders for her international bosses at first, she brushes aside a question from the young Adivasi woman on her team. She comes around later, realising the significance of the younger woman’s warning. The answer to the empire-like ways of tech power players may be for communities to band together and take control of AI’s advance.
Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper