AI companies usually pitch their technology as an equaliser and a democratiser, for its ability to give people access to knowledge and skills that would otherwise take years and a great deal of money to acquire. It is supposed to be the silver bullet for screening job applicants, diagnosing health symptoms, assessing loan applicants, and so on. But how well can AI do these tasks if it is developed, controlled, and applied unevenly?
Women have been raising such concerns for years. Back in 2020, in a paper titled "On the Dangers of Stochastic Parrots" (published in 2021), four women researchers cautioned that much of the text used to train large language models (LLMs) under-represents marginalised groups, which could bake biases into the tools built on top of them. The paper's co-author Timnit Gebru, an AI ethics researcher and then a Google employee, was fired by the company, in her telling, for making a key technology look bad.