Google's firing of respected AI researcher Timnit Gebru is cause for concern


Devangshu Datta | New Delhi
Last Updated: Dec 09, 2020 | 6:10 AM IST


The sacking of a respected artificial intelligence (AI) researcher and key member of Google’s Ethical AI team has set off a firestorm. Over 1,400 Google employees and thousands of techies employed in other Silicon Valley businesses have signed a letter of protest about the manner of her dismissal.

Timnit Gebru, a computer scientist of Ethiopian origin, was on leave from her post as co-lead of Google’s Ethical AI team when she received an email saying her resignation had been accepted, and found her access to her corporate email account cut off. According to her, she had not resigned but was in the middle of negotiations. Jeff Dean, Google’s head of AI, claims a paper she co-authored did not “meet the bar for publication”.

The paper in question, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, is an exploration of the ethics of creating large language models. Gebru’s co-authors are four other Google researchers and Emily Bender, a professor of computational linguistics at the University of Washington. Bender is the only co-author to have spoken on record. She says the paper is still an early-stage preprint under review, not an official publication. That makes the stated reason for the dismissal (quite apart from its manner) sound thin.

Gebru has a towering reputation in AI circles. She is also known for having been part of the Apple team that developed the iPad. The 37-year-old Stanford-trained electrical engineer also co-founded the group Black in AI, after she discovered there were only eight black women (herself included) at an 8,500-strong AI conference.

Gebru was a pioneer in pointing out inherent flaws and racist biases in AI-based facial recognition programs in a paper, “Gender Shades”. These algorithms are trained and fine-tuned on datasets of many thousands of faces. The datasets used in training were overwhelmingly white and male. As a result, many AI face-recognition programs have an enduring problem recognising non-white faces and women, and especially women of colour.

This can be embarrassing when a program highlights white faces and crops out people of colour in group photographs. It can be life-threatening when a face-recognition program wrongly identifies an innocent person as a murder suspect. As a more or less direct result of that paper, Amazon, IBM and Microsoft stopped selling face-recognition programs to the police.
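“Gender Shades” was, at heart, an audit: run commercial classifiers over a benchmark balanced across skin type and gender, and compare error rates between subgroups. The Python sketch below illustrates that kind of disparity audit; the function and data are invented for this example and are not the study’s code.

```python
# Minimal sketch of a demographic error-rate audit in the spirit of
# "Gender Shades": compare a classifier's error rate across subgroups.
# All data here is made up for illustration; the real study audited
# commercial gender classifiers on a balanced benchmark of face images.
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Return the per-group error rate of a classifier's outputs."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical outputs: a model that does well on one subgroup and
# poorly on another, the pattern the paper documented.
preds  = ["m", "m", "f", "m", "f", "m", "f", "m"]
labels = ["m", "m", "f", "f", "f", "f", "f", "m"]
groups = ["lighter", "lighter", "lighter", "darker",
          "darker", "darker", "darker", "lighter"]

for group, rate in sorted(error_rates_by_group(preds, labels, groups).items()):
    print(f"{group}: {rate:.0%} error rate")
```

A large gap between the two printed rates is exactly the kind of disparity the study measured in deployed commercial systems.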

There are many other ethical issues and risks in training AI. The unreleased “parrot” paper has been reviewed (unofficially) by MIT Technology Review. Language models are now trained on massive datasets of natural language to improve machines’ understanding of, and ability to use, language in human-like ways.

The paper points out several risks. One is simply the massive overhead, in terms of computing power, involved in such training. Computing power equates to electricity consumption, which equates to environmental impact. One researcher (not an author of the parrot paper) estimated that a single round of training of Google’s BERT (Bidirectional Encoder Representations from Transformers) model, which helps the search engine understand queries, has roughly the carbon footprint of a round-trip New York-San Francisco flight for one passenger. Large language models require multiple rounds of training, which multiplies the environmental impact. Another implication is a “rich society” bias, since only wealthy nations or corporations can afford these resources.
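Such estimates rest on simple arithmetic: the cluster’s power draw, times training time, times datacentre overhead, times the carbon intensity of the local grid. The sketch below shows the shape of the calculation with invented numbers; none of the figures are from any published study.

```python
# Back-of-envelope sketch of how training-emissions estimates are made.
# Every number below is an assumption chosen for illustration,
# not a measurement from any published study.

avg_power_kw = 12.0      # assumed average draw of the GPU cluster, in kW
training_hours = 96.0    # assumed wall-clock time for one training run
pue = 1.6                # power usage effectiveness (datacentre overhead)
kg_co2_per_kwh = 0.43    # assumed carbon intensity of the local grid

energy_kwh = avg_power_kw * training_hours * pue
emissions_kg = energy_kwh * kg_co2_per_kwh

# Rough per-passenger figure for a New York-San Francisco flight,
# included only to give the result a familiar scale (assumed value).
flight_kg = 900.0

print(f"One training run: ~{emissions_kg:.0f} kg CO2e, "
      f"about {emissions_kg / flight_kg:.1f} flights' worth per passenger")
```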

Another problem is a lack of discrimination. Large language models hoover up whatever natural language text is available, without weeding out abusive, racist, sexist or hateful speech. If an AI is taught that such speech is normal, that is a problem.
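What “weeding out” might look like in practice is itself fraught. The bluntest approach, sketched below with placeholder terms, is to drop any scraped document containing a blocklisted word; the parrot paper reportedly argues that such filters miss context-dependent abuse while also stripping legitimate speech, including reclaimed terms used by marginalised communities.

```python
# A minimal sketch of blunt wordlist filtering of a web-scraped corpus.
# Illustrative only: real pipelines use far larger blocklists and still
# struggle with context (irony, quotation, reclaimed usage).

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not a real list

def keep_document(text: str) -> bool:
    """Keep a document only if it contains no blocklisted token."""
    tokens = set(text.lower().split())
    return BLOCKLIST.isdisjoint(tokens)

corpus = [
    "a perfectly ordinary sentence",
    "an abusive sentence containing slur1",
]
filtered = [doc for doc in corpus if keep_document(doc)]
print(filtered)  # ['a perfectly ordinary sentence']
```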

Moreover, shifts in usage driven by movements such as Black Lives Matter, MeToo or LGBTQ activism will not be captured, because such language is a small subset of all content. Similarly, language generated in poorer nations will be under-represented, simply because they produce less online content. Finally, of course, there is the issue of mimicry: an AI that “speaks” naturally could be a wonderful tool for scams.

Gebru will have no trouble finding employment. But the opaque process that culminated in her dismissal could have a chilling effect on the entire field of AI research.
