Google has made a dangerous U-turn on military artificial intelligence

This week, the company deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018

By Parmy Olson, Bloomberg
 
Google’s “Don’t Be Evil” era is well and truly dead.  
Having replaced that motto in 2018 with the softer “Do the right thing,” the leadership at parent company Alphabet Inc. has now rolled back one of the firm’s most important ethical stances, on the use of its artificial intelligence by the military. 
 
This week, the company deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018. The pledge no longer appears among its “Responsible AI” principles, and the company’s AI chief, Demis Hassabis, published a blog post explaining the change, framing it as inevitable progress rather than any sort of compromise. 
 
“[AI] is becoming as pervasive as mobile phones,” Hassabis wrote. It has “evolved rapidly.” 
 
Yet the notion that ethical principles must also “evolve” with the market is wrong. Yes, we’re living in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning a code of ethics for war could yield consequences that spin out of control. 
 
Bring AI to the battlefield and you could get automated systems responding to one another at machine speed, with no time for diplomacy. Warfare could become more lethal as conflicts escalate before humans have time to intervene. And the idea of “clean” automated combat could tempt more military leaders into action, even though AI systems make plenty of mistakes and could cause civilian casualties too. 
 
Automated decision making is the real problem here. Unlike previous technology that made militaries more efficient or powerful, AI systems can fundamentally change who (or what) makes the decision to take human life. 
 
It’s also troubling that Hassabis, of all people, has his name on Google’s carefully worded justification. He sang a vastly different tune back in 2018, when the company established its AI principles and he joined more than 2,400 people working in AI in signing a pledge not to work on autonomous weapons. 
 
Less than a decade later, that promise hasn’t counted for much. William Fitzgerald, a former member of Google’s policy team and co-founder of the Worker Agency, a policy and communications firm, says that Google had been under intense pressure for years to pick up military contracts.  
 
He recalls then-US Deputy Defense Secretary Patrick Shanahan visiting the Sunnyvale, California, headquarters of Google’s cloud business in 2017, while staff at the unit were building out the infrastructure needed to work on top-secret military projects with the Pentagon. Hopes of winning those contracts ran high.
 
Fitzgerald helped halt that. He co-organized company protests over Project Maven, a contract Google signed with the US Department of Defense to develop AI for analyzing drone footage, which Googlers feared could lead to automated targeting. Some 4,000 employees signed a petition stating, “Google should not be in the business of war,” and about a dozen resigned in protest. Google eventually relented and didn’t renew the contract. 
 
Looking back, Fitzgerald sees that as a blip. “It was an anomaly in Silicon Valley’s trajectory,” he said.  
 
Since then, for instance, OpenAI has partnered with defense contractor Anduril Industries Inc. and is pitching its products to the US military. (Just last year, OpenAI still banned anyone from using its models for “weapons development.”) Anthropic, which bills itself as a safety-first AI lab, also partnered with Palantir Technologies Inc. in November 2024 to sell its AI service Claude to defense contractors. 
 
Google itself has spent years struggling to create proper oversight for its work. It dissolved a controversial ethics board in 2019, then fired two of its most prominent AI ethics directors a year later. The company has strayed so far from its original objectives it can’t see them anymore. So too have its Silicon Valley peers, who never should have been left to regulate themselves. 
 
Still, with any luck, Google’s U-turn will put greater pressure on the government leaders meeting next week to create legally binding regulations for military AI development, before race dynamics and political pressure make such rules even more difficult to establish.
 
The rules can be simple. Make it mandatory for a human to oversee every military AI system. Ban fully autonomous weapons that can select targets without human approval. And make sure such AI systems can be audited. 
 
One reasonable policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently steered by Massachusetts Institute of Technology physicist Max Tegmark. It calls for a tiered system in which national authorities would treat military AI systems like nuclear facilities, demanding unambiguous evidence of their safety margins. 
 
Governments convening in Paris should also consider establishing an international body to enforce those safety standards, similar to the International Atomic Energy Agency’s oversight of nuclear technology. It should be able to impose sanctions on companies (and countries) that violate those standards. 
 
Google’s reversal is a warning. Even the strongest corporate values can crumble under the pressure of an ultra-hot market and an administration that you simply don’t say “no” to. The don’t-be-evil era of self-regulation is over, but there’s still a chance to put binding rules in place to stave off AI’s darkest risks. And automated warfare is surely one of them.    