Google reports 250+ complaints of AI deepfake terrorism content to Australian regulator

The Australian eSafety Commission called Google's disclosure "world-first insight" into how users may be exploiting the technology to produce harmful and illegal content

Reuters | Sydney
Last Updated: Mar 05 2025 | 6:58 PM IST

Google has informed Australian authorities that, over nearly a year, it received more than 250 complaints globally alleging its artificial intelligence software had been used to make deepfake terrorism material.
The Alphabet-owned tech giant also said it had received dozens of user reports warning that its AI program, Gemini, was being used to create child abuse material, according to the Australian eSafety Commission. 
Under Australian law, tech firms must supply the eSafety Commission periodically with information about harm minimisation efforts or risk fines. The reporting period covered April 2023 to February 2024. 
Since OpenAI's ChatGPT exploded into the public consciousness in late 2022, regulators around the world have called for better guardrails so AI can't be used to enable terrorism, fraud, deepfake pornography and other abuse. 
The Australian eSafety Commission called Google's disclosure "world-first insight" into how users may be exploiting the technology to produce harmful and illegal content. 
"This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated," eSafety Commissioner Julie Inman Grant said in a statement. 
In its report, Google said it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini, and another 86 user reports alleging AI-generated child exploitation or abuse material. 
It did not say how many of the complaints it verified, according to the regulator. 
Google used hash-matching - a system that automatically compares newly uploaded images against a database of known images - to identify and remove child abuse material made with Gemini.
But it did not use the same system to weed out terrorist or violent extremist material generated with Gemini, the regulator added.
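
In rough terms, hash-matching flags an upload when a digest of the file appears in a shared list of digests of previously identified material. The Python fragment below is an illustrative sketch only, not Google's implementation: it assumes a hypothetical set of known-bad digests and uses an exact cryptographic hash from the standard library, whereas production systems typically use perceptual hashes that still match after resizing or re-encoding.

    import hashlib

    # Hypothetical set of digests of previously identified abusive images,
    # as might be supplied by an industry hash-sharing programme.
    KNOWN_BAD_DIGESTS: set[str] = set()

    def digest(image_bytes: bytes) -> str:
        # SHA-256 of the raw bytes; this matches only byte-identical files.
        # Real hash-matching systems use perceptual hashes instead.
        return hashlib.sha256(image_bytes).hexdigest()

    def is_known_abusive(image_bytes: bytes) -> bool:
        # An upload is flagged when its digest matches a known image.
        return digest(image_bytes) in KNOWN_BAD_DIGESTS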
The regulator has fined Telegram and Twitter, later renamed X, for what it called shortcomings in their reports. X has lost one appeal about its fine of A$610,500 ($382,000) but plans to appeal again. Telegram also plans to challenge its fine.
Topics: Google | Artificial intelligence | Terrorism | Australia | Social Media

First Published: Mar 05 2025 | 6:58 PM IST
