Using Artificial Intelligence (AI) and Machine Learning (ML) techniques, Facebook is removing the overwhelming majority of Islamic State (IS) and Al Qaeda-related terror content from its platform before anyone flags it, the social media giant said on Wednesday.
"Today, 99 per cent of the IS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site," Monika Bickert, Head of Global Policy Management at Facebook, wrote in a blog post.
Facebook does this primarily through the use of automated systems like photo and video matching and text-based machine learning.
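Facebook has not published its matching code, and production systems reportedly rely on perceptual hashes that survive re-encoding and cropping. As a minimal illustrative sketch of the underlying idea, a service can fingerprint each confirmed piece of terror content and compare every new upload against those fingerprints; the class and names below are hypothetical, and exact SHA-256 digests stand in for the more robust hashes a real system would use:

```python
import hashlib


class MediaMatcher:
    """Toy blocklist of known-bad media, keyed by exact SHA-256 digest."""

    def __init__(self):
        self._known_hashes = set()

    def register(self, media_bytes: bytes) -> None:
        # Record the fingerprint of a confirmed piece of removed content.
        self._known_hashes.add(hashlib.sha256(media_bytes).hexdigest())

    def is_known(self, media_bytes: bytes) -> bool:
        # Check an upload against every previously registered fingerprint.
        return hashlib.sha256(media_bytes).hexdigest() in self._known_hashes


matcher = MediaMatcher()
matcher.register(b"bytes of a previously removed propaganda video")

print(matcher.is_known(b"bytes of a previously removed propaganda video"))  # True: exact re-upload
print(matcher.is_known(b"bytes of an unrelated home video"))                # False: no match
```

Because the lookup is a set-membership test, the check stays fast even as the blocklist grows, which is what makes blocking re-uploads within an hour plausible at Facebook's scale.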
"Once we are aware of a piece of terror content, we remove 83 per cent of subsequently uploaded copies within one hour of upload," added Brian Fishman, Head of Counterterrorism Policy, Facebook.
Deploying AI for counterterrorism is not as simple as flipping a switch.
A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda.
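The transfer problem can be seen even in a toy keyword-based classifier: a model tuned to one group's vocabulary scores zero on another group's material. The vocabularies below are made-up placeholders, not real propaganda markers, and the function names are illustrative only:

```python
def make_classifier(keywords, threshold=2):
    """Return a scorer that flags text containing >= threshold of the given terms."""
    vocab = {k.lower() for k in keywords}

    def flags(text: str) -> bool:
        words = set(text.lower().split())
        return len(words & vocab) >= threshold

    return flags


# Placeholder vocabularies standing in for group-specific propaganda markers.
model_a = make_classifier({"alpha-banner", "alpha-slogan", "alpha-symbol"})
model_b = make_classifier({"beta-chant", "beta-emblem", "beta-motto"})

post = "recruiting post featuring alpha-banner and alpha-slogan"
print(model_a(post))  # True  -- matches group A's markers
print(model_b(post))  # False -- group B's model misses it entirely
```

Real systems use learned text models rather than keyword lists, but the failure mode is the same: signal learned from one group's language and style does not automatically carry over to another's.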
"To that end, we tap expertise from inside the company and from the outside, partnering with those who can help address extremism across the Internet," the company noted.
Facebook has announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT), through which the social media giant is working with Microsoft, Twitter and YouTube to fight the spread of terrorism and violent extremism across their platforms.
GIFCT has already brought together more than 50 technology companies over the course of three international working sessions.
"Through GIFCT, we also engage with governments around the world and are preparing to jointly commission research on how governments, tech companies and civil society can fight online radicalisation," Facebook said.
Facebook's outside partners include Flashpoint, the Middle East Media Research Institute (MEMRI), the SITE Intelligence Group, and the University of Alabama at Birmingham's Computer Forensics Research Lab.
They flag Pages, profiles and groups on Facebook potentially associated with terrorist groups.
"These organisations also send us photo and video files associated with IS and Al Qaeda that they have located elsewhere on the Internet, which we can then run against our algorithms to check for file matches to remove or prevent their upload to Facebook altogether," the blog post read.