Facebook told to stop hosting extremist propaganda after New Zealand attack

Facebook said it had been working directly with New Zealand police and across the technology industry to 'help counter hate speech and the threat of terrorism.'

Jason Scott, Tracy Withers & Edward Johnson | Bloomberg
4 min read | Last Updated: Mar 19, 2019 | 10:57 PM IST
Pressure is building on Facebook Inc. and other social media platforms to stop hosting extremist propaganda including terrorist events, after Friday’s deadly attacks on two mosques in New Zealand were live-streamed.

Australia’s prime minister has urged the Group of 20 nations to use a meeting in June to discuss a crackdown, while New Zealand media reported the nation’s biggest banks have pulled their advertising from Facebook and Google.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” New Zealand Prime Minister Jacinda Ardern told parliament on Tuesday. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”

Facebook said it had been working directly with New Zealand police and across the technology industry to "help counter hate speech and the threat of terrorism."

The lone shooter accused of killing 50 people in the New Zealand city of Christchurch live-streamed the murders, and the video remained widely available on a range of platforms for hours after the attack. The suspect, an Australian, uploaded his hate-filled manifesto online shortly before launching his assault.

Offensive Content

It’s the latest example of social media companies struggling to keep offensive content off sites that generate billions of dollars in revenue from advertisers, a problem that has seen Facebook founder Mark Zuckerberg grilled by Congress.

The shooting video was viewed fewer than 200 times during its live broadcast, and no users reported the video during that time, Facebook vice-president and deputy general counsel Chris Sonderby said in a blog post. It was reported to the company 29 minutes after the video started and viewed 4,000 times before being removed, he said.

The G-20 should discuss the issue at its Osaka summit in June, Australian Prime Minister Scott Morrison said Tuesday in an open letter to this year’s host, his Japanese counterpart Shinzo Abe. The group should work to ensure technology firms implement appropriate filtering and remove terrorist-linked content, and show transparency in meeting those requirements, he said.

“It is unacceptable to treat the internet as an ungoverned space,” Morrison said. “It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit.”

Ardern’s government will look at the role social media played and what steps it can take, including on the international stage. Previously she vowed to seek talks with Facebook, which said it blocked the upload of 1.2 million video clips and removed another 300,000 within 24 hours.

The New Zealand business community is becoming increasingly vocal that the social-media companies should be penalized through their bottom line.

The Association of New Zealand Advertisers is encouraging advertisers to recognize they have a choice where their advertising dollars are spent and to carefully consider where ads appear.

“We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online,” the association said in a statement.

Meanwhile, New Zealand’s three biggest broadband providers called on Facebook, Twitter and Google to join an urgent discussion at an industry and government level to find a solution to the live-streaming and hosting of video footage such as that produced in Christchurch.

“The discussion must start somewhere,” the chief executives of the companies said in an open letter on their websites Tuesday. “Social media companies and hosting platforms that enable the sharing of user-generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.”

Artificial intelligence techniques could be deployed and, for the most serious types of content, more onerous requirements should apply, including taking down the material within a specified period, proactive measures, and fines for failure to do so, they said.
