
Facebook told to stop hosting extremist propaganda after New Zealand attack

Facebook said it had been working directly with New Zealand police and across the technology industry to 'help counter hate speech and the threat of terrorism.'

Jason Scott, Tracy Withers & Edward Johnson | Bloomberg


Pressure is building on Facebook Inc. and other social media platforms to stop hosting extremist propaganda, including footage of terrorist attacks, after Friday’s deadly attacks on two mosques in New Zealand were live-streamed.

Australia’s prime minister has urged the Group of 20 nations to use a meeting in June to discuss a crackdown, while New Zealand media reported the nation’s biggest banks have pulled their advertising from the platforms.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” New Zealand Prime Minister Jacinda Ardern told parliament on Tuesday. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”

Facebook said it had been working directly with New Zealand police and across the technology industry to "help counter hate speech and the threat of terrorism."

The lone shooter accused of killing 50 people in the New Zealand city of Christchurch live-streamed the murders, with the video continuing to be widely available on a range of platforms hours after the attack. The suspect, an Australian, uploaded his hate-filled manifesto online shortly before launching his assault.

Offensive Content

It’s the latest example of social media struggling to keep offensive content off sites that generate billions of dollars in revenue from advertisers -- a problem that’s seen Facebook founder Mark Zuckerberg grilled by Congress.

The shooting video was viewed fewer than 200 times during its live broadcast, and no users reported the video during that time, Facebook Vice-President and deputy general counsel Chris Sonderby said in a blog post. It was reported to the company 29 minutes after the video started and viewed 4,000 times before being removed, he said.

The Group of 20 should discuss the issue at its Osaka summit in June, Australian Prime Minister Scott Morrison said Tuesday in an open letter to this year’s host, Japanese counterpart Shinzo Abe. The group should work to ensure technology firms implement appropriate filtering and remove terrorist-linked content, and show transparency in meeting those requirements, he said.

“It is unacceptable to treat the internet as an ungoverned space,” Morrison said. “It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit.”

Ardern’s government will look at the role social media played and what steps it can take, including on the international stage. Previously she vowed to seek talks with Facebook, which said it blocked the upload of 1.2 million video clips and removed another 300,000 within 24 hours.

The New Zealand business community is becoming increasingly vocal that the social-media companies should be punished through their bottom lines.

New Zealand’s advertising industry association is encouraging advertisers to recognize that they have a choice about where their advertising dollars are spent and to carefully consider where their ads appear.

“We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online,” the association said in a statement.

Meanwhile, New Zealand’s three biggest broadband providers called on Facebook, Twitter and Google to join an urgent discussion at an industry and government level to find a solution to the live-streaming and hosting of video footage such as that produced in Christchurch.

“The discussion must start somewhere,” the chief executives of the three companies said in an open letter on their websites Tuesday. “Social media companies and hosting platforms that enable the sharing of user-generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.”

Artificial intelligence techniques could be deployed and, for the most serious types of content, more onerous requirements should apply, including taking the material down within a specified period, proactive detection measures and fines for failing to do so, they said.

First Published: Tue, March 19 2019. 22:55 IST