India continues to be the second-largest requester of user data from Facebook, according to its biannual Transparency Report, which details government requests for user data in the second half of 2019.
"Of the total volume, the US continues to submit the largest number of requests, followed by India, the UK, Germany and France," said Chris Sonderby, VP & Deputy General Counsel at Facebook in a Newsroom post on Tuesday.
India made 26,698 requests for user data from Facebook in the six months ended December 2019, up from the 22,684 requests it made in the first half of the year.
Of the total requests made by India, Facebook produced some data for 15,206 accounts.
The US made the highest number of legal process requests at 47,958, followed by India with 24,944. In India, Facebook produced data for 14,345 of these legal process requests.
"As always, we scrutinize every government request we receive to make sure it is legally valid, no matter which government makes the request. If a request appears deficient or overly broad, we push back, and will fight in court, if necessary. We do not provide governments with 'back doors' to people's information," added Sonderby in the post.
Facebook also released its latest Community Standards Enforcement Report and said it removed about 4.7 million pieces of content globally on the platform connected to organized hate, an increase of over 3 million pieces of content from the previous quarter.
The report provides metrics on how well Facebook and Instagram enforced their policies from October 2019 through March 2020.
"We’ve spent the last few years building tools, teams and technologies to help protect elections from interference, prevent misinformation from spreading on our apps and keep people safe from harmful content," said Guy Rosen, VP Integrity in a post.
Facebook claimed that it is now able to proactively find almost 90 per cent of hate speech that is taken down from the platform, compared to 24 per cent in 2018. This was made possible because Facebook expanded its proactive detection technology to more languages.
The company also increased its proactive detection rate, which is the content it removes on its own before someone reports it, for organized hate, to 96.7 per cent in Q1 2020 from 89.6 per cent in Q4 2019.
On Instagram, the proactive detection rate increased from 57.6 per cent to 68.9 per cent, with 175,000 pieces of content removed in Q1 2020, up from 139,800 the previous quarter.
Sharing enforcement data for bullying on Instagram for the first time in this report, the Menlo Park-based firm said it took action on 1.5 million pieces of content in both Q4 2019 and Q1 2020.
"On Instagram, we made improvements to our text and image matching technology to help us find more suicide and self-injury content. As a result, we increased the amount of content we took action on by 40 per cent and increased our proactive detection rate by more than 12 points since the last report," said Rosen.
As part of this report, Facebook has added new data on hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment for Instagram, and organized hate on Facebook and Instagram.
The Community Standards report does not reflect the full impact of how Facebook tackled misinformation during the pandemic, because it includes data only through March 2020.