"Of every 10,000 content views, an estimate of 22 to 27 contained graphic violence, compared to an estimate of 16 to 19 last quarter," Xinhua quoted the report as saying.
Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and prevented from being shown to underage viewers.
The report said Facebook removed, or placed a graphic-violence warning screen in front of, 3.4 million pieces of content in the first quarter, nearly triple the 1.2 million a quarter earlier.
Facebook said it recently developed metrics to review the content shared on its platform, and the transparency report covered content posted in the community from October 2017 through March 2018.
The content audited included graphic violence, hate speech, adult nudity and sexual activity, spam, terrorist propaganda (IS, al-Qaeda and affiliates) and fake accounts.
Facebook took action against 2.5 million pieces of hate speech content in the first quarter, up 56 per cent over the previous quarter.
It also took action on 837 million pieces of content for spam, 21 million for adult nudity or sexual activity, and 1.9 million for promoting terrorism.
A total of 583 million fake accounts were disabled in the quarter, down from 694 million in the fourth quarter of 2017, according to the report.
"We estimate that fake accounts represented approximately 3-4 per cent of monthly active users on Facebook during Q1 2018 and Q4 2017," the report said.