On Wednesday, Facebook released a report detailing its enforcement efforts between October 2017 and March 2018.
Metrics on enforcement included:
- Graphic Violence
- Adult Nudity and Sexual Activity
- Terrorist Propaganda (ISIS, al-Qaeda and affiliates)
- Hate Speech
- Fake Accounts
In the report, Facebook shows that it took moderation action against almost 1.5bn accounts and posts that violated its community standards in the first three months of 2018.
For example, the social media giant took down 837 million pieces of spam in Q1 2018. The company also disabled about 583 million fake accounts, took down 21 million pieces of adult nudity and sexual activity, and removed 2.5 million pieces of hate speech.
Most of the posts and accounts taken down were caught by Facebook’s own technology which, while showing promise, its CEO Mark Zuckerberg acknowledged at F8 is still years away from being effective for most bad content, because context is so important.
The company also announced measures that require political advertisers to undergo an authentication process and reveal their affiliation alongside their advertisements.
Facebook’s moderation figures come a week after the release of the Santa Clara Principles, an attempt to write a guidebook for how large platforms should moderate content.
“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.
The amount of content moderated by Facebook is influenced by both the company’s ability to find and act on infringing material, and the sheer quantity of items posted by users.