Facebook’s Content Moderation Rules Revealed


Facebook has public guidelines, but the internal advice on which content moderators base their decisions is a closely guarded secret. The Guardian, however, has obtained a copy of the 300-page document. It goes into minute detail, even dictating which emojis count as “praise” or “condemnation.”

A particular area of contention surrounds what are defined as dangerous individuals and organisations. In the leaked documents, dating from December 2020, moderators for Facebook and Instagram are instructed how to define “support” for terrorist groups and other “dangerous individuals”, whether to distinguish between “explaining” and “justifying” the actions of terrorists, and even in what contexts it is acceptable to call for the use of “gas chambers”.

Facebook’s community guidelines, once almost entirely hidden from users’ view, have been public since 2018, when the company first laid out in a 27-page document what it does and does not allow on its site. These newly leaked documents are different: they are much more detailed guidance on what the published rules mean in practice. Facebook has long argued that publishing the full documents would be counterproductive, since it would let malicious users engage in deliberately borderline behaviour while avoiding a ban.

Check It Out: Facebook’s Content Moderation Rules Revealed

One thought on “Facebook’s Content Moderation Rules Revealed”

  • Charlotte:

    One could describe these leaked guidelines as a nice attempt at an inadequate threat response; rather like bringing an umbrella to a full-on gunfight.

    Anyone not using AI linked to a global network as the initial and principal means of identifying terrorist threats, both foreign and domestic (the latter now enjoying international support and funding), applying machine learning to how those threats insinuate themselves into everyday casual social media use, intervening at ever earlier stages of radicalisation and recruitment, and reserving human review for the final adjudication, is not serious and is leaving their user base exposed behind an ineffectual veneer.

    When it comes to threat mitigation, it is increasingly difficult to discern whether FB, under the leadership (for want of a better term) of Mark Zuckerberg, is malfeasant or simply incompetent. NB: Neither option is good.
