The Oversight Board not only upheld Facebook in its moderation decisions, but also made important recommendations, such as an assessment of health misinformation related to the COVID-19 pandemic.

Facebook published its first Oversight Board quarterly update on Thursday. It is the first document of its kind issued by Facebook describing its relationship with the quasi-judicial body that hears appeals from Facebook users dissatisfied with moderation decisions. In an accompanying blog post, Facebook acknowledged that it may have bitten off more than it can chew (or at least more than it can chew quickly):

In the first quarter of 2021, the board made 18 recommendations across six cases. We are implementing 14 recommendations in whole or in part, are still assessing the feasibility of three, and are taking no action on one. The scope and scale of the board’s recommendations go beyond what we anticipated when the board was formed, and some may require investments of months or years. The board’s recommendations cover how we enforce our policies, how we inform users about actions we have taken and what they can do about them, and additional transparency reporting. – Jennifer Broxmeyer, Director, Content Governance, Facebook

26 cases referred to the board, 3 accepted

In addition to appeals from the public, the board receives referrals from Facebook itself: decisions the company has already made but wants guidance on, so that it can handle similar situations in the future. The company referred 26 such cases, only three of which were accepted: “A case about supposed COVID-19 cures; a case over a veiled threat based on religious beliefs; and a case over the decision to suspend former US President Donald Trump’s account indefinitely,” the company said.

A breakdown of the 26 cases that Facebook referred to the board, by policy and region. Source: Facebook

Status of the recommendations

In addition to its decisions, the Oversight Board issues “non-binding recommendations” that Facebook can implement at its own discretion. Here is a list of the recommendations, sorted by status:

Non-binding recommendations based on the current status. Source: Facebook

  • Fully implementing / implemented: These are recommendations that Facebook is implementing in full or has already fully implemented.
    • Improve automated detection: “Improve the automated detection of images with text overlay to ensure that posts raising awareness of breast cancer symptoms [on Facebook and Instagram] are not wrongly flagged for review.”
    • Clarity of adult nudity guidelines: “Revise the Instagram Community Guidelines on adult nudity. Make it clear that the Instagram Community Guidelines are interpreted in line with the Facebook Community Standards, and that where they diverge, the latter take precedence.” Facebook said its content guidelines for Instagram and Facebook are largely the same and that it would clarify this.
    • Ability to appeal: “Ensure users can appeal decisions made by automated systems to human review when their content is found to violate Facebook’s Community Standard on Adult Nudity and Sexual Activity.” Facebook said it already offers appeals in such cases.
  • Partially implementing / implemented: These are recommendations that Facebook is implementing in part.
    • Consistency between Facebook and Instagram: “When telling people how they violated the guidelines, make sure they understand the relationship between the Instagram Community Guidelines and the Facebook Community Standards.” Facebook said it “will continue to work towards consistency between Facebook and Instagram and provide updates in the next few months.”
    • Specificity of reasons: “Go beyond the community standard that Facebook is enforcing and add more detail on which part of the policy was violated.” Facebook said it will do what it can, subject to technological feasibility.
    • User notifications: “Make sure users are always informed of the community standards that Facebook is enforcing.” This recommendation followed a bug in Facebook’s systems that left users uninformed of which community standard had been enforced against their content. This has been fixed, Facebook said.
    • Dangerous individuals and organizations: “Explain and provide examples of the application of key terms used in the Dangerous Individuals and Organizations policy. These should be consistent with the definitions used in Facebook’s internal implementation standards.” Facebook said it would make this policy clearer: “We are committing to adding language to the Dangerous Individuals and Organizations Community Standard that clearly explains our intent requirements for this policy. We are also committing to increasing transparency around the definitions of ‘praise,’ ‘support,’ and ‘representation,’” according to the company.
    • Clarity on health misinformation: “Clarify the community standards regarding health misinformation, particularly with regard to COVID-19. Facebook should set out a clear and accessible community standard on health misinformation that consolidates and clarifies the existing rules in one place.” Facebook said it has partially implemented this by creating a help article.
    • Transparency on health misinformation: “Facebook should 1) publish its range of enforcement options within the community standards and rank those options from most to least intrusive based on how they infringe freedom of expression, 2) explain what factors, including evidence-based criteria, the platform will use when choosing the least intrusive option to enforce its community standards to protect public health, and 3) clarify within the community standards which enforcement option applies to each rule.” Facebook announced that it would launch a transparency center in the coming weeks to address these concerns.
    • Assessment of health misinformation tools: “To ensure that enforcement action against health misinformation is the least intrusive means of protecting public health, Facebook should conduct an assessment of its existing health misinformation toolkit and consider the potential for developing further tools that are less intrusive than the removal of content.” Facebook said it will continue to develop tools to provide users with authoritative health information.
    • Transparency on COVID measures: “Publish a transparency report on how the community standards were enforced during the COVID-19 global health crisis.” Facebook said it would look at ways to disclose this information but did not commit to a report per se.
    • On veiled threats: “Provide users with additional information on the scope and enforcement of this community standard [on veiled threats of violence]. Enforcement criteria should be public and consistent with Facebook’s internal implementation standards. In particular, Facebook’s criteria should take into account intent, the identity of the user and their audience, and context.” Facebook said it would add language to this policy to make it clearer.
  • Assessing feasibility: These are cases where Facebook is still investigating how feasible it is to implement a recommendation. No action has been taken here yet.
    • Automation transparency: “Let users know if automation is being used to take enforcement action against their content, including accessible descriptions of what it means.”
    • Automated decision transparency: “Extend transparency reporting to reveal data about the number of automated removal decisions and the proportion of those decisions that were subsequently undone after human review.”
    • List of dangerous individuals and organizations: “Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard.”