Revisiting Facebook's "White Paper" Proposal for "Online Content Regulation"

In the Washington Post last year, Facebook CEO Mark Zuckerberg called for governments to enact new regulations for content moderation. In February 2020, Monika Bickert, the VP for Content Policy at Facebook, published a White Paper, "Charting a Way Forward: Online Content Regulation," outlining four key questions and recommendations for governments to regulate content moderation. As the U.S. Congress considers several bills to amend Section 230 of the Communications Decency Act and the controversy over content moderation rages on, we thought it would be worth revisiting Facebook's White Paper. It is not every day that an Internet company asks for government regulation.

The White Paper draws attention to how corporations like Facebook make numerous daily decisions about what speech is disseminated online, a dramatic shift from the past, when such decisions were typically raised in the context of government regulation and its intersection with the free speech rights of individuals. Online content moderation thus marks a fundamental shift in speech regulation from governments to private corporations or Internet companies:

For centuries, political leaders, philosophers, and activists have wrestled with the question of how and when governments should place limits on freedom of expression to protect people from content that can contribute to harm. Increasingly, privately run internet platforms are making these determinations, as more speech flows through their systems. Consistent with human rights norms, internet platforms generally respect the laws of the countries in which they operate, and they are also free to establish their own rules about permissible expression, which are often more restrictive than laws. As a result, internet companies make calls every day that influence who has the ability to speak and what content can be shared on their platform. 

With this enormous power over online speech, corporations like Facebook are beset by many demands from users and governments alike:

As a result, private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. 

Perhaps surprisingly, Facebook calls upon governments to regulate content moderation by Internet companies:

Facebook has therefore joined the call for new regulatory frameworks for online content—frameworks that ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression. This balance is necessary to protect the open internet, which is increasingly threatened—even walled off—by some regimes. Facebook wants to be a constructive partner to governments as they weigh the most effective, democratic, and workable approaches to address online content governance.

The White Paper then focuses on four questions regarding the regulation of online content:

1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?

Regulators can aim to achieve the goal of reducing harmful speech in three ways: (1) increasing accountability for internet companies by requiring certain systems and procedures to be in place, (2) requiring "specific performance targets" for companies to meet in moderating content that violates their policies (given that perfect enforcement is impossible), and (3) requiring companies to restrict certain forms of speech beyond what is already considered illegal content. Generally, Facebook favors the first approach. "By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts," Facebook explains. Facebook believes the three approaches can be adopted in combination, and underscores that "the most important elements of any system will be due regard for each of the human rights and values at stake, as well as clarity and precision in the regulation."

2. How should regulation enhance the accountability of internet platforms to the public?

Facebook recommends that regulation require internet content moderation systems to be "consultative, transparent, and subject to independent oversight." "Specifically, procedural accountability regulations could include, at a minimum, requirements that companies publish their content standards, provide avenues for people to report to the company any content that appears to violate the standards, respond to such user reports with a decision, and provide notice to users when removing their content from the site." Facebook recommends that the law incentivize or require, where appropriate, the following measures:

  • Insight into a company’s development of its content standards.
  • A requirement to consult with stakeholders when making significant changes to standards.
  • An avenue for users to provide their own input on content standards.
  • A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.
  • Public reporting on policy enforcement (possibly including how much content was removed from the site and for what reasons, how much content was identified by the company through its own proactive means before users reported it, how often the content appears on its site, etc.). 

Facebook recommends that countries draw upon the existing approaches in the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online.

3. Should regulation require internet companies to meet certain performance targets?

Facebook sees trade-offs in government regulation that would require companies to meet performance targets in enforcing their content moderation rules. This approach would hold companies accountable for meeting specified targets rather than for the systems they put in place to achieve those standards; in judging a company's adherence to content moderation standards, the government would focus on the targets themselves. Facebook identifies the prevalence of content deemed harmful as a promising metric for developing such standards, because much of the harm stems from the number of people who are exposed to and engage with the content. Monitoring prevalence would allow regulators to determine the extent to which harm is being done on a platform. For content that is harmful even with a limited audience, such as child sexual exploitation, the metric would shift to focus on the timeliness of companies' action against such content. Creating thresholds for violating content also requires that companies and regulators agree on which content is deemed harmful.

However, Facebook cautions that performance targets can have unintended consequences: "There are significant trade-offs regulators must consider when identifying metrics and thresholds. For example, a requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. The biggest internet companies have developed technology that allows them to detect certain types of content violations with much greater speed and accuracy than human reporting. For instance, from July through September 2019, the vast majority of content Facebook removed for violating its hate speech, self-harm, child exploitation, graphic violence, and terrorism policies was detected by the company’s technology before anyone reported it. A regulatory focus on response to user or government reports must take into account the cost it would pose to these company-led detection efforts."

4. Should regulation define which “harmful content” should be prohibited on internet platforms?

Governments are considering whether to develop regulations that define “harmful content” and require internet platforms to remove new categories of harmful speech. Facebook recommends that governments start with the freedom of expression recognized by Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Governments seeking to regulate internet content moderation must grapple with its complexities: rules should take user preferences into account without undermining the goal of promoting expression. Facebook also advises that governments consider the practicalities of Internet companies moderating content: "Companies use a combination of technical systems and employees and often have only the text in a post to guide their assessment. The assessment must be made quickly, as people expect violations to be removed within hours or days rather than the months that a judicial process might take. The penalty is generally removal of the post or the person’s account." Accordingly, regulations need to be enforceable at scale and allow flexibility across languages, trends, and types of content.

According to Facebook, creating regulations for social media companies requires the combined efforts not just of lawmakers and private companies, but also of the individuals who use the online platforms. Governments must also create incentives, by ensuring accountability in content moderation, that allow companies to balance safety, privacy, and freedom of expression. The internet is global, and any regulation must respect that scale and the spread of communication across borders. Freedom of expression cannot be trampled, and any decision must be made with its impact on that right in mind. Regulators also need to understand the technology involved and address harmful content with proportionality. Each platform is its own entity, and what works best for one may not work best for another. A well-developed framework will make the internet a safer place while allowing for continued freedom of expression.

--written by Bisola Oni
