The Free Internet Project

Revisiting Facebook's "White Paper" Proposal for "Online Content Regulation"

In a Washington Post op-ed last year, Facebook CEO Mark Zuckerberg called on governments to enact new regulations for content moderation. In February 2020, Monika Bickert, Facebook's VP for Content Policy, published a White Paper, "Charting a Way Forward: Online Content Regulation," outlining four key questions and recommendations for how governments might regulate content moderation. As the U.S. Congress considers several bills to amend Section 230 of the Communications Decency Act and the controversy over content moderation rages on, we thought it would be worth revisiting Facebook's White Paper. It is not every day that an Internet company asks for government regulation.

The White Paper draws attention to how corporations like Facebook make numerous daily decisions about what speech is disseminated online, a dramatic shift from the past, when such decisions were typically debated in the context of government regulation and its intersection with the free speech rights of individuals. Online content moderation marks a fundamental shift in speech regulation from governments to private corporations or Internet companies:

For centuries, political leaders, philosophers, and activists have wrestled with the question of how and when governments should place limits on freedom of expression to protect people from content that can contribute to harm. Increasingly, privately run internet platforms are making these determinations, as more speech flows through their systems. Consistent with human rights norms, internet platforms generally respect the laws of the countries in which they operate, and they are also free to establish their own rules about permissible expression, which are often more restrictive than laws. As a result, internet companies make calls every day that influence who has the ability to speak and what content can be shared on their platform. 

With this enormous power over online speech, corporations like Facebook are beset by many demands from users and governments alike:

As a result, private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. 

Perhaps surprisingly, Facebook calls upon governments to regulate content moderation by Internet companies:

Facebook has therefore joined the call for new regulatory frameworks for online content—frameworks that ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression. This balance is necessary to protect the open internet, which is increasingly threatened—even walled off—by some regimes. Facebook wants to be a constructive partner to governments as they weigh the most effective, democratic, and workable approaches to address online content governance.

The White Paper then addresses four questions regarding the regulation of online content:

1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?

Regulators can aim to achieve the goal of reducing harmful speech in three ways: (1) increasing accountability for internet companies by requiring that certain systems and procedures be in place, (2) requiring "specific performance targets" that companies must meet in moderating content that violates their policies (given that perfect enforcement is impossible), and (3) requiring that companies restrict certain forms of speech beyond what is already considered illegal content. Generally, Facebook favors the first approach. "By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts," Facebook explains. Facebook thinks the three approaches can be adopted in combination, and underscores that "the most important elements of any system will be due regard for each of the human rights and values at stake, as well as clarity and precision in the regulation."

2. How should regulation enhance the accountability of internet platforms to the public?

Facebook recommends that regulation require internet content moderation systems to be "consultative, transparent, and subject to independent oversight." "Specifically, procedural accountability regulations could include, at a minimum, requirements that companies publish their content standards, provide avenues for people to report to the company any content that appears to violate the standards, respond to such user reports with a decision, and provide notice to users when removing their content from the site." Facebook suggests that the law could incentivize or require, where appropriate, the following measures:

  • Insight into a company’s development of its content standards.
  • A requirement to consult with stakeholders when making significant changes to standards.
  • An avenue for users to provide their own input on content standards.
  • A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.
  • Public reporting on policy enforcement (possibly including how much content was removed from the site and for what reasons, how much content was identified by the company through its own proactive means before users reported it, how often the content appears on its site, etc.). 

Facebook recommends that countries draw upon existing approaches such as the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online.

3. Should regulation require internet companies to meet certain performance targets?

Facebook sees trade-offs in government regulation that would require companies to meet performance targets in enforcing their content moderation rules. This approach would hold companies responsible for meeting the targets rather than for the systems they put in place to achieve them; the government would focus on specific metrics in judging a company's adherence to content moderation standards. The prevalence of content deemed harmful is a promising area for developing such standards: much harmful content is harmful in proportion to the number of people who are exposed to it and engage with it, so monitoring prevalence would allow regulators to determine the extent to which harm is occurring on a platform. For content that is harmful even with a limited audience, such as child sexual exploitation, the metric could instead focus on the timeliness of companies' action against such content. Creating thresholds for violating content also requires that companies and regulators agree on which content is deemed harmful. However, Facebook cautions that performance targets can have unintended consequences:

"There are significant trade-offs regulators must consider when identifying metrics and thresholds. For example, a requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. The biggest internet companies have developed technology that allows them to detect certain types of content violations with much greater speed and accuracy than human reporting. For instance, from July through September 2019, the vast majority of content Facebook removed for violating its hate speech, self-harm, child exploitation, graphic violence, and terrorism policies was detected by the company’s technology before anyone reported it. A regulatory focus on response to user or government reports must take into account the cost it would pose to these company-led detection efforts."

4. Should regulation define which “harmful content” should be prohibited on internet platforms?

Governments are considering whether to develop regulations that define “harmful content” and require internet platforms to remove new categories of harmful speech. Facebook recommends that governments start with the freedom of expression recognized in Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Governments seeking to regulate internet content moderation must grapple with its complexities: rules should take user preferences into account without undermining the goal of promoting expression. Facebook also advises that governments consider the practicalities of Internet companies moderating content: "Companies use a combination of technical systems and employees and often have only the text in a post to guide their assessment. The assessment must be made quickly, as people expect violations to be removed within hours or days rather than the months that a judicial process might take. The penalty is generally removal of the post or the person’s account." Accordingly, regulations need to be enforceable at scale and allow flexibility across language, trends, and content.

According to Facebook, creating regulations for social media companies requires the combined efforts not just of lawmakers and private companies, but also of the individuals who use the online platforms. Governments should create incentives, by ensuring accountability in content moderation, that allow companies to balance safety, privacy, and freedom of expression. The internet is global, and regulations must respect the global scale of communication and its spread across borders. Freedom of expression cannot be trampled, and any decision must be made with its impact on that right in mind. Regulators also need to understand the technology involved and the proportionality with which to address harmful content. Each platform is its own entity, and what works best for one may not work best for another. A well-developed framework will make the internet a safer place and allow for continued freedom of expression.

--written by Bisola Oni

 

 

Should tech companies do more for election security? Hard lessons from Russian social media warfare in the 2016 U.S. elections

Bill Gates, co-founder of Microsoft, joined the growing number of high-profile individuals demanding that the U.S. government step up its regulation of big tech companies. In a June 2019 interview at the Economic Club of Washington, DC, Gates said, “Technology has become so central that governments have to think: What does that mean about elections?” Gates focused on the need to reform user privacy rights and data security.

This concern follows the details, outlined in Volume I, Section II of the Mueller Report, of a Russian-led social media campaign to “sow discord in the U.S. political system through what it termed ‘information warfare.’” According to the Mueller Report, a Russian-based organization known as the Internet Research Agency (IRA) “carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.” As early as 2014, IRA employees traveled to the United States on intelligence-gathering missions to obtain information and photographs for use in their social media posts. After returning to St. Petersburg, IRA agents began creating and operating social media accounts and group pages that falsely claimed to be controlled by American activists. These accounts addressed divisive political and social issues in America and were designed to attract American audiences. The IRA's operation also included the purchase of political advertisements on social media in the names of American persons and entities.

Once the IRA-controlled accounts established a widespread following, they began organizing and staging political rallies within the United States. According to the Mueller Report, IRA-controlled accounts were used to announce and promote the events. Once potential attendees RSVP’d to the event page, the IRA-controlled account would then message these individuals to ask if they were interested in serving as an “event coordinator.” The IRA then further promoted the event by contacting US media about the event and directing them to speak with the coordinator. After the event, the IRA-controlled accounts posted videos and photographs of the event. Because the IRA was able to recruit unwitting American assets to contribute to the events, there was no need for any IRA employee to be present at the actual event.

Throughout the 2016 election season, several prominent political figures (including President Trump, Donald J. Trump Jr., Eric Trump, Kellyanne Conway, and Michael Flynn) and various American media outlets responded to, interacted with, or otherwise promoted dozens of tweets, posts, and other political content created by the IRA. By the end of the 2016 U.S. election, the IRA had the ability to reach millions of Americans through its social media accounts. The Mueller Report confirmed the following information with individual social media companies:

  1. Twitter identified 3,814 IRA-controlled accounts that directly contacted an estimated 1.4 million people. In the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets.
  2. Facebook identified 470 IRA-controlled accounts that made more than 80,000 posts, reaching as many as 126 million persons. The IRA also paid for 3,500 advertisements.
  3. Instagram identified 170 IRA-controlled accounts that posted approximately 120,000 pieces of content.

Since the details of the IRA’s social media campaign were publicized, big tech companies have been subject to heightened levels of scrutiny regarding their effort to combat misinformation and other foreign interference in American elections. However, many members of Congress were pushing for wide-ranging social media reform even before the release of the Mueller Report.

In April 2018, Facebook founder and CEO Mark Zuckerberg testified over a two-day period during a joint session of the Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee. These hearings were prompted by the Cambridge Analytica scandal. Cambridge Analytica, a political consulting firm with links to the Trump campaign, harvested the data of an estimated 87 million Facebook users to psychologically profile voters during the 2016 election. Zuckerberg explained that, when functioning properly, Facebook collects users' information so that advertisements can be tailored to the specific group of people a third party wishes to target as part of its advertising strategy; in this scenario, the third parties never receive any Facebook users' data. However, Cambridge Analytica exploited a loophole in Facebook's Application Programming Interface (API) that allowed the firm to obtain users' data after the users accessed a quiz called “thisismydigitallife.” The quiz was created by Aleksandr Kogan, a Russian American who worked at the University of Cambridge. Zuckerberg explained to members of Congress that what Cambridge Analytica did was improper, but also admitted that Facebook made a serious mistake in trusting Cambridge Analytica when the firm told Facebook it was not using the data it had collected through the quiz.

Another high-profile hearing occurred on September 5, 2018, when Twitter Co-Founder and CEO Jack Dorsey testified before the Senate Intelligence Committee about foreign influence operations on social media platforms. During this hearing, Dorsey discussed Twitter's algorithms for preventing the circulation of tweets that violate the platform's Terms of Service, including the kind of malicious behavior seen in the 2016 election. Dorsey also discussed Twitter's retrospective review of IRA-controlled accounts and how the information gathered is being used to quickly identify malicious automated accounts, a tool the IRA relied on heavily prior to the 2016 election. Lastly, Dorsey briefed the committee on Twitter's suspicion that other countries, namely Iran, may be launching their own social media campaigns.

With the 2020 election quickly approaching, these social media executives are under pressure to prevent their platforms from being abused in the election process. Likewise, the calls for elected officials to increase regulation of social media platforms are growing stronger by the day, especially now that Gates has joined the conversation.

[Sources: Mueller Report, PBS, Washington Post, CNN, The Guardian, Vox I, Vox II]
