The Free Internet Project

content moderation

Summary of EARN IT and proposed bills to amend Section 230 of CDA regarding ISP safe harbor


Section 230 of the Communications Decency Act of 1996 has come under fire in the U.S. Congress. Republican lawmakers contend that Section 230 is being invoked by Internet platforms, such as Facebook, Google, and Twitter, as an improper shield to censor content with a bias against conservative lawmakers and viewpoints. These lawmakers contend that Section 230 requires Internet sites to maintain "neutrality" or be a "neutral public forum." However, some legal experts, including Jeff Kosseff, who wrote a book on the legislative history and subsequent interpretation of Section 230, contend this interpretation is a blatant misreading of Section 230, which specifically creates immunity from civil liability for ISPs for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Donald Trump issued an Executive Order that attempts to (re)interpret "good faith" to require political neutrality.  The Department of Justice appeared to concede, however, that "good faith" is unclear and recommended that Congress provide a statutory definition of the term.  Several Republican lawmakers in the House and the Senate have proposed new legislation that would reform or eliminate Section 230 and limit Internet platforms' ability to censor content that the platforms deem harmful, obscene, or misleading.  This article summarizes the proposed bills to amend Section 230. 

1. Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 (EARN IT Act, S.3398): no immunity for violation of law on child sexual abuse material unless ISP earns back via best practices

The EARN IT Act was introduced by Senator Lindsey Graham (R-SC) and co-sponsored by Senator Richard Blumenthal (D-CT). The EARN IT Act's main purpose is to carve out an exception to the ISP immunity under Section 230(c)(2)(A) and thus to expose ISPs to potential civil liability pursuant to 18 U.S.C. section 2255 or state law based on activity that violates 18 U.S.C. section 2252 or 2252A (which cover child sexual abuse material (CSAM) distribution or receipt). However, an ISP can "EARN" back its immunity if it satisfies the requirements of the Act's newly created safe harbor:

  • "(i) an officer of the provider has elected to certify to the Attorney General under section 4(d) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 that the provider has implemented, and is in compliance with, the child sexual exploitation prevention best practices contained in a law enacted under the expedited procedures under section 4(c) of such Act and such certification was in force at the time of any alleged acts or omissions that are the subject of a claim in a civil action or charge in a State criminal prosecution brought against such provider; or
  • “(ii) the provider has implemented reasonable measures relating to the matters described in section 4(a)(3) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, subject to the exceptions authorized under section 4(a)(1)(B)(ii) of that Act, to prevent the use of the interactive computer service for the exploitation of minors.”

To develop the "child sexual exploitation prevention best practices" required for the new safe harbor, the EARN IT Act would create a commission called the “National Commission on Online Child Sexual Exploitation Prevention,” consisting of sixteen members. The Commission’s duty would be to devise a list of “best practices” for combatting child sexual abuse material (CSAM) and send the list to Attorney General William Barr, the Secretary of Homeland Security, and the Chairman of the Federal Trade Commission—all of whom would be appointed as members of the Commission—for review. These three members, dubbed the “Committee,” would have the power to amend, deny, or approve the list of “best practices” created by the Commission. After the Committee approves a list of “best practices,” the list is sent to Congress, which has ninety days to file a “disapproval motion” to veto the list from going into effect. 

Text of EARN IT Act (S. 3398)

Sponsors Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT)

UPDATED July 4, 2020: The Senate Judiciary Committee unanimously approved the bill (22-0).  It now will be considered by the Senate. 

2. Limiting Section 230 Immunity to Good Samaritans Act: creates civil action against edge providers for "intentionally selective enforcement" of content moderation

In June 2020, Sen. Josh Hawley (R-MO) introduced a bill titled Limiting Section 230 Immunity to Good Samaritans Act. The bill defines a "good faith" requirement in Section 230 for content moderation by a newly defined category of "edge providers": Internet platforms with more than 30 million users in the U.S. or more than 300 million users worldwide, plus more than $1.5 billion in annual global revenue. It does not include 501(c)(3) nonprofits. The bill defines "good faith" to exclude "intentionally selective enforcement of the terms of service," including by an algorithm that moderates content. The term is vague. Presumably, it is meant to cover politically biased moderation (see Ending Support for Internet Censorship Act below), but it might also apply to situations in which ISPs selectively enforce their policies simply because of the enormous amount of content (billions of posts) on their platforms, in a kind of triage. The bill also creates a cause of action for users to sue Internet platforms that engage in intentionally selective enforcement and to recover $5,000 in statutory damages or actual damages.  

Text of Limiting Section 230 Immunity to Good Samaritans Act

Sponsors: Sen. Josh Hawley (R-MO); Sens. Marco Rubio (R-FL), Mike Braun (R-IN), Tom Cotton (R-AR); Sen. Kelly Loeffler (R-GA)

3.  Ending Support for Internet Censorship Act ("Hawley Bill," S.1914): ISPs must get "immunity certification" from FTC that ISP doesn't moderate content in "politically biased manner"

The Hawley bill, Ending Support for Internet Censorship Act, introduced by Senator Josh Hawley (R-MO) and co-sponsored by Senators Marco Rubio (R-FL), Mike Braun (R-IN), and Tom Cotton (R-AR), seeks to require ISPs to obtain an "immunity certification" from the Federal Trade Commission; the certification requires the ISP "not [to] moderate information provided by other information content providers in a manner that is biased against a political party, political candidate, or political viewpoint."  The ISP must "prove[] to the Commission by clear and convincing evidence that the provider does not (and, during the 2-year period preceding the date on which the provider submits the application for certification, did not) moderate information provided by other information content providers in a politically biased manner."

The bill defines "politically biased moderation" as:

POLITICALLY BIASED MODERATION.—The moderation practices of a provider of an interactive computer service are politically biased if—

  • “(I) the provider moderates information provided by other information content providers in a manner that—
  • “(aa) is designed to negatively affect a political party, political candidate, or political viewpoint; or
  • “(bb) disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint; or
  • “(II) an officer or employee of the provider makes a decision about moderating information provided by other information content providers that is motivated by an intent to negatively affect a political party, political candidate, or political viewpoint."

Text of ESICA (S. 1914)

Sponsor: Senator Josh Hawley (R-MO)

4. Stop the Censorship Act (“Gosar Bill,” H.R.4027): removes "objectionable" from Good Samaritan provision for content moderation, limiting it to "unlawful material"

The Gosar bill, Stop the Censorship Act, seeks to eliminate Section 230 immunity for Internet platforms like Facebook, Google, and Twitter when they censor content that the platforms deem "objectionable." US Representative Paul Gosar (R-AZ), joined by fellow conservative Congressmen Mark Meadows (R-NC), Ralph Norman (R-SC), and Steve King (R-IA), believes the language of Section 230's Good Samaritan blocking provision is too broad. The Gosar bill would strike the language in Section 230 that allows Internet platforms to censor content deemed "objectionable"; the only content that should be censored, the sponsors argue, is "unlawful" content (e.g., CSAM). Further, the bill would establish an option for platform users to choose between a safe space on the platform (featuring content-moderated feeds controlled by the platform) and an unfettered platform (including all otherwise objectionable content).  The bill would change Section 230(c)(2) as follows:


(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or..."

Proposed change:

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of unlawful material;

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1); and

(C) any action taken to provide users with the option to restrict access to any other material, whether or not such material is constitutionally protected.”

Text of SCA (H.R.4027)

Sponsors: Rep. Gosar. Cosponsors: Rep. Mark Meadows (R-NC), Rep. Steve King (R-IA), Rep. Ralph Norman (R-SC), Rep. Ted Yoho (R-FL), Rep. Ron Wright (R-TX), Rep. Glenn Grothman (R-WI)

5. Stopping Big Tech's Censorship Act (Sen. Kelly Loeffler (R-GA)):  adds conditions to both Section 230(c)(1) and (c)(2) immunities, including subjecting content moderation of Internet platforms to First Amendment-style limitations on government restrictions of speech

US Senator Kelly Loeffler (R-GA) recently introduced the “Stopping Big Tech’s Censorship Act,” which would amend language in Section 230 of the Communications Decency Act to “protect First Amendment Rights” of users on social media platforms.

The first change is to the immunity in Section 230(c)(1). The bill would require Internet platforms to "take[] reasonable steps to prevent or address the unlawful use of the interactive computer service or unlawful publication of information on the interactive computer service" in order to qualify for the immunity from defamation and other claims based on their users' content.

The second change is to the immunity in Section 230(c)(2). Internet platforms will only enjoy Section 230(c) immunity for their content moderation if: “(I) the action is taken in a viewpoint-neutral manner; (II) the restriction limits only the time, place, or manner in which the material is available; and (III) there is a compelling reason for restricting that access or availability.” This set of requirements is substantial and might be hard to put into place with current community standards.  For example, removing hate speech, white supremacist propaganda, neo-Nazi content, racist speech, and other offensive content might be viewed as viewpoint discrimination under this approach.

Duty to take reasonable steps to moderate unlawful content. Loeffler's bill also adds a requirement that the Internet platforms "take reasonable steps to prevent or address the unlawful use of the interactive computer service or unlawful publication of information on the interactive computer service."

Disclosure of policies. Further, the bill requires Internet platforms to disclose their content moderation policy: “(A) a provider of an interactive computer service shall, in any terms of service or user agreement produced by the provider, clearly explain the practices and procedures used by the provider in restricting access to or availability of any material; and (B) a provider or user of an interactive computer services that decides to restrict access to or availability of any material shall provide a clear explanation of that decision to the information content provider that created or developed the material.”

Text of SBTCA

-written by Adam Wolfe

Facebook announces content moderation policy change in clamp down on QAnon and movements tied to violence

On August 19, 2020, Facebook announced a change to its community standards for moderating content on Facebook for safety reasons. Facebook's community standards already require the removal of content that calls for and advocates violence and the removal of individuals and groups promoting violence. Facebook now will restrict content that does not necessarily advocate violence but is "tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon." Facebook explained, "we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior." U.S.-based militia organizations and the far-right conspiracy movement QAnon have been growing on the social site. As we reported, earlier in July 2020, Twitter suspended 7,000 users who supported QAnon conspiracy theories. Facebook followed suit in August 2020 by removing 790 QAnon groups on Facebook (including one group with 200,000 members) and 10,000 Instagram accounts.

Facebook listed seven actions they planned to take against movements and organizations tied to violence:

  1. Remove From Facebook: Facebook Pages, Groups, and Instagram accounts that are a part of harmful movements and organizations will be removed from the platform when they discuss potential violence. To help identify when violence is taking place, Facebook plans to study the technology and symbolism these groups use.
  2. Limit Recommendations: Pages, Groups, and Instagram accounts associated with harmful organizations that are not removed will not be recommended to people as Pages, Groups, or accounts they might want to follow.
  3. Reduce Ranking in News Feed: Going forward, content from these Pages and Groups will be ranked lower in the News Feed. This will reduce the number of people who see these Pages in their News Feed on Facebook.
  4. Reduce in Search: Hashtags and titles for related content will be ranked lower in search suggestions and will not be suggested in the Search Typeahead.
  5. Reviewing Related Hashtags on Instagram: On Instagram specifically, the Related Hashtags feature has been removed. This feature allowed people to view hashtags similar to those they use. Facebook says the feature could return in the future once better safety measures have been introduced to protect people using it.
  6. Prohibit Use of Ads, Commerce Surfaces and Monetization Tools: Facebook has planned a two-step approach to prohibiting ads and commerce related to these movements. Currently, it has stopped Facebook Pages related to these movements from running ads or selling products through Marketplace and Shop. In the future, Facebook plans to take the stronger step of stopping anyone from running ads that praise or support these movements.
  7. Prohibit Fundraising: Finally, fundraising associated with these groups will be prohibited. Nonprofits that identify with these groups will be barred from using Facebook's fundraising tools.

With the new policy, Facebook expands its existing policy against violence to include the removal of groups and individuals that pose a risk to public safety. Previously, according to Facebook, these groups could not be removed because they did not meet the rigorous criteria to be deemed dangerous to the platform. Facebook is not banning QAnon content from the site in its entirety; rather, it is restricting the ability of the individuals who follow these groups to organize on the platform. QAnon believers can still post these conspiracy theories on the platform in an individualized manner.

With the expansion of its policy, Facebook takes an important step in stopping the spread of harmful information on its platform. As a result of the expanded policy, Facebook has already been able to take down hundreds of groups and ads tied to QAnon and militia organizations and thousands tied to these movements on Instagram. Whether these changes are effective enough to keep Facebook from being used as a tool to organize violence remains to be seen, however.

--written by Bisola Oni

FCC request for comments on issuing regulations on Section 230 of the Communications Decency Act

Earlier in August 2020, the Federal Communications Commission opened a public comment period for people to express their views on the "Petition for Rulemaking recently filed by the Department of Commerce regarding Section 230 of the Communications Decency Act of 1996." The inquiry was prompted by Donald Trump's Executive Order on Preventing Online Censorship, issued on May 28, 2020. Trump has accused social media sites of suppressing conservative speech after Twitter flagged some of his tweets for violating their community standards. In the Executive Order, Trump takes the view that Internet companies that "engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints" should lose immunity under Section 230 of the CDA. That provision states in part: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230). For more about how Section 230 operates, read our prior explanation. This provision, enacted at a time when internet communications were in their infancy, has been a vital protection invoked by social media sites that enable their users to exchange information. In the Executive Order, Trump called upon "the Secretary of Commerce (Secretary), in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA), [to] file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify" Section 230. The NTIA did so, largely along the lines suggested by the Executive Order. 

Now, the FCC has opened up public comments on the NTIA petition. As of Aug. 24, 2020, the FCC had received 619 comments.  The FCC's involvement has already drawn controversy. In a speech in May, Republican FCC Commissioner Michael O'Rielly expressed "deep reservations" about whether the FCC had any authority to issue regulations on Section 230. On August 4, the White House announced it was withdrawing O'Rielly's nomination for another term on the FCC, meaning his tenure will end before the new Congress starts next year, according to the Wall Street Journal. 

Whether or not the FCC has legal authority to issue regulations related to Section 230 (which it has not done so far) is likely to be contested. In its petition, the NTIA argues: 

Section 201(b) of the Communications Act (Act) empowers the Commission to “prescribe such rules and regulations as may be necessary in the public interest to carry out this chapter.” Under this authority, the FCC should promulgate rules to resolve ambiguities in Section 230. The Supreme Court has confirmed that “the grant in section 201(b) means what it says: The FCC has rulemaking authority to carry out the ‘provisions of this Act.’” Section 230, in turn, was incorporated into the Act – in the same portion of the Act, Title II, as section 201(b) – by the Telecommunications Act of 1996 (1996 Act). The fact that section 230 was enacted after section 201(b) is of no consequence; the Supreme Court repeatedly has held that the Commission’s section 201(b) rulemaking power extends to all subsequently enacted provisions of the Act, specifically identifying those added by the Telecommunications Act of 1996. Thus, the Commission has authority under section 201(b) to initiate a rulemaking to implement section 230. That broad rulemaking authority includes the power to clarify the language of that provision, as requested in the petition.



Political bias?: WSJ reports on Facebook's alleged favoritism in content moderation of Indian politician T. Raja Singh and ruling Hindu nationalist party, Bharatiya Janata Party


On Aug. 14, 2020, Newley Purnell and Jeff Horwitz of the Wall Street Journal reported on possible political favoritism shown by Facebook in its content moderation of posts on Facebook by ruling-party Hindu nationalist politicians in India. These allegations of political bias come as Facebook faces similar claims of political bias for and against Donald Trump and conservatives in the United States.  The Wall Street Journal article relies on "current and former Facebook employees familiar with the matter." According to the article, in its content moderation, Facebook flagged posts by Bharatiya Janata Party (BJP) politician T. Raja Singh and other Hindu nationalist individuals and groups for “promoting violence”--which should have resulted in the suspension of his Facebook account. But Facebook executives allegedly intervened in the content moderation. Facebook's "top public-policy executive in the country, Ankhi Das, opposed applying the hate-speech rules to Mr. Singh and at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence, said the current and former employees." Ankhi Das is a top Facebook official in India and lobbies India’s government on Facebook’s behalf. Das reportedly explained her reasoning to Facebook staff that "punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users."

According to the Wall Street Journal article, Andy Stone, a Facebook spokesperson, "acknowledged that Ms. Das had raised concerns about the political fallout that would result from designating Mr. Singh a dangerous individual, but said her opposition wasn’t the sole factor in the company’s decision to let Mr. Singh remain on the platform." Facebook said it has not yet decided whether it will ban the BJP politician from the social media platform.

The WSJ article gives examples of alleged political favoritism toward the BJP. Facebook reportedly announced its removal of inauthentic pages linked to Pakistan's military and to the Congress party, the BJP's rival. However, Facebook made no such announcement when it removed the BJP's inauthentic pages, because Das interceded. Facebook's safety staff determined that Singh's posts warranted a permanent ban from Facebook, but Facebook only deleted some of Singh's posts and stripped his account of verified status.  In addition, Facebook's Das praised Modi in an essay in 2017, and she shared on her Facebook page "a post from a former police official, who said he is Muslim, in which he called India’s Muslims traditionally a 'degenerate community' for whom 'Nothing except purity of religion and implementation of Shariah matter.'"

On August 16, 2020, Facebook's Das filed a criminal complaint against journalist Awesh Tiwari for a post he made on his Facebook page about the WSJ article. Das alleges a comment someone posted to Tiwari's page constituted a threat against her. 

--written by Alfa Alemayehu

Revisiting Facebook's "White Paper" Proposal for "Online Content Regulation"

In the Washington Post last year, Facebook CEO Mark Zuckerberg called for governments to enact new regulations for content moderation. In February 2020, Monika Bickert, the VP for Content Policy at Facebook, published a White Paper, "Charting a Way Forward: Online Content Regulation," outlining four key questions and recommendations for governments to regulate content moderation. As the U.S. Congress is considering several bills to amend Section 230 of the Communications Decency Act and the controversy over content moderation rages on, we thought it would be worth revisiting Facebook's White Paper. It is not every day that an Internet company asks for government regulation.

The White Paper draws attention to how corporations like Facebook make numerous daily decisions on what speech is disseminated online, a dramatic shift from the past, when such decisions were often raised in the context of government regulation and its intersection with the free speech rights of individuals. Online content moderation marks a fundamental shift in speech regulation from governments to private corporations or Internet companies: 

For centuries, political leaders, philosophers, and activists have wrestled with the question of how and when governments should place limits on freedom of expression to protect people from content that can contribute to harm. Increasingly, privately run internet platforms are making these determinations, as more speech flows through their systems. Consistent with human rights norms, internet platforms generally respect the laws of the countries in which they operate, and they are also free to establish their own rules about permissible expression, which are often more restrictive than laws. As a result, internet companies make calls every day that influence who has the ability to speak and what content can be shared on their platform. 

With the enormous power over online speech, corporations like Facebook are beset with many demands from users and governments alike:

As a result, private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. 

Perhaps surprisingly, Facebook calls upon governments to regulate content moderation by Internet companies:

Facebook has therefore joined the call for new regulatory frameworks for online content—frameworks that ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression. This balance is necessary to protect the open internet, which is increasingly threatened—even walled off—by some regimes. Facebook wants to be a constructive partner to governments as they weigh the most effective, democratic, and workable approaches to address online content governance.

The White Paper then focused on four questions regarding the regulation of online content:

1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?

Regulators can aim to achieve the goal of reducing harmful speech in three ways: (1) increase accountability for internet companies by requiring that certain systems and procedures be in place, (2) require "specific performance targets" for companies to meet in moderating content that violates their policies (given that perfect enforcement is impossible), and (3) require that companies restrict certain forms of speech beyond what is already considered illegal content. Generally, Facebook leans toward the first approach as the best way to go. "By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts," Facebook explains. Facebook thinks the three approaches can be adopted in combination, and underscores that "the most important elements of any system will be due regard for each of the human rights and values at stake, as well as clarity and precision in the regulation."

2. How should regulation enhance the accountability of internet platforms to the public?

Facebook recommends that regulation require internet content moderation systems follow guidelines of being "consultative, transparent, and subject to independent oversight." "Specifically, procedural accountability regulations could include, at a minimum, requirements that companies publish their content standards, provide avenues for people to report to the company any content that appears to violate the standards, respond to such user reports with a decision, and provide notice to users when removing their content from the site." Facebook recommends that the law can incentivize or require, where appropriate, the following measures: 

  • Insight into a company’s development of its content standards.
  • A requirement to consult with stakeholders when making significant changes to standards.
  • An avenue for users to provide their own input on content standards.
  • A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.
  • Public reporting on policy enforcement (possibly including how much content was removed from the site and for what reasons, how much content was identified by the company through its own proactive means before users reported it, how often the content appears on its site, etc.). 

Facebook recommends that countries draw upon the existing approaches in the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online.

3. Should regulation require internet companies to meet certain performance targets?

Facebook sees trade-offs in government regulation that would require companies to meet performance targets in enforcing their content moderation rules. This approach would hold companies responsible for whether they meet the targets, not for the systems they put in place to achieve those standards. Using this metric, the government would focus on specific targets in judging a company’s adherence to content moderation standards. The prevalence of content deemed harmful is a promising area for exploring the development of company standards: harmful content is harmful in proportion to the number of people who are exposed to and engage with it, so monitoring prevalence would allow regulators to determine the extent to which harm is being done on the platform. In the case of content that is harmful even with a limited audience, such as child sexual exploitation, the metric would shift to focus on the timeliness of action taken against such content by companies. Creating thresholds for violating content also requires that companies and regulators agree on which content is deemed harmful. However, Facebook cautions that performance targets can have unintended consequences: "There are significant trade-offs regulators must consider when identifying metrics and thresholds. For example, a requirement that companies ‘remove all hate speech within 24 hours of receiving a report from a user or government’ may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. The biggest internet companies have developed technology that allows them to detect certain types of content violations with much greater speed and accuracy than human reporting. For instance, from July through September 2019, the vast majority of content Facebook removed for violating its hate speech, self-harm, child exploitation, graphic violence, and terrorism policies was detected by the company’s technology before anyone reported it. A regulatory focus on response to user or government reports must take into account the cost it would pose to these company-led detection efforts."

4. Should regulation define which “harmful content” should be prohibited on internet platforms?

Governments are considering whether to develop regulations that define “harmful content,” requiring that internet platforms remove new categories of harmful speech. Facebook recommends that governments start with the freedom of expression recognized by Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Governments seeking to regulate internet content moderation must confront its complexities: rules must take user preferences into account without undermining the goal of promoting expression. Facebook advises that governments consider the practicalities of internet companies moderating content: "Companies use a combination of technical systems and employees and often have only the text in a post to guide their assessment. The assessment must be made quickly, as people expect violations to be removed within hours or days rather than the months that a judicial process might take. The penalty is generally removal of the post or the person’s account." Accordingly, regulations need to be enforceable at scale, and need to allow flexibility across languages, trends, and content.

According to Facebook, creating regulations for social media companies has to be achieved through the combined efforts of not just lawmakers and private companies, but also the individuals who use the online platforms. Governments must also create incentives, by ensuring accountability in content moderation, that allow companies to balance safety, privacy, and freedom of expression. The internet is a global entity, and regulations must respect the global scale and spread of communication across borders. Freedom of expression cannot be trampled, and any decision must be made with its impact on these rights in mind. Regulators also need an understanding of the technology involved and of the proportionality with which to address harmful content. Each platform is its own entity, and what works best for one may not work best for another. A well-developed framework will make the internet a safer place and allow for continued freedom of expression.

--written by Bisola Oni



Summary: Mounting Allegations That Facebook and Zuckerberg Have Shown Political Bias and Favoritism for Trump and Conservatives in Content Moderation

In the past week, more allegations surfaced that Facebook executives have intervened in questionable ways in the company's content moderation procedures, showing favoritism to Donald Trump, Breitbart, and other conservatives. These news reports cut against the narrative that Facebook has an "anti-conservative bias." For example, according to some allegations, Facebook executives didn't want to enforce existing community standards, or change the community standards, in a way that would flag conservatives for violations, even when content moderators found violations by conservatives. Below is a summary of the main allegations that Facebook has been politically biased in favor of Trump and conservatives. This page will be updated if more allegations are reported.

Ben Smith, How Pro-Trump Forces Work the Refs in Silicon Valley, N.Y. Times (Aug. 9, 2020): "Since then, Facebook has sought to ingratiate itself to the Trump administration, while taking a harder line on Covid-19 misinformation. As the president’s backers post wild claims on the social network, the company offers the equivalent of wrist slaps — a complex fact-checking system that avoids drawing the company directly into the political fray. It hasn’t worked: The fact-checking subcontractors are harried umpires, an easy target for Trump supporters’ ire....In fact, two people close to the Facebook fact-checking process told me, the vast bulk of the posts getting tagged for being fully or partly false come from the right. That’s not bias. It’s because sites like The Gateway Pundit are full of falsehoods, and because the president says false things a lot."

Olivia Solon, Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages, NBC News (Aug. 7, 2020, 2:31 PM): "The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook's fact-checking could go public and fuel allegations that the social network was biased against conservatives. The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias."

Craig Silverman, Facebook Fired an Employee Who Collected Evidence of Right-Wing Page Getting Preferential Treatment, Buzzfeed (Aug. 6, 2020, 4:13 PM): "[S]ome of Facebook’s own employees gathered evidence they say shows Breitbart — along with other right-wing outlets and figures including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and conservative video production nonprofit Prager University — has received special treatment that helped it avoid running afoul of company policy. They see it as part of a pattern of preferential treatment for right-wing publishers and pages, many of which have alleged that the social network is biased against conservatives." Further: "Individuals that spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s 'respectful communication policy.'”

Ryan Mac, Instagram Displayed Negative Related Hashtags for Biden, but Hid them for Trump, Buzzfeed (Aug. 5, 2020, 12:17 PM): "For at least the last two months, a key Instagram feature, which algorithmically pushes users toward supposedly related content, has been treating hashtags associated with President Donald Trump and presumptive Democratic presidential nominee Joe Biden in very different ways. Searches for Biden also return a variety of pro-Trump messages, while searches for Trump-related topics only returned the specific hashtags, like #MAGA or #Trump — which means searches for Biden-related hashtags also return counter-messaging, while those for Trump do not."

Ryan Mac & Craig Silverman, "Hurting People at Scale": Facebook's Employees Reckon with the Social Network They've Built, Buzzfeed (July 23, 2020, 12:59 PM): Yaël Eisenstat, Facebook's former election ads integrity lead "said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned."

Elizabeth Dwoskin, Craig Timberg, & Tony Romm, Zuckerberg once wanted to sanction Trump. Then Facebook wrote rules that accommodated him., Wash. Post (June 28, 2020, 6:25 PM): "But that started to change in 2015, as Trump’s candidacy picked up speed. In December of that year, he posted a video in which he said he wanted to ban all Muslims from entering the United States. The video went viral on Facebook and was an early indication of the tone of his candidacy....Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines....In spring of 2016, Zuckerberg was also talked out of his desire to write a post specifically condemning Trump for his calls to build a wall between the United States and Mexico, after advisers in Washington warned it could look like choosing sides, according to Dex Torricke-Barton, one of Zuckerberg’s former speechwriters."  

Regarding election interference: "Facebook’s security engineers in December 2016 presented findings from a broad internal investigation, known as Project P, to senior leadership on how false and misleading news reports spread so virally during the election. When Facebook’s security team highlighted dozens of pages that had peddled false news reports, senior leaders in Washington, including Kaplan, opposed shutting them down immediately, arguing that doing so would disproportionately impact conservatives, according to people familiar with the company’s thinking. Ultimately, the company shut down far fewer pages than were originally proposed while it began developing a policy to handle these issues."

Craig Timberg, How conservatives learned to wield power inside Facebook, Wash. Post (Feb. 20, 2020, 1:20 PM): "In a world of perfect neutrality, which Facebook espouses as its goal, the political tilt of the pages shouldn’t have mattered. But in a videoconference between Facebook’s Washington office and its Silicon Valley headquarters in December 2016, the company’s most senior Republican, Joel Kaplan, voiced concerns that would become familiar to those within the company. 'We can’t remove all of it because it will disproportionately affect conservatives,' said Kaplan, a former George W. Bush White House official and now the head of Facebook’s Washington office, according to people familiar with the meeting who spoke on the condition of anonymity to protect professional relationships."

Related articles about Facebook

Ben Smith, What's Facebook's Deal with Donald Trump?, N.Y. Times (June 21, 2020): "Mr. Trump’s son-in-law, Jared Kushner, pulled together the dinner on Oct. 22 on short notice after he learned that Mr. Zuckerberg, the Facebook founder, and his wife, Priscilla Chan, would be in Washington for a cryptocurrency hearing on Capitol Hill, a person familiar with the planning said. The dinner, the person said, took place in the Blue Room on the first floor of the White House. The guest list included Mr. Thiel, a Trump supporter, and his husband, Matt Danzeisen; Melania Trump; Mr. Kushner; and Ivanka Trump. The president, a person who has spoken to Mr. Zuckerberg said, did most of the talking. The atmosphere was convivial, another person who got an account of the dinner said. Mr. Trump likes billionaires and likes people who are useful to him, and Mr. Zuckerberg right now is both."

Deepa Seetharaman, How a Facebook Employee Helped Trump Win--But Switched Sides for 2020, Wall St. J (Nov. 24, 2019, 3:18 PM): "One of the first things Mr. Barnes and his team advised campaign officials to do was to start running fundraising ads targeting Facebook users who liked or commented on Mr. Trump’s posts over the past month, using a product now called 'engagement custom audiences.' The product, which Mr. Barnes hand-coded, was available to a small group, including Republican and Democratic political clients. (The ad tool was rolled out widely around Election Day.) Within the first few days, every dollar that the Trump campaign spent on these ads yielded $2 to $3 in contributions, said Mr. Barnes, who added that the campaign raised millions of dollars in those first few days. Mr. Barnes frequently flew to Texas, sometimes staying for four days at a time and logging 12-hour days. By July, he says, he was solely focused on the Trump campaign. When on-site in the building that served as the Trump campaign’s digital headquarters in San Antonio, he sometimes sat a few feet from Mr. Parscale. The intense pace reflected Trump officials’ full embrace of Facebook’s platform, in the absence of a more traditional campaign structure including donor files and massive email databases."

Facebook removes Donald Trump post claiming children are "almost immune" for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account for same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “this it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation." 

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. "The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears it complied and removed the tweet. 

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, for example, 23,170 minors tested positive for coronavirus by July 2020. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children: 

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, did examine the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses9,10,11, or from non-specific protection resulting from recent infection by other respiratory viruses12, which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because, thus far, children have had fewer positive tests than adults. According to the Mayo Clinic Staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Speaking of the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."

Cleaning house: Twitter suspends 7,000 accounts of QAnon conspiracy theory supporters

On July 21, 2020, Twitter suspended 7,000 accounts spreading QAnon conspiracy theories. In a tweet about the banning of these QAnon accounts, Twitter reiterated their commitment to taking "strong enforcement actions on behavior that has the potential to lead to offline harm." Twitter identified the QAnon accounts' violations of its community standards against "multi-account[s]," "coordinating abuse around individual victims," and "evad[ing] a previous suspension." In addition to the permanent suspensions, Twitter also felt it necessary to ban content and accounts "associated with Qanon" from the Trends and recommendations on Twitter, as well as to avoid "highlighting this activity in search and conversations." Further, Twitter will block "URLs associated with QAnon from being shared on Twitter." 

These actions by Twitter are a bold step in what has been a highly contentious area concerning the role of social media platforms in moderating hateful or harmful content. Some critics suggested that Twitter's QAnon decision lacked notice and transparency. Other critics contended that Twitter's actions were too little to stop the "omniconspiracy theory" that QAnon has become across multiple platforms.

So what exactly is QAnon? CNN describes the origins of QAnon, which began as a single conspiracy theory: QAnon followers "claim dozens of politicians and A-list celebrities work in tandem with governments around the globe to engage in child sex abuse. Followers also believe there is a 'deep state' effort to annihilate President Donald Trump." Forbes similarly describes: "Followers of the far-right QAnon conspiracy believe a “deep state” of federal bureaucrats, Democratic politicians and Hollywood celebrities are plotting against President Trump and his supporters while also running an international sex-trafficking ring." In 2019, an internal FBI memo reportedly identified QAnon as a domestic terrorism threat.

Followers of QAnon are also active on Facebook, Reddit, and YouTube. The New York Times reported that Facebook was considering taking steps to limit the reach of QAnon content on its platform. Facebook is coordinating with Twitter and other platforms in considering its decision; an announcement is expected in the next month. Facebook has long been criticized for its response, or lack of response, to disinformation being spread on its platform. Facebook is now the subject of a boycott, Stop Hate for Profit, calling for a stop to advertising until steps are taken to halt the spread of disinformation on the social media juggernaut. Facebook continues to allow political ads using these conspiracies on its site. Forbes reports that although Facebook has seemingly tried to take steps to remove pages containing conspiracy theories, a number of pages still remain. Since 2019, Facebook has allowed 144 ads promoting QAnon on its platform, according to Media Matters. Facebook has continuously provided a platform for extremist content; it even allowed white nationalist content until officially banning it in March 2019.

Twitter's crackdown on QAnon is a step in the right direction, but it also signals how little companies like Twitter and Facebook have done to stop disinformation and pernicious conspiracy theories in the past. As conspiracy theories can undermine effective public health campaigns to stop the spread of the coronavirus, and foreign interference can undermine elections, social media companies appear to be playing a game of catch-up. They would be well served by devoting even greater resources to the problem, with more staff and clearer articulation of their policies and enforcement procedures. In the era of holding platforms and individuals accountable for actions that spread hate, social media companies now appear to realize that they have greater responsibilities for what happens on their platforms.

--written by Bisola Oni

Facebook's Oversight Board for content moderation--too little, too late to combat interference in 2020 election

Facebook has been under fire over the spread of misinformation connected with Russian involvement in the 2016 U.S. presidential election. In April 2018, the idea for an independent oversight board was discussed when CEO Mark Zuckerberg testified before Congress.

Meeting Between Facebook, Zuckerberg and Stop Hate for Profit Boycott Group Turns into a Big Fail

Facebook has come under scrutiny due to its handling of hate speech and disinformation posted on the platform. With the Stop Hate for Profit movement, corporations have begun to take steps to hold Facebook accountable for the disinformation spread on the platform. So far, more than 400 advertisers, from Coca-Cola to Ford and Lego, have pledged to stop advertising on the social media platform, according to NPR. Facebook has faced intense backlash, particularly since the 2016 election, for allowing disinformation and propaganda to be posted freely. The disinformation and hate, or “Fake News” as many call it, is aimed at misinforming voters and spreading hateful propaganda, potentially dampening voter participation.

A broad coalition of groups including Color of Change, the Anti-Defamation League, and the NAACP started the campaign Stop Hate for Profit. (For more on the origin, read Politico.) The goal of the campaign is to push Facebook to make much-needed changes in its policy guidelines as well as changes among the company's executives. The boycott targets the advertising dollars on which the social media juggernaut relies. The campaign has begun to pick up steam, with new companies announcing an end to Facebook ads every day. With this momentum, the group behind the boycott has released a list of 10 first steps Facebook can take.   

Stop Hate for Profit is asking that Facebook take accountability, have decency, and provide support to the groups most affected by the hate spread on the platform. The civil rights leaders behind this movement are focused on making changes at the executive level as well as holding Facebook more accountable for its lackluster terms of service. The top executives currently at Facebook may have conflicts of interest. Critics contend that Facebook has a duty to make sure misinformation and hate are not spread, but that Facebook does not fulfill that duty to the fullest capacity because of its relationships with politicians. Rashad Robinson, president of Color of Change, contends that there needs to be a separation between the people in charge of the content allowed on Facebook and those who are aligned with political figures. The group is asking Facebook to hire an executive with a civil rights background who can evaluate discriminatory policies and products. Additionally, the group is asking Facebook to expand what it considers hate speech. Facebook's current terms of service are criticized as ineffective and problematic.   

Facebook's policies and algorithms are among the things the group asks to be changed. Current Facebook policies allow public and private hate groups to exist and even recommend them to many users. The campaign asks that Facebook remove far-right groups that spread conspiracies, such as QAnon, from the platform. It also requests the labeling of inauthentic information that spreads hate and disinformation. By contrast, Twitter has taken small steps to label hateful content itself. While many criticize Twitter's actions as not going far enough, it has taken steps Facebook has yet to take. Throughout this process, Facebook should make these steps transparent to the public, including the number of ads rejected for hate or disinformation and a third-party audit of hate spread on the site.  

The group also drew a connection between the hate on the Facebook platform and race issues within the company. Stop Hate for Profit provided a staggering statistic: 42% of Facebook users experience harassment on the platform. This, along with the EEOC complaints filed by a former Black employee and two job candidates, points to a culture at Facebook that goes far beyond allowing far-right propaganda and misinformation on the site and highlights a lack of support for users and employees of color. All of this is used to back up why it is essential that Facebook go beyond making simple statements and actually take steps to create change.

Facebook CEO and cofounder Mark Zuckerberg agreed to meet with the civil rights groups behind the boycott amid the growing number of companies getting behind Stop Hate for Profit. Many have voiced concerns that Facebook and Zuckerberg are more concerned about messaging than legitimately fixing the underlying problems. Upon meeting with Zuckerberg on July 7, Stop Hate for Profit released a statement calling the meeting disappointing and uneventful. The group asserted that Facebook did what it had previously feared, offering only surface-level rhetoric with no real interest in committing to any real change. Of the ten recommendations, Zuckerberg was open to addressing only the hiring of a person with a civil rights background, and even then he declined to commit to making the position, if created, a C-suite executive role. Rashad Robinson tweeted a direct statement saying that Facebook was not ready to make any changes despite knowing the demands of the group. That view appears consistent with a July 2, 2020 report of a remark by Zuckerberg to employees at a virtual town hall: "We're not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue."

For now, it remains to be seen whether the increased pressure from companies pulling advertisements will eventually cause Facebook and Zuckerberg to institute the changes that progressive groups have been pushing for years. So far, it appears not.   

--written by Bisola Oni

