The Free Internet Project

Facebook

Mark Zuckerberg: Facebook to suspend political ads week before US election, add labels to premature claims of election victory

On September 3, 2020, Mark Zuckerberg published a lengthy post on his personal Facebook profile, detailing dramatic new measures Facebook is undertaking to safeguard the integrity of the U.S. elections. Zuckerberg wrote [we've added topical descriptions in brackets]:

Today, we're announcing additional steps we're taking at Facebook to encourage voting, connect people with authoritative information, and fight misinformation. These changes reflect what we've learned from our elections work over the past four years and the conversations we've had with voting rights experts and our civil rights auditors:


[Reliable Information at the top of page] We will put authoritative information from our Voting Information Center at the top of Facebook and Instagram almost every day until the election. This will include video tutorials on how to vote by mail, and information on deadlines for registering and voting in your state.

[No political ads starting the week before the election] We're going to block new political and issue ads during the final week of the campaign. It's important that campaigns can run get out the vote campaigns, and I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims. So in the week before the election, we won't accept new political or issue ads. Advertisers will be able to continue running ads they started running before the final week and adjust the targeting for those ads, but those ads will already be published transparently in our Ads Library so anyone, including fact-checkers and journalists, can scrutinize them.

[Partnering with state election authorities to identify election misinformation] We're going to extend our work with election officials to remove misinformation about voting. We already committed to partnering with state election authorities to identify and remove false claims about polling conditions in the last 72 hours of the campaign, but given that this election will include large amounts of early voting, we're extending that period to begin now and continue through the election until we have a clear result. We've already consulted with state election officials on whether certain voting claims are accurate.

[Limit the number of chats you can forward on Messenger] We're reducing the risk of misinformation and harmful content going viral by limiting forwarding on Messenger. You'll still be able to share information about the election, but we'll limit the number of chats you can forward a message to at one time. We've already implemented this in WhatsApp during sensitive periods and have found it to be an effective method of preventing misinformation from spreading in many countries.

[Remove both explicit and implicit voting misinformation] We're expanding our voter suppression policies. We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote -- for example, saying things like "you can send in your mail ballot up to 3 days after election day", which is obviously not true. (In most states, mail-in ballots have to be *received* by election day, not just mailed, in order to be counted.) We're now expanding this policy to include implicit misrepresentations about voting too, like "I hear anybody with a driver's license gets a ballot this year", because it might mislead you about what you need to do to get a ballot, even if that wouldn't necessarily invalidate your vote by itself.

[Remove COVID-misinformation to scare voters from voting] We're putting in place rules against using threats related to Covid-19 to discourage voting. We will remove posts with claims that people will get Covid-19 if they take part in voting. We'll attach a link to authoritative information about Covid-19 to posts that might use the virus to discourage voting, and we're not going to allow this kind of content in ads. Given the unique circumstances of this election, it's especially important that people have accurate information about the many ways to vote safely, and that Covid-19 isn't used to scare people into not exercising their right to vote.

Measures to stop false or premature claims of election results

Since the pandemic means that many of us will be voting by mail, and since some states may still be counting valid ballots after election day, many experts are predicting that we may not have a final result on election night. It's important that we prepare for this possibility in advance and understand that there could be a period of intense claims and counter-claims as the final results are counted. This could be a very heated period, so we're preparing the following policies to help in the days and weeks after voting ends:

[Facebook Voting Information Center provide information on time it takes to count votes] We'll use the Voting Information Center to prepare people for the possibility that it may take a while to get official results. This information will help people understand that there is nothing illegitimate about not having a result on election night.

[Partner with Reuters and National Election Pool for authoritative information on election results] We're partnering with Reuters and the National Election Pool to provide authoritative information about election results. We'll show this in the Voting Information Center so it's easily accessible, and we'll notify people proactively as results become available. Importantly, if any candidate or campaign tries to declare victory before the results are in, we'll add a label to their post educating that official results are not yet in and directing people to the official results.

[Label posts that attempt to delegitimize the election results] We will attach an informational label to content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud. This label will provide basic authoritative information about the integrity of the election and voting methods.

[Expand Facebook policy against content with violence and harm directed at election officials] We'll enforce our violence and harm policies more broadly by expanding our definition of high-risk people to include election officials in order to help prevent any attempts to pressure or harm them, especially while they're fulfilling their critical obligations to oversee the vote counting.

[Expand Facebook policy against militia and conspiracy groups organizing or supporting violence] We've already strengthened our enforcement against militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest in the period after the elections. We have already removed thousands of these groups and removed even more from being included in our recommendations and search results. We will continue to ramp up enforcement against these groups over the coming weeks.

It's important to recognize that there may be legitimate concerns about the electoral process over the coming months. We want to make sure people can speak up if they encounter problems at the polls or have been prevented from voting, but that doesn't extend to spreading misinformation. We'll enforce the policies I outlined above as well as all our existing policies around voter suppression and voting misinformation, but to ensure there are clear and consistent rules, we are not planning to make further changes to our election-related policies between now and the official declaration of the result.

In addition to all of this, four years ago we encountered a new threat: coordinated online efforts by foreign governments and individuals to interfere in our elections. This threat hasn't gone away. Just this week, we took down a network of 13 accounts and 2 pages that were trying to mislead Americans and amplify division. We've invested heavily in our security systems and now have some of the most sophisticated teams and systems in the world to prevent these attacks. We've removed more than 100 networks worldwide engaging in coordinated inauthentic behavior over the past couple of years, including ahead of major democratic elections. However, we're increasingly seeing attempts to undermine the legitimacy of our elections from within our own borders.

I believe our democracy is strong enough to withstand this challenge and deliver a free and fair election -- even if it takes time for every vote to be counted. We've voted during global pandemics before. We can do this. But it's going to take a concerted effort by all of us -- political parties and candidates, election authorities, the media and social networks, and ultimately voters as well -- to live up to our responsibilities. We all have a part to play in making sure that the democratic process works, and that every voter can make their voice heard where it matters most -- at the ballot box.

 

Facebook enlists independent researchers and Social Science One to study how Facebook, Instagram affect 2020 US elections

On Aug. 31, 2020, Facebook announced a new research initiative it started with Social Science One committee chairs, Professors Talia Stroud of the University of Texas at Austin and Joshua Tucker of New York University. The researchers will "examine the impact of how people interact with our products, including content shared in News Feed and across Instagram, and the role of features like content ranking systems." The research projects conducted on Facebook or via data from Facebook will start soon and end in December, after the November 2020 election. Facebook "expect[s] between 200,000 and 400,000 US adults may choose to participate in the study, which could include things like taking part in surveys or agreeing to see a different product experience. We will also study trends across Facebook and Instagram – but only in aggregate."

Interestingly, Facebook believes that the research projects will not affect the outcome of the U.S. elections: "With billions of dollars spent on ads, direct mail, canvassing, organizing and get out the vote efforts, it is statistically implausible that one research initiative could impact the outcome of an election. The research has been carefully designed to not impact the outcome of the election or harm participants. The sample of participants represents approximately 0.1% of the entire US eligible voting population spread across the US. By better understanding how people use our platform during an election, we can continually enhance the integrity of the platform moving forward." 
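As a rough back-of-the-envelope check of Facebook's figure (assuming roughly 240 million eligible U.S. voters, an outside estimate that is not part of Facebook's announcement), a sample in the middle of the stated range is indeed on the order of 0.1 percent:

$$\frac{300{,}000\ \text{participants}}{240{,}000{,}000\ \text{eligible voters}} \approx 0.00125 = 0.125\%$$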

Facebook seems to gloss over the fact that a few swing voters in key swing states or precincts could ultimately determine the outcome of some of the elections. Without knowing the details of the various research projects, it's hard to evaluate the potential effect they may have on voters. 

The independent researchers are: 

  • Hunt Allcott, New York University 
  • Deen Freelon, University of North Carolina at Chapel Hill
  • Matthew Gentzkow, Stanford University
  • Sandra Gonzalez-Bailon, University of Pennsylvania
  • Andrew Guess, Princeton University
  • Shanto Iyengar, Stanford University
  • Young Mie Kim, University of Wisconsin-Madison
  • David Lazer, Northeastern University 
  • Neil Malhotra, Stanford University
  • Brendan Nyhan, Dartmouth College
  • Jennifer Pan, Stanford University
  • Jaime Settle, William & Mary
  • Talia Stroud, The University of Texas at Austin
  • Emily Thorson, Syracuse University
  • Rebekah Tromble, The George Washington University
  • Joshua A. Tucker, New York University
  • Magdalena Wojcieszak, University of California, Davis; University of Amsterdam

Facebook describes the scope of research projects as follows:

The independent academics are collaborating with Facebook researchers to design a diverse set of studies to analyze the role of Facebook and Instagram in the US 2020 election. To collect the information for the study, we are partnering with NORC at the University of Chicago, an objective, non-partisan research institution that has been studying public opinion since 1941. NORC possesses deep expertise in survey research, policy evaluation, data collection, advanced analytics and data science. The study was approved by NORC’s Institutional Review Board.

For people who have explicitly opted in to the study, we plan to combine multiple research methods, including surveys and behavioral data analysis, along with targeted changes to some participants’ experiences with Facebook and Instagram. For example, participants could see more or fewer ads in specific categories such as retail, entertainment or politics, or see more or fewer posts in News Feed related to specific topics. Other participants may be asked to stop using Facebook or Instagram for a period of time. A subset of participants may be asked to install an app on their devices – with their permission – that will log other digital media that they consume. This will allow researchers to understand more comprehensively the information environment that people experience. 

Facebook announces content moderation policy change in clampdown on QAnon and movements tied to violence

On August 19, 2020, Facebook announced a change to its community standards in moderating content on Facebook for safety reasons. Facebook's community standards already require the removal of content that calls for or advocates violence and the removal of individuals and groups promoting violence. Facebook now will restrict content that doesn't necessarily advocate violence, but is "tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon." Facebook explained, "we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior." U.S.-based militia organizations and the far-right conspiracy movement QAnon have been growing on the social site. As we reported, earlier in July 2020, Twitter suspended 7,000 users who supported QAnon conspiracy theories. Facebook followed suit by removing 790 QAnon groups on Facebook (including one group that had 200,000 members) and 10,000 Instagram accounts in August 2020.

Facebook listed seven actions it planned to take against movements and organizations tied to violence:

  1. Remove From Facebook: Facebook Pages, Groups, and Instagram accounts that are a part of harmful movements and organizations will be removed from the platform when they discuss potential violence. To help identify when violence is taking place, Facebook plans to study the technology and symbolism these groups use.
  2. Limit Recommendations: Pages, Groups, and Instagram accounts associated with harmful organizations that are not removed will not be recommended to people as Pages, Groups, or accounts they might want to follow.
  3. Reduce Ranking in News Feed: Going forward, content from these Pages and Groups will be ranked lower in News Feed. This will reduce the number of people who see these pages in their feeds on Facebook.
  4. Reduce in Search: Hashtags and titles for related content will be ranked lower in search suggestions and will not be suggested in Search Typeahead.
  5. Reviewing Related Hashtags on Instagram: On Instagram specifically, the Related Hashtags feature has been removed. This feature allowed people to view hashtags similar to those they use. Facebook says the feature could return in the future once it has introduced better safety measures to protect people using it.
  6. Prohibit Use of Ads, Commerce Surfaces and Monetization Tools: Facebook is taking a two-step approach to prohibiting ads and commerce tied to these movements. Currently, it has stopped Facebook Pages related to these movements from running ads or selling products through the Marketplace and Shop. In the future, Facebook plans to take the stronger step of prohibiting anyone from running ads that praise or support these movements.
  7. Prohibit Fundraising: Finally, fundraising associated with these groups will be prohibited. Nonprofits that identify with these movements will not be allowed to use Facebook's fundraising tools.

With the new policy, Facebook expands its existing policy against violence to include the removal of groups and individuals that pose a risk to public safety. Previously, according to Facebook, these groups could not be removed because they did not meet the rigorous criteria to be deemed dangerous to the platform. Facebook is not banning QAnon content from the site in its entirety; rather, it is restricting the ability of the individuals who follow these groups to organize on the platform. QAnon believers can still post these conspiracy theories on the platform in an individualized manner.

With the expansion of its policy, Facebook takes an important step in stopping the spread of harmful information on its platform. As a result of the expanded policy, Facebook has already been able to take down hundreds of groups and ads tied to QAnon and militia organizations and thousands tied to these movements on Instagram. Whether these changes are effective enough to keep Facebook from being used as a tool to organize violence remains to be seen, however.

--written by Bisola Oni

NYT: Facebook developing contingency plans and "kill switch" on political ads if Trump tries to "wrongly claim on the site that he won"

On Aug. 21, 2020, Mike Isaac and Sheera Frankel of the New York Times reported that Facebook is developing contingency plans in case Donald Trump "wrongly claim[s] on the site that he won," contrary to actual election results that go against him. Facebook is also weighing how it should deal with Trump's attempts to delegitimize the actual election results "by declaring that the Postal Service lost mail-in ballots or that other groups meddled with the vote." The sources are "people with knowledge of Facebook's plans." Facebook is even considering creating a "kill switch" to remove political ads that contain false election results.

Google is also discussing contingency plans for the U.S. elections, but didn't reveal further details.

It's not hard to envision another nightmare Bush v. Gore scenario, in which the result of the presidential election is contested. Trump has already attacked mail-in voting. According to the NYT, in part due to the pandemic, 9 states have mailed ballots to all voters, while 34 other states allow voters to choose mail-in voting for any reason and 7 states allow mail-in voting for certain reasons. Prof. Ned Foley has highlighted one reason this year's election may result in a contested outcome and litigation: in past elections, mail-in ballots have typically produced a "blue shift," with late-counted mail ballots favoring Democrats for reasons that are not entirely clear. Thus, in close races, the "blue shift" might flip a state from Republican to Democrat when the mail-in votes are counted, giving rise to unsubstantiated claims of foul play. For more about this scenario, read this Atlantic article.

Political bias?: WSJ reports on Facebook's alleged favoritism in content moderation toward Indian politician T. Raja Singh and the ruling Hindu nationalist party, Bharatiya Janata Party

 

On Aug. 14, 2020, Newley Purnell and Jeff Horwitz of the Wall Street Journal reported on possible political favoritism shown by Facebook in its content moderation of posts by ruling-party Hindu nationalist politicians in India. These allegations come as Facebook faces similar claims of political bias for and against Donald Trump and conservatives in the United States. The Wall Street Journal article relies on "current and former Facebook employees familiar with the matter." According to the article, in its content moderation, Facebook flagged posts by Bharatiya Janata Party (BJP) politician T. Raja Singh and other Hindu nationalist individuals and groups for “promoting violence”--which should have resulted in the suspension of his Facebook account. But Facebook executives allegedly intervened in the content moderation. Facebook's "top public-policy executive in the country, Ankhi Das, opposed applying the hate-speech rules to Mr. Singh and at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence, said the current and former employees." Ankhi Das is a top Facebook official in India and lobbies India’s government on Facebook’s behalf. Das reportedly explained her reasoning to Facebook staff that "punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users."

According to the Wall Street Journal article, Andy Stone, a Facebook spokesperson, "acknowledged that Ms. Das had raised concerns about the political fallout that would result from designating Mr. Singh a dangerous individual, but said her opposition wasn’t the sole factor in the company’s decision to let Mr. Singh remain on the platform." Facebook said it has not yet decided whether it will ban the BJP politician from the social media platform.

The WSJ article gives examples of alleged political favoritism toward the BJP party. Facebook reportedly announced its action to remove inauthentic pages tied to Pakistan’s military and the Congress party, which is BJP’s rival. However, Facebook made no such announcement when it removed BJP’s inauthentic pages because Das interceded. Facebook's safety staff determined that Singh's posts warranted a permanent ban from Facebook, but Facebook only deleted some of Singh's posts and stripped his account of verified status. In addition, Facebook's Das praised Modi in an essay in 2017, and she shared on her Facebook page "a post from a former police official, who said he is Muslim, in which he called India’s Muslims traditionally a 'degenerate community' for whom 'Nothing except purity of religion and implementation of Shariah matter.'"

On August 16, 2020, Facebook's Das filed a criminal complaint against journalist Awesh Tiwari for a post he made on his Facebook page about the WSJ article. Das alleges a comment someone posted to Tiwari's page constituted a threat against her. 

--written by Alfa Alemayehu

How are Twitter, Facebook, other Internet platforms going to stop 2020 election misinformation in U.S.?

With millions of users within the United States on Facebook and Twitter, Internet platforms are becoming a common source of current-event news, information, and socialization for many Americans. These social media platforms have been criticized for allowing the spread of misinformation regarding political issues and the COVID-19 pandemic. These criticisms began after misinformation spread during the 2016 U.S. presidential election. Many misinformation sources went unchecked, and as a result millions of Americans were convinced they were consuming legitimate news sources when they were actually reading “fake news.” It is safe to say that these platforms do not want to repeat those mistakes. Ahead of the 2020 U.S. elections, both Facebook and Twitter have taken various actions to ensure misinformation is not only identified, but either removed or flagged as untrue with links to easily accessible, credible resources for their users.

Facebook’s Plan to Stop Election Misinformation

Facebook has faced the most criticism regarding the spread of misinformation. In response, Facebook has created a voting information center, similar to its COVID-19 information center, that will appear in the Facebook and Instagram menu.

Facebook's Voting Information Center

This hub will target United States users only and will contain election information based on the user’s geographic location. For example, if you live in Orange County, Florida, information on vote-by-mail options and poll locations in that area will be provided. In addition, Facebook plans on adding notations to some posts containing election misinformation on Facebook with a link to verified information in the voting information center. Facebook will have voting alerts which will “communicate election-related updates to users through the platform.” Only official government accounts will have access to these voting alerts. Yet one sore spot appears to remain for Facebook: Facebook doesn't fact-check the posts or ads of politicians because, as CEO Mark Zuckerberg has repeatedly said, Facebook does not want to be the "arbiter of truth." 

Twitter’s Plan to Stop Election Misinformation

Twitter has been in the press recently with its warnings on the bottom of a few of President Trump’s tweets for misinformation and glorification of violence. These warnings are part of Twitter’s community standards and Civic Integrity Policy, which was enacted in May 2020. This policy prohibits the use of “Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes.” Civic processes include “political elections, censuses, and major referenda and ballot initiatives.” Twitter also banned political campaign ads starting in October 2019. Twitter recently released a statement stating its aim is to “empower every eligible person to register to vote,” by providing various resources for users to educate themselves on issues surrounding our society today. Twitter officials stated to Reuters they will be expanding their Civic Integrity Policy to address election misinformation, such as mischaracterizations of mail-in voting. But, as TechCrunch points out, "hyperpartisan content or making broad claims that elections are 'rigged' ... do not currently constitute a civic integrity policy violation, per Twitter’s guidance." Twitter also announced that it will label state-affiliated media accounts, such as those from Russia.

In addition to these policies, Facebook and Twitter, as well as other information platforms including Microsoft, Pinterest, Verizon Media, LinkedIn, and Wikimedia Foundation have all decided to work closely with U.S. government agencies to make sure the integrity of the election is not jeopardized. These tech companies have specifically discussed how the upcoming political conventions and specific scenarios arising from the election results will be handled. Though no details have been given, this seems to be a promising start to ensuring the internet does more good than bad in relation to politics.

--written by Mariam Tabrez

Revisiting Facebook's "White Paper" Proposal for "Online Content Regulation"

In the Washington Post last year, Facebook CEO Mark Zuckerberg called for governments to enact new regulations for content moderation. In February 2020, Monika Bickert, the VP for Content Policy at Facebook, published a White Paper, "Charting a Way Forward: Online Content Regulation," outlining four key questions and recommendations for governments to regulate content moderation. As the U.S. Congress is considering several bills to amend Section 230 of the Communications Decency Act and the controversy over content moderation rages on, we thought it would be worth revisiting Facebook's White Paper. It is not every day that an Internet company asks for government regulation.

The White Paper draws attention to how corporations like Facebook make numerous daily decisions on what speech is disseminated online, marking a dramatic shift from how such decisions in the past were often raised in the context of government regulation and its intersection with the free speech rights of individuals. Online content moderation marks a fundamental shift in speech regulation from governments to private corporations or Internet companies:

For centuries, political leaders, philosophers, and activists have wrestled with the question of how and when governments should place limits on freedom of expression to protect people from content that can contribute to harm. Increasingly, privately run internet platforms are making these determinations, as more speech flows through their systems. Consistent with human rights norms, internet platforms generally respect the laws of the countries in which they operate, and they are also free to establish their own rules about permissible expression, which are often more restrictive than laws. As a result, internet companies make calls every day that influence who has the ability to speak and what content can be shared on their platform. 

With this enormous power over online speech, corporations like Facebook face many demands from users and governments alike:

As a result, private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. 

Perhaps surprisingly, Facebook calls upon governments to regulate content moderation by Internet companies:

Facebook has therefore joined the call for new regulatory frameworks for online content—frameworks that ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression. This balance is necessary to protect the open internet, which is increasingly threatened—even walled off—by some regimes. Facebook wants to be a constructive partner to governments as they weigh the most effective, democratic, and workable approaches to address online content governance.

The White Paper then focused on four questions regarding the regulation of online content:

1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?

Regulators can aim to achieve the goal of reducing harmful speech in three ways: (1) increase accountability for internet companies by requiring certain systems and procedures in place, (2) require "specific performance targets" for companies to meet in moderating content that violates their policies (given that perfect enforcement is impossible), and (3) requiring that companies restrict certain forms of speech beyond what is already considered illegal content. Generally, Facebook leans towards the first approach as the best way to go. "By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts," Facebook explains. Facebook thinks the 3 approaches can be adopted in combination, and underscores that "the most important elements of any system will be due regard for each of the human rights and values at stake, as well as clarity and precision in the regulation."

2. How should regulation enhance the accountability of internet platforms to the public?

Facebook recommends that regulation require internet content moderation systems follow guidelines of being "consultative, transparent, and subject to independent oversight." "Specifically, procedural accountability regulations could include, at a minimum, requirements that companies publish their content standards, provide avenues for people to report to the company any content that appears to violate the standards, respond to such user reports with a decision, and provide notice to users when removing their content from the site." Facebook recommends that the law can incentivize or require, where appropriate, the following measures: 

  • Insight into a company’s development of its content standards.
  • A requirement to consult with stakeholders when making significant changes to standards.
  • An avenue for users to provide their own input on content standards.
  • A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.
  • Public reporting on policy enforcement (possibly including how much content was removed from the site and for what reasons, how much content was identified by the company through its own proactive means before users reported it, how often the content appears on its site, etc.). 

Facebook recommends that countries draw upon the existing approaches in the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online.

3. Should regulation require internet companies to meet certain performance targets?

Facebook sees trade-offs in government regulation that would require companies to meet performance targets in enforcing their content moderation rules. This approach would hold companies responsible for the targets they meet rather than for the systems put in place to achieve those standards. Using this metric, the government would focus on specific targets in judging a company’s adherence to content moderation standards. The prevalence of content deemed harmful is a promising area for exploring the development of company standards. Harmful content is harmful in part because of the number of people who are exposed to and engage with it. Monitoring prevalence would allow regulators to determine the extent to which harm is being done on the platform. In the case of content that is harmful even with a limited audience, such as child sexual exploitation, the metric would shift to focus on the timeliness of action taken against such content by companies. Creating thresholds for violating content also requires that companies and regulators agree on which content is deemed harmful. However, Facebook cautions that performance targets can have unintended consequences: "There are significant trade-offs regulators must consider when identifying metrics and thresholds. For example, a requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. The biggest internet companies have developed technology that allows them to detect certain types of content violations with much greater speed and accuracy than human reporting. For instance, from July through September 2019, the vast majority of content Facebook removed for violating its hate speech, self-harm, child exploitation, graphic violence, and terrorism policies was detected by the company’s technology before anyone reported it. A regulatory focus on response to user or government reports must take into account the cost it would pose to these company-led detection efforts."
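For context (the White Paper excerpt above does not spell this out), prevalence is typically measured as the share of sampled content views that contained violating material, rather than the raw count of violating posts:

$$\text{prevalence} \approx \frac{\text{views of violating content in sample}}{\text{total content views in sample}}$$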

4. Should regulation define which “harmful content” should be prohibited on internet platforms?

Governments are considering whether to develop regulations that define “harmful content,” requiring that internet platforms remove new categories of harmful speech. Facebook recommends that governments start with the freedom of expression recognized by Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Governments seeking to regulate internet content moderation have to address its complexities. In creating rules, user preferences must be taken into account, and the rules must not undermine the goal of promoting expression. Facebook advises that governments must consider the practicalities of Internet companies moderating content: "Companies use a combination of technical systems and employees and often have only the text in a post to guide their assessment. The assessment must be made quickly, as people expect violations to be removed within hours or days rather than the months that a judicial process might take. The penalty is generally removal of the post or the person’s account." Accordingly, regulations need to be enforceable at scale, as well as allow flexibility across language, trends, and content.

According to Facebook, creating regulations for social media companies has to be achieved through the combined efforts of not just lawmakers and private companies, but also the individuals who use the online platforms. Governments must also create incentives, by ensuring accountability in content moderation, that allow companies to balance safety, privacy, and freedom of expression. The internet is a global entity, and any regulations must respect the global scale and spread of communication across borders. Freedom of expression cannot be trampled, and any decision must be made with its impact on those rights in mind. Regulators also need an understanding of the technology involved and of the proportionality with which to address harmful content. Each platform is its own entity, and what works best for one may not work best for another. A well-developed framework will make the internet a safer place and allow for continued freedom of expression.

--written by Bisola Oni


Facebook removes Romanian troll farm fake accounts posing as Black voters for Trump

In July 2020, Facebook reported that it had removed nine networks of fake accounts, pages, and groups for violating its policies against coordinated inauthentic behavior (CIB). As Facebook’s July 2020 CIB report explains, CIB means coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation, including both domestic non-government campaigns and activities on behalf of foreign entities. Facebook removed:

  • 798 Facebook accounts
  • 259 Instagram accounts
  • 669 Facebook pages
  • 69 Facebook groups.

Some of the fake accounts targeted U.S. users, ahead of the 2020 U.S. election. Facebook removed 35 Facebook accounts, 3 pages, and 88 Instagram accounts originating from a suspected Romanian troll farm. Facebook explained: “The people behind this network used fake accounts — some of which had already been detected and disabled by our automated systems — to pose as Americans, amplify and comment on their own content, and manage Pages including some posing as President Trump fan Pages. This network posted about US domestic news and events, including the upcoming November election, the Trump campaign and support for the campaign by African Americans, conservative ideology, Christian beliefs, and Qanon. They also frequently reposted stories by American conservative news networks and the Trump campaign.” According to NBC News, "Troll farms — groups of people that work together to manipulate internet discourse with fake accounts — are often outsourced and purchased by foreign governments or businesses to push specific political talking points."

The Romanian troll farm Facebook accounts followed a tactic similar to that of Russian operatives who posed as Black Lives Matter supporters to undermine Black voter support for Hillary Clinton. Similarly, Facebook found that some of the fake Romanian accounts posed as Black Trump supporters. The Romanian troll farm used hashtags like "Blackpeoplevotefortrump" and "We Love Our President" to post pro-Trump comments, spread information supporting the Republican Party and QAnon, and advertise the Trump campaign. Altogether, these Romanian accounts allegedly drew around 1,600 followers on Facebook and 7,200 followers on Instagram. One example Facebook provided is shown below:

 

Fake "blackpeoplevotefortrump" account run by Romanian troll farm on Facebook

These fake accounts were taken down for engaging in coordinated inauthentic behavior, Facebook explained.

As reported by NBC News, Facebook also removed 303 Facebook accounts, 181 pages, 44 groups, and 31 Instagram accounts that were followed by 2 million people. These accounts were connected to Epoch Media Group, a pro-Trump media outlet. The accounts violated Facebook's policies against coordinated inauthentic behavior and foreign interference. This network operated from many regions around the globe and focused primarily on English and Chinese-speaking audiences globally. These accounts posted about news and comments related to the Chinese government such as the Hong Kong protests, the US administration’s policies towards China, the Falun Gong movement, conspiracy theories behind the US protests and COVID-19 misinformation, according to Facebook.  Additionally, Facebook said it linked this network to Truth Media, which was involved in Facebook’s previous investigation for violating policies against coordinated inauthentic behavior, spam and misrepresentation and which has now been banned on Facebook.

--written by Candice Wang

Summary: Mounting allegations that Facebook, Zuckerberg show political bias and favoritism for Trump and conservatives in content moderation

In the past week, more allegations surfaced that Facebook executives have been intervening in questionable ways in the company's content moderation procedure that show favoritism to Donald Trump, Breitbart, and other conservatives. These news reports cut against the narrative that Facebook has an "anti-conservative bias." For example, according to some allegations, Facebook executives didn't want to enforce existing community standards or change the community standards in a way that would flag conservatives for violations, even when the content moderators found violations by conservatives.  Below is a summary of the main allegations that Facebook has been politically biased in favor of Trump and conservatives.  This page will be updated if more allegations are reported.

Ben Smith, How Pro-Trump Forces Work the Refs in Silicon Valley, N.Y. Times (Aug. 9, 2020): "Since then, Facebook has sought to ingratiate itself to the Trump administration, while taking a harder line on Covid-19 misinformation. As the president’s backers post wild claims on the social network, the company offers the equivalent of wrist slaps — a complex fact-checking system that avoids drawing the company directly into the political fray. It hasn’t worked: The fact-checking subcontractors are harried umpires, an easy target for Trump supporters’ ire....In fact, two people close to the Facebook fact-checking process told me, the vast bulk of the posts getting tagged for being fully or partly false come from the right. That’s not bias. It’s because sites like The Gateway Pundit are full of falsehoods, and because the president says false things a lot."

Olivia Solon, Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages, NBC News (Aug. 7, 2020, 2:31 PM): "The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook's fact-checking could go public and fuel allegations that the social network was biased against conservatives. The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias."

Craig Silverman, Facebook Fired an Employee Who Collected Evidence of Right-Wing Page Getting Preferential Treatment, Buzzfeed (Aug. 6, 2020, 4:13 PM): "[S]ome of Facebook’s own employees gathered evidence they say shows Breitbart — along with other right-wing outlets and figures including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and conservative video production nonprofit Prager University — has received special treatment that helped it avoid running afoul of company policy. They see it as part of a pattern of preferential treatment for right-wing publishers and pages, many of which have alleged that the social network is biased against conservatives." Further: "Individuals that spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s 'respectful communication policy.'”

Ryan Mac, Instagram Displayed Negative Related Hashtags for Biden, but Hid them for Trump, Buzzfeed (Aug. 5, 2020, 12:17 PM): "For at least the last two months, a key Instagram feature, which algorithmically pushes users toward supposedly related content, has been treating hashtags associated with President Donald Trump and presumptive Democratic presidential nominee Joe Biden in very different ways. Searches for Biden also return a variety of pro-Trump messages, while searches for Trump-related topics only returned the specific hashtags, like #MAGA or #Trump — which means searches for Biden-related hashtags also return counter-messaging, while those for Trump do not."

Ryan Mac & Craig Silverman, "Hurting People at Scale": Facebook's Employees Reckon with the Social Network They've Built, Buzzfeed (July 23, 2020, 12:59 PM): Yaël Eisenstat, Facebook's former election ads integrity lead "said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned."

Elizabeth Dwoskin, Craig Timberg, & Tony Romm, Zuckerberg once wanted to sanction Trump. Then Facebook wrote rules that accommodated him., Wash. Post (June 28, 2020, 6:25 PM): "But that started to change in 2015, as Trump’s candidacy picked up speed. In December of that year, he posted a video in which he said he wanted to ban all Muslims from entering the United States. The video went viral on Facebook and was an early indication of the tone of his candidacy....Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines....In spring of 2016, Zuckerberg was also talked out of his desire to write a post specifically condemning Trump for his calls to build a wall between the United States and Mexico, after advisers in Washington warned it could look like choosing sides, according to Dex Torricke-Barton, one of Zuckerberg’s former speechwriters."  

Regarding election interference: "Facebook’s security engineers in December 2016 presented findings from a broad internal investigation, known as Project P, to senior leadership on how false and misleading news reports spread so virally during the election. When Facebook’s security team highlighted dozens of pages that had peddled false news reports, senior leaders in Washington, including Kaplan, opposed shutting them down immediately, arguing that doing so would disproportionately impact conservatives, according to people familiar with the company’s thinking. Ultimately, the company shut down far fewer pages than were originally proposed while it began developing a policy to handle these issues."

Craig Timberg, How conservatives learned to wield power inside Facebook, Wash. Post (Feb. 20, 2020, 1:20 PM): "In a world of perfect neutrality, which Facebook espouses as its goal, the political tilt of the pages shouldn’t have mattered. But in a videoconference between Facebook’s Washington office and its Silicon Valley headquarters in December 2016, the company’s most senior Republican, Joel Kaplan, voiced concerns that would become familiar to those within the company. 'We can’t remove all of it because it will disproportionately affect conservatives,' said Kaplan, a former George W. Bush White House official and now the head of Facebook’s Washington office, according to people familiar with the meeting who spoke on the condition of anonymity to protect professional relationships."

Related articles about Facebook

Ben Smith, What's Facebook's Deal with Donald Trump?, N.Y. Times (June 21, 2020): "Mr. Trump’s son-in-law, Jared Kushner, pulled together the dinner on Oct. 22 on short notice after he learned that Mr. Zuckerberg, the Facebook founder, and his wife, Priscilla Chan, would be in Washington for a cryptocurrency hearing on Capitol Hill, a person familiar with the planning said. The dinner, the person said, took place in the Blue Room on the first floor of the White House. The guest list included Mr. Thiel, a Trump supporter, and his husband, Matt Danzeisen; Melania Trump; Mr. Kushner; and Ivanka Trump. The president, a person who has spoken to Mr. Zuckerberg said, did most of the talking. The atmosphere was convivial, another person who got an account of the dinner said. Mr. Trump likes billionaires and likes people who are useful to him, and Mr. Zuckerberg right now is both."

Deepa Seetharaman, How a Facebook Employee Helped Trump Win--But Switched Sides for 2020, Wall St. J (Nov. 24, 2019, 3:18 PM): "One of the first things Mr. Barnes and his team advised campaign officials to do was to start running fundraising ads targeting Facebook users who liked or commented on Mr. Trump’s posts over the past month, using a product now called 'engagement custom audiences.' The product, which Mr. Barnes hand-coded, was available to a small group, including Republican and Democratic political clients. (The ad tool was rolled out widely around Election Day.) Within the first few days, every dollar that the Trump campaign spent on these ads yielded $2 to $3 in contributions, said Mr. Barnes, who added that the campaign raised millions of dollars in those first few days. Mr. Barnes frequently flew to Texas, sometimes staying for four days at a time and logging 12-hour days. By July, he says, he was solely focused on the Trump campaign. When on-site in the building that served as the Trump campaign’s digital headquarters in San Antonio, he sometimes sat a few feet from Mr. Parscale. The intense pace reflected Trump officials’ full embrace of Facebook’s platform, in the absence of a more traditional campaign structure including donor files and massive email databases."

Facebook removes Donald Trump post regarding children "almost immune" for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account for same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “this it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation." 

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. “The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears it complied and removed the tweet.

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, 23,170 minors tested positive for coronavirus by July 2020, for example. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children:

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, did examine the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses9,10,11, or from non-specific protection resulting from recent infection by other respiratory viruses12, which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because, thus far, children have reported fewer positive tests than adults. According to the Mayo Clinic Staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Speaking of the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."
