The Free Internet Project


Revisiting Facebook's "White Paper" Proposal for "Online Content Regulation"

In the Washington Post last year, Facebook CEO Mark Zuckerberg called for governments to enact new regulations for content moderation. In February 2020, Monika Bickert, the VP for Content Policy at Facebook, published a White Paper, "Charting a Way Forward: Online Content Regulation," outlining four key questions and recommendations for governments to regulate content moderation. As the U.S. Congress considers several bills to amend Section 230 of the Communications Decency Act and the controversy over content moderation rages on, we thought it worth revisiting Facebook's White Paper. It is not every day that an Internet company asks for government regulation.

The White Paper draws attention to how corporations like Facebook make numerous daily decisions about what speech is disseminated online, a dramatic shift from the past, when such decisions were typically raised in the context of government regulation and its intersection with the free speech rights of individuals. Online content moderation marks a fundamental shift in speech regulation from governments to private corporations or Internet companies:

For centuries, political leaders, philosophers, and activists have wrestled with the question of how and when governments should place limits on freedom of expression to protect people from content that can contribute to harm. Increasingly, privately run internet platforms are making these determinations, as more speech flows through their systems. Consistent with human rights norms, internet platforms generally respect the laws of the countries in which they operate, and they are also free to establish their own rules about permissible expression, which are often more restrictive than laws. As a result, internet companies make calls every day that influence who has the ability to speak and what content can be shared on their platform. 

Wielding this enormous power over online speech, corporations like Facebook are beset with competing demands from users and governments alike:

As a result, private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. 

Perhaps surprisingly, Facebook calls upon governments to regulate content moderation by Internet companies:

Facebook has therefore joined the call for new regulatory frameworks for online content—frameworks that ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression. This balance is necessary to protect the open internet, which is increasingly threatened—even walled off—by some regimes. Facebook wants to be a constructive partner to governments as they weigh the most effective, democratic, and workable approaches to address online content governance.

The White Paper then focuses on four questions regarding the regulation of online content:

1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?

Regulators can aim to reduce harmful speech in three ways: (1) increase accountability for internet companies by requiring certain systems and procedures to be in place, (2) require "specific performance targets" for companies to meet in moderating content that violates their policies (given that perfect enforcement is impossible), and (3) require that companies restrict certain forms of speech beyond what is already considered illegal content. Generally, Facebook favors the first approach. "By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts," Facebook explains. Facebook believes the three approaches can be adopted in combination, and underscores that "the most important elements of any system will be due regard for each of the human rights and values at stake, as well as clarity and precision in the regulation."

2. How should regulation enhance the accountability of internet platforms to the public?

Facebook recommends that regulation require internet content moderation systems to be "consultative, transparent, and subject to independent oversight." "Specifically, procedural accountability regulations could include, at a minimum, requirements that companies publish their content standards, provide avenues for people to report to the company any content that appears to violate the standards, respond to such user reports with a decision, and provide notice to users when removing their content from the site." Facebook recommends that the law incentivize or require, where appropriate, the following measures:

  • Insight into a company’s development of its content standards.
  • A requirement to consult with stakeholders when making significant changes to standards.
  • An avenue for users to provide their own input on content standards.
  • A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.
  • Public reporting on policy enforcement (possibly including how much content was removed from the site and for what reasons, how much content was identified by the company through its own proactive means before users reported it, how often the content appears on its site, etc.). 

Facebook recommends that countries draw upon the existing approaches in the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online.

3. Should regulation require internet companies to meet certain performance targets?

Facebook sees trade-offs in government regulation that would require companies to meet performance targets in enforcing their content moderation rules. This approach would hold companies accountable for the targets they meet rather than for the systems they put in place to achieve those targets; the government would judge a company’s adherence to content moderation standards by specific metrics. The prevalence of content deemed harmful is a promising metric for developing company standards: much content is harmful in proportion to the number of people who are exposed to and engage with it. Monitoring prevalence would allow regulators to determine the extent of harm being done on the platform. For content that is harmful even with a limited audience, such as child sexual exploitation, the metric would shift to focus on the timeliness of the action companies take against such content. Setting thresholds for violating content also requires that companies and regulators agree on which content is deemed harmful. However, Facebook cautions that performance targets can have unintended consequences: "There are significant trade-offs regulators must consider when identifying metrics and thresholds. For example, a requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. The biggest internet companies have developed technology that allows them to detect certain types of content violations with much greater speed and accuracy than human reporting. For instance, from July through September 2019, the vast majority of content Facebook removed for violating its hate speech, self-harm, child exploitation, graphic violence, and terrorism policies was detected by the company’s technology before anyone reported it. A regulatory focus on response to user or government reports must take into account the cost it would pose to these company-led detection efforts."
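To make the prevalence metric concrete, below is a minimal sketch of how such a measurement might work. This is an illustration only: the function name, sampling approach, and data shapes are our assumptions, not Facebook's actual methodology, though Facebook's transparency reports describe prevalence in roughly these terms (the share of sampled content views that land on violating content).

    # Illustrative sketch of a "prevalence" metric: the share of sampled
    # content views that landed on policy-violating content. All names and
    # data shapes here are hypothetical, not Facebook's actual pipeline.

    def prevalence(view_log):
        """view_log: one dict per sampled content view, e.g.
        {"post_id": "p1", "violates_policy": True}"""
        if not view_log:
            return 0.0
        violating = sum(1 for view in view_log if view["violates_policy"])
        return violating / len(view_log)

    # Toy example: 2 of 5 sampled views hit violating content -> 40%.
    sample = [
        {"post_id": "p1", "violates_policy": True},
        {"post_id": "p2", "violates_policy": False},
        {"post_id": "p3", "violates_policy": False},
        {"post_id": "p4", "violates_policy": True},
        {"post_id": "p5", "violates_policy": False},
    ]
    print(f"prevalence: {prevalence(sample):.0%}")  # prints "prevalence: 40%"

A prevalence-style target regulates the harm users actually experience, whereas a takedown-deadline target (like the 24-hour example above) regulates response time and, as the White Paper notes, can skew incentives away from proactive detection.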

4. Should regulation define which “harmful content” should be prohibited on internet platforms?

Governments are considering whether to develop regulations that define “harmful content” and require internet platforms to remove new categories of harmful speech. Facebook recommends that governments start with the freedom of expression recognized by Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Governments seeking to regulate internet content moderation must grapple with its complexities: rules must take user preferences into account without undermining the goal of promoting expression. Facebook advises that governments consider the practicalities of how Internet companies moderate content: "Companies use a combination of technical systems and employees and often have only the text in a post to guide their assessment. The assessment must be made quickly, as people expect violations to be removed within hours or days rather than the months that a judicial process might take. The penalty is generally removal of the post or the person’s account." Accordingly, regulations need to be enforceable at scale, and they must allow flexibility across language, trends, and types of content.

According to Facebook, creating regulations for social media companies requires the combined efforts of not just lawmakers and private companies, but also the individuals who use the online platforms. Governments should create incentives, by ensuring accountability in content moderation, that allow companies to balance safety, privacy, and freedom of expression. The internet is a global entity, and regulations must respect the global scale and spread of communication across borders. Freedom of expression cannot be trampled, and any decision must be made with its impact on these rights in mind. Regulators also need an understanding of the technology involved and of how to address harmful content proportionately. Each platform is its own entity, and what works best for one may not work best for another. A well-developed framework will make the internet a safer place and allow for continued freedom of expression.

--written by Bisola Oni

Facebook removes Romanian troll farm fake accounts posing as Black voters for Trump

In July 2020, Facebook reported that it had removed nine networks of fake accounts, pages, and groups for violating its policies against coordinated inauthentic behavior (CIB). As Facebook’s July 2020 CIB report explains, CIB means coordinated efforts to manipulate public debate for a strategic goal in which fake accounts are central to the operation, including both domestic non-government campaigns and activities on behalf of foreign entities. Facebook removed:

  • 798 Facebook accounts
  • 259 Instagram accounts
  • 669 Facebook pages
  • 69 Facebook groups.

Some of the fake accounts targeted U.S. users ahead of the 2020 U.S. election. Facebook removed 35 Facebook accounts, 3 pages, and 88 Instagram accounts originating from a suspected Romanian troll farm. Facebook explained: “The people behind this network used fake accounts — some of which had already been detected and disabled by our automated systems — to pose as Americans, amplify and comment on their own content, and manage Pages including some posing as President Trump fan Pages. This network posted about US domestic news and events, including the upcoming November election, the Trump campaign and support for the campaign by African Americans, conservative ideology, Christian beliefs, and Qanon. They also frequently reposted stories by American conservative news networks and the Trump campaign.” According to NBC News, "Troll farms — groups of people that work together to manipulate internet discourse with fake accounts — are often outsourced and purchased by foreign governments or businesses to push specific political talking points."

The Romanian troll farm accounts followed a tactic similar to that of Russian operatives who posed as Black Lives Matter supporters in 2016 to undermine Black voter support for Hillary Clinton. Similarly, Facebook found that some of the fake Romanian accounts posed as Black Trump supporters. The troll farm used hashtags like “Blackpeoplevotefortrump” and "We Love Our President" to post pro-Trump comments, spread information supporting the Republican Party and QAnon, and advertise the Trump campaign. Altogether, these Romanian accounts allegedly drew around 1,600 followers on Facebook and 7,200 followers on Instagram. One example Facebook provided is shown below:

[Image: fake "blackpeoplevotefortrump" account run by Romanian troll farm on Facebook]

These fake accounts were taken down for engaging in coordinated inauthentic behavior, Facebook explained.

As reported by NBC News, Facebook also removed 303 Facebook accounts, 181 pages, 44 groups, and 31 Instagram accounts that were followed by 2 million people. These accounts were connected to Epoch Media Group, a pro-Trump media outlet, and violated Facebook's policies against coordinated inauthentic behavior and foreign interference. The network operated from many regions around the globe and focused primarily on English- and Chinese-speaking audiences. These accounts posted news and comments related to the Chinese government, including the Hong Kong protests, the US administration’s policies towards China, the Falun Gong movement, conspiracy theories behind the US protests, and COVID-19 misinformation, according to Facebook. Additionally, Facebook said it linked this network to Truth Media, which was involved in Facebook’s previous investigation for violating policies against coordinated inauthentic behavior, spam, and misrepresentation, and which has now been banned from Facebook.

--written by Candice Wang

D.C. Circuit Blocks Trump Appointee Michael Pack from Replacing Open Technology Fund's Leadership

Granting an emergency motion, the D.C. Circuit Court of Appeals issued an injunction blocking Michael Pack, the new CEO of the U.S. Agency for Global Media (USAGM) appointed by Donald Trump, from replacing the existing Open Technology Fund (OTF) board members with his own picks. The OTF is a non-profit organization, one of several U.S.-funded global media organizations the USAGM oversees. The OTF has funded programs in sixty countries, providing nearly two billion people, including journalists, with secure and uncensored communication tools, according to the Washington Post.

On July 21, 2020, the D.C. Court of Appeals disagreed with the lower court's rejection of the Open Technology Fund's motion for a preliminary injunction. In its order, the D.C. Court of Appeals found that OTF had shown a likelihood of success on its claim that Pack lacked statutory authority "to remove and replace members of OTF's board." The court also found OTF would suffer irreparable harm without the injunction: “[T]he government's actions have jeopardized OTF’s relationships with its partner organizations, leading its partner organizations to fear for their safety.” The injunction restored the status quo – it reinstated the previous OTF board until the court issues a decision in the expedited appeal.

A lawyer for OTF board members, Deepak Gupta, applauded the decision. Gupta stated Pack’s actions don’t “just harm OTF and the people who work there. OTF works to advance internet freedom in repressive regimes around the world and Pack’s actions have put the safety of activists and journalists, in places like Tehran and Hong Kong, at risk.”

One day after the D.C. Circuit issued its order, the D.C. Attorney General, Karl Racine, also sued the USAGM, alleging Pack’s attempted takeover of the OTF violated a local law concerning the governance of non-profit organizations. Plaintiffs in this new lawsuit include OTF board members, former ambassadors Ryan Crocker and Karen Kornbluh, public relations executive Michael Kempner, and Democratic technology policy advisor Ben Scott.

Pack took office in June 2020. Shortly after his confirmation, Pack fired the leadership of the taxpayer-funded media organizations the USAGM oversees and replaced them with career officials. The ousted leaders headed international media outlets including Radio Free Europe, Radio Free Asia, and Middle East Broadcasting Networks. Pack’s actions in his first month raised concerns from many, including House Representatives and Senators, and are seen as an effort to foster favorable media coverage of Trump and his Administration.

Eleven Democrats in the U.S. House of Representatives penned a letter to the heads of the House Appropriations Subcommittee on State, Foreign Operations, and Related Programs, stating they were “deeply concerned” about the firings of qualified leadership and “reports that USAGM has frozen funds and grants” for programs related to censorship evasion and internet freedom in Hong Kong and elsewhere.

Seven U.S. Senators, including GOP Senators Rubio and Graham, wrote a letter directly to Pack. In the letter, Rubio and his colleagues wrote:

The termination of qualified, expert staff and network heads for no specific reason as well as the removal of their boards raises questions about the preservation of these entities and their ability to implement their statutory missions now and in the future. These actions, which came without any consultation with Congress, let alone notification, raise serious questions about the future of the U.S. Agency for Global Media (USAGM) under your leadership.

Another major worry concerns the switch to closed-source technologies, as opposed to the open-source technologies that OTF has built its reputation on. Open-source software publishes its source code publicly, so it can be studied, verified, and improved upon by others; closed-source software does not.

“This is really the worst-case scenario,” Jillian York, the Director of International Freedom of Expression at the Electronic Frontier Foundation, told VICE News. She continued, “I think the really dangerous thing here is that the new leadership is under pressure to fund these closed-source technologies.”


Revisiting the Net Neutrality debate in the U.S. ahead of the 2020 Election

With the COVID-19 pandemic, people are at home more than ever. Whether for work, school, or recreation, the internet is being used more than ever before, as the New York Times reported. The pandemic has pushed society to rely on online technologies, including professional video calls, virtual interviewing, and entire educational courses taught online. Though most of us take the internet’s speed and extensive supply of content for granted, the rules for Internet access providers are controversial.

The main area of contention is whether the government should require Internet access providers (e.g., Comcast, Verizon) to abide by principles of net neutrality. Net neutrality “is the principle that Internet Service Providers (ISPs) equitably provide consumer access to any legal online content and application, regardless of the source.” Thus, under net neutrality, ISPs such as Verizon, AT&T, and Comcast Xfinity “cannot block or slow legal content flowing through their networks" or show favoritism to some websites over others. When the government and ISPs discuss net neutrality, three main practices are in question: blocking, throttling, and paid prioritization. Blocking refers to an ISP preventing its customers from accessing content from legal sources; for example, Verizon intentionally blocking its customers from seeing the AT&T website. Throttling is when an ISP slows or interferes with the transmission of content a customer is seeking; for example, AT&T allowing a customer to see only half of a website, or making it load so slowly that the customer gives up. Throttling tends to go hand-in-hand with the third practice, paid prioritization, in which an ISP charges content providers additional fees to have their content delivered to customers quickly. For example, if AT&T charges CNN a prioritization fee, AT&T will ensure CNN can be accessed quickly; it might also throttle competing news outlets’ websites, slowing customers’ access to them in order to prioritize CNN.
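To make the three practices concrete, here is a deliberately simplified sketch contrasting a neutral queue with paid prioritization. The site names, fee arrangement, and data shapes are hypothetical, and real traffic management happens at the packet level inside routers, not in application code; this is a toy model only.

    # Toy model (not any real ISP's system) of neutral vs. non-neutral delivery.
    # Each request is a (website, arrival_order) pair.

    PAID_PRIORITY = {"cnn.com"}  # hypothetical: sites paying the ISP for priority
    BLOCKED = set()              # a neutral ISP blocks no legal content

    def neutral_schedule(requests):
        """Net neutrality: first-come, first-served, regardless of source."""
        return [r for r in requests if r[0] not in BLOCKED]

    def paid_priority_schedule(requests):
        """Non-neutral: requests for paying sites jump the queue, so
        competitors are effectively pushed to the back of the line."""
        fast = [r for r in requests if r[0] in PAID_PRIORITY]
        slow = [r for r in requests if r[0] not in PAID_PRIORITY]
        return fast + slow

    requests = [("smallnews.org", 1), ("cnn.com", 2), ("smallnews.org", 3)]
    print(neutral_schedule(requests))        # arrival order preserved: 1, 2, 3
    print(paid_priority_schedule(requests))  # cnn.com served first despite arriving later

In this toy model, blocking would amount to adding a competitor's site to BLOCKED, while throttling is the softer version: delaying, rather than dropping, the competitor's traffic.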

FCC’s Regulatory Structure

The Federal Communications Commission (FCC) is the government agency that monitors ISPs’ actions and strategizes how best to protect consumers in the internet space. The level of transparency ISPs must meet in relation to blocking, throttling, and paid prioritization depends on what the FCC requires. The president of the United States chooses who becomes the chairman of the FCC, so the agency’s actions tend to be far from apolitical and change depending on who is in office.

The initial rules governing the telecommunications world were established in 1934 with the Communications Act. The FCC was given the responsibility of overseeing and regulating “telephone, telegraph, and radio communication.” This was later extended to technologies such as cable, broadcasting, and satellite television. Within the Communications Act, two regulatory frameworks were established: Information Services and Telecommunication Services (also known as Common Carriers). Information services are “platforms that generate, store, transform, retrieve, and process information via telecommunications.” Information services are regulated more lightly than common carriers. Common carriers are “services transmitting energy for hire, including telecommunication services.” Common carriers have historically faced more regulation, similar to gas, electric, and telephone providers, including limitations on prices and on the nature of the services provided.

Regulatory Framework Under President Obama

In 2010, the FCC established the Open Internet Order, which included new rules meant to provide broad internet access and create a “neutral network.” The order required ISPs to be completely transparent about any “blocking and unreasonable discrimination of content” they engaged in, and an Internet Advisory Committee was established within the FCC to monitor the ISPs and enforce the rules. A challenge to the Open Internet Order was brought, claiming that ISPs are information services and were therefore wrongfully subjected to heavy regulation. The D.C. Circuit Court agreed, ruling that information services cannot be regulated in this manner and that the anti-blocking and nondiscrimination rules were too restrictive for information services. However, the transparency requirement was upheld.

Afterwards, the FCC revised its policy and in 2015 issued the “Net Neutrality Rules for Open Internet" under FCC Chairman Tom Wheeler. This policy held ISPs to the same standard as telephone companies, establishing them as common carriers, which can be regulated to a greater extent. The order prohibited ISPs from any blocking, throttling, or paid prioritization. It also established general conduct standards to protect consumers from discriminatory practices and continued to enforce transparency rules. The D.C. Circuit Court upheld this new policy as lawful because ISPs were now categorized as common carriers. For a timeline of net neutrality under the Obama administration, visit here.

Regulatory Framework Under President Trump

Shortly after President Trump’s election, the new FCC leadership under Chairman Ajit Pai proceeded to dismantle the prior net neutrality rules. In 2017, a new FCC order titled "Restoring Internet Freedom" was issued. It reclassified ISPs as information services, shielding them from extensive regulation, and eliminated the rules prohibiting blocking, throttling, and paid prioritization, though transparency about these practices is still required. The FCC also gave the Federal Trade Commission the authority to take action against any terms-of-service violations by the ISPs, thus almost completely eliminating the FCC’s regulatory power. The Trump administration claims that any state interference in the ISPs’ work would go against the federal regulatory framework and be too controlling of the industry. The D.C. Circuit upheld the FCC's repeal of net neutrality, but remanded the case for consideration of several issues.

Since this policy was established, many net neutrality advocates, consumer groups, and other concerned individuals have tried to sue the FCC. Also, states such as California, Oregon, Vermont, and Washington have enacted their own net neutrality rules that promote more government regulation on the ISPs that are providing services in their states. However, the federal government is challenging California's net neutrality law.

Regulatory Framework Proposed by the Biden Campaign

Former Vice President Biden’s campaign crafted a task force document with Senator Bernie Sanders and other left-leaning figures to create an internet plan that restores net neutrality. As reported by Gizmodo, Biden has committed to investing $20 billion in rural broadband internet access, and he believes that more public investment in broadband infrastructure can benefit Americans from all backgrounds. If elected, Biden promises to re-establish the rules prohibiting blocking, throttling, and paid prioritization, most likely through the reclassification of ISPs as common carriers. Though as president Biden would not have direct say over net neutrality, he would be able to choose a new FCC chairman who aligns with his policy goals.

Regardless of your political alignment, access to the internet has become more important than ever. Whether the U.S. government should adopt net neutrality is a controversy that divides the two presidential candidates and two political parties. 

--written by Mariam Tabrez

Summary: Mounting Allegations That Facebook and Zuckerberg Show Political Bias and Favoritism for Trump and Conservatives in Content Moderation

In the past week, more allegations surfaced that Facebook executives have intervened in the company's content moderation procedures in questionable ways that show favoritism to Donald Trump, Breitbart, and other conservatives. These news reports cut against the narrative that Facebook has an "anti-conservative bias." For example, according to some allegations, Facebook executives didn't want to enforce existing community standards, or change the community standards, in a way that would flag conservatives for violations, even when content moderators found violations by conservatives. Below is a summary of the main allegations that Facebook has been politically biased in favor of Trump and conservatives. This page will be updated if more allegations are reported.

Ben Smith, How Pro-Trump Forces Work the Refs in Silicon Valley, N.Y. Times (Aug. 9, 2020): "Since then, Facebook has sought to ingratiate itself to the Trump administration, while taking a harder line on Covid-19 misinformation. As the president’s backers post wild claims on the social network, the company offers the equivalent of wrist slaps — a complex fact-checking system that avoids drawing the company directly into the political fray. It hasn’t worked: The fact-checking subcontractors are harried umpires, an easy target for Trump supporters’ ire....In fact, two people close to the Facebook fact-checking process told me, the vast bulk of the posts getting tagged for being fully or partly false come from the right. That’s not bias. It’s because sites like The Gateway Pundit are full of falsehoods, and because the president says false things a lot."

Olivia Solon, Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages, NBC News (Aug. 7, 2020, 2:31 PM): "The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook's fact-checking could go public and fuel allegations that the social network was biased against conservatives. The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias."

Craig Silverman, Facebook Fired an Employee Who Collected Evidence of Right-Wing Pages Getting Preferential Treatment, Buzzfeed (Aug. 6, 2020, 4:13 PM): "[S]ome of Facebook’s own employees gathered evidence they say shows Breitbart — along with other right-wing outlets and figures including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and conservative video production nonprofit Prager University — has received special treatment that helped it avoid running afoul of company policy. They see it as part of a pattern of preferential treatment for right-wing publishers and pages, many of which have alleged that the social network is biased against conservatives." Further: "Individuals that spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s 'respectful communication policy.'"

Ryan Mac, Instagram Displayed Negative Related Hashtags for Biden, but Hid them for Trump, Buzzfeed (Aug. 5, 2020, 12:17 PM): "For at least the last two months, a key Instagram feature, which algorithmically pushes users toward supposedly related content, has been treating hashtags associated with President Donald Trump and presumptive Democratic presidential nominee Joe Biden in very different ways. Searches for Biden also return a variety of pro-Trump messages, while searches for Trump-related topics only returned the specific hashtags, like #MAGA or #Trump — which means searches for Biden-related hashtags also return counter-messaging, while those for Trump do not."

Ryan Mac & Craig Silverman, "Hurting People at Scale": Facebook's Employees Reckon with the Social Network They've Built, Buzzfeed (July 23, 2020, 12:59 PM): Yaël Eisenstat, Facebook's former election ads integrity lead "said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned."

Elizabeth Dwoskin, Craig Timberg, & Tony Romm, Zuckerberg once wanted to sanction Trump. Then Facebook wrote rules that accommodated him., Wash. Post (June 28, 2020, 6:25 PM): "But that started to change in 2015, as Trump’s candidacy picked up speed. In December of that year, he posted a video in which he said he wanted to ban all Muslims from entering the United States. The video went viral on Facebook and was an early indication of the tone of his candidacy....Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines....In spring of 2016, Zuckerberg was also talked out of his desire to write a post specifically condemning Trump for his calls to build a wall between the United States and Mexico, after advisers in Washington warned it could look like choosing sides, according to Dex Torricke-Barton, one of Zuckerberg’s former speechwriters."  

Regarding election interference: "Facebook’s security engineers in December 2016 presented findings from a broad internal investigation, known as Project P, to senior leadership on how false and misleading news reports spread so virally during the election. When Facebook’s security team highlighted dozens of pages that had peddled false news reports, senior leaders in Washington, including Kaplan, opposed shutting them down immediately, arguing that doing so would disproportionately impact conservatives, according to people familiar with the company’s thinking. Ultimately, the company shut down far fewer pages than were originally proposed while it began developing a policy to handle these issues."

Craig Timberg, How conservatives learned to wield power inside Facebook, Wash. Post (Feb. 20, 2020, 1:20 PM): "In a world of perfect neutrality, which Facebook espouses as its goal, the political tilt of the pages shouldn’t have mattered. But in a videoconference between Facebook’s Washington office and its Silicon Valley headquarters in December 2016, the company’s most senior Republican, Joel Kaplan, voiced concerns that would become familiar to those within the company. 'We can’t remove all of it because it will disproportionately affect conservatives,' said Kaplan, a former George W. Bush White House official and now the head of Facebook’s Washington office, according to people familiar with the meeting who spoke on the condition of anonymity to protect professional relationships."

Related articles about Facebook

Ben Smith, What's Facebook's Deal with Donald Trump?, NY Times (June 21, 2020): "Mr. Trump’s son-in-law, Jared Kushner, pulled together the dinner on Oct. 22 on short notice after he learned that Mr. Zuckerberg, the Facebook founder, and his wife, Priscilla Chan, would be in Washington for a cryptocurrency hearing on Capitol Hill, a person familiar with the planning said. The dinner, the person said, took place in the Blue Room on the first floor of the White House. The guest list included Mr. Thiel, a Trump supporter, and his husband, Matt Danzeisen; Melania Trump; Mr. Kushner; and Ivanka Trump. The president, a person who has spoken to Mr. Zuckerberg said, did most of the talking. The atmosphere was convivial, another person who got an account of the dinner said. Mr. Trump likes billionaires and likes people who are useful to him, and Mr. Zuckerberg right now is both."

Deepa Seetharaman, How a Facebook Employee Helped Trump Win--But Switched Sides for 2020, Wall St. J (Nov. 24, 2019, 3:18 PM): "One of the first things Mr. Barnes and his team advised campaign officials to do was to start running fundraising ads targeting Facebook users who liked or commented on Mr. Trump’s posts over the past month, using a product now called 'engagement custom audiences.' The product, which Mr. Barnes hand-coded, was available to a small group, including Republican and Democratic political clients. (The ad tool was rolled out widely around Election Day.) Within the first few days, every dollar that the Trump campaign spent on these ads yielded $2 to $3 in contributions, said Mr. Barnes, who added that the campaign raised millions of dollars in those first few days. Mr. Barnes frequently flew to Texas, sometimes staying for four days at a time and logging 12-hour days. By July, he says, he was solely focused on the Trump campaign. When on-site in the building that served as the Trump campaign’s digital headquarters in San Antonio, he sometimes sat a few feet from Mr. Parscale. The intense pace reflected Trump officials’ full embrace of Facebook’s platform, in the absence of a more traditional campaign structure including donor files and massive email databases."

Claiming "national emergency," Trump Issues Executive Orders Banning US Transactions with TikTok and WeChat

Late on Thursday, Aug. 6, 2020, Donald Trump issued two executive orders, one against TikTok and the other against Tencent's messaging platform WeChat.  Claiming a "national emergency," Trump invoked the authority of the "President by the Constitution and the laws of the United States of America, including the International Emergency Economic Powers Act (50 U.S.C. 1701 et seq.) (IEEPA), the National Emergencies Act (50 U.S.C. 1601 et seq.), and section 301 of title 3, United States Code." For a good summary of the International Emergency Economic Powers Act, read Anupam Chander's recent Washington Post op-ed and this NPR interview with Elizabeth Goitein.

The Executive Orders prohibit, "to the extent permitted under applicable law," any transactions with "ByteDance Ltd. (a.k.a. Zìjié Tiàodòng), Beijing, China, or its subsidiaries, in which any such company has any interest" (ByteDance owns TikTok) and with WeChat, starting in 45 days. The Secretary of Commerce is to identify the transactions prohibited by the order 45 days after the date of the order. Each Executive Order also prohibits "[a]ny transaction by a United States person or within the United States that evades or avoids, has the purpose of evading or avoiding, causes a violation of, or attempts to violate the prohibition." To justify this emergency action, Trump made the following charges against ByteDance and TikTok:

I, DONALD J. TRUMP, President of the United States of America, find that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China (China) continues to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.

TikTok, a video-sharing mobile application owned by the Chinese company ByteDance Ltd., has reportedly been downloaded over 175 million times in the United States and over one billion times globally.  TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

TikTok also reportedly censors content that the Chinese Communist Party deems politically sensitive, such as content concerning protests in Hong Kong and China’s treatment of Uyghurs and other Muslim minorities.  This mobile application may also be used for disinformation campaigns that benefit the Chinese Communist Party, such as when TikTok videos spread debunked conspiracy theories about the origins of the 2019 Novel Coronavirus.

These risks are real.  The Department of Homeland Security, Transportation Security Administration, and the United States Armed Forces have already banned the use of TikTok on Federal Government phones.  The Government of India recently banned the use of TikTok and other Chinese mobile applications throughout the country; in a statement, India’s Ministry of Electronics and Information Technology asserted that they were “stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India.”  American companies and organizations have begun banning TikTok on their devices.  The United States must take aggressive action against the owners of TikTok to protect our national security.

Trump made similar charges against WeChat:

WeChat, a messaging, social media, and electronic payment application owned by the Chinese company Tencent Holdings Ltd., reportedly has over one billion users worldwide, including users in the United States.  Like TikTok, WeChat automatically captures vast swaths of information from its users.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information.  In addition, the application captures the personal and proprietary information of Chinese nationals visiting the United States, thereby allowing the Chinese Communist Party a mechanism for keeping tabs on Chinese citizens who may be enjoying the benefits of a free society for the first time in their lives.  For example, in March 2019, a researcher reportedly discovered a Chinese database containing billions of WeChat messages sent from users in not only China but also the United States, Taiwan, South Korea, and Australia.  WeChat, like TikTok, also reportedly censors content that the Chinese Communist Party deems politically sensitive and may also be used for disinformation campaigns that benefit the Chinese Communist Party.  These risks have led other countries, including Australia and India, to begin restricting or banning the use of WeChat.  The United States must take aggressive action against the owner of WeChat to protect our national security.

In a company blog post, TikTok said: "We will pursue all remedies available to us in order to ensure that the rule of law is not discarded and that our company and our users are treated fairly – if not by the Administration, then by the US courts." TikTok also called upon its 100 million users in the U.S. to make their voices heard in the White House: "We want the 100 million Americans who love our platform because it is your home for expression, entertainment, and connection to know: TikTok has never, and will never, waver in our commitment to you. We prioritize your safety, security, and the trust of our community – always. As TikTok users, creators, partners, and family, you have the right to express your opinions to your elected representatives, including the White House. You have the right to be heard."

In a report by CNN, a Tencent spokesperson said it is reviewing the Executive Order. There is some confusion about the scope of the Executive Order, which names any transactions with "Tencent Holdings" (not just WeChat). Tencent is a massive global conglomerate with many products and services (e.g., videogames by Riot Games, such as "League of Legends"), not just WeChat. A White House representative later confirmed to the LA Times that the order applies only to WeChat, not all of Tencent.

Meanwhile, according to the Wall St. Journal, bills have passed in the House and Senate that, if enacted, would ban federal employees from using TikTok on government devices. To pass, Congress would have to agree on the same bill.

Facebook removes Donald Trump post claiming children "almost immune" to COVID-19 for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account for same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation."

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. "The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears it complied and removed the tweet.

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, for example, 23,170 minors had tested positive for coronavirus by July 2020. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children:

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, did examine the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses, or from non-specific protection resulting from recent infection by other respiratory viruses, which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because children have thus far reported fewer positive tests than adults. According to the Mayo Clinic staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that infected children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Speaking of the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."

China enacts controversial "national security law" with greater power against protesters in Hong Kong

On June 30, 2020, China engaged in a secret process to enact a new national security law that will significantly impact the way Hong Kong uses the Internet, as reported by Forbes. Hong Kong’s chief executive, Carrie Lam, was not allowed to see a draft of the law before its passage. China has justified the law as a way to safeguard Hong Kong's economic development and political stability. The law prevents and punishes any act that would put China’s national security at risk, including secession, terrorism, subversion, and collusion with foreign forces. The vagueness of the four crimes means that they may be used broadly to silence any dissent or protest in Hong Kong, including content posted on social media against China’s rule, according to NPR. Margaret Lewis, a law professor at Seton Hall Law School and a specialist on Hong Kong and Taiwan, told NPR, “What we do know is that Beijing now has an efficient, official tool for silencing critics who step foot in Hong Kong."

The national security law is written very broadly. It penalizes even offenses committed “outside the region by a person who is not a permanent resident of the region,” so the text seems to assert extraterritorial application to anyone in the world who writes something in violation of the law, including Americans. Violators of the new national security law could face a life sentence in prison, according to Forbes. More concerning still, the new law authorizes China to set up a "National Security Committee" to oversee the investigation and prosecution of any violations, without any judicial review. Michael C. Davis, a fellow at the Wilson Center, told NPR: “With this law being superior to all local law and the Basic Law (Hong Kong's constitution) itself, there is no avenue to challenge the vague definitions of the four crimes in the law as violating basic rights.”

Logistically, cases classified as “complex” or “serious” will be tried in mainland Chinese courts by Chinese judges, according to NPR. This would push aside Hong Kong’s judicial system, waive trial by jury, and deny public access to the trial if the case contains sensitive information. One fear is that people arrested in Hong Kong would be extradited to China to face trial. 

Foreign tech firms fear that Beijing’s law will severely control or restrict the Internet freedoms that have remarkably shaped and helped Hong Kong’s growth, as Forbes reports. Within hours of the law being passed, two opposition political parties in Hong Kong announced they were voluntarily disbanding. People in Hong Kong have been deleting their social media accounts for fear that their speech could be considered “subversive” or “secessionist.” While Twitter has refused to comment on Hong Kong users dropping off the social platform, there have been multiple public sign-offs from some top pro-democracy figures in Hong Kong. The BBC reported on July 30, 2020 that four students in Hong Kong were arrested for "inciting secession" on social media.

Some people in Hong Kong may try to protect their identity on social media by using VPNs. Compared to June 2020, there was a 321% spike in VPN (virtual private network) downloads in July. A VPN conceals a user’s online activity. One Hong Kong protester told Fortune that he downloaded a VPN after the announcement of the new law because he was “really afraid that the [Chinese Communist Party] will get my personal information.” Police have already made a handful of arrests, including of one man who displayed a flag advocating for Hong Kong’s independence from China. Yet platforms like LIHKG (similar to the online messaging board Reddit) are still active with protesters expressing their anti-government criticisms.

--written by Alfa Alemayehu

Facebook and Instagram to Study Racial Bias in Their Algorithms After Years of Ignoring the Issue

Facebook announced it will create teams to study whether racial bias in Facebook's and Instagram's algorithms negatively impacts the experience of minority users on the platforms. The Equity and Inclusion team at Instagram and the Inclusivity Product Team at Facebook will tackle a large issue that Facebook has largely ignored in the past. Facebook is under intense scrutiny. Since July 2020, Facebook has faced a massive advertising boycott, called Stop Hate for Profit, from over five hundred companies such as Coca-Cola, Disney, and Unilever. Facebook has been criticized for a lack of initiative in handling hate speech and attempts to sow racial discord on its platforms, including attempts to suppress Black voters. An independent audit by civil rights experts found the prevalence of hate speech targeting Blacks, Jews, and Muslims on Facebook "especially acute." “The racial justice movement is a moment of real significance for our company,” Vishal Shah, Instagram’s product director, told the Wall Street Journal. “Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves.”

The new research teams will cover what has been a blind spot for Facebook. In 2019, Facebook employees found that an automated moderation algorithm on Instagram was 50 percent more likely to suspend the accounts of Black users than those of white users, according to the Wall Street Journal. This finding was supported by user complaints to the company. After employees reported these findings, they were sworn to secrecy and no further research on the algorithm was done by Facebook. Ultimately, the algorithm was changed, but it was not tested any further for racial bias. Facebook officially stated that the research was stopped because an improper methodology was being applied at the time. As reported by NBC News, Facebook employees leaked that the automated moderation algorithm detects and deletes hate speech against white users more effectively than it moderates hate speech against Black users.
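As an illustration of what such a bias audit might measure, below is a minimal sketch of a disparate-impact check on suspension decisions. Everything here (the record format, group labels, and toy data) is our assumption for illustration, not Facebook's methodology; a ratio of 1.5 would correspond to the reported finding that one group was 50 percent more likely to be suspended.

    # Minimal sketch of a disparate-impact audit on automated suspensions.
    # Hypothetical data format; not Facebook's actual methodology.

    def suspension_rate(records, group):
        """records: list of (user_group, was_suspended) pairs."""
        in_group = [suspended for g, suspended in records if g == group]
        return sum(in_group) / len(in_group) if in_group else 0.0

    def disparity_ratio(records, group_a, group_b):
        """Ratio of the two groups' suspension rates: 1.0 means parity;
        1.5 would match a finding that group A is 50% more likely to be
        suspended than group B."""
        rate_b = suspension_rate(records, group_b)
        return suspension_rate(records, group_a) / rate_b if rate_b else float("inf")

    # Toy data: group A suspended 3 of 6 times, group B 2 of 6 times.
    records = [("A", True), ("A", True), ("A", True),
               ("A", False), ("A", False), ("A", False),
               ("B", True), ("B", True), ("B", False),
               ("B", False), ("B", False), ("B", False)]
    print(disparity_ratio(records, "A", "B"))  # 1.5, i.e., 50% more likely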

Facebook's announcement of these teams to study racial bias on its platforms is only a first step. The Instagram Equity and Inclusion team does not yet have an announced leader. The Inclusivity Product Team will supposedly work closely with a group of Black users and cultural experts to make effective changes. However, Facebook employees who previously worked on this issue have stated anonymously that they were ignored and discouraged from continuing their work. The culture of Facebook as a company and its previous inaction on racial issues have raised skepticism about these recent initiatives. Time will tell if Facebook is serious about the problem.

--written by Sean Liu

Cleaning house: Twitter suspends 7,000 accounts of QAnon conspiracy theory supporters

On July 21, 2020, Twitter suspended 7,000 accounts spreading QAnon conspiracy theories. In a tweet about the banning of these QAnon accounts, Twitter reiterated their commitment to taking "strong enforcement actions on behavior that has the potential to lead to offline harm." Twitter identified the QAnon accounts' violations of its community standards against "multi-account[s]," "coordinating abuse around individual victims," and "evad[ing] a previous suspension." In addition to the permanent suspensions, Twitter also felt it necessary to ban content and accounts "associated with Qanon" from the Trends and recommendations on Twitter, as well as to avoid "highlighting this activity in search and conversations." Further, Twitter will block "URLs associated with QAnon from being shared on Twitter." 

These actions by Twitter are a bold step in what has been a highly contentious area concerning the role of social media platforms in moderating hateful or harmful content. Some critics suggested that Twitter's QAnon decision lacked notice and transparency. Other critics contended that Twitter's actions were too little to stop the "omniconspiracy theory" that QAnon has become across multiple platforms.

So what exactly is QAnon? CNN describes the origins of QAnon, which began as a single conspiracy theory: its followers "claim dozens of politicians and A-list celebrities work in tandem with governments around the globe to engage in child sex abuse. Followers also believe there is a 'deep state' effort to annihilate President Donald Trump." Forbes similarly describes: "Followers of the far-right QAnon conspiracy believe a “deep state” of federal bureaucrats, Democratic politicians and Hollywood celebrities are plotting against President Trump and his supporters while also running an international sex-trafficking ring." In 2019, an internal FBI memo reportedly identified QAnon as a domestic terrorism threat.

Followers of QAnon are also active on Facebook, Reddit, and YouTube. The New York Times reported that Facebook was considering taking steps to limit the reach of QAnon content on its platform. Facebook is coordinating with Twitter and other platforms in considering its decision; an announcement is expected in the next month. Facebook has long been criticized for its response, or lack of response, to disinformation being spread on its platform. Facebook is now the subject of a boycott, Stop Hate for Profit, calling for a stop to advertising until steps are taken to halt the spread of disinformation on the social media juggernaut. Facebook continues to allow political ads using these conspiracies on its site. Forbes reports that although Facebook has seemingly tried to take steps to remove pages containing conspiracy theories, a number of pages still remain. Since 2019, Facebook has allowed 144 ads promoting QAnon on its platform, according to Media Matters. Facebook has continuously provided a platform for extremist content; it even allowed white nationalist content until officially banning it in March 2019.

Twitter's crackdown on QAnon is a step in the right direction, but it also signals how little companies like Twitter and Facebook have done to stop disinformation and pernicious conspiracy theories in the past. As conspiracy theories undermine effective public health campaigns to stop the spread of the coronavirus and foreign interference threatens elections, social media companies appear to be playing a game of catch-up. They would be well served by devoting even greater resources to the problem, with more staff and clearer articulation of their policies and enforcement procedures. In an era of holding platforms and individuals accountable for actions that spread hate, social media companies now appear to realize that they have greater responsibilities for what happens on their platforms.

--written by Bisola Oni
