The Free Internet Project

Blog

Facebook enlists independent researchers and Social Science One to study how Facebook, Instagram affect 2020 US elections

On Aug. 31, 2020, Facebook announced a new research initiative it started with Social Science One committee chairs, Professors Talia Stroud of the University of Texas at Austin and Joshua Tucker of New York University. The researchers will "examine the impact of how people interact with our products, including content shared in News Feed and across Instagram, and the role of features like content ranking systems." The research projects conducted on Facebook or via data from Facebook will start soon and end in December, after the November 2020 election. Facebook "expect[s] between 200,000 and 400,000 US adults may choose to participate in the study, which could include things like taking part in surveys or agreeing to see a different product experience. We will also study trends across Facebook and Instagram – but only in aggregate."

Interestingly, Facebook believes that the research projects will not affect the outcome of the U.S. elections: "With billions of dollars spent on ads, direct mail, canvassing, organizing and get out the vote efforts, it is statistically implausible that one research initiative could impact the outcome of an election. The research has been carefully designed to not impact the outcome of the election or harm participants. The sample of participants represents approximately 0.1% of the entire US eligible voting population spread across the US. By better understanding how people use our platform during an election, we can continually enhance the integrity of the platform moving forward." 

Facebook seems to gloss over the fact that a few swing voters in key swing states or precincts could ultimately determine the outcome of some of the elections. Without knowing the details of the various research projects, it's hard to evaluate the potential effect they may have on voters. 

The independent researchers are: 

  • Hunt Allcott, New York University 
  • Deen Freelon, University of North Carolina at Chapel Hill
  • Matthew Gentzkow, Stanford University
  • Sandra Gonzalez-Bailon, University of Pennsylvania
  • Andrew Guess, Princeton University
  • Shanto Iyengar, Stanford University
  • Young Mie Kim, University of Wisconsin-Madison
  • David Lazer, Northeastern University 
  • Neil Malhotra, Stanford University
  • Brendan Nyhan, Dartmouth College
  • Jennifer Pan, Stanford University
  • Jaime Settle, William & Mary
  • Talia Stroud, The University of Texas at Austin
  • Emily Thorson, Syracuse University
  • Rebekah Tromble, The George Washington University
  • Joshua A. Tucker, New York University
  • Magdalena Wojcieszak, University of California, Davis; University of Amsterdam

Facebook describes the scope of research projects as follows:

The independent academics are collaborating with Facebook researchers to design a diverse set of studies to analyze the role of Facebook and Instagram in the US 2020 election. To collect the information for the study, we are partnering with NORC at the University of Chicago, an objective, non-partisan research institution that has been studying public opinion since 1941. NORC possesses deep expertise in survey research, policy evaluation, data collection, advanced analytics and data science. The study was approved by NORC’s Institutional Review Board.

For people who have explicitly opted in to the study, we plan to combine multiple research methods, including surveys and behavioral data analysis, along with targeted changes to some participants’ experiences with Facebook and Instagram. For example, participants could see more or fewer ads in specific categories such as retail, entertainment or politics, or see more or fewer posts in News Feed related to specific topics. Other participants may be asked to stop using Facebook or Instagram for a period of time. A subset of participants may be asked to install an app on their devices – with their permission – that will log other digital media that they consume. This will allow researchers to understand more comprehensively the information environment that people experience. 

Facebook announces content moderation policy change in clampdown on QAnon and movements tied to violence

On August 19, 2020, Facebook announced a change to its community standards in moderating content on Facebook for safety reasons. Facebook's community standards already require the removal of content that calls for and advocates violence and the removal of individuals and groups promoting violence. Facebook now will restrict content that doesn't necessarily advocate violence but is "tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon." Facebook explained, "we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior." U.S.-based militia organizations and the far-right conspiracy movement QAnon have grown on the social site. As we reported, earlier in July 2020 Twitter suspended 7,000 users who supported QAnon conspiracy theories. Facebook followed suit in August 2020 by removing 790 QAnon groups on Facebook (including one group that had 200,000 members) and 10,000 Instagram accounts.

Facebook listed seven actions they planned to take against movements and organizations tied to violence:

  1. Remove From Facebook: Facebook Pages, Groups, and Instagram accounts that are a part of harmful movements and organizations will be removed from the platform when they discuss potential violence. To help identify when violence is taking place, Facebook plans to study the technology and symbolism these groups use.
  2. Limit Recommendations: Pages, Groups, and Instagram accounts associated with harmful organizations that are not removed will not be recommended to people as Pages, Groups, or accounts they might want to follow.
  3. Reduce Ranking in News Feed: Going forward, content from these Pages and Groups will be ranked lower in News Feed. This will reduce the number of people who see these pages in their news feeds on Facebook.
  4. Reduce in Search: Hashtags and titles for related content will be ranked lower in search suggestions and will not be suggested in the Search Typeahead.
  5. Reviewing Related Hashtags on Instagram: On Instagram specifically, the Related Hashtags feature has been removed. This feature allowed people to view hashtags similar to those they use. Facebook says the feature could return in the future once it has introduced better safety measures to protect people who use it.
  6. Prohibit Use of Ads, Commerce Surfaces and Monetization Tools: Facebook has planned a two-step approach to prohibiting ads and commerce related to these movements. Currently, Facebook has stopped Pages related to these movements from running ads or selling products through Marketplace and Shop. In the future, Facebook plans to take stronger action, stopping anyone from running ads that praise or support these movements.
  7. Prohibit Fundraising: Finally, fundraising associated with these groups will be prohibited. Nonprofits that identify with these groups will be barred from using Facebook's fundraising tools.

With the new policy, Facebook expands its existing policy against violence to include the removal of groups and individuals that pose a risk to public safety. According to Facebook, the previous threshold would not have allowed these groups to be removed because they did not meet the rigorous criteria to be deemed dangerous. Facebook is not banning QAnon content from the site in its entirety; rather, it is restricting the ability of individuals who follow these groups to organize on the platform. QAnon believers can still post these conspiracy theories on the platform in an individualized manner.

With the expansion of its policy, Facebook takes an important step in stopping the spread of harmful information on its platform. As a result of the expanded policy, Facebook has already been able to take down hundreds of groups and ads tied to QAnon and militia organizations and thousands tied to these movements on Instagram. Whether these changes are effective enough to keep Facebook from being used as a tool to organize violence remains to be seen, however.

--written by Bisola Oni

Key Takeaways from Senate Intelligence Committee Report, Vol. 5: Russian Active Measures Campaigns and Interference in the 2016 U.S. Election

With the 2020 U.S. presidential election less than 80 days away and the COVID-19 pandemic affecting the country’s daily operations, the political world has been center stage. Last week, the Senate Intelligence Committee released its lengthy, final report, Volume 5, on the Russian Active Measures Campaigns and Interference in the 2016 U.S. Election. Though the Intelligence Committee was Republican-led, the report is bipartisan, involving both Democratic and Republican input. The report is the culmination of three years of probing countless documents, interviews, and emails. The investigation resulted in what the committee considers “the most comprehensive description to date of Russia’s activities and the threat they posed.” The report concluded: "The Committee found that the Russian government engaged in an aggressive, multifaceted effort to influence, or attempt to influence, the outcome of the 2016 presidential election." The extensive, nearly 1,000-page report focused on various aspects of Russian interference in the 2016 election. Here are some key issues and takeaways of Volume 5. (For our prior summaries, visit here for Volume 4 and Volume 1.)

Paul Manafort's relationship with Russian individuals was a "grave counterintelligence threat"

Paul Manafort is a political consultant and former lobbyist known for having multiple ties to powerful politicians and businesspeople throughout the world, specifically in Russia and Ukraine. In March 2016, Manafort was hired by the Trump Campaign as convention manager, and by May 2016 he was promoted to chairman and chief strategist. The report stated the Trump campaign conducted “no known vetting of him, including his financial situation or vulnerability to foreign influence” (p. 27). The report further highlights that once Manafort was hired onto the Trump campaign, he quickly sought to use his position to resolve his various multi-million-dollar foreign disputes and to acquire other work in Ukraine and elsewhere. Also, Manafort hired Russian national Konstantin Kilimnik, who served as a close liaison between Manafort and Russian oligarch Oleg Deripaska and other pro-Russia Ukrainian oligarchs. The committee found that “on numerous occasions, Manafort sought to pass sensitive internal polling data and campaign strategy to Kilimnik.” Though the committee was not able to determine what Manafort and Kilimnik did with this information, there was continuous engagement between Manafort, an American campaign official, and various Russian intelligence personnel. Moreover, "[a]fter the election, Manafort continued to coordinate with Russian persons, particularly Kilimnik and other individuals close to Deripaska, in an effort to undertake activities on their behalf. Manafort worked with Kilimnik starting in 2016 on narratives that sought to undermine evidence that Russia interfered in the 2016 U.S. election." (p. 6)  The bipartisan committee described Manafort's relationship as “a grave counterintelligence threat” (p. vii). (As a part of the Mueller investigation, Manafort was convicted by a jury of various charges of tax and bank fraud.)

Michael Cohen negotiated with Russians to help secure approval for Trump Tower Moscow; Trump praised Putin

In September 2015, shortly after Donald Trump announced his candidacy, Trump and the Trump Organization pursued a business deal in Russia. Michael Cohen, the Trump Organization’s then executive vice president and Trump’s personal attorney, received two offers from Russian businessmen to create a Trump Tower in Moscow. Negotiations ran from September to October 2015, until both parties agreed to a Letter of Intent (LOI) that laid out terms involving millions of dollars paid to the Trump Organization. In November 2015, the LOI was signed. Cohen kept Trump informed on the status of the deal, and while these negotiations were taking place, Trump made positive public comments throughout his campaign about Vladimir Putin, the Russian president. The report states that Cohen “at various times understood that both Putin’s and Trump’s comments could be useful in advancing the deal” (p. 407). Even dating back to 2007, Trump wrote a letter to Putin congratulating him on being named Time's "Man of the Year." However, the deal for Trump Tower Moscow fell through in the summer of 2016.

2016 Trump Tower Meeting among Donald Trump Jr., Jared Kushner, Paul Manafort, Russian lawyer Natalia Veselnitskaya

In June 2016, a meeting was held in Trump Tower in New York City. This meeting involved Donald Trump Jr., Jared Kushner, Paul Manafort, Natalia Veselnitskaya, a Russian lawyer, and her four associates. The Committee found it very difficult to find evidence of the contents of this meeting and had to rely on conflicting witness testimony. However, the committee found that Donald Trump Jr. was expecting to receive “derogatory information” that would come from persons connected to the Russian government; however, this information was never transmitted. Another alarming aspect of this meeting is the presence of Veselnitskaya. She is not only a Russian lawyer; she has also previously worked with, and keeps in contact with, various senior members of the Russian government. She has worked as defense counsel for the Russian government and a Russian business, Prevezon Holdings, in cases involving millions of dollars of fraud. The committee found her ties with powerful Russians “extensive and concerning” (p. ix).

Putin Ordered the Hack of DNC Computers; Wikileaks helped; Roger Stone was a liaison 

"The Committee found that Russian President Vladimir Putin ordered the Russian effort to hack computer networks and accounts affiliated with the Democratic Party and leak information damaging to Hillary Clinton and her campaign for president. Moscow's intent was to harm the Clinton Campaign, tarnish an expected Clinton presidential administration, help the Trump Campaign after Trump became the presumptive Republican nominee, and undermine the U.S. democratic process." (p. vii) Moreover, "Wikileaks actively sought, and played, a key role in the Russian influence campaign and very likely knew it was assisting Russian intelligence influence effort." (p. vii) In addition, "Trump and senior Campaign officials sought to obtain advance information about WikiLeaks's planned releases through Roger Stone." (p. vii)

The FBI’s Response to the DNC Hack

The committee also investigated an aspect of this issue that was not explored in Robert Mueller’s 2019 report: the FBI’s response to the hacking of the DNC’s computer network. From July 2015 to October 2016, the DNC was subject to Russian hacking. The issue was novel: a nation-state had never before intentionally hacked an American political party, taken information, and used it against the party by leaking it publicly. The DNC was hesitant to have the FBI address the issue due to the bureau's ongoing investigation of Hillary Clinton’s email server. Complicating matters further, during this hacking the FBI was already looking into the domestic impact of foreign threats during a U.S. presidential election, so why did it not help the DNC sooner? The committee found that both the DNC and the FBI were not as communicative as they needed to be. The FBI also “lacked an appropriate process to escalate their warnings of the DNC hack within the DNC” (p. 816).

FBI's Reliance on Steele Dossier 

The Committee also found the FBI should have verified the controversial Steele dossier that formed the basis of its investigation of Trump's possible connection with Russia: "Regarding the Steele Dossier, FBI gave Steele's allegations unjustified credence, based on an incomplete understanding of Steele's past reporting record. FBI used the Dossier in a FISA application and renewals and advocated for it to be included in the ICA before taking the necessary steps to validate assumptions about Steele's credibility. Further, FBI did not effectively adjust its approach to Steele's reporting once one of Steele's subsources provided information that raised serious concerns about the source descriptions in the Steele Dossier. The Committee further found that Steele's reporting lacked rigor and transparency about the quality of the sourcing." (p. xix)

Committee Avoids Issue of "Collusion" with Russia

The one major issue that the report avoids specifically addressing is whether the Trump campaign colluded with Russia in the interference. Some Senators appended "Additional Views" on that issue. See pp. 941-952. 

--written by Mariam Tabrez

FCC request for comments on issuing regulations on Section 230 of the Communications Decency Act

Earlier in August 2020, the Federal Communications Commission opened a public comment period for people to express their views on "Petition for Rulemaking recently filed by the Department of Commerce regarding Section 230 of the Communications Decency Act of 1996." The inquiry was prompted by Donald Trump's Executive Order on Preventing Online Censorship, issued on May 28, 2020. Trump has accused social media sites of suppressing conservative speech after Twitter flagged some of his tweets for violating their community standards. In the Executive Order, Trump takes the view that Internet companies that "engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints" should lose immunity under Section 230 of the CDA. That provision states in part: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230). For more about how Section 230 operates, read our prior explanation. This legislation from a time when internet communications were in their infancy has been a vital protection invoked by social media sites that enable their users to exchange information. In the Executive Order, Trump called upon "the Secretary of Commerce (Secretary), in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA), [to] file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify" Section 230. The NTIA did so, largely along the lines suggested by the Executive Order. 

Now, the FCC has opened up public comments on the NTIA petition. As of Aug. 24, 2020, the FCC has received 619 comments. The FCC's involvement has already drawn controversy. In a speech in May, Republican FCC Commissioner Michael O'Rielly expressed "deep reservations" about whether the FCC had any authority to issue regulations on Section 230. On August 4, the White House announced it was withdrawing O'Rielly's nomination for another term on the FCC, meaning his tenure will end before the new Congress starts next year, according to the Wall Street Journal. 

Whether or not the FCC has legal authority to issue regulations related to Section 230 (which it hasn't done so far) is likely to be contested. In its petition, the NTIA argues: 

Section 201(b) of the Communications Act (Act) empowers the Commission to “prescribe such rules and regulations as may be necessary in the public interest to carry out this chapter.” Under this authority, the FCC should promulgate rules to resolve ambiguities in Section 230. The Supreme Court has confirmed that “the grant in section 201(b) means what it says: The FCC has rulemaking authority to carry out the ‘provisions of this Act.’” Section 230, in turn, was incorporated into the Act – in the same portion of the Act, Title II, as section 201(b) – by the Telecommunications Act of 1996 (1996 Act). The fact that section 230 was enacted after section 201(b) is of no consequence; the Supreme Court repeatedly has held that the Commission’s section 201(b) rulemaking power extends to all subsequently enacted provisions of the Act, specifically identifying those added by the Telecommunications Act of 1996. Thus, the Commission has authority under section 201(b) to initiate a rulemaking to implement section 230. That broad rulemaking authority includes the power to clarify the language of that provision, as requested in the petition.


NYT: Facebook developing contingency plans and "kill switch" on political ads if Trump tries to "wrongly claim on the site that he won"

On Aug. 21, 2020, Mike Isaac and Sheera Frenkel of the New York Times reported that Facebook is developing contingency plans in case Donald Trump "wrongly claim[s] on the site that he won," contrary to actual election results that go against him. Facebook is also weighing how it should deal with Trump's attempts to delegitimize the actual election results "by declaring that the Postal Service lost mail-in ballots or that other groups meddled with the vote." The sources are "people with knowledge of Facebook's plans." Facebook is even considering creating a "kill switch" to remove political ads that contain false election results.

Google is also discussing contingency plans for the U.S. elections, but didn't reveal further details.

It's not hard to envision another nightmare Bush v. Gore scenario, in which the result of the presidential election is contested. Trump has already attacked mail-in voting. According to the NYT, in part due to the pandemic, 9 states have mailed ballots to all voters, while 34 other states allow voters to choose mail-in voting for any reason and 7 states allow mail-in voting for certain reasons. Prof. Ned Foley has highlighted one reason this year's election may result in a contested outcome and litigation: in past elections, mail-in ballots have typically produced a "blue shift," with more mail-in ballots cast for Democrats than Republicans, for reasons that are not entirely clear. Thus, in close races, the "blue shift" might flip a state from Republican to Democrat when the mail-in votes are counted, giving rise to unsubstantiated claims of foul play. For more about this scenario, read this Atlantic article.

Political bias?: WSJ reports on Facebook's alleged favoritism in content moderation of Indian politician T. Raja Singh and ruling Hindu nationalist party, Bharatiya Janata Party


On Aug. 14, 2020, Newley Purnell and Jeff Horwitz of the Wall Street Journal reported on possible political favoritism shown by Facebook in its content moderation of posts on Facebook by ruling party Hindu nationalist politicians in India. These allegations of political bias come as Facebook faces similar claims of political bias for and against Donald Trump and conservatives in the United States. The Wall Street Journal article relies on "current and former Facebook employees familiar with the matter." According to the article, in its content moderation, Facebook flagged posts by Bharatiya Janata Party (BJP) politician T. Raja Singh and other Hindu nationalist individuals and groups for “promoting violence”--which should have resulted in the suspension of his Facebook account. But Facebook executives allegedly intervened in the content moderation. Facebook's "top public-policy executive in the country, Ankhi Das, opposed applying the hate-speech rules to Mr. Singh and at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence, said the current and former employees." Ankhi Das is a top Facebook official in India and lobbies India’s government on Facebook’s behalf. Das reportedly explained her reasoning to Facebook staff: "punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users."

According to the Wall Street Journal article, Andy Stone, a Facebook spokesperson, "acknowledged that Ms. Das had raised concerns about the political fallout that would result from designating Mr. Singh a dangerous individual, but said her opposition wasn’t the sole factor in the company’s decision to let Mr. Singh remain on the platform." Facebook said it has not yet decided whether it will ban the BJP politician from the social media platform.

The WSJ article gives examples of alleged political favoritism toward the BJP party. Facebook reportedly announced its removal of inauthentic pages tied to Pakistan’s military and the Congress party, BJP’s rival. However, Facebook made no such announcement when it removed BJP’s inauthentic pages, because Das interceded. Facebook's safety staff determined that Singh's posts warranted a permanent ban from Facebook, but Facebook only deleted some of Singh's posts and stripped his account of verified status. In addition, Facebook's Das praised Modi in an essay in 2017, and she shared on her Facebook page "a post from a former police official, who said he is Muslim, in which he called India’s Muslims traditionally a 'degenerate community' for whom 'Nothing except purity of religion and implementation of Shariah matter.'"

On August 16, 2020, Facebook's Das filed a criminal complaint against journalist Awesh Tiwari for a post he made on his Facebook page about the WSJ article. Das alleges a comment someone posted to Tiwari's page constituted a threat against her. 

--written by Alfa Alemayehu

How are Twitter, Facebook, other Internet platforms going to stop 2020 election misinformation in U.S.?

With millions of users within the United States on Facebook and Twitter, Internet platforms are becoming a common source of current event news, information, and outlets for socialization for many Americans. These social media platforms have been criticized for allowing the spread of misinformation regarding political issues and the COVID-19 pandemic. These criticisms began after misinformation spread during the 2016 U.S. presidential election. Many misinformation sources went unchecked; as a result, millions of Americans were convinced they were consuming legitimate news when they were actually reading “fake news.” It is safe to say that these platforms do not want to repeat those mistakes. Ahead of the 2020 U.S. elections, both Facebook and Twitter have taken various actions to ensure misinformation is not only identified, but either removed or flagged as untrue with links to easily accessible, credible resources for their users.

Facebook’s Plan to Stop Election Misinformation

Facebook has faced the most criticism regarding the spread of misinformation. Facebook has created a voting information center, similar to its COVID-19 information center, that will appear in the Facebook and Instagram menus.

Facebook's Voting Information Center

This hub will target United States users only and will contain election information based on the user’s geographic location. For example, if you live in Orange County, Florida, information on vote-by-mail options and poll locations in that area will be provided. In addition, Facebook plans on adding notations to some posts containing election misinformation on Facebook with a link to verified information in the voting information center. Facebook will have voting alerts which will “communicate election-related updates to users through the platform.” Only official government accounts will have access to these voting alerts. Yet one sore spot appears to remain for Facebook: Facebook doesn't fact-check the posts or ads of politicians because, as CEO Mark Zuckerberg has repeatedly said, Facebook does not want to be the "arbiter of truth." 

Twitter’s Plan to Stop Election Misinformation

Twitter has been in the press recently for its warnings on the bottom of a few of President Trump’s tweets for misinformation and glorification of violence. These warnings are part of Twitter’s community standards and Civic Integrity Policy, which was enacted in May 2020. This policy prohibits the use of “Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes.” Civic processes include “political elections, censuses, and major referenda and ballot initiatives.” Twitter also banned political campaign ads starting in October 2019. Twitter recently released a statement stating its aim is to “empower every eligible person to register to vote” by providing various resources for users to educate themselves on the issues facing society today. Twitter officials told Reuters they will be expanding the Civic Integrity Policy to address election misinformation, such as mischaracterizations of mail-in voting. But, as TechCrunch points out, "hyperpartisan content or making broad claims that elections are 'rigged' ... do not currently constitute a civic integrity policy violation, per Twitter’s guidance." Twitter also announced that it will identify state-affiliated media, such as outlets from Russia, with a notation. 

In addition to these policies, Facebook and Twitter, as well as other information platforms including Microsoft, Pinterest, Verizon Media, LinkedIn, and Wikimedia Foundation have all decided to work closely with U.S. government agencies to make sure the integrity of the election is not jeopardized. These tech companies have specifically discussed how the upcoming political conventions and specific scenarios arising from the election results will be handled. Though no details have been given, this seems to be a promising start to ensuring the internet does more good than bad in relation to politics.

--written by Mariam Tabrez

Belarus shuts down Internet after disputed election of Pres. Lukashenko

 

Protests over a disputed presidential election in Belarus continue. The country's Central Election Commission said President Aleksandr Lukashenko, in power since 1994, had won 80.1% of the vote and Svetlana Tikhanovskaya only 10.12% of the vote. But the election was disputed from the start. On Aug. 9, 2020, Belarusians experienced a massive communications blackout involving disruptions to internet connectivity and cellular devices, as reported by Wired. Belarus’ President Lukashenko claims the blackout resulted from foreign interference; however, experts and human rights organizations allege the Belarusian government imposed the blackout that took place amidst Belarus’ tense presidential election, with Svetlana Tikhanovskaya as the opposition candidate. The blackout affected virtual private networks (VPNs), which usually are immune from filtering protocols.

According to Netblocks, a connectivity monitoring group, the first outage in Belarus started right after midnight on Sunday and went unnoticed. As election polls opened later that morning, outages and connectivity disruptions became increasingly severe. Netblocks also said the blocking strategy it observed, and continues to observe, started with “deep packet inspection,” which allows a censor to filter web traffic and block access to specific sites. Alp Toker, director of Netblocks, told WIRED, “the network layer disruptions were introduced after the platform filters were gradually rolled out. So much was filtered by the time the blackouts started that they were difficult to distinguish and report.” Toker further noted that a brute-force blocking strategy at the network layer, rather than a more refined filtering system at the application layer, is likely the cause. A refined filtering system can disable applications, such as social media and communication apps, without affecting general internet connectivity. Countries like Iraq, Liberia, and Venezuela have used such filtering techniques to block access to apps like Facebook, Instagram, Snapchat, and WhatsApp, according to Wired.

Belarusian officials claim foreign attacks caused the blackout

On election day morning, the National Center for Response to Computer Incidents of the Republic of Belarus, a task force created by President Lukashenko, stated that the country’s infrastructure was experiencing a “massive wave of DDoS-attacks.” Belarus’ Computer Emergency Response Team supported that declaration. RUE Beltelecom, a government-owned ISP that owns roughly ninety percent of all fixed communication channels in Belarus, released a statement suggesting the outages were due to “multiple cyberattacks of varying intensity.” Similarly, in a recent statement President Lukashenko declared the government didn’t impose the blackout. “Our specialists are now determining where this shutdown is coming from. So, if the Internet is working poorly, that’s not our doing but an initiative from abroad,” the president said.

The Belarus government is suspected of imposing the blackout to control the election information and counter any opposition

Neutral observers remain skeptical of the government's account. They offer three reasons to believe the blackout was government-imposed.

First, there isn’t evidence for the government’s official narrative. “There’s no indication of a DDoS attack. It can’t be ruled out, but there’s no external sign of it that we see,” Alp Toker told WIRED. Mikhail Klimarev, the executive director of the Internet Protection Society, told Meduza the Belarusian authorities’ explanation for the nationwide blackout doesn’t add up. To completely disrupt Belarus’ internet from the outside, “you’d need every country connected to Belarus to come to an agreement and turn it off together. As for DDoS-attacks, there are plenty of anti-DDoS technologies that successfully repel attacks without disabling an entire country’s Internet access. Moreover, ISPs usually counter powerful DDoS-attacks collectively,” Klimarev added.

Second, there are allegations the Belarusian government forewarned some businesses and institutions about the planned connectivity disruptions. On August 4, 2020, an internet post claimed to show an email from a Belarusian bank warning its customers of the impending digital outages. That same day, Nexta Live published an image of a letter where a Belarusian company’s manager informed his staff about the likelihood of an internet shutdown and listed instructions to prepare for it. Reports from other major media outlets corroborate this letter. On August 8, 2020, a Russian newspaper published an interview with a cellphone company employee, who confirmed that Belarus would disconnect all communications.

Third, because Belarus has a largely centralized internet infrastructure, it would be simple for the government to impose connectivity disruptions or blackouts if the appropriate groundwork were in place.

Moreover, President Lukashenko reportedly had the Internet shut down yesterday after a video circulated of him being booed by factory workers during a speech he gave on Monday.

Belarus’ blackout smacks of digital authoritarianism

Digital authoritarianism – the use of digital information technology by authoritarian regimes to surveil, suppress, and manipulate domestic and foreign populations – has helped repressive regimes enforce extreme limits on the free flow of information and technologies their citizens use. Alarmingly, the blackout seen in Belarus is nothing new – repressive regimes, such as Iran, Egypt, Ethiopia, and India, have used connectivity blackouts as a tool to crush dissent. This past year, governments in Burundi, Guinea, Togo, and Venezuela have imposed social media blackouts during their elections. 

This past Monday, United States Secretary of State Mike Pompeo said in a statement that the Belarusian election “was not free and fair,” and added, “We strongly condemn ongoing violence against protestors and the detention of opposition supporters, as well as the use of internet shutdowns to hinder the ability of the Belarusian people to share information about the election and the demonstrations.”

Belarus’ nationwide blackout matters because digital authoritarianism is on the rise around the world, which puts democracies at risk. If left unchecked, such regimes will continue to violate the rights of domestic and foreign populations, especially using technology to wield their authoritarian swords. In a world where most things are connected to the internet, a government should never have the power to impose a blackout to centralize its control and deny rights to its citizens.

--written by Allison Hedrick

Trump issues Executive Order Requiring China-owned ByteDance to divest ownership of TikTok in 90 Days

Executive Order Regarding the Acquisition of Musical.ly by Bytedance Ltd., Aug. 14, 2020

On Aug. 14, 2020, Donald Trump issued a second Executive Order directed at Chinese media company ByteDance.  The second Executive Order requires ByteDance to divest its ownership of TikTok, the popular social media platform involving short videos, within 90 days.  Trump's reason: "There is credible evidence that leads me to believe that ByteDance Ltd., an exempted company with limited liability incorporated under the laws of the Cayman Islands ("ByteDance"), through acquiring all interests in musical.ly, an exempted company with limited liability incorporated under the laws of the Cayman Islands ("Musical.ly"), might take action that threatens to impair the national security of the United States."

Earlier, President Trump and Secretary of State Mike Pompeo announced their desire to ban “untrusted” Chinese tech apps such as TikTok in an effort to lower national security risks. Pompeo stated that Trump would take action against Chinese software companies that shared user data with the Chinese government. Several days later, Microsoft confirmed that it had approached ByteDance, the Chinese technology company that owns TikTok, about acquiring parts of TikTok in the U.S. Microsoft assured that it would work with the U.S. government on any acquisition and agreed to provide “proper economic benefits to the United States, including the United States Treasury.” (Read Microsoft's Aug. 2, 2020 statement.) Avery Gardiner, general counsel and senior fellow at the Center for Democracy and Technology, points out that deals with such conditions set a dangerous precedent for future mergers. CNN suggests that Trump’s demand that the U.S. government receive a “substantial amount of money” from any TikTok deal will set a dangerous example for years to come. More particularly, President Trump hopes the government will receive a “very large percentage” in any deal made regarding this matter; concerns arise about what that percentage might look like, considering TikTok is estimated to be worth $50 billion. Ultimately, President Trump ordered the deal to be completed by September 15, otherwise the app would be banned in the U.S.

China made it clear that TikTok will remain with the Chinese technology company ByteDance – not go to the United States. China appears to be contemplating various responses if the Trump Administration decides to “smash and grab” TikTok.

Through the Committee on Foreign Investment in the United States (CFIUS), the U.S. government has the power to force foreign firms to sell their U.S. businesses to American companies. It did so with Grindr, a gay dating app: Grindr's Chinese owners were compelled to sell the app due to national security concerns. Additionally, CFIUS is already investigating ByteDance, the Chinese tech firm that owns TikTok, over its 2016 acquisition of the app Musical.ly. You can learn more about CFIUS's powers in this review by the US law firm Cooley.

Mike Jones, co-founder and managing partner at Science Inc., told CNN: “The recent events around TikTok will change the way we look at companies that are based in China or have interest in expanding to China, which is often one of the most interesting markets to expand into from the US. The recent developments give us pause and change the way we think about company growth and development when the government could block them from crossing into markets.” 

--written by Alfa Alemayehu

Turkey amends Internet Law to Impose Stiff Requirements on "Social Network Providers"

The Turkish government has amended the existing Internet Law No. 5651 (on the Regulation of Broadcasts via the Internet and the Prevention of Crimes Committed Through such Broadcasts). According to VOA, after his daughter and her husband were insulted on social media, President Recep Tayyip Erdogan declared in July 2020 that social media are “immoral” and will be “completely banned or controlled.”

Turkey's new Internet law, which goes into effect October 1, requires social media companies--called "social network providers" under the new law--like TikTok and Facebook to register local offices in Turkey, subjecting them to local laws and tax regulations. Companies that the authorities find noncompliant face crippling restrictions on their bandwidth, as well as substantial fines issued to their mandatory offices in Turkey.

Social network platforms will also have to store data of Turkish users in Turkey (i.e., data localization). In addition, the social network providers that are accessed more than 1 million times daily are required (1) to have a notice-and-takedown procedure in which people can submit a notice of a violation of rights based on content on the network, and the company must remove the offending material within 48 hours, and (2) to publish transparency reports regarding the notices and takedowns. According to Lexology, "An administrative fine of TRY 5 million (approx. EUR 615,000) may be imposed for incompliance with takedown request handling and TRY 10 million may be imposed for incompliance with the reporting requirements." Finally, the new law recognizes that people in Turkey have a right to be forgotten and can request their names be removed from webpages as ordered by a court.

Critics of the new law fear that it will be used to censor political dissent. "If the social media platforms decide to establish offices in Turkey, then they will be compelled to remove the content . . . [subject to] so-called personal rights violations," Professor Yaman Akdeniz, co-founder of the Freedom of Expression Society, an advocacy group in Istanbul, told VOA.

Such attempts to curtail access to online media are not unprecedented in Turkey, where over 400,000 web pages have been banned and thousands of people prosecuted for their posts, according to VOA. In response, people in Turkey have used VPNs and proxies to circumvent Internet censorship. However, bandwidth throttling will likely defeat even VPNs as a means of reaching online content the government deems problematic. Devlet Bahçeli, the president of the MHP (Nationalist Movement Party), has already called for state intervention against the use of VPNs and proxies, to ensure “the clean use of social media.” He also promised his staunch support for any such law proposal in the Turkish Grand National Assembly, further dimming the hope for an accessible and free internet.

The fast-growing popularity of social media in Turkey has drawn people away from mainstream media for their news. Atilla Yesilada of Global Source Partners told VOA that one of Erdogan’s primary incentives for proposing the stringent legislation is to regain control of “the flow of information.” Professor Akdeniz also observed that news websites risk state censorship and manipulation over “the government’s past injustices, corruption, and irregularity allegations.” However, as the younger generation in Turkey has grown fond of social media, the attempt to restrain the internet may backfire and alienate those young voters from President Erdogan, warned Yesilada.

Turkey's new Internet law is modeled on Germany's controversial Network Enforcement Act, or NetzDG for short, as explained by EFF: The German "law mandates social media platforms with two million users to name a local representative authorized to act as a focal point for law enforcement and receive content take down requests from public authorities. The law mandates social media companies with more than two million German users to remove or disable content that appears to be “manifestly illegal” within 24 hours of having been alerted of the content."

In summary, Turkey's new Internet law has the following components:

  • Social network providers that are accessed in Turkey more than 1 million times daily must have a local office in Turkey.
  • They must have local storage of data of users in Turkey.
  • They must have a notice and takedown process that allows users to send a notice of unlawful material or violation of rights, after which the company has 48 hours to remove it.
  • They must process right-to-be-forgotten removal of content as ordered by a court. 
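
The threshold and duties above can be condensed into a short, purely illustrative Python sketch. The figures come from the article's summary; the function and its wording are hypothetical, and this is of course not legal advice:

```python
# Hypothetical sketch of the duties Turkey's new Internet law imposes on
# "social network providers," per the summary above. Not legal advice.

DAILY_ACCESS_THRESHOLD = 1_000_000  # daily accesses from Turkey
TAKEDOWN_DEADLINE_HOURS = 48

def obligations(daily_accesses: int) -> list:
    """Return the duties described for a provider with the given number
    of daily accesses from Turkey; below the threshold, none apply."""
    if daily_accesses <= DAILY_ACCESS_THRESHOLD:
        return []
    return [
        "maintain a local office in Turkey",
        "store data of users in Turkey locally",
        f"remove noticed content within {TAKEDOWN_DEADLINE_HOURS} hours",
        "process court-ordered right-to-be-forgotten removals",
    ]
```

The sketch makes the structure of the law plain: every obligation hinges on the single 1-million-accesses-per-day threshold, which is why only the largest platforms are targeted.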

For more about Turkey's controversial new law for social network providers, visit Lexology.

--written by Yucheng Cui
