The Free Internet Project

Project Safeguarding Elections

FBI Confirms Russian Government Hacked Voting Data of Two Florida Counties

In the Mueller Report, Special Counsel Robert Mueller III concluded that the “Russian government interfered in the 2016 presidential election in sweeping and systematic fashion.” [Mueller Report link] While exposing the details of these Russian efforts, the Mueller Report identified one state in particular—Florida—as a key target of the Russian hackers (at p. 51). In Volume I of the Mueller Report, the Special Counsel’s Office indicated that the FBI believed the Russian government had gained access to voting data possessed by “at least one Florida county government.” In recent days, however, Florida Governor Ron DeSantis and other top officials learned in a series of confidential briefings that the FBI and Department of Homeland Security believe two Florida counties were hacked prior to the 2016 election.

According to the Mueller Report, a Russian intelligence service known as the GRU sent spearphishing emails to over 120 email accounts used by Florida county officials responsible for administering the 2016 U.S. election. The spearphishing emails contained an attached document coded with malicious software (commonly referred to as a Trojan) that permitted the GRU to access the infected computer. In spite of the breaches, the FBI has not found any evidence of manipulation of voter data, vote counts, or election results in 2016.

Following the confidential briefings, a bipartisan chorus of officials and constituents demanded the identity of the counties that fell victim to Russian interference. In response, Gov. DeSantis acknowledged that he was required to accept the terms of a non-disclosure agreement (NDA) prior to being briefed by the FBI. The terms of the NDA reportedly prohibit DeSantis from confirming or repeating the confidential information to unauthorized individuals. Since publicizing this agreement, DeSantis has received significant criticism from an array of officials who believe the Governor should have pushed back on the request to agree to the NDA. However, the terms of a 2003 executive order require the FBI to obtain an NDA before people without security clearances, such as DeSantis and his staff, are briefed on sensitive or classified information.

Many advocates of government transparency have questioned DeSantis's legal authority to sign an NDA on the matter, given the broad reach of Florida's public records laws. Barbara Petersen, president of the First Amendment Foundation, said that a long line of past court rulings makes it clear that Florida officials cannot agree to keep a document confidential if it is shared with them, even if the official does not retain possession of the document. However, Petersen concedes that an NDA may be appropriate to protect confidential information given to DeSantis verbally.

With the next election approaching quickly, many Floridians are less worried about what happened in 2016 and more worried about how to prevent this meddling in the 2020 elections. Last year, the Florida Department of State distributed more than $14.5 million in cybersecurity grants for federal elections to the state's Supervisors of Elections. In addition, the Supervisors of Elections were given $1.9 million in state funding to purchase and install Albert network monitoring sensors. These sensors are used by election organizations to detect cyber threats and quickly alert officials when data may be at risk. Albert sensors were developed as a supplement to DHS's Einstein program, which focuses on detecting and blocking cyberattacks within federal agencies.
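In broad terms, sensors like Albert perform signature-based monitoring: they compare observed network traffic against indicators of known-malicious activity and alert officials when something matches. The short Python sketch below illustrates only that general idea; the indicator list, record format, and alert function are hypothetical and are not Albert's or Einstein's actual implementation.

```python
# A minimal, hypothetical illustration of signature-based network monitoring:
# compare connection records against known-bad indicators and alert on a match.
# This is NOT the Albert sensor's implementation, just the general concept.
from dataclasses import dataclass

# Hypothetical indicators of compromise (documentation-range example IPs).
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

@dataclass
class ConnectionRecord:
    src_ip: str
    dst_ip: str
    dst_port: int

def alert(record: ConnectionRecord) -> None:
    """Stand-in for notifying election officials or a security operations center."""
    print(f"ALERT: suspicious connection {record.src_ip} -> "
          f"{record.dst_ip}:{record.dst_port}")

def check_record(record: ConnectionRecord) -> bool:
    """Return True (and raise an alert) if either endpoint matches an indicator."""
    if record.src_ip in KNOWN_BAD_IPS or record.dst_ip in KNOWN_BAD_IPS:
        alert(record)
        return True
    return False

# Example: scan a small batch of hypothetical connection records.
traffic = [
    ConnectionRecord("192.0.2.10", "192.0.2.20", 443),
    ConnectionRecord("203.0.113.7", "192.0.2.30", 445),  # matches an indicator
]
for rec in traffic:
    check_record(rec)
```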

[Sources: Politico, Palm Beach Post, My Sun Coast, GovTech.com, Orlando Sentinel, Learn.cisecurity.org]

 

U.S. Cyber Command works with foreign nations to defend election security from Russian interference

On May 7, 2019, Maj. Gen. Charles L. Moore, the director of operations for Cyber Command, and other Cyber Command officers gave a rare briefing at its new Joint Operation Center.  According to the New York Times:  "American officials deployed last year to Ukraine, Macedonia and Montenegro, and United States Cyber Command officials said that their missions included defending elections and uncovering information about Russia’s newest abilities. Cyber Command will continue some of those partnerships and expand its work to other countries under attack from Russia, officials said Tuesday. The deployments, officials said, are meant to impose costs on Moscow, to make Russia’s attempts to mount online operations in Europe and elsewhere more difficult and to potentially bog down Moscow’s operatives and degrade their ability to interfere in American elections."

In an operation named "Synthetic Theology," Cyber Command took proactive measures to neutralize Russian efforts to interfere with the 2018 U.S. midterm elections by

  1. temporarily taking the Internet Research Agency, a Russian troll farm and source of disinformation, offline,
  2. sending direct messages to Russian operatives spreading disinformation to signal that they had been identified, and
  3. deploying U.S. officers in Ukraine, North Macedonia, and Montenegro to defend their networks and gather intelligence on Russian activities.  The commander of Cyber Command’s cyber national mission force, Brig. Gen. Timothy Haugh, said the U.S. would continue such joint efforts with foreign countries.  [sources: cyberscoop and NYT]

FBI Director Wray Warns of Russian Interference in 2020 U.S. Elections

On April 26, 2019, FBI Director Christopher Wray warned of Russian interference in the 2020 U.S. elections.  The threat is significant and constant.  “What has pretty much continued unabated is the use of social media, fake news, propaganda, false personas, etc. to spin us up, pit us against each other, to sow divisiveness and discord, to undermine America’s faith in democracy,” said Wray. “That is not just an election-cycle threat. It is pretty much a 365-day-a-year threat.” 

The FBI, Department of Homeland Security, and NSA have all allocated resources to counter the Russian threat. According to the New York Times: "In response to growing threats from Russia and other adversaries, the F.B.I. recently moved nearly 40 agents and analysts to the counterintelligence division, the senior bureau official said in an interview this month. Many of the agents will work on the Foreign Influence Task Force, a group of cyber, counterintelligence and criminal experts. Officials have made that task force, initially formed on a temporary basis before the midterm elections, permanent. The Department of Homeland Security made its midterm election task forces permanent, folding them into an election security initiative at their National Risk Management Center. And the National Security Agency and the United States Cyber Command have also expanded and made permanent their joint task force aimed at identifying, and stopping, Russian malign influence, officials said."

Mueller Report on Russian interference in 2016 election released

April 18, 2019 - The Report on the Investigation into Russian Interference in the 2016 Presidential Election by Special Counsel Robert S. Mueller, III was released, in redacted form, to the public today.  The Report concludes: "The Russian government interfered in the 2016 presidential election in sweeping and systematic fashion."

"As set forth in detail in this report, the Special Counsel's investigation established that Russia interfere~ in the 2016 presidential election principally through two operations. First, a Russian entity carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.

Second, a Russian intelligence service conducted computer-intrusion operations against entities, employees, and volunteers working on the Clinton Campaign and then released stolen documents. The investigation also identified numerous links between the Russian government and the Trump Campaign. Although the investigation established that the Russian government perceived it would benefit from a Trump presidency and worked to secure that outcome, and that the Campaign expected it would benefit electorally from information stolen and released through Russian efforts, the investigation did not establish that members of the Trump Campaign conspired or coordinated with the Russian government in its election interference activities."

Download the Mueller Report here

Singapore set to enact "fake news" law, Protection from Online Falsehoods and Manipulation Act

Singapore's government is set to enact a controversial bill, titled the Protection from Online Falsehoods and Manipulation Act, that would grant the government broad authority to order individuals and ISPs to remove "false statements of fact," aka "fake news," online.  The bill can be downloaded here.  The Parliament is expected to pass the bill next month, ahead of the upcoming elections.  Commentators and human rights organizations have expressed concern that the bill authorizes the government to decide what content is false and to order corrections and removals of such content. 

Section 7 of Part 2 of the law makes it a crime for a person to do an act "in or outside Singapore" "in order to communicate in Singapore a statement knowing or having reason to believe that--(a) it is a false statement of fact;" provided that the communication also meets one of the conditions set out in subsection (b). 

Section 8 makes it a crime to make or alter bots "with the intention of (a) communicating, by means of a bot, a false statement of fact in Singapore; or (b) enabling any other person to communicate, by means of a bot, a false statement of fact in Singapore."

Part 3 of the Act grants broad powers for "any Minister" to issue a "Part 3 Direction" requiring a person to correct or stop communicating the offending content.  If the person does not abide by the order, the Ministry may order ISPs to block access to the content.  

Likewise, Part 4 authorizes any Minister to issue "Part 4 Directions" to ISPs to comply with a "targeted correction direction," "disabling direction," or "general correction direction."  Both Parts 3 and 4 recognize the right to appeal the Directions to the High Court.  

Disinformation in the 2018 U.S. Midterm Elections: Identifying Misattributed Photos and Visual Propaganda against the October 2018 Migrant Caravan

In the final weeks of the 2018 midterm campaign, the GOP turnout effort increasingly focused on a caravan of migrant asylum seekers making their way to the United States’ southern border from Honduras.[1] To emphasize the danger posed to the United States, an intense misinformation campaign centered on misattributed images began. Conservative politicians and right-leaning media pushed out numerous false narratives about the caravan,[2] while right-wing Twitter posters circulated numerous misattributed images, copied and described in detail below.

Can the U.S. Government Prohibit Deepfake Videos Intended to Deceive Voters?

As the United States draws closer to the 2020 presidential election, lawmakers, policymakers, and activists are raising increasing concern about the possible deployment of "deepfake" videos that falsely depict political candidates, news broadcasters, or other people in order to deceive voters and influence their votes.  Deepfake videos rely on artificial intelligence (AI) programs that use neural networks to replicate a person's face from a database of images of that person.  The neural network can swap one person's face onto another's in a video (a technique now common in deepfake pornographic videos that falsely depict famous celebrities having sex), or it can alter a person's face or voice to make them appear to say or do things they, in fact, did not say or do.
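To make that mechanism more concrete, the sketch below shows, in PyTorch, the shared-encoder, two-decoder autoencoder design commonly used for face swapping: one encoder learns identity-independent facial structure, each decoder learns to render one person's face, and encoding a frame of person A then decoding with person B's decoder produces the swap. It is an illustrative simplification only; the layer sizes, names, and training details are assumptions, not the design of any particular deepfake tool.

```python
# Illustrative sketch of the shared-encoder / two-decoder autoencoder design
# commonly used for face-swap deepfakes. Sizes and names are hypothetical.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face from the latent vector, for one identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder captures pose and expression; each decoder renders one face.
encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# The swap: encode a frame of person A, decode with person B's decoder,
# yielding B's face with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
swapped = decoder_b(encoder(frame_of_a))
```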

For example, filmmaker Jordan Peele created the below deepfake video of President Obama as a public service announcement to warn voters of the use of deepfake videos in the next election.  The video shows how easily an unsuspecting viewer could be duped into believing the deepfake is a real video of President Obama. 

The Defense Advanced Research Projects Agency (DARPA) in the Department of Defense is working on "deepfake" detection technology, but it is not clear whether it will be ready for full deployment before the 2020 election.  Even if it is deployed, detecting deepfakes does not guarantee that the videos won't still affect voters during the time they are online and accessible to the public.    

Lawmakers have begun sounding the alarm about deepfake videos intended to interfere with U.S. elections. But can Congress restrict or outright prohibit deepfake videos in a way that does not run afoul of the First Amendment's guarantee of free speech?  It is a difficult question. Below I offer some preliminary thoughts.  

1. Deepfake videos from foreign sources outside the U.S. 

Congress has wide latitude to enact laws to protect U.S. elections from foreign interference.  Current federal election laws already prohibit a range of foreign activities related to U.S. elections, including "a foreign national ... mak[ing]... an expenditure ... for an electioneering communication" (i.e., "An electioneering communication is any broadcast, cable or satellite communication that refers to a clearly identified federal candidate, is publicly distributed within 30 days of a primary or 60 days of a general election and is targeted to the relevant electorate.").  Congress probably could prohibit foreign deepfake videos originating from abroad but disseminated in the U.S. if the foreign national knowingly and intentionally designed the video to deceive the public that the contents are true, in order to affect an election in the United States.  At least outside the U.S., foreign nationals do not have any First Amendment rights.  
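As a rough illustration of the timing element in that statutory definition, the Python sketch below checks whether a communication falls within the 30-day primary or 60-day general-election window. It is a simplified, hypothetical illustration; the function and example dates are hypothetical, and the actual legal test involves additional elements (medium, targeting, and so on).

```python
# Illustrative only: a simplified check of the electioneering-communication
# time windows (30 days before a primary, 60 days before a general election).
# The real statutory definition has additional elements beyond timing.
from datetime import date, timedelta

def within_electioneering_window(communication_date: date,
                                 primary_date: date,
                                 general_date: date) -> bool:
    """Return True if the communication falls inside either statutory window."""
    in_primary_window = (
        timedelta(0) <= primary_date - communication_date <= timedelta(days=30)
    )
    in_general_window = (
        timedelta(0) <= general_date - communication_date <= timedelta(days=60)
    )
    return in_primary_window or in_general_window

# Hypothetical 2020 dates, for illustration.
print(within_electioneering_window(date(2020, 10, 1),
                                   primary_date=date(2020, 3, 17),
                                   general_date=date(2020, 11, 3)))  # True
```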

2. Deepfake videos from sources within the U.S.

The more difficult question is whether deepfake videos that are created by citizens or legal residents of the United States could be restricted or prohibited, consistent with the First Amendment.  Imagine Congress enacted the following law:  "It shall be unlawful for any person to knowingly create and disseminate to the public, in connection with a federal election, a deepfake video falsely depicting a political candidate, reporter, or other public figure, with the intent to influence the election by deceiving the public that such video is a truthful or accurate depiction of such person."  Would this law survive First Amendment scrutiny? 

Potentially, yes.  The Supreme Court has recognized that fraud, such as in advertising, can be proscribed as a category of "unprotected speech."  See United States v. Alvarez, 567 U.S. 709, 717 (2012) (citing Virginia Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 771 (1976); Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948)).  In Illinois ex rel. Madigan v. Telemarketing Assoc., Inc., 538 U.S. 600 (2003), the Court unanimously ruled that a state fraud claim may be maintained against fundraisers for making false or misleading statements intended to deceive donors on how their donations will be used.  Writing for the Court, Justice Ginsburg explained:

  • The First Amendment protects the right to engage in charitable solicitation. See Schaumburg, 444 U.S., at 632, 100 S.Ct. 826 (“charitable appeals for funds ... involve a variety of speech interests—communication of information, the dissemination and propagation of views and ideas, and the advocacy of causes—that are within the protection of the First Amendment”); Riley, 487 U.S., at 788–789, 108 S.Ct. 2667. But the First Amendment does not shield fraud. See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190, 68 S.Ct. 591, 92 L.Ed. 628 (1948) (the government's power “to protect people against fraud” has “always been recognized in this country and is firmly established”); Gertz v. Robert Welch, Inc., 418 U.S. 323, 340, 94 S.Ct. 2997, 41 L.Ed.2d 789 (1974) (the “intentional lie” is “no essential part of any exposition of ideas” (internal quotation marks omitted)). Like other forms of public deception, fraudulent charitable solicitation is unprotected speech. See, e.g., Schneider v. State (Town of Irvington), 308 U.S. 147, 164, 60 S.Ct. 146, 84 L.Ed. 155 (1939) (“Frauds,” including “fraudulent appeals ... made in the name of charity and religion,” may be “denounced as offenses and punished by law.”); Donaldson, 333 U.S., at 192, 68 S.Ct. 591 (“A contention cannot be seriously considered which assumes that freedom of the press includes a right to raise money to promote circulation by deception of the public.”).

By analogy, one can argue that the proposed federal law can prohibit persons from making deepfake videos intended to deceive voters about the political candidates in an election.  

On the other hand, the Supreme Court during Chief Justice Roberts' tenure has been very protective of speech, in a variety of cases finding unconstitutional federal laws that prohibited (i) virtual child pornography that depicted sex with minors via computer-generated technology, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002); (ii) falsely claiming to have received a congressionally authorized military medal, United States v. Alvarez, 567 U.S. 709 (2012); (iii) depictions of animal cruelty, United States v. Stevens, 559 U.S. 460 (2010); and (iv) independent expenditures by corporations to create speech expressly advocating the election or defeat of a political candidate, Citizens United v. FEC, 558 U.S. 310 (2010).  

These latter cases did not involve defrauding or deceiving the public, however.  The potential harm of a deepfake video of or about a political candidate, intended to deceive the public, is not merely the falsehood (which was the only harm at issue with the Stolen Valor Act in Alvarez, 567 U.S. at 719).  It is also the potential impact the falsehood may have on voters who cast their ballots in the election--and thus on their constitutional right to vote.  Given the fundamental importance of the right to vote, the Court has recognized that states can prohibit campaigning, such as campaign posters, near polling places, consistent with the First Amendment. See Burson v. Freeman, 504 U.S. 191, 209-10 (1992).  

Yet even if Congress can prohibit fraudulent deepfake videos, some deepfake creators may argue that they intended only to make a parody, not anything deceptive.  The First Amendment would likely protect parodies. Assuming parody deepfakes must be permitted, wouldn't that open a Pandora's box, making it very difficult to differentiate between fraudulent and parody deepfakes--in which case the Court's overbreadth doctrine might render a prohibition unconstitutional?  It raises at least a potential concern.  If Congress drafted a clear exemption for parody deepfakes, perhaps that would mitigate the problem.  However, even an effective parody might deceive some audiences, who might believe it to be accurate or real.  Just imagine someone watching a video with no audio, only closed captioning.  Or imagine that the video stated, only at the end, that it was a parody, but audiences did not watch the entire video or the ending disclaimer.  

Of course, tech companies such as Facebook, Twitter, and YouTube are not state actors, so, whatever their own user policies provide, they can restrict deepfake videos without First Amendment scrutiny.  What a federal criminal law, as proposed above, would add is greater potential deterrence against the dissemination of fraudulent deepfake videos in the first instance.

[by Prof. Edward Lee]

 

The Free Internet Project Announces New Project on Election Security

OVERVIEW 

The Internet has been championed as an instrument to promote democracy, in part due to its open and decentralized nature that enables millions to organize and spread their views, including dissent.  Over the past few years, however, many fear that the Internet is being “weaponized” by governments, foreign and domestic groups, and even by large tech companies, in ways that threaten democracy, particularly free and fair elections—which are the bedrock of democracy.*  To analyze and address this problem, and to provide people with objective analysis of and proposed solutions to the issues countries face in safeguarding elections from interference, the nonprofit The Free Internet Project announces the launch of Project Safeguarding Elections (PSE).  PSE has two main objectives:

1.  To track, report, and analyze major incidents of and responses to election interference around the world on a dedicated blog or website.  At least five types of issues will be covered:

  • Fake news: the spread of disinformation and false information online to interfere with an election;
  • Hacking of political candidates: the hacking of emails and communications of political parties and candidates;
  • Hacking of voting machines: the hacking of voting machines and tabulation of results;
  • Fake results: the spread of false election results to undermine the true result; and
  • Duties of corporations and governments: the roles and responsibilities (if any) of the law, governments, and companies to address these problems.

2.  To convene experts from different relevant fields to provide opinion pieces and proposed best practices to address these issues around the world.

*See, e.g., Nicholas Weaver, Our Government Has Weaponized the Internet. Here’s How They Did It, Wired, Nov. 13, 2013; Tim Berners-Lee, Tim Berners-Lee is fighting for the web’s future, and he wants you to join him, Quartz, March 12, 2018.