The Free Internet Project

Review of EU's Rapid Alert System to protect elections from disinformation and interference: did it work?

The New York Times has a front-page article critiquing the EU's new Rapid Alert System (RAS), which was established to identify disinformation related to elections and to issue rapid alerts warning voters in the EU.  Any EU member country can notify the EU office of possible election disinformation.  The Rapid Alert System was set up as part of the EU's Action Plan against disinformation, which followed on the heels of the East StratCom Task Force, a unit tasked with countering Russian disinformation. The NYT article reports that some in Brussels, where the EU's disinformation analysts are stationed, jokingly describe the Rapid Alert System as follows: "It's not rapid. There are no alerts. And there's no system."  The article describes one incident in which EU officials identified suspicious tweets about an Austrian political scandal, possibly from Russian trolls, but the officials, for whatever reason, did not issue an alert.  In fact, the office never issued any alerts during the last election season, although officials claim they were successful in protecting the EU elections from interference.  

One expert quoted by the NYT, Jakub Janda, the executive director of a Czech policy group, described the Rapid Alert System as a failure: "It's a Potemkin village. People in the know, they don't take it seriously."  Few countries have contributed to the RAS, although it is not clear whether the lack of submissions stems from members' low opinion of the RAS or simply from a lack of problematic cases of election disinformation.  EU officials defend the system as the first of its kind and say the office is deliberately cautious about issuing alerts.  Presumably, too many alerts would undermine their effectiveness.  

Although the NYT article provides helpful information about the RAS, it seems too early to tell how well the system operates after just one election.  The fact that no alerts were issued during the past election is not, in itself, evidence that the system failed.  The EU officials' caution in issuing alerts seems wise, as alerts would likely lose their effectiveness if they were issued for every single piece of election disinformation.  In some cases, an alert might even draw more viewers to a piece of disinformation, a phenomenon known as the Streisand effect. More generally, the experience shows the complex set of issues regulators face in trying to ensure the integrity of elections.  The EU does not take the same broad approach to free speech as the U.S. does, so EU regulators have more authority to combat disinformation.  Yet, even with more expansive power, it is not clear how EU regulators can best fight election disinformation online, where posts and ads can affect people as soon as they are viewed.  

Disinformation in the 2018 U.S. Midterm Elections: Identifying Misattributed Photos and Visual Propaganda against the October 2018 Migrant Caravan

In the final weeks of the 2018 midterm campaign, the GOP turnout effort increasingly focused on a caravan of migrant asylum seekers making their way to the United States’ southern border from Honduras.[1] To emphasize the supposed danger posed to the United States, an intense misinformation campaign centered on misattributed images began. Conservative politicians and right-leaning media pushed out numerous false narratives about the caravan,[2] while right-wing Twitter posters circulated numerous misattributed images, copied and described in detail below.

Can the U.S. Government Prohibit Deepfake Videos Intended to Deceive Voters?

As the United States nears the 2020 presidential election, lawmakers, policymakers, and activists are raising increasing concern about the possible deployment of "deepfake" videos that falsely depict political candidates, news broadcasters, or other people in order to deceive voters and influence their votes.  Deepfake videos rely on artificial intelligence (AI) programs that use neural networks to replicate faces, drawing on a database of images of the person being depicted.  The neural network can swap the faces of different people in videos (a technique now popular in deepfake pornographic videos that falsely depict famous celebrities having sex) or alter the face or voice of a person to make them appear to say or do things they, in fact, did not say or do.

For example, filmmaker Jordan Peele created the below deepfake video of President Obama as a public service announcement to warn voters of the use of deepfake videos in the next election.  The video shows how easily an unsuspecting viewer could be duped into believing the deepfake is a real video of President Obama. 

The Defense Advanced Research Projects Agency (DARPA) in the Department of Defense is working on "deepfake" detection technology, but it is not clear whether it will be ready for full deployment before the 2020 election.  Even if it is deployed, detection of deepfakes doesn't necessarily guarantee that deepfakes won't still affect voters during the time the videos are online and accessible to the public.    

Lawmakers have begun sounding the alarm about deepfake videos intended to interfere with U.S. elections. But can Congress restrict or outright prohibit deepfake videos in a way that does not run afoul of the First Amendment's guarantee of free speech?  It is a difficult question. Below I offer some preliminary thoughts.  

1. Deepfake videos from foreign sources outside the U.S. 

Congress has wide latitude to enact laws to protect U.S. elections from foreign interference.  Current federal election laws already prohibit a range of foreign activities related to U.S. elections, including "a foreign national ... mak[ing]... an expenditure ... for an electioneering communication" (i.e., "An electioneering communication is any broadcast, cable or satellite communication that refers to a clearly identified federal candidate, is publicly distributed within 30 days of a primary or 60 days of a general election and is targeted to the relevant electorate.").  Congress probably could prohibit deepfake videos originating from abroad but disseminated in the U.S. if the foreign national knowingly and intentionally designed the video to deceive the public that its contents are true, in order to affect an election in the United States.  At least outside the U.S., foreign nationals do not have any First Amendment rights.  

2. Deepfake videos from sources within the U.S.

The more difficult question is whether deepfake videos that are created by citizens or legal residents of the United States could be restricted or prohibited, consistent with the First Amendment.  Imagine Congress enacted the following law:  "It shall be unlawful for any person to knowingly create and disseminate to the public, in connection with a federal election, a deepfake video falsely depicting a political candidate, reporter, or other public figure, with the intent to influence the election by deceiving the public that such video is a truthful or accurate depiction of such person."  Would this law survive First Amendment scrutiny? 

Potentially, yes.  The Supreme Court has recognized that fraud, such as in advertising, can be proscribed as a category of "unprotected speech."  See United States v. Alvarez, 567 U.S. 709, 717 (2012) (citing Virginia Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 771 (1976); Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948)).  In Illinois ex rel. Madigan v. Telemarketing Assoc., Inc., 538 U.S. 600 (2003), the Court unanimously held that a state fraud claim may be maintained against fundraisers for making false or misleading statements intended to deceive donors about how their donations will be used.  Writing for the Court, Justice Ginsburg explained:

  • The First Amendment protects the right to engage in charitable solicitation. See Schaumburg, 444 U.S., at 632, 100 S.Ct. 826 (“charitable appeals for funds ... involve a variety of speech interests—communication of information, the dissemination and propagation of views and ideas, and the advocacy of causes—that are within the protection of the First Amendment”); Riley, 487 U.S., at 788–789, 108 S.Ct. 2667. But the First Amendment does not shield fraud. See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190, 68 S.Ct. 591, 92 L.Ed. 628 (1948) (the government's power “to protect people against fraud” has “always been recognized in this country and is firmly established”); Gertz v. Robert Welch, Inc., 418 U.S. 323, 340, 94 S.Ct. 2997, 41 L.Ed.2d 789 (1974) (the “intentional lie” is “no essential part of any exposition of ideas” (internal quotation marks omitted)). Like other forms of public deception, fraudulent charitable solicitation is unprotected speech. See, e.g., Schneider v. State (Town of Irvington), 308 U.S. 147, 164, 60 S.Ct. 146, 84 L.Ed. 155 (1939) (“Frauds,” including “fraudulent appeals ... made in the name of charity and religion,” may be “denounced as offenses and punished by law.”); Donaldson, 333 U.S., at 192, 68 S.Ct. 591 (“A contention cannot be seriously considered which assumes that freedom of the press includes a right to raise money to promote circulation by deception of the public.”).

By analogy, one can argue that the proposed federal law could prohibit persons who make deepfake videos intended to deceive voters about the political candidates in an election.  

On the other hand, the Supreme Court during Chief Justice Roberts' tenure has been very protective of speech, finding unconstitutional in a variety of cases federal laws that prohibited (i) virtual child pornography depicting sex with minors via computer-generated imagery, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002); (ii) a false claim of having received the Congressional Medal of Honor, United States v. Alvarez, 567 U.S. 709 (2012); (iii) depictions of animal cruelty, United States v. Stevens, 559 U.S. 460 (2010); and (iv) independent expenditures by corporations to create speech expressly advocating the election or defeat of a political candidate, Citizens United v. FEC, 558 U.S. 310 (2010).  

These latter cases did not involve defrauding or deceiving the public, however.  The potential harm of a deepfake video of or about a political candidate, intended to deceive the public, is not merely the falsehood (which was the only harm at issue with the Stolen Valor Act in Alvarez, 567 U.S. at 719).  It is also the potential impact the falsehood may have on voters who cast their ballots in the election, and thus on their constitutional right to vote.  Given the fundamental importance of the right to vote, the Court has recognized that states can prohibit campaigning, such as campaign posters, near polling places, consistent with the First Amendment. See Burson v. Freeman, 504 U.S. 191, 209-10 (1992).  

Yet even if Congress can prohibit fraudulent deepfake videos, some deepfake creators may argue that they intended only a parody, not a deception.  The First Amendment would likely protect parodies.  Assuming parody deepfakes must be permitted, wouldn't that open a Pandora's box, making it very difficult to differentiate between fraudulent and parody deepfakes, in which case the Court's overbreadth doctrine might render a prohibition unconstitutional?  It raises at least a potential concern.  If Congress drafted a clear exemption for parody deepfakes, perhaps that would mitigate the problem.  However, even an effective parody might deceive some audiences, who might believe it to be accurate or real.  Just imagine someone watching a video without audio but with closed-captioning.  Or imagine that the video stated, only at the end, that it was a parody, but audiences did not watch the entire video or the ending disclaimer.  

Of course, tech companies such as Facebook, Twitter, and YouTube are not state actors, so, under their own user policies, they can restrict deepfake videos without First Amendment scrutiny.  What a federal criminal law, as proposed above, would add is greater deterrence against the dissemination of fraudulent deepfake videos in the first instance.

[by Prof. Edward Lee]
