The Free Internet Project

The Free Internet Project Announces New Project on Election Security

OVERVIEW 

The Internet has been championed as an instrument to promote democracy, in part due to its open and decentralized nature that enables millions to organize and spread their views, including dissent.  Over the past few years, however, many fear that the Internet is being "weaponized" by governments, foreign and domestic groups, and even by large tech companies, in ways that threaten democracy, particularly free and fair elections, which are the bedrock of democracy.*  The Free Internet Project is undertaking a new initiative to analyze and address this problem and to provide people with objective analysis of, and proposed solutions to, the issues countries face in safeguarding elections from interference.  To that end, the nonprofit announces the launch of Project Safeguarding Elections (PSE).  PSE has two main objectives:

1.  To track, report, and analyze major incidents of and responses to election interference around the world on a dedicated blog or website.  At least five types of issues will be covered:

  • Fake news: the spread of disinformation and false information online to interfere with an election;
  • Hacking of political candidates: the hacking of emails and communications of political parties and candidates;
  • Hacking of voting machines: the hacking of voting machines and the tabulation of results;
  • Fake results: the spread of false election results to undermine the true result; and
  • Duties of corporations and governments: the roles and responsibilities (if any) of the law, governments, and companies to address these problems.

2.  To convene experts from different relevant fields to provide opinion pieces and proposed best practices to address these issues around the world.

*See, e.g., Nicholas Weaver, Our Government Has Weaponized the Internet. Here's How They Did It, Wired, Nov. 13, 2013; Tim Berners-Lee, Tim Berners-Lee is fighting for the web's future, and he wants you to join him, Quartz, March 12, 2018.

New study by Alto Data Analytics casts doubt on effectiveness of fact-checking to combat published fake news

As "fake news" continues to plague the digital socio-political space, a new form of investigative reporter has risen to combat this disinformation: the fact-checker. Generally, fact-checkers are defined as journalists working to verify digital content by performing additional research on the content's claims. Whenever a fact-checker uncovers a falsity masquerading as fact (aka fake news), they rebut this deceptive representation through articles, blog posts, and other explanatory comments that illustrate how the statement misleads the public. [More from Reuters] As of 2019, the number of fact-checking outlets across the globe has grown to 188 in 60 countries, according to the Reporters' Lab.

But recent research reveals that this upsurge in fact-checkers may not have that great an impact on defeating digital disinformation. From December 2018 to the European Parliamentary elections in May 2019, the big-data firm Alto Data Analytics collected socio-political debate data from across a variety of digital media platforms. The survey served as one of the first studies assessing the success of fact-checking efforts.  Alto's study examined five European countries: France, Germany, Italy, Poland, and Spain. Focusing on verified fact-checkers in each of these countries, Alto's cloud-based Alto Analyzer software tracked how users interacted with these trustworthy entities in digital space. Basing the experiment exclusively on Twitter interactions, the Analyzer platform recorded how users interacted with the fact-checkers' tweets through retweets, replies, and mentions. From this information, the data scientists calculated the fact-checkers' effectiveness in reaching the communities most affected by disinformation.

Despite its limitation to five select countries, the study yielded discouraging results. In total, the fact-checking outlets in these countries accounted for only between 0.1% and 0.3% of total Twitter activity during the period.  Across the five countries in the study, fact-checkers penetrated least successfully in Germany, followed closely by Italy. Conversely, fact-checkers experienced the greatest distributive effect in France. Fact-checkers' digital presence tended to reach only a few online communities.  The study found that "fact-checkers . . . [were] unable to significantly penetrate the communities which tend to be exposed most frequently to disinformation content."  In other words, fact-checking efforts reached few individuals, and the ones they did reach were other fact-checkers.  Alto Data notes, however, that its analysis "doesn't show that the fact-checkers are not effective in the broader socio-political conversation." But "the reach of fact-checkers is limited, often to those digital communities which are not targets for or are propagating disinformation."  [Alto Data study]

Alto proposed ideas for future research on this topic: expanding the study beyond one social media site; investigating differences in effectiveness across various types of digital content (memes, videos, pictures, and articles); taking search engine comparisons into account; and providing causal explanations for the differences in penetration between countries.

Research studies in the United States have also produced results casting doubt on the effectiveness of fact-checkers. A Tulane University study found that citizens were more likely to alter their views in response to ideologically consistent media outlets than to neutral fact-checking entities. Some studies even suggest that encounters with corrective fact-checking pieces have undesired psychological effects on content consumers, hardening individuals' partisan positions and perceptions instead of dispelling them.

These studies suggest that it's incredibly difficult to "unring the bell" of fake news, so to speak.  That is why the proactive efforts of social media companies and online sites to minimize the spread of blatantly fake news related to elections may be the only hope of minimizing its deleterious effects on voters.  

Singapore set to enact "fake news" law, Protection from Online Falsehoods and Manipulation Act

Singapore's government is set to enact a controversial bill titled the Protection from Online Falsehoods and Manipulation Act, which would grant the government broad authority to order individuals and ISPs to remove "false statements of fact," aka "fake news," online.  The bill can be downloaded here.  The Parliament is expected to pass the bill next month, ahead of the upcoming elections.  Commentators and human rights organizations have expressed concern that the bill authorizes the government to decide what content is false and to order corrections and removals of such content.

Section 7 of Part 2 of the law makes it a crime to do an act "in or outside Singapore" "in order to communicate in Singapore a statement knowing or having reason to believe that--(a) it is a false statement of fact," provided that the communication meets one of the conditions in subsection (b).

Section 8 makes it a crime to make or alter bots "with the intention of (a) communicating, by means of a bot, a false statement of fact in Singapore; or (b) enabling any other person to communicate, by means of a bot, a false statement of fact in Singapore."

Part 3 of the Act grants broad powers to "any Minister" to issue a "Part 3 Direction" requiring correction of, or stopping communication of, the offending content.  If the person does not abide by the order, the Minister may order ISPs to block access to the content.

Likewise, Part 4 authorizes any Minister to issue "Part 4 Directions" requiring ISPs to comply with a "targeted correction direction," "disabling direction," or "general correction direction."  Both Parts 3 and 4 recognize the right to appeal the Directions to the High Court.

Disinformation in the 2018 U.S. Midterm Elections: Identifying Misattributed Photos and Visual Propaganda against the October 2018 Migrant Caravan

In the final weeks of the 2018 midterm campaign, the GOP turnout effort increasingly focused on a caravan of migrant asylum seekers making their way to the United States' southern border from Honduras.[1] To emphasize the danger posed to the United States, an intense misinformation campaign centered on misattributed images began. Conservative politicians and right-leaning media pushed out numerous false narratives about the caravan,[2] while right-wing Twitter posters circulated numerous misattributed images, copied and described in detail below.

Can the U.S. Government Prohibit Deepfake Videos Intended to Deceive Voters?

As the United States draws closer to the 2020 presidential election, lawmakers, policymakers, and activists are raising increasing concerns about the possible deployment of "deepfake" videos that falsely depict political candidates, news broadcasters, or other people in order to deceive voters and influence their votes.  Deepfake videos rely on artificial intelligence (AI) programs that use neural networks, trained on a database of images of the person being depicted, to replicate that person's face.  The neural network can swap the faces of different people in a video (a technique now notorious in deepfake pornographic videos that falsely depict famous celebrities having sex) or alter a person's face or voice to make them appear to say or do things they, in fact, did not say or do.

For example, filmmaker Jordan Peele created the below deepfake video of President Obama as a public service announcement to warn voters of the use of deepfake videos in the next election.  The video shows how easily an unsuspecting viewer could be duped into believing the deepfake is a real video of President Obama. 

The Defense Advanced Research Projects Agency (DARPA) in the Department of Defense is working on "deepfake" detection technology, but it is not clear whether it will be ready for full deployment before the 2020 election.  Even if it is deployed, detection of deepfakes doesn't necessarily guarantee that deepfakes won't still affect voters during the time the videos are online and accessible to the public.

Lawmakers have begun sounding the alarm about deepfake videos intended to interfere with U.S. elections. But can Congress restrict or outright prohibit deepfake videos in a way that does not run afoul of the First Amendment's guarantee of freedom of speech?  It's a difficult question. Below I offer some preliminary thoughts.

1. Deepfake videos from foreign sources outside the U.S. 

Congress has wide latitude to enact laws to protect U.S. elections from foreign interference.  Current federal election laws already prohibit a range of foreign activities related to U.S. elections, including "a foreign national ... mak[ing] ... an expenditure ... for an electioneering communication" (i.e., "An electioneering communication is any broadcast, cable or satellite communication that refers to a clearly identified federal candidate, is publicly distributed within 30 days of a primary or 60 days of a general election and is targeted to the relevant electorate.").  Congress probably could prohibit deepfake videos originating from abroad but disseminated in the U.S. if the foreign national knowingly and intentionally designed the video to deceive the public that its contents are true, in order to affect an election in the United States.  At least outside the U.S., foreign nationals do not have any First Amendment rights.

2. Deepfake videos from sources within the U.S.

The more difficult question is whether deepfake videos that are created by citizens or legal residents of the United States could be restricted or prohibited, consistent with the First Amendment.  Imagine Congress enacted the following law:  "It shall be unlawful for any person to knowingly create and disseminate to the public, in connection with a federal election, a deepfake video falsely depicting a political candidate, reporter, or other public figure, with the intent to influence the election by deceiving the public that such video is a truthful or accurate depiction of such person."  Would this law survive First Amendment scrutiny? 

Potentially, yes.  The Supreme Court has recognized that fraud, such as in advertising, can be proscribed as a category of "unprotected speech."  See United States v. Alvarez, 567 U.S. 709, 717 (2012) (citing Virginia Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 771 (1976); Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948)).  In Illinois ex rel. Madigan v. Telemarketing Assocs., Inc., 538 U.S. 600 (2003), the Court unanimously ruled that a state fraud claim may be maintained against fundraisers for making false or misleading statements intended to deceive donors about how their donations will be used.  Writing for the Court, Justice Ginsburg explained:

  • The First Amendment protects the right to engage in charitable solicitation. See Schaumburg, 444 U.S., at 632, 100 S.Ct. 826 (“charitable appeals for funds ... involve a variety of speech interests—communication of information, the dissemination and propagation of views and ideas, and the advocacy of causes—that are within the protection of the First Amendment”); Riley, 487 U.S., at 788–789, 108 S.Ct. 2667. But the First Amendment does not shield fraud. See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190, 68 S.Ct. 591, 92 L.Ed. 628 (1948) (the government's power “to protect people against fraud” has “always been recognized in this country and is firmly established”); Gertz v. Robert Welch, Inc., 418 U.S. 323, 340, 94 S.Ct. 2997, 41 L.Ed.2d 789 (1974) (the “intentional lie” is “no essential part of any exposition of ideas” (internal quotation marks omitted)). Like other forms of public deception, fraudulent charitable solicitation is unprotected speech. See, e.g., Schneider v. State (Town of Irvington), 308 U.S. 147, 164, 60 S.Ct. 146, 84 L.Ed. 155 (1939) (“Frauds,” including “fraudulent appeals ... made in the name of charity and religion,” may be “denounced as offenses and punished by law.”); Donaldson, 333 U.S., at 192, 68 S.Ct. 591 (“A contention cannot be seriously considered which assumes that freedom of the press includes a right to raise money to promote circulation by deception of the public.”).

By analogy, one can argue that the proposed federal law can prohibit persons from making deepfake videos intended to deceive voters about the political candidates in an election.

On the other hand, the Supreme Court in recent decades has been very protective of speech in a variety of cases, finding unconstitutional federal laws that criminalized (i) virtual child pornography that depicted sex with minors via computer-generated technology, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002); (ii) a false statement of having received a military medal authorized by Congress, United States v. Alvarez, 567 U.S. 709 (2012); (iii) depictions of animal cruelty, United States v. Stevens, 559 U.S. 460 (2010); and (iv) independent expenditures by corporations to create speech expressly advocating the election or defeat of a political candidate, Citizens United v. FEC, 558 U.S. 310 (2010).

These latter cases did not involve defrauding or deceiving the public, however.  The potential harm with a deepfake video of or about a political candidate, intended to deceive the public, is not merely the falsehood itself (the only harm at issue with the Stolen Valor Act in Alvarez, 567 U.S. at 719).  It is also the potential impact the falsehood may have on voters who cast their ballot in the election--and thus on their constitutional right to vote.  Given the fundamental importance of the right to vote, the Court has recognized that states can prohibit campaigning, such as campaign posters, near polling places, consistent with the First Amendment. See Burson v. Freeman, 504 U.S. 191, 209-10 (1992).

Yet even if Congress can prohibit fraudulent deepfake videos, some deepfake creators may attempt to argue that they only intended to make a parody, not anything deceptive.  The First Amendment would likely protect parodies. Assuming parody deepfakes must be permitted, wouldn't that open a Pandora's box, making it very difficult to differentiate between fraudulent and parody deepfakes? In that case, the Court's overbreadth doctrine might render a prohibition unconstitutional.  It raises at least a potential concern.  If Congress drafted a clear exemption for parody deepfakes, perhaps that would mitigate the problem.  However, even an effective parody might deceive some audiences, who might believe it to be accurate or real.  Just imagine someone watching a video with closed-captioning but without audio.  Or imagine that the video stated, only at the end, that it was a parody, but audiences did not watch the entire video or the ending disclaimer.

Of course, tech companies such as Facebook, Twitter, and YouTube are not state actors, so, whatever their own policies for users, they can restrict deepfake videos without First Amendment scrutiny.  What a federal criminal law, as proposed above, adds is the greater potential deterrence against dissemination of fraudulent deepfake videos in the first instance.

[by Prof. Edward Lee]

 
