As the United States nears the 2020 presidential election, lawmakers, policymakers, and activists are raising increasing concern about the possible deployment of "deepfake" videos that falsely depict political candidates, news broadcasters, or other people in order to deceive voters and influence their votes. Deepfake videos rely on artificial intelligence (AI) programs that use neural networks trained on a database of images of the person being depicted. The neural network can swap the faces of different people in videos (a technique now notorious in deepfake pornographic videos that falsely depict celebrities having sex) or alter the face or voice of a person to make them appear to say or do things they, in fact, did not say or do.
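At a high level, the face-swap technique described above pairs a shared encoder with one decoder per person: encode person A's frame into a compressed representation of pose and expression, then reconstruct it with person B's decoder. The following is a heavily simplified, hypothetical numpy sketch of that pipeline; real deepfake systems use deep convolutional networks whose decoder weights are learned from thousands of images, and every name and dimension here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # flattened grayscale face crop (illustrative size)
LATENT_DIM = 128     # compressed pose/expression representation

# A shared encoder maps any face into a common latent space.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# One decoder per identity reconstructs a face *of that person* from the
# shared latent code. In a real deepfake these weights are learned from a
# large database of images of each person; here they are random stand-ins.
W_dec_a = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face image into the shared latent representation."""
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """Reconstruct a face from a latent code using one identity's decoder."""
    return W_dec @ latent

def face_swap(source_frame: np.ndarray) -> np.ndarray:
    """Encode person A's frame, then decode with person B's decoder:
    the output keeps A's pose and expression but B's appearance."""
    return decode(encode(source_frame), W_dec_b)

frame_of_a = rng.normal(size=FACE_DIM)   # stand-in for one video frame
swapped = face_swap(frame_of_a)
```

Run frame by frame over a video, this substitution is what produces footage of a person doing things they never did.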
For example, filmmaker Jordan Peele created the below deepfake video of President Obama as a public service announcement to warn voters of the use of deepfake videos in the next election. The video shows how easily an unsuspecting viewer could be duped into believing the deepfake is a real video of President Obama.
The Defense Advanced Research Projects Agency (DARPA) in the Department of Defense is working on deepfake detection technology, but it is not clear whether it will be ready for full deployment before the 2020 election. Even if it is deployed, detection of deepfakes doesn't guarantee that deepfakes won't still affect voters during the time the videos are online and accessible to the public.
Lawmakers have begun sounding the alarm about deepfake videos intended to interfere with U.S. elections. But can Congress restrict or outright prohibit deepfake videos in a way that does not run afoul of the First Amendment's guarantee of free speech? Difficult question. Below I offer some preliminary thoughts.
1. Deepfake videos from foreign sources outside the U.S.
Congress has wide latitude to enact laws to protect U.S. elections from foreign interference. Current federal election laws already prohibit a range of foreign activities related to U.S. elections, including "a foreign national ... mak[ing] ... an expenditure ... for an electioneering communication" (i.e., "An electioneering communication is any broadcast, cable or satellite communication that refers to a clearly identified federal candidate, is publicly distributed within 30 days of a primary or 60 days of a general election and is targeted to the relevant electorate."). Congress probably could prohibit deepfake videos originating from abroad but disseminated in the U.S. if the foreign national knowingly and intentionally designed the video to deceive the public that its contents are true, in order to affect an election in the United States. At least outside the U.S., foreign nationals do not have any First Amendment rights.
2. Deepfake videos from sources within the U.S.
The more difficult question is whether deepfake videos that are created by citizens or legal residents of the United States could be restricted or prohibited, consistent with the First Amendment. Imagine Congress enacted the following law: "It shall be unlawful for any person to knowingly create and disseminate to the public, in connection with a federal election, a deepfake video falsely depicting a political candidate, reporter, or other public figure, with the intent to influence the election by deceiving the public that such video is a truthful or accurate depiction of such person." Would this law survive First Amendment scrutiny?
Potentially, yes. The Supreme Court has recognized that fraud, such as in advertising, can be proscribed as a category of "unprotected speech." See United States v. Alvarez, 567 U.S. 709, 717 (2012) (citing Virginia Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 771 (1976); Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948)). In Illinois ex rel. Madigan v. Telemarketing Assoc., Inc., 538 U.S. 600 (2003), the Court unanimously ruled that a state fraud claim may be maintained against fundraisers for making false or misleading statements intended to deceive donors about how their donations will be used. Writing for the Court, Justice Ginsburg explained:
The First Amendment protects the right to engage in charitable solicitation. See Schaumburg, 444 U.S., at 632, 100 S.Ct. 826 (“charitable appeals for funds ... involve a variety of speech interests—communication of information, the dissemination and propagation of views and ideas, and the advocacy of causes—that are within the protection of the First Amendment”); Riley, 487 U.S., at 788–789, 108 S.Ct. 2667. But the First Amendment does not shield fraud. See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190, 68 S.Ct. 591, 92 L.Ed. 628 (1948) (the government's power “to protect people against fraud” has “always been recognized in this country and is firmly established”); Gertz v. Robert Welch, Inc., 418 U.S. 323, 340, 94 S.Ct. 2997, 41 L.Ed.2d 789 (1974) (the “intentional lie” is “no essential part of any exposition of ideas” (internal quotation marks omitted)). Like other forms of public deception, fraudulent charitable solicitation is unprotected speech. See, e.g., Schneider v. State (Town of Irvington), 308 U.S. 147, 164, 60 S.Ct. 146, 84 L.Ed. 155 (1939) (“Frauds,” including “fraudulent appeals ... made in the name of charity and religion,” may be “denounced as offenses and punished by law.”); Donaldson, 333 U.S., at 192, 68 S.Ct. 591 (“A contention cannot be seriously considered which assumes that freedom of the press includes a right to raise money to promote circulation by deception of the public.”).
By analogy, one can argue that the proposed federal law may prohibit persons from making deceptive deepfake videos intended to deceive voters about the political candidates in an election.
On the other hand, the Supreme Court during Chief Justice Roberts' tenure has been very protective of speech, in a variety of cases finding unconstitutional federal laws that made illegal (i) virtual child pornography that depicted sex with minors via computer-generated imagery, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002); (ii) falsely claiming to have received the Congressional Medal of Honor, United States v. Alvarez, 567 U.S. 709 (2012); (iii) depictions of animal cruelty, United States v. Stevens, 559 U.S. 460 (2010); and (iv) independent expenditures by corporations for speech expressly advocating the election or defeat of a political candidate, Citizens United v. FEC, 558 U.S. 310 (2010).
These latter cases did not involve defrauding or deceiving the public, however. The potential harm of a deepfake video of or about a political candidate, intended to deceive the public, is not merely the falsehood (the only harm at issue under the Stolen Valor Act, see Alvarez, 567 U.S. at 719). It is also the potential impact the falsehood may have on voters who cast their ballots in the election--and thus on their constitutional right to vote. Given the fundamental importance of the right to vote, the Court has recognized that states can prohibit campaigning, such as campaign posters, near polling places, consistent with the First Amendment. See Burson v. Freeman, 504 U.S. 191, 209-10 (1992).
Yet even if Congress can prohibit fraudulent deepfake videos, some deepfake creators may argue that they intended only a parody, not a deception. The First Amendment would likely protect parodies. Assuming parody deepfakes must be permitted, wouldn't that open a Pandora's box, making it very difficult to differentiate between fraudulent and parody deepfakes--in which case the Court's overbreadth doctrine might render a prohibition unconstitutional? It raises at least a potential concern. If Congress drafted a clear exemption for parody deepfakes, perhaps that would mitigate the problem. However, even an effective parody might deceive some audiences, who might believe it to be accurate or real. Just imagine someone watching a video with closed-captioning but without audio. Or imagine that the video stated only at the end that it was a parody, but audiences did not watch the entire video or the ending disclaimer.
Of course, tech companies such as Facebook, Twitter, and YouTube are not state actors, so, under their own user policies, they can restrict deepfake videos without First Amendment scrutiny. What a federal criminal law, as proposed above, adds is greater deterrence against the dissemination of fraudulent deepfake videos in the first instance.
The Internet has been championed as an instrument to promote democracy, in part due to its open and decentralized nature that enables millions to organize and spread their views, including dissent. Over the past few years, however, many fear that the Internet is being “weaponized” by governments, foreign and domestic groups, and even large tech companies, in ways that threaten democracy, particularly free and fair elections--the bedrock of democracy. To analyze and address this problem, and to provide people with objective analysis of and proposed solutions to the issues countries face in safeguarding elections from interference, the nonprofit The Free Internet Project announces the launch of Project Safeguarding Elections (PSE). PSE has two main objectives:
1. To track, report, and analyze major incidents of and responses to election interference around the world on a dedicated blog or website. At least five types of issues will be covered:
Fake news: the spread of disinformation and false information online to interfere with an election;
Hacking of political candidates: the hacking of emails and communications of political parties and candidates;
Hacking of voting machines: the hacking of voting machines and the tabulation of results;
Fake results: the spread of false election results to undermine the true result; and
Duties of corporations and governments: the roles and responsibilities (if any) of the law, governments, and companies to address these problems.
2. To convene experts from different relevant fields to provide opinion pieces and proposed best practices to address these issues around the world.
Tim Berners-Lee, the inventor of the World Wide Web, announced in October 2018 a new venture called Inrupt, built around a new open-source platform called "Solid." Solid is an ecosystem that enables users to store anything, including personal data, in their own Solid POD (personal online data store), which acts as the interface with other sites. So, instead of "logging in with Facebook" or "logging in with Google," people can "log in with your own Solid POD." Through your POD, you control how much other sites or apps are permitted to "read or write" to your POD.
The main thrust of the POD approach is that it enables people to control and limit any sharing of their personal data via their single POD, instead of being beholden to the user policies of websites such as Facebook or of individual apps. As the Solid site describes:
You give people and your apps permission to read or write to parts of your Solid POD. So whenever you’re opening up a new app, you don’t have to fill out your details ever again: they are read from your POD with your permission. Things saved through one app are available in another: you never have to sync, because your data stays with you.
This approach protects your privacy and is also great for developers: they can build cool apps without harvesting massive amounts of data first. Anyone can create an app that leverages what is already there.
Berners-Lee took time off from his position at MIT to work on the commercial venture Inrupt. Inrupt is meant to help developers adopt Solid in their programming and build new programs incorporating the Solid approach. It's too early to tell how much adoption Solid will get, but it has the potential to radically transform the Internet. For example, in an interview with Fast Company, Berners-Lee gave the example of a new home assistant that protects people's privacy instead of tracking their every word: "[O]ne idea Berners-Lee is currently working on is a way to create a decentralized version of Alexa, Amazon’s increasingly ubiquitous digital assistant. He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children’s school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporation to individuals."
The new EU General Data Protection Regulation (GDPR) goes into effect May 25, 2018. You may have recently received notices of changes to privacy policies from Google, Twitter, and other tech companies. The reason: the GDPR. It attempts to create uniform rules for how personal data is managed in EU countries. The European continent’s first piece of legislation on the protection of personal data was "Convention 108," adopted in 1981 by the Council of Europe (an international institution distinct from the EU that brings together 47 countries). Later, in 1995, the European Union passed its directive "on the protection of individuals with regard to the processing of personal data and on the free movement of such data." Unlike the 1995 personal data directive, which had to be implemented by EU countries in their national laws, the new GDPR is EU law that applies without reliance on national implementing laws. The GDPR is also broader than the personal data directive. The key changes are discussed below.
OVERVIEW OF KEY CHANGES BY GDPR
1. Extensive territorial scope: controllers of data with no establishment in the EU can still be subject to the Regulation for processing related to the offering of goods and services in the EU, or to the monitoring of the behavior of data subjects located in the EU.
It no longer matters whether controllers actually process the data within the EU.
If the personal data of data subjects located in the EU is processed in connection with offering goods or services or monitoring behavior, the controller is subject to the GDPR.
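The territorial-scope rule above reduces to a simple decision: an entity is caught either by having an EU establishment, or, even without one, by offering goods or services to people in the EU or monitoring their behavior. A rough illustrative sketch of that logic (not legal advice; the function and parameter names are hypothetical):

```python
def gdpr_applies(has_eu_establishment: bool,
                 offers_goods_or_services_in_eu: bool,
                 monitors_eu_data_subjects: bool) -> bool:
    """Rough sketch of the GDPR's territorial scope.

    A controller or processor is covered if it is established in the EU,
    OR -- even with no EU establishment -- if its processing relates to
    offering goods/services to, or monitoring the behavior of, data
    subjects located in the EU.
    """
    return (has_eu_establishment
            or offers_goods_or_services_in_eu
            or monitors_eu_data_subjects)

# A retailer with no EU establishment that markets to EU customers is covered:
assert gdpr_applies(False, True, False)
# A purely domestic non-EU service with no EU customers or tracking is not:
assert not gdpr_applies(False, False, False)
```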
2. Enhanced rights of data subjects:
New right to ‘data portability’: in certain situations, controllers will be bound to transmit personal data to new controllers, at the request of data subjects who may wish to switch from one service to another;
Upgraded rights to erasure (‘right to be forgotten’) and to restriction of processing;
Substantial increase of the number of information items which must be provided to data subjects, including in particular the retention period of the collected data;
More stringent conditions for a valid consent (where required): it will have to be freely given, specific, informed and unambiguous, by statement or by affirmative action.
3. Redesigned obligations for controllers and processors:
Accountability and self-compliance: controllers and processors must ensure, and be able to demonstrate, that they have implemented appropriate technical and organizational measures so that the processing carried out complies with the Regulation. Such demonstration may be aided by adherence to codes of conduct or by certifications;
The end of prior notifications: the obligation to notify the competent supervising authority prior to each processing is replaced by an obligation to keep detailed records of processing activities;
Data protection by design and by default: controllers and processors will be expressly bound to respect these principles, which are viewed as an effective means of compliance;
Specific measures to be implemented in certain situations: (i) appointment of a data protection officer; (ii) privacy impact assessments; and (iii) notification of data breaches to supervising authorities and to concerned data subjects;
Other new obligations related in particular to (i) the joint controller regime (the breakdown of the different responsibilities will have to be determined); and (ii) the choice of data processors and the contracts between controllers and processors.
4. Reinforcement and clarification of the supervising authorities’ roles and powers:
Administrative fines of up to 20 million Euros or 4% of worldwide annual turnover of the preceding financial year, whichever is higher;
For cross-border processing, a lead authority will handle issues in accordance with a new co-operation procedure between it and the other concerned supervising authorities (which will remain solely competent in certain situations);
Supervisory authorities will have to offer each other mutual assistance, and may conduct joint operations when necessary;
A new entity, the “European Data Protection Board,” will replace the Article 29 Working Party and will be in charge of providing opinions to supervising authorities on certain matters, ensuring consistent application of the Regulation (by supervising authorities), in particular through a dispute resolution mechanism, issuing guidelines, encouraging the drawing-up of codes of conduct, etc.
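The fine ceiling noted in the list above (the greater of 20 million Euros or 4% of worldwide annual turnover) amounts to a one-line calculation. A hypothetical sketch, with an illustrative function name:

```python
def max_gdpr_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on an administrative fine under the GDPR's highest
    tier: the greater of EUR 20 million or 4% of the preceding
    financial year's worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# A company with EUR 10 billion in turnover faces up to EUR 400 million:
assert max_gdpr_fine(10_000_000_000) == 400_000_000
# A smaller company is still exposed to the EUR 20 million ceiling:
assert max_gdpr_fine(50_000_000) == 20_000_000
```

The turnover-based prong is what makes the GDPR's sanctions bite for the largest tech companies, whose revenues dwarf any fixed monetary cap.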
HB 4819 requires that all internet service providers (ISPs) who do business with state agencies adhere to basic net neutrality principles. The bill requires ISPs to make a “clear and conspicuous” statement disclosing their network management practices. Most importantly, the bill aims to restore basic net neutrality rules by mandating that all ISPs providing service to Illinois agencies “shall not block” users from accessing lawful content. The bill also provides that ISPs shall not “impair” or “degrade” lawful traffic to the user based on content. Finally, the bill prohibits ISPs from manipulating broadband service to favor certain traffic, and from unreasonably interfering with either the end user’s ability to access desired content or a content producer’s ability to make content available to users.
China partially blocked the popular messaging service WhatsApp, owned by Facebook. China reportedly blocked photos and videos from being shared on the service and, in some cases, even text messages.
The New York Times reported: "'According to the analysis that we ran today on WhatsApp’s infrastructure, it seems that the Great Firewall is imposing censorship that selectively targets WhatsApp functionalities,' said Nadim Kobeissi, an applied cryptographer at Symbolic Software, a cryptography research start-up." The conjecture was that the censorship was part of the government's leadup to the upcoming Party Congress: "To complicate matters, the 19th Party Congress — where top leadership positions are determined — is just months away. The government puts an increased emphasis on stability in the run up to the event, which happens every five years, often leading to a tightening of internet controls."
This week, in Google Inc. v. Equustek Solutions, Inc., Canada's Supreme Court upheld (in a 7-2 decision) the grant of a preliminary injunction requiring Google to remove links to a website that allegedly infringed the intellectual property of a small tech company. Google was not a party to the underlying IP dispute, but the lower court ordered Google to remove links to the defendant's website worldwide. Google had already delisted the website from searches in Canada (at google.ca). But the Supreme Court of Canada upheld the grant of a worldwide preliminary injunction that affects people around the world. [Download the decision.]
The Supreme Court reasoned:
Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction, is theoretical. If Google has evidence that complying with such an injunction would require it to violate the laws of another jurisdiction, including interfering with freedom of expression, it is always free to apply to the British Columbia courts to vary the interlocutory order accordingly. To date, Google has made no such application. In the absence of an evidentiary foundation, and given Google’s right to seek a rectifying order, it is not equitable to deny E the extraterritorial scope it needs to make the remedy effective, or even to put the onus on it to demonstrate, country by country, where such an order is legally permissible.
D and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. E has made efforts to locate D with limited success. D is only able to survive — at the expense of E’s survival — on Google’s search engine which directs potential customers to D’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to E pending the trial, the only way, in fact, to preserve E itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.
Commentators, such as Michael Geist, pointed out the danger in the Canadian approach: if each country (such as China or Iran) used the same power to issue worldwide injunctions against Google, there would be a race to the bottom and massive censorship online.
From Fight for the Future: Thousands of websites plan massive online protest for July 12th. Twitter, Soundcloud, Medium, Twilio, Plays.tv, and Adblock are among latest major web platforms to join the Internet-Wide Day of Action to Save Net Neutrality scheduled for July 12th to oppose the FCC’s plan to slash Title II, the legal framework for net neutrality rules that protect online free speech and innovation. Companies participating will display prominent messages on their homepages on July 12 or encourage users to take action in other ways, like through push notifications and emails. The momentum comes against the backdrop of a recent Morning Consult / POLITICO poll that shows broad bipartisan support for net neutrality rules. “This protest is gaining so much momentum because no one wants their cable company to charge them extra fees or have the power to control what they can see and do on the Internet,” said Evan Greer, campaign director of Fight for the Future, “Congress and the FCC need to listen to the public, not just lobbyists. The goal of this day of action is to make them listen.”
More than 40,000 people, sites, and organizations have signed up to participate in the effort overall, and more announcements from major companies are expected in the coming days. Many popular online personalities including YouTuber Philip DeFranco, and dozens of major online forums and subreddits have also announced their participation. The effort is led by many of the grassroots groups behind the largest online protests in history including the SOPA blackout and the Internet Slowdown. The day of action will focus on grassroots mobilization, with public interest groups activating their members and major web platforms providing their visitors with tools to contact Congress and the FCC.
Companies participating include Amazon, Netflix, OK Cupid, Kickstarter, Etsy, Reddit, Mozilla, Vimeo, Y Combinator, GitHub, Private Internet Access, Pantheon, Bittorrent Inc., Shapeways, Nextdoor, Patreon, Dreamhost, CREDO Mobile, Goldenfrog, Fark, Chess.com, Imgur, Namecheap, DuckDuckGo, Checkout.com, Sonic, Brave, Ting, ProtonMail, O’Reilly Media, Discourse, and Union Square Ventures. Organizations participating include Fight for the Future, Free Press Action Fund, Demand Progress, Center for Media Justice, EFF, Internet Association, Internet Archive, World Wide Web Foundation, Creative Commons, National Hispanic Media Coalition, Greenpeace, Common Cause, ACLU, Rock the Vote, American Library Association, Daily Kos, OpenMedia, The Nation, PCCC, MoveOn, OFA, Public Knowledge, OTI, Color of Change, Internet Creators Guild, and many others. See the full list here.
In a survey of 65 countries, Freedom House reported that internet freedom across the world has been in decline for the past six years, with two-thirds of internet users living under governments that censor the internet. Freedom House ranked China, Syria, and Iran as the countries with the most restrictive internet controls.
Many countries have blocked secure messaging apps. WhatsApp was the most frequently blocked messaging app, restricted in 12 countries this year. The number of governments that restricted access to social media and communication services increased from 15 to 24. Governments have also blocked materials related to LGBT rights and photos making fun of countries' leaders. [Download the Report]