The Free Internet Project


Schrems II: EU Court of Justice strikes down US-EU "Privacy Shield," which allowed businesses to transfer data despite lower privacy protections in US

On July 16, 2020, the European Union’s top court, the Court of Justice, struck down the trans-Atlantic data privacy transfer pact in a case called Schrems II. The agreement between the US and EU, known as the Privacy Shield, allowed businesses to transfer data between the United States and the European Union, even though U.S. privacy laws do not meet the higher level of data protection of EU law. Data transfer is essential for businesses that rely on the pact to operate across the Atlantic. For example, multinational corporations routinely ship consumer data from the EU to the US for further use. The Court of Justice ruled that the transfer of data leaves European citizens exposed to US government surveillance and did not comply with EU data privacy law. The Court explained: "although not requiring a third country to ensure a level of protection identical to that guaranteed in the EU legal order, the term ‘adequate level of protection’ must, as confirmed by recital 104 of that regulation, be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union by virtue of the regulation, read in the light of the Charter."

Companies in the U.S. can work out privacy protections by contract, but such contracts also must comply with EU privacy standards. The Court explained: "the assessment of the level of protection afforded in the context of such a transfer must, in particular, take into consideration both the contractual clauses agreed between the controller or processor established in the European Union and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the personal data transferred, the relevant aspects of the legal system of that third country, in particular those set out, in a non-exhaustive manner, in Article 45(2) of that regulation."

Ars Technica explains the origins of Privacy Shield and the troubles that have long existed with the agreement. Prior to Privacy Shield being adopted, the agreement governing the sharing of consumer data across the Atlantic was called the Safe Harbor. In 2015, the Safe Harbor was invalidated after being challenged by Maximillian Schrems, an Austrian privacy advocate, because it conflicted with EU law. After the Safe Harbor was struck down by the Court of Justice, EU lawmakers and the US Department of Commerce negotiated the Privacy Shield, which went into effect in 2016. But many in the EU questioned its validity and lawfulness.

In Schrems II, the Court of Justice agreed. According to Axios, Schrems complained that the clauses in Facebook's data-transfer contract were insufficient to protect Europeans from US government surveillance. The Court agreed, ruling that once the data entered the US, it was impossible to adequately ensure the protection of the data. European citizens would have no redress in the US for violations of the EU standards of privacy. The Privacy Shield did not provide equivalent privacy protection.

So what happens next? EU and US officials must negotiate a new data-sharing agreement that provides a level of privacy protection essentially equivalent to that guaranteed in the EU. Tech companies like Google and Facebook have issued assurances that this decision will not affect their operations in Europe because the companies have alternative data-transfer contracts, according to Ars Technica. It remains to be seen whether a new transatlantic data-sharing agreement can be reached in a way that comports with EU privacy law.

-written by Bisola Oni

New study by Alto Data Analytics casts doubt on effectiveness of fact checking to combat published fake news

As “fake news” continues to plague digital socio-political space, a new form of investigative reporter has risen to combat this disinformation: the fact-checker. Generally, fact-checkers are defined as journalists working to verify digital content by performing additional research on the content’s claim. Whenever a fact-checker uncovers a falsity masquerading as fact (aka fake news), they rebut this deceptive representation through articles, blog posts, and other explanatory comments that illustrate how the statement misleads the public. [More from Reuters] As of 2019, the number of fact-checking outlets across the globe has grown to 188 across 60 countries, according to the Reporters Lab.  

But recent research reveals that this upsurge in fact-checkers may not have that great an impact on defeating digital disinformation. From December 2018 to the European Parliamentary elections in May 2019, big-data firm Alto Data Analytics collected socio-political debate data from across a variety of digital media platforms. This survey served as one of the first studies assessing the success of fact-checking efforts. Alto’s study examined five European countries: France, Germany, Italy, Poland, and Spain. Focusing on verified fact-checkers in each of these countries, Alto’s cloud-based Analyzer software tracked how users interacted with these trustworthy entities in digital space. Basing their experiment exclusively on Twitter interactions, the Analyzer platform recorded how users interacted with the fact-checkers’ tweets through re-tweets, replies, and mentions. After noting this information, the data scientists calculated the fact-checkers’ effectiveness in reaching communities most affected by disinformation.
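The core metric described above reduces to simple arithmetic: the fraction of all recorded Twitter activity that involves fact-checking accounts. The sketch below illustrates that calculation; the function name and all figures are invented for illustration and are not from the Alto study itself.

```python
# Hypothetical illustration of a share-of-activity metric like the one the
# study reports. All numbers below are made up for demonstration purposes.

def interaction_share(fact_checker_interactions: int, total_interactions: int) -> float:
    """Return fact-checkers' share of total activity as a percentage."""
    if total_interactions <= 0:
        raise ValueError("total_interactions must be positive")
    return 100.0 * fact_checker_interactions / total_interactions

# Example: 12,000 interactions with fact-checkers out of 6,000,000 total
share = interaction_share(12_000, 6_000_000)
print(f"{share:.2f}%")  # 0.20% -- within the 0.1%-0.3% range the study reports
```

Even generous hypothetical counts like these land well under one percent, which is the scale of presence the study found for fact-checkers.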

Despite its limitation to five select countries, the study yielded discouraging results. In total, the fact-checking outlets in these countries amounted to only 0.1% to 0.3% of total Twitter activity during the period. Across the five countries in the study, fact-checkers penetrated least successfully in Germany, followed closely by Italy. Conversely, fact-checkers experienced the greatest distributive effect in France. Fact-checkers’ digital presence tended to reach only a few online communities. The study found that “fact-checkers . . . [were] unable to significantly penetrate the communities which tend to be exposed most frequently to disinformation content.” In other words, fact-checking efforts reached few individuals, and the ones they did reach were often other fact-checkers. Alto Data notes, however, that their analysis “doesn’t show that the fact-checkers are not effective in the broader socio-political conversation.” But “the reach of fact-checkers is limited, often to those digital communities which are not targets for or are propagating disinformation.” [Alto Data study]

Alto proposed ideas for future research models on this topic: expanding the study beyond one social media site; conducting research to find effectual discrepancies between various types of digital content—memes, videos, picture, and articles; taking search engine comparisons into account; and providing causal explanations for penetration differences between countries.

Research studies in the United States have also produced results doubting the effectiveness of fact-checkers. A Tulane University study discovered that citizens were more likely to alter their views from reading ideologically consistent media outlets than neutral fact-checking entities. Some studies even suggest that encounters with corrective fact-checking pieces have undesired psychological effects on content consumers, hardening individuals’ partisan positions and perceptions instead of dispelling them. 

These studies suggest that it's incredibly difficult to "unring the bell" of fake news, so to speak.  That is why the proactive efforts of social media companies and online sites to minimize the spread of blatantly fake news related to elections may be the only hope of minimizing its deleterious effects on voters.  

Review of EU's Rapid Alert System to protect elections from disinformation and interference: did it work?

 

The New York Times has a front-page article critiquing the EU's new Rapid Alert System (RAS), which was established to identify disinformation related to elections and to issue rapid alerts warning voters in the EU. Any EU member country can notify the EU office of possible election disinformation. The Rapid Alert System was set up as part of the EU's Action Plan against disinformation, which followed on the heels of the East Stratcom Task Force, a unit tasked with countering Russian disinformation. The NYT article reports that some in Brussels, where the EU's disinformation analysts are stationed, jokingly describe the Rapid Alert System as follows: "It's not rapid. There are no alerts. And there's no system." The NYT article describes one incident in which EU officials identified suspicious tweets about an "Austrian political scandal," which may have been from Russian trolls, but the EU officials--for whatever reason--did not issue an alert. In fact, the office never issued any alerts during the last election season, although officials claim that they were successful in protecting the EU elections from interference.

One expert the NYT quotes, Jakub Janda, the executive director of a Czech policy group, described the Rapid Alert System as a failure: "It's a Potemkin village. People in the know, they don't take it seriously." Few countries have contributed to the RAS, although it is not clear whether the lack of submissions stems from members' low regard for the RAS or simply from a lack of problematic cases of election disinformation. EU officials defend the system as the first of its kind and say the office is cautious about issuing an alert. Presumably, too many alerts would undermine its effectiveness.

Although the NYT article provides helpful information about the RAS, it seems too early to tell how well it operates after just one election. The fact that no alerts were issued during the past election is not evidence, in itself, of a failure of the system. The EU officials' cautiousness in issuing alerts seems wise, as the alerts' effectiveness would likely be diminished if they were issued for every single piece of election disinformation. In some cases, an alert might even draw more viewers to a piece of disinformation (the so-called Streisand effect). More generally, the experience shows the complex set of issues regulators face in trying to ensure the integrity of elections. The EU does not take the same broad approach to free speech as the U.S. does, so EU regulators have more authority to combat disinformation. Yet, even with more expansive power, it's not clear how EU regulators can best fight election disinformation online, where posts and ads can have an effect on people as soon as they are viewed.

 

 

The EU's GDPR (General Data Protection Regulation) goes into effect May 25, 2018

The new EU General Data Protection Regulation goes into effect May 25th, 2018.  You may have recently received notices of changes to privacy policies by Google, Twitter, and other tech companies.  The reason: the GDPR.  It attempts to create uniform rules for how personal data is managed in EU countries. The European continent’s first piece of legislation pertaining to the protection of personal data was “Convention 108”, adopted in 1981 by the Council of Europe (a different international institution than the EU, which brings together 47 countries). Later, in 1995, the European Union passed its directive “on the protection of individuals with regard to the processing of personal data and on the free movement of such data”.  Unlike the 1995 personal data directive, which must be implemented by EU countries in their national laws, the new GDPR is EU law that applies without reliance on national implementing laws.  The GDPR is also broader than the personal data directive.  The key changes are discussed below.

 

 

OVERVIEW OF KEY CHANGES BY GDPR

 

1. Extensive territorial scope: controllers of data with no establishment in the EU can still be subject to the Regulation for processing related to the offering of goods and services in the EU, or to the monitoring of the behavior of data subjects located in the EU.

  • No longer matters whether controllers actually process data within the EU.
  • If the personal data of individuals located in the EU is processed in connection with such offering of goods and services or monitoring of behavior, the controller is subject to the GDPR.

2. Enhanced rights of data subjects:

  • New right to ‘data portability’: in certain situations, controllers will be bound to transmit personal data to new controllers, on the request of data subjects who may wish to switch from one service to another;
  • Upgraded rights to erasure (‘right to be forgotten’) and to restriction of processing;
  • Substantial increase of the number of information items which must be provided to data subjects, including in particular the retention period of the collected data;
  • More stringent conditions for a valid consent (where required): it will have to be freely given, specific, informed and unambiguous, by statement or by affirmative action.

3. Redesigned obligations for controllers and processors:

  • Auto-compliance and accountability: controllers and processors must ensure, and be able to demonstrate, that they have implemented appropriate technical and organizational measures so that the processing carried out complies with the Regulation. Such demonstration may be aided by adherence to codes of conduct, or through certifications;
  • The end of prior notifications: the obligation to notify the competent supervising authority prior to each processing is replaced by an obligation to keep detailed records of processing activities;
  • Data protection by design and by default: controllers and processors will be expressly bound to respect these principles, which are viewed as an effective means for compliance;
  • Specific measures to be implemented in certain situations: (i) appointment of a data protection officer; (ii) privacy impact assessments; and (iii) notification of data breaches to supervising authorities and to concerned data subjects;
  • Other new obligations related in particular to the (i) joint controller regime (the breakdown of the different responsibilities will have to be determined); and to (ii) the choice of data processors and to the contracts between controllers and processors.

4. Reinforcement and clarification of the supervising authorities’ roles and powers:

  • Administrative fines up to 20 million Euros or 4% of the worldwide annual turnover of the preceding financial year, whichever is higher;
  • For cross-border processing, a lead authority will handle issues in accordance with a new co-operation procedure between it and other concerned supervising authorities (which will remain competent alone in certain situations);
  • Supervisory authorities will have to offer each other mutual assistance, and may conduct joint operations when necessary;
  • A new entity, the “European Data Protection Board”, will replace the Article 29 Working Party and will be in charge of providing opinions to supervising authorities on certain matters, of ensuring consistent application of the Regulation (by supervising authorities) in particular through a dispute resolution mechanism, of issuing guidelines, of encouraging the drawing-up of codes of conduct, etc.
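The fine ceiling in the first bullet above is a simple "whichever is higher" rule, which can be sketched in a few lines. The function name and the turnover figures below are illustrative assumptions, not part of the Regulation's text.

```python
# Sketch of the top-tier GDPR administrative fine ceiling: up to EUR 20 million
# or 4% of worldwide annual turnover of the preceding financial year,
# whichever is higher. Figures used in the examples are invented.

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR administrative fine for a given turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with EUR 50M turnover: 4% is only EUR 2M, so the EUR 20M floor applies.
print(max_gdpr_fine(50_000_000))     # 20000000.0
# A firm with EUR 2B turnover: 4% is EUR 80M, which exceeds EUR 20M.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

The turnover-based prong is what makes the ceiling meaningful for large multinationals, for whom a flat EUR 20 million cap would be a rounding error.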

80 Professors Ask Google to Release Data on How It Decides Right to Be Forgotten Requests

Professor Ellen Goodman of Rutgers University (US) and Researcher Julia Powles of Cambridge University (UK) co-authored an open letter, signed by 80 professors from the US and Europe, asking Google to release data on how it decides EU right to be forgotten requests.

Google wins first appeal of right to be forgotten rejection in Finland

After Google decides whether to accept or reject a right to be forgotten removal request from a person in the European Union, the claimant can appeal any adverse decision to the national data protection authority.  Finland just reported its first appeal.  Finnish Data Protection Ombudsman Reijo Aarnio agreed with Google's decision to reject a businessman's attempt to remove links to articles about his past business mistakes.

Google tells EU regulators of problems in implementing "right to be forgotten" requests

After losing a landmark decision before the Court of Justice of the European Union (CJEU) in May 2014 (Google Spain v. Costeja, C-131/12), Google has begun enforcing the controversial EU right to be forgotten.  Under the right as recognized in EU countries, people in the EU may request that search engines such as Google remove links to web pages describing them (presumably in an unflattering way) after a certain period of time.  For example, imagine that a Google search for your name resulted in the first entry being an old article about your arrest for drunk driving as a teenager.  Since the decision, Google has received over 90,000 requests from Europeans to remove links from search results involving people's names.
