The Free Internet Project

June 2020

Why Voters Should Beware: Lessons from Russian Interference in 2016 Election, Political and Racial Polarization on Social Media

Overview of the Russian Interference Issue

The United States prides itself on having an open democracy, with free and fair elections decided by American voters. If Americans want a policy change, the remedy most commonly called upon is political participation, above all the vote: vote out the problematic politicians and choose public officials who will carry out the right policies. But what if the U.S. voting system is skewed by foreign interference?

American officials are nearly unanimous in concluding, based on U.S. intelligence, that Russia interfered with the 2016 presidential election [see, e.g., here; here; and Senate Intelligence Report].  What Russia seeks is “[u]ndermining confidence in America’s democratic institutions.” In 2016, few in the U.S. were even thinking about this type of interference; the country's guard was down. Russia interfered with the election in various ways, including fake campaign advertisements, bots on Twitter and Facebook that pumped out emotionally and politically charged content, and the spread of disinformation or “fake news.” Social media hacking, as opposed to physical polling-center hacking, is at the forefront of discussion because it can not only change who holds office, but also shift American voters’ political beliefs and understanding of political topics, or depress voter turnout. 

And, if you think Russia is taking a break this election cycle, you'd be wrong. According to a March 10, 2020 New York Times article, David Porter of the FBI Foreign Influence Task Force says: "We see Russia is willing to conduct more brazen and disruptive influence operations because of how it perceives its conflict with the West."

What Interference Has to Do with Political Polarization

Facebook and Twitter have been criticized countless times by various organizations, politicians, and the media for facilitating political polarization. The U.S. political system, with its two dominant parties, is especially susceptible to political polarization. Individuals belonging to either party become so invested in their party’s beliefs that they see the other party’s members not just as different, but as wrong and detrimental to the future of the country. In the past twenty years, the share of Americans who consistently hold conservative or liberal views went from 10% to 20%, showing the increasing division, according to an article in Greater Good Magazine.

Political polarization is facilitated by platforms like Facebook and Twitter because of their content algorithms, which are designed to make the website experience more enjoyable. The Facebook News Feed “ranks stories based on a variety of factors including their history of clicking on links for particular websites,” as described by a Brookings article. Under the algorithm, if a liberal user frequently clicks on liberally skewed content, that is what they will see most. Research shows this algorithm reduced cross-cutting political “content by 5 percent for conservatives and 8 percent for liberals.” Thus, the algorithm limits users' exposure to opposing opinions.
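
To make the mechanism concrete, here is a minimal sketch, in Python, of how ranking feed items by a user's past clicks naturally narrows what that user sees. All names and data are invented for illustration; this is not Facebook's actual News Feed code.

    # Toy illustration of engagement-based feed ranking (invented names and
    # data; not Facebook's actual News Feed algorithm).
    from collections import Counter

    def rank_feed(stories, click_history):
        # Score each story by how often the user clicked its source before.
        clicks = Counter(click_history)
        return sorted(stories, key=lambda s: clicks[s["source"]], reverse=True)

    stories = [
        {"headline": "Story A", "source": "liberal_site"},
        {"headline": "Story B", "source": "conservative_site"},
    ]
    # A mostly one-sided click history...
    history = ["liberal_site"] * 9 + ["conservative_site"]

    for story in rank_feed(stories, history):
        print(story["headline"], "-", story["source"])
    # ...yields a mostly one-sided feed: the frequently clicked source always
    # ranks first, so cross-cutting content sinks out of view.

Each click on the top-ranked source further skews the history, which further skews the ranking: that feedback loop is the "filter bubble" discussed next.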

So, you might ask, “Why is that bad? I want to see content more aligned with my beliefs.” Democracy is built on the exchange of varying political views and dissenting opinions, and the US has long prided itself on freedom of speech and a free flow of ideas. Algorithmic grouping of like-minded people can be useful when it comes to hobbies and interests; when it comes to consistently grouping individuals based on political beliefs, however, it can have a negative impact on democracy. This grouping causes American users to live in “filter bubbles” that only expose them to content that aligns with their viewpoints. Users tend to find this grouping enjoyable because of confirmation bias, the psychological tendency to consume content that aligns with one's pre-existing beliefs. So, all the articles about Trump successfully leading the country will be ranked first on a conservative user’s Facebook News Feed and will also be the most enjoyable for that user. This filter bubble is dangerous to a democratic system because the lack of diverse perspectives when consuming news encourages close-mindedness and increases distrust of anyone who disagrees.

During the 2016 presidential election, Russian operatives put out various fake articles, campaign advertisements, and social media posts that were politically charged on either the liberal or conservative side. Because the Facebook algorithm shows more conservative content to conservatives and more liberal content to liberals, the operatives had no problem reaching their desired audience quickly and effectively. On Facebook they created thousands of automated accounts, or bots, that would enter various interest groups and engage with their target audience. For example, in 2016, a Russian soldier successfully entered a U.S. Facebook group by pretending to be a 42-year-old housewife, as reported by Time; he weighed in on the group's political discussions using emotional and political buzzwords. On Twitter, thousands of fake accounts run by Russians and bots spread disinformation about Hillary Clinton by continuously mentioning her email scandal from her time as Secretary of State and a fabricated Democratic pedophile ring dubbed “Pizzagate.” These bots pushed hashtags like “#MAGA” and “#CrookedHillary,” at times accounting for more than a quarter of the content under those hashtags.

Facebook and Twitter’s Response to the 2016 Russian Interference

According to a Wall Street Journal article on May 26, 2020 and a Washington Post article on June 28, 2020, Facebook conducted an internal review after the 2016 election of how it could reduce polarization on its platform, but CEO Mark Zuckerberg and other executives decided against the recommended changes because they were seen as "paternalistic" and would potentially affect conservatives on Facebook more. 

After coming under increasing fire from critics for allowing misinformation and hate speech to go unchecked, Facebook announced some changes to "fight polarization" on May 27, 2020. The initiative included a recalibration of each user’s News Feed that would prioritize content from family and friends over divisive news content. Facebook's reasoning was that data shows people are more likely to have meaningful discourse with people they know, which would foster healthy debate rather than ineffective, one-off conversations. Facebook also announced a policy directly targeting the spread of disinformation on the platform: an independent fact-checking program that will automatically check content in over 50 languages around the world for false information. Disinformation that could contribute to “imminent violence, physical harm, and voter suppression” will be removed. 

But those modest changes weren't enough to mollify Facebook's critics. Amidst the mass nationwide protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, nonprofit organizations including Color of Change organized an ad boycott against Facebook. Over 130 companies agreed to remove their ads from Facebook during July or longer. That led Zuckerberg to change his position on exempting politicians from fact-checking and from the company's general policy on misinformation. Zuckerberg said that politicians would now be subject to the same policy as every other Facebook user and would be flagged if they disseminated misinformation (or hate speech) that violates Facebook's general policy. 

Twitter’s CEO Jack Dorsey not only implemented a fact-checking policy similar to Facebook's, but also admitted that the company needed to be more transparent in its policy making. The fact-checking policy “attached fact-checking notices” at the bottom of various tweets, alerting users that those tweets could contain false claims.  Twitter also decided to forbid all political advertising on its platform. In response to Twitter's flagging of his content, President Trump issued an executive order to increase regulation of social media platforms and stop them from deleting users’ content and censoring their speech.

With the 2020 U.S. election only four months away, Internet companies are still figuring out how to stop Russian interference and the spread of misinformation, hate speech, and political polarization intended to interfere with the election. Whether they succeed remains to be seen.  But there have been more policy changes and decisions by Facebook, Twitter, Reddit, Snapchat, Twitch, and other platforms in the last month than in all of last year. 

-by Mariam Tabrez

Summary of EARN IT and proposed bills to amend Section 230 of CDA regarding ISP safe harbor

 

Section 230 of the Communications Decency Act of 1996 has come under fire in the U.S. Congress. Republican lawmakers contend that Section 230 is being invoked by Internet platforms, such as Facebook, Google, and Twitter, as an improper shield to censor content with a bias against conservative lawmakers and viewpoints. These lawmakers contend that Section 230 requires Internet sites to maintain "neutrality" or be a "neutral public forum." However, some legal experts, including Jeff Kosseff, who wrote a book on the legislative history and subsequent interpretation of Section 230, contend this interpretation is a blatant misreading of Section 230, which specifically creates immunity from civil liability for ISPs for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Donald Trump issued an Executive Order that attempts to (re)interpret "good faith" to require political neutrality.  The Department of Justice appeared to concede, however, that "good faith" is unclear and recommended that Congress provide a statutory definition of the term.  Several Republican lawmakers in the House and the Senate have proposed new legislation that would reform or eliminate Section 230 and limit Internet platforms’ ability to censor content that the platforms deem harmful, obscene, or misleading.  This article summarizes the proposed bills to amend Section 230. 

1. Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 (EARN IT Act, S.3398): no immunity for violation of law on child sexual abuse material unless ISP earns back via best practices

The EARN IT Act was introduced by Senator Lindsey Graham (R-SC) and co-sponsored by Senator Richard Blumenthal (D-CT). The EARN IT Act’s main purpose is to carve out from the ISP immunity under Section 230(c)(2)(A), and thus to expose ISPs to potential civil liability pursuant to 18 U.S.C. section 2255 or state law based on activity that violates 18 U.S.C. section 2252 or 2252A (which cover child sexual abuse material (CSAM) distribution or receipt). However, an ISP can "EARN" back its immunity if it follows the requirements of the Act's newly created safe harbor:

  • "(i) an officer of the provider has elected to certify to the Attorney General under section 4(d) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 that the provider has implemented, and is in compliance with, the child sexual exploitation prevention best practices contained in a law enacted under the expedited procedures under section 4(c) of such Act and such certification was in force at the time of any alleged acts or omissions that are the subject of a claim in a civil action or charge in a State criminal prosecution brought against such provider; or
  • “(ii) the provider has implemented reasonable measures relating to the matters described in section 4(a)(3) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, subject to the exceptions authorized under section 4(a)(1)(B)(ii) of that Act, to prevent the use of the interactive computer service for the exploitation of minors.”

To develop the "child sexual exploitation prevention best practices" required for the new safe harbor, the EARN IT Act would create a commission called the “National Commission on Online Child Sexual Exploitation Prevention,” consisting of sixteen members. The Commission’s duty would be to devise a list of “best practices” for combatting child sexual abuse material (CSAM) and send the list to Attorney General William Barr, the Secretary of Homeland Security, and the Chairman of the Federal Trade Commission—all of whom would be appointed as members of the Commission—for review. These three members, dubbed the “Committee,” would have the power to amend, deny, or approve the list of “best practices” created by the Commission. After the Committee approves a list of “best practices,” the list is sent to Congress, which has ninety days to file a “disapproval motion” to veto the list from going into effect. 

Text of EARN IT Act (S. 3398)

Sponsors: Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT)

UPDATED July 4, 2020: The Senate Judiciary Committee unanimously approved the bill (22-0).  It now will be considered by the Senate. 

2. Limiting Section 230 Immunity to Good Samaritans Act: creates civil action against edge providers for "intentionally selective enforcement" of content moderation

In June 2020, Sen. Josh Hawley (R-MO) introduced a bill titled Limiting Section 230 Immunity to Good Samaritans Act. The bill defines a "good faith" requirement in Section 230 for content moderation by a newly defined category of "edge providers": Internet platforms with more than 30 million users in the U.S. or more than 300 million users worldwide, plus more than $1.5 billion in annual global revenue. The category does not include 501(c)(3) nonprofits. The bill defines good faith to exclude "intentionally selective enforcement of the terms of service," including by an algorithm that moderates content. The term is vague. Presumably, it is meant to cover politically biased moderation (see Ending Support for Internet Censorship Act below), but it might also apply to situations in which ISPs selectively enforce their policies simply because of the enormous amount of content (billions of posts) on their platforms, in a kind of triage. The bill also creates a cause of action for users to sue Internet platforms that engage in intentionally selective enforcement and to recover $5,000 in statutory damages or actual damages. A rough sketch of the bill's numeric thresholds appears below.
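
For illustration only, the bill's numeric "edge provider" thresholds can be expressed as a simple test. This Python sketch paraphrases the summary above under the assumption that the user-count and revenue conditions combine as described; the statutory text, not this code, controls.

    # Sketch of the bill's "edge provider" thresholds (illustrative only;
    # the statutory definition, not this code, controls).

    def is_edge_provider(us_users: int, worldwide_users: int,
                         global_revenue_usd: float,
                         is_501c3_nonprofit: bool = False) -> bool:
        if is_501c3_nonprofit:
            return False  # 501(c)(3) nonprofits are excluded
        big_user_base = us_users > 30_000_000 or worldwide_users > 300_000_000
        big_revenue = global_revenue_usd > 1_500_000_000
        return big_user_base and big_revenue

    # A platform with 200M U.S. users and $70B in global revenue qualifies:
    print(is_edge_provider(200_000_000, 2_500_000_000, 70_000_000_000))  # True
    # A small forum with 40,000 users and modest revenue does not:
    print(is_edge_provider(40_000, 100_000, 2_000_000))                  # False

Only platforms meeting both a user threshold and the revenue threshold would face the bill's "good faith" requirement and new cause of action.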

Text of Limiting Section 230 Immunity to Good Samaritans Act

Sponsor: Sen. Josh Hawley (R-MO). Cosponsors: Sens. Marco Rubio (R-FL), Mike Braun (R-IN), Tom Cotton (R-AR), and Kelly Loeffler (R-GA)

3.  Ending Support for Internet Censorship Act (“Hawley Bill," S.1914): ISPs must get "immunity certification" from FTC that ISP doesn't moderate content in "politically biased manner"

The Hawley bill, Ending Support for Internet Censorship Act, introduced by Senator Josh Hawley (R-MO) and co-sponsored by Senators Marco Rubio (R-FL), Mike Braun (R-IN), and Tom Cotton (R-AR), seeks to require ISPs to obtain an "immunity certification from the Federal Trade Commission"; the certification requires the ISP "not [to] moderate information provided by other information content providers in a manner that is biased against a political party, political candidate, or political viewpoint."  The ISP must "prove[] to the Commission by clear and convincing evidence that the provider does not (and, during the 2-year period preceding the date on which the provider submits the application for certification, did not) moderate information provided by other information content providers in a politically biased manner."

The bill defines "politically biased moderation" as:

POLITICALLY BIASED MODERATION.—The moderation practices of a provider of an interactive computer service are politically biased if—

  • “(I) the provider moderates information provided by other information content providers in a manner that—
  • “(aa) is designed to negatively affect a political party, political candidate, or political viewpoint; or
  • “(bb) disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint; or
  • “(II) an officer or employee of the provider makes a decision about moderating information provided by other information content providers that is motivated by an intent to negatively affect a political party, political candidate, or political viewpoint."

Text of ESICA (S. 1914)

Sponsor: Senator Josh Hawley (R-MO)

4. Stop the Censorship Act (“Gosar Bill,” H.R.4027): removes "objectionable" from Good Samaritan provision for content moderation, limiting it to "unlawful material"

The Gosar bill, Stop the Censorship Act, seeks to eliminate Section 230 immunity for Internet platforms like Facebook, Google, and Twitter when they censor content that the platforms deem “objectionable.” US Representative Paul Gosar (R-AZ), joined by fellow conservative Congressmen Mark Meadows (R-NC), Ralph Norman (R-SC), and Steve King (R-IA), believes the language of Section 230's Good Samaritan blocking provision is too broad. The Gosar bill would strike the language in Section 230 that allows Internet platforms to censor content deemed “objectionable”; the only content that should be censored, the sponsors argue, is “unlawful” content (e.g., CSAM). Further, the bill would give platform users the option to choose between a safe space on the platform (featuring content-moderated feeds controlled by the platform) and an unfettered platform (including all objectionable content).  The bill would change Section 230(c)(2) as follows:

Current: 

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or..."

Proposed change:

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of unlawful material;

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1); and

(C) any action taken to provide users with the option to restrict access to any other material, whether or not such material is constitutionally protected.”

Text of SCA (H.R.4027)

Sponsors: Rep. Gosar. Cosponsors: Rep. Mark Meadows (R-NC), Rep. Steve King (R-IA), Rep. Ralph Norman (R-SC), Rep. Ted Yoho (R-FL), Rep. Ron Wright (R-TX), Rep. Glenn Grothman (R-WI)

5. Stopping Big Tech Censorship Act (Sen. Kelly Loeffler (R-GA)):  adds conditions to both Section 230(c)(1) and (c)(2) immunities, including subjecting content moderation of Internet platforms to First Amendment-style limitations on government restrictions of speech

US Senator Kelly Loeffler (R-GA) recently introduced the “Stopping Big Tech’s Censorship Act,” which would amend language in Section 230 of the Communications Decency Act to “protect First Amendment Rights” of users on social media platforms.

The first change is to the immunity in Section 230(c)(1). The bill would require Internet platforms to "take[] reasonable steps to prevent or address the unlawful use of the interactive computer service or unlawful publication of information on the interactive computer service" in order to qualify for the immunity from defamation and other claims based on their users' content.

The second change is to the immunity in Section 230(c)(2). Internet platforms would only enjoy Section 230(c)(2) immunity for their content moderation if: “(I) the action is taken in a viewpoint-neutral manner; (II) the restriction limits only the time, place, or manner in which the material is available; and (III) there is a compelling reason for restricting that access or availability.” This set of requirements is substantial and might be hard to implement under current community standards.  For example, removing hate speech, white supremacist propaganda, neo-Nazi content, racist speech, and other offensive content might be viewed as viewpoint discrimination under this approach.


Disclosure of policies. Further, the bill requires Internet platforms to disclose their content moderation policy: “(A) a provider of an interactive computer service shall, in any terms of service or user agreement produced by the provider, clearly explain the practices and procedures used by the provider in restricting access to or availability of any material; and (B) a provider or user of an interactive computer services that decides to restrict access to or availability of any material shall provide a clear explanation of that decision to the information content provider that created or developed the material.”

Text of SBTCA

-written by Adam Wolfe

Over 130 Companies Remove Ads from Facebook in #StopHateforProfit Boycott, forcing Mark Zuckerberg to change lax Facebook policy on misinformation and hate content

In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook did an internal review to examine its role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics dismissed the changes as modest, however. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook: doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.

Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that its fact-checking for misinformation did not apply to posts and ads by politicians in the same way it applied to everyone else. Facebook's policy was even more permissive toward political ads and politicians. As shown below, Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."

Facebook's Fact-Checking Exception for Politicians and Political Ads

By contrast, Twitter CEO Jack Dorsey decided to ban political ads in 2019 and to monitor the content of politicians just as Twitter does with all other users for misinformation and other violations of Twitter's policy.  Yet Zuckerberg persisted in his "hands off" approach: "I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online." Zuckerberg even said Twitter was wrong to add warnings to two of President Trump's tweets as misleading (regarding mail-in ballots) and glorifying violence (Trump said, "When the looting starts, the shooting starts," regarding the protests of Minneapolis police officer Derek Chauvin's killing of George Floyd).  Back in October 2019, Zuckerberg defended his approach in the face of withering questioning by Rep. Alexandria Ocasio-Cortez. 

 

In May and June 2020, Zuckerberg persisted in his "hands off" approach. Some Facebook employees quit in protest, while others staged a walkout.  Yet Zuckerberg still persisted. 

On June 17, 2020, Color of Change, which is "the nation’s largest online racial justice organization," organized with the NAACP, Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media a boycott of advertising on Facebook for the month of July, labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott of Facebook, including many large companies such as Ben and Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon. 

On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy.  The biggest changes:

(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.

"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today." 

Facebook's new labeling of candidates' content sounds very similar to the Twitter practice Zuckerberg had criticized as wrong. And Facebook's new policy of moderating hateful content in ads claiming that certain groups "are a threat to the physical safety, health or survival of others," including "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status," seems a positive step toward preventing Facebook from being a platform to sow racial discord, which is a goal of Russian operatives according to U.S. intelligence. 

Facebook's new policy on moderation of political ads and posts by politicians and others

The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."

 

Trump Campaign Snaps at Being Removed from Snapchat's Discover Page

On June 3, 2020, Snapchat decided to stop promoting the Snapchat account of Donald Trump on its Discover page, which provides a feed of stories from celebrities and other popular accounts, curated by Snapchat for its users.

Apple Pay under Scrutiny: Calls for Contactless Payments Give Rise to Competition Concerns


Calls for safer transactions during the novel coronavirus pandemic have contributed to a 150% increase in the use of contactless payments since 2019. Experts estimate that mobile payment transactions will total $161.41 billion by 2021. Given the increased need for contactless payments, the U.S. House of Representatives and Department of Justice, as well as the European Union’s European Commission, have launched investigations into Apple’s Apple Pay service. These investigations seek to evaluate Apple’s “power and potential anticompetitive behavior.”

According to a Washington Post article, two competition concerns drive the investigations into Apple Pay:

  1. Apple’s exclusive control over iPhones’ “near-field communication” technology, essentially a chip in the iPhone that allows consumers to use their phones to pay at store checkout counters; and
  2. the terms of agreement Apple forces on merchants who accept Apple Pay. 

Apple in the Antitrust Spotlight

Antitrust laws are “rules of the competitive marketplace.” These laws protect consumers from destructive business practices, according to the Federal Trade Commission. The chief goal of antitrust laws is to encourage aggressive competition among sellers to give consumers “the benefits of lower prices, higher quality products and services, more choices, and greater innovation.”

Apple Pay – A Monopoly on Mobile Contactless Payments?

 

Apple Pay debuted in 2014, launching the mobile contactless payment market. Currently, Apple Pay controls the largest share of the mobile contactless payment market in the United States. It’s no surprise that Apple has collected a horde of rivals. Rivals’ antitrust complaints target Apple Pay and the App Store. 

Apple Pay allows consumers to use their iPhones to pay for goods at participating stores. To make this work, Apple Wallet, a digital wallet, stores a digitized version of a consumer’s credit or debit card. Then, when at the checkout counter, a “near-field communication” (NFC) chip, located within the iPhone, communicates with the store’s contactless terminal.

“Voila! Look ma, no wallet!” The days of searching pockets and purses are over. Convenient, right?

Well, maybe not. The problem: Apple’s NFC chip is “closed,” so only cards stored in a consumer’s Apple Wallet can access the chip and use the contactless payment feature. As a result, card issuers must enter into an Apple Pay agreement to permit their customers to make mobile contactless payments with their credit and/or debit cards. According to American and European government officials, as reported in the Guardian, this denies consumers access to better quality, innovation, and competitive prices. Rivals allege the “closed” chip stifles iPhone users’ ability to pick from other mobile contactless payment services, such as Google Pay and Microsoft Wallet, and significantly interferes with rivals' ability to compete in the mobile contactless payment market.
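
The competitive complaint is easier to see in a toy model. The Python sketch below uses entirely invented names and is not Apple's actual API; it simply illustrates an operating system that exposes its NFC chip only to its own wallet app, so rival wallets cannot complete a contactless payment no matter how capable they are.

    # Toy model of a "closed" NFC chip (all names invented; not Apple's code).

    APPROVED_WALLETS = {"PlatformWallet"}  # only the platform's own wallet

    class NFCChip:
        def transmit(self, wallet_name: str, card_token: str) -> str:
            # The OS gates access to the radio: rival wallets are refused.
            if wallet_name not in APPROVED_WALLETS:
                raise PermissionError(f"{wallet_name} may not use the NFC chip")
            return f"payment sent with token {card_token}"

    chip = NFCChip()
    print(chip.transmit("PlatformWallet", "tok_1234"))  # succeeds
    try:
        chip.transmit("RivalPay", "tok_5678")
    except PermissionError as err:
        print(err)  # rival wallets are shut out of contactless payments

In this stylized picture, the gatekeeping happens at the hardware-access layer rather than on payment quality or price, which is the crux of the rivals' antitrust complaint.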

In 2019, the Department of Justice began an unofficial antitrust investigation into Apple to determine if it has engaged in anti-competitive business practices. In a public statement, the DOJ announced it would review “whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation, or otherwise harmed consumers.”

Last year, the House of Representatives launched a broad investigation into four major tech companies, including Apple.  The House has requested extensive documentation and Tim Cook’s presence at an antitrust committee hearing; it remains uncertain whether Cook will attend. Apple is also in the EU’s competition law crosshairs. France fined Apple nearly $1.2 billion this past March for antitrust violations. While the European Commission’s investigation largely targets the App Store, it is also investigating Apple Pay over the restriction placed on the iPhone’s NFC chip. Companies like PayPal allege this restriction prevents iPhone users from using rival payment options.

Although American and European officials have not completed their investigations into Apple Pay, it is likely more lawsuits against Apple are on the horizon. Until then, experts like Tim Derdenger, a professor at Carnegie Mellon’s Tepper School of Business, urge legislators to act sooner rather than later.

-by Allison Hedrick

How Russian Interference May Target Black Voters and Foment Racial Discord in U.S.

Russia is reportedly continuing the U.S. election interference tactics it deployed in 2016, in particular using race to depress minority voter turnout. Russia stoked anger and fear through the spread of disinformation in the U.S. to influence the outcome of the 2016 U.S. election, as a bipartisan report of the Senate Select Committee on Intelligence found [Volume 1, Volume 2, Volume 3, and Volume 4]. Russia's primary method of disinformation used social media, such as Facebook, Instagram, and Twitter. A March 10, 2020 article in the New York Times reports that, in the 2016 election, Russian "operatives tried to stoke racial animosity by creating fake Black Lives Matter groups and spreading disinformation to depress black voter turnout." Russia is now expanding its efforts to interfere with the 2020 U.S. elections. Russia’s current goal, according to multiple intelligence officials, is to create chaos within the United States by using racial discord as a wedge. American officials have noted several ways Russia has tried to spread disinformation, create fear, and stoke anger. According to the NYT article, the two primary methods are (1) incentivizing white nationalist groups to spread more hate and (2) infiltrating black groups to create more divisions and fear. In short, Russia is weaponizing race in its efforts to interfere with the 2020 U.S. election. 

In the 2016 U.S. presidential election, black voter turnout was down. Russia's 2016 disinformation campaign targeting minority voters, particularly black voters, shows that Russian interference is highly sophisticated and advanced. As noted in the New York Times article, many of the Instagram accounts targeting a black audience dated back to January 2015. And Russian interference aimed at the African American community goes beyond voter disinformation on Facebook and Twitter; it extends to Instagram and other platforms.  

With regard to the 2020 election, direct action is being taken now, in contrast to 2016. The FBI has a Foreign Influence Task Force to investigate election interference. According to the NYT, "[t]he F.B.I. is scrutinizing any ties between Russian intelligence or its proxies and Rinaldo Nazzaro, an American citizen who founded a neo-Nazi group, the Base." A VOA article dives into the efforts being made to identify false information on social media.  Awareness of Russia's role in suppressing minority votes, particularly black votes, changes the landscape: various advocacy groups are taking action to combat the disinformation and hold both social media platforms and government officials accountable. "Social media companies pledged new security measures aimed at finding and removing coordinated manipulation campaigns before they spread fake content," VOA reported.

Many African American voters get their political news through social media (50% in 2014). That is why it is imperative that they be made aware of Russia’s role in disinformation and the ways in which they are being targeted. In an interview with NPR, Charlane Oliver of the Equity Alliance discusses how groups like hers plan to get out the black vote and how knowing Russia’s role in 2016 shapes their work. For many people, the effectiveness of Russia’s 2016 interference came as a surprise. Now groups like the Equity Alliance are using the lessons of 2016 to drive their voter protection efforts for this year’s election. Oliver notes that voter information is essential to combat online disinformation meant to suppress the vote. While some advocacy groups focus on direct voter education, others are working to hold government officials accountable.

In a letter to U.S. Attorney General William Barr, the NAACP Legal Defense and Educational Fund asked to be informed about the steps the U.S. Justice Department is taking or has taken to address ongoing Russian interference in the election. In the letter, the Fund notes how Russia has evolved in its spread of disinformation: the attacks have moved to less-monitored areas of the web, such as private groups on social media platforms and private chat groups, where Russia attempts to stoke racial tensions and evoke fears in an effort to keep voters away from the polls. The Fund asks Attorney General Barr to prohibit voter suppression efforts through the Voting Rights Act and through executive action to promote election security. It also notes that African American voters must contend with both domestic and international voter suppression efforts, so it is imperative that policy action be taken. 

According to some legal experts, however, Barr seems to be protecting President Donald Trump, whose campaign benefited from the 2016 Russian interference. Barr is investigating the origin of the federal investigation into Russian interference, in an apparent attempt to discredit the Mueller Report and the entire U.S. investigation into Russian interference. In an op-ed, Emily Bazelon and Eric Posner list the questionable actions of Barr, "an attorney general whose loyalty to a president stands ahead of his fidelity to the rule of law."

With awareness of Russia's role in 2016, the spread of information in the 2020 election is being watched closely. As noted in the Legal Defense Fund's letter to Barr, Russia will not use all of the same election interference methods it used in 2016; new methods will be created. That is why greater safeguards should be adopted both by the U.S. government and by social media companies. It is unclear how the nationwide and international protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, along with the separate killings of Ahmaud Arbery and Breonna Taylor in Georgia and Kentucky, will change the dynamics of the 2020 elections. It is possible that, instead of depressing voter turnout, the protests will lead to greater turnout, with more people civically engaged and demanding accountability.

-written by Bisola Oni

Section 230 protections for Internet platforms come under attack in U.S.

Section 230 of the Communications Decency Act [text] was enacted in 1996. Many commentators have hailed Section 230 as giving birth to the explosion of expression, businesses, social media, applications, and user-generated content on the Internet.  The reason is that Section 230 shielded Internet platforms from potentially business-ending liability, while facilitating the development of new applications enabling individuals to publish their own content online.  As Wired's Matt Reynolds puts it, "It is hard to overstate how foundational Section 230 has been for enabling all kinds of online innovations. It’s why Amazon can exist, even when third-party sellers flog Nazi memorabilia and dangerous medical misinformation. It’s why YouTube can exist, even when paedophiles flood the comment sections of videos. And it’s why Facebook can exist even when a terrorist uses the platform to stream the massacre of innocent people. It allows for the removal of all of these bad things, without forcing the platforms to be legally responsible for them." 

More recently, however, Section 230 has become a lightning rod, criticized by the Trump administration and others who disagree with shielding Internet platforms from liability for the potentially unlawful or harmful content posted by their users. The Trump administration and conservative Republicans contend that Twitter and Google, for example, engage in biased content moderation that disfavors them in favor of more liberal positions or politicians. Others criticize Section 230 as too permissive, letting social media companies off the hook even though so much disturbing, if not dangerous, content is shared on their platforms. This article explains Section 230 and then the recent criticisms that the Trump administration and others have raised....

Philippine court finds Rappler journalist Maria Ressa guilty of criminal "cyber libel" for 2012 article on Wilfredo Keng

A Philippine court sent shock waves around the world as it convicted Maria Ressa, a journalist and co-founder of the site Rappler, which she started in 2012.  The court found both Ressa and journalist Reynaldo Santos Jr., the author of the article, guilty of "cyber libel" in violation of the Philippines' Cybercrime Prevention Act of 2012....
